http://physics.stackexchange.com/questions/23615/why-are-alpha-particles-made-of-2-protons-and-neutrons
# Why are alpha particles made of 2 protons and neutrons?

When experiencing alpha decay, atoms shed alpha particles made of 2 protons and 2 neutrons. Why can't we have other types of particles made of more or fewer protons?

- 3 You can have other bits. Google "fission fragments" or "fissile fragments". If I find time maybe I'll write a real answer that talks about energy, stability, and phase space. If I get really ambitious I might say something about long range correlations in the nucleus. – dmckee♦ Apr 11 '12 at 22:12
- @dmckee: But you don't have "Be" decay where a Be flies out of a nucleus. You have alpha decay. – Ron Maimon Apr 12 '12 at 2:05
- – voix Apr 12 '12 at 4:55

## 2 Answers

The reason why alpha particles heavily dominate as the proton-neutron mix most likely to be emitted from most (not all!) radioactive components is the extreme stability of this particular combination. That same stability is also why helium dominates after hydrogen as the most common element in the universe, and why other higher elements had to be forged in the hearts and shells of supernovas in order to come into existence at all.

Here's one way to think of it: You could in principle pop off something like helium-3 from an unstable nucleus - that's two protons and one neutron - and very likely give a net reduction in nuclear stress. But what would happen is this: The moment the trio started to depart, a neutron would come screaming in saying look how much better it would be if I joined you!! And the neutron would be correct: The total reduction in energy obtained by forming a helium-4 nucleus instead of helium-3 would in almost any instance be so superior that any self-respecting (and energy-respecting) nucleus would just have to go along with the idea.

Now all of what I just said can (and in the right circumstances should) be said far more precisely in terms of issues such as tunneling probabilities, but it would not really change the message much: Helium-4 nuclei pop off preferentially because they are so hugely stable that it just makes sense from a stability viewpoint for them to do so.

The next most likely candidates are isolated neutrons and protons, incidentally. Other mixed versions are rare until you get up into the fission range, in which case the whole nucleus is so unstable that it can rip apart in very creative ways (as aptly noted by the earlier comment). -

- 2 This is a good and accurate explanation, I don't know why you think it needs a lot more precision, but I think one should add that the extreme stability is due to the two spin states (up, down) and two isospin states (p, n) of the nucleon. This means that you can put exactly four nucleons at the same point without using higher orbits--- two protons of opposite spins and two neutrons of opposite spins, and this is the alpha particle. – Ron Maimon Apr 12 '12 at 2:07
- Good point and well worth mentioning, thanks! – Terry Bollinger Apr 13 '12 at 19:10

$\alpha$ particles are really $He^4_2$ nuclei, i.e. made up of 2 neutrons and 2 protons. As you can see in this graph, the $He^4_2$ nucleus has a high binding energy per nucleon, i.e. it is highly stable among all the neighboring nuclei. This makes it easy for them to sustain their existence and makes it easier for nuclei to emit them in radioactive decay, thus making the resultant nuclei much more stable than if a $He_2^3$ had escaped. -

- 2 Why not say it's an isospin and spin singlet? This is the reason for the abnormal stability, and the sudden drop at Li6. – Ron Maimon Apr 12 '12 at 7:17
- 1 Because "isospin" and "spin singlet" requires a good bit of background knowledge. This graph gets the point across much more effectively. – P O'Conbhui Apr 12 '12 at 16:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.950951874256134, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2007/05/31/the-opposite-category/?like=1&source=post_flair&_wpnonce=420e8d64d7
# The Unapologetic Mathematician ## The Opposite Category One of the most interesting general facts about categories is how ubiquitous the notion of duality is. Pretty much everything has a “mirror image”, and the mirror is the opposite category. Given a category $\mathcal{C}$, with objects ${\rm Ob}(\mathcal{C})$ and morphisms ${\rm Mor}(\mathcal{C})$, we can construct the “opposite category” $\mathcal{C}^{\rm op}$ simply enough. In fact, it has the exact same objects and morphisms as $\mathcal{C}$ does. The difference comes in how they relate to each other. Remember that we had two functions, $s(m)$ and $t(m)$ assigning the “source” and “target” objects to any arrow. To get the opposite category we just swap them. Given a morphism, its source in $\mathcal{C}^{\rm op}$ is its target in $\mathcal{C}$, and vice versa. Of course, now we have to swap the order of composition. If we have $f:A\rightarrow B$ and $g:B\rightarrow C$ in $\mathcal{C}$, then we get $f:B\rightarrow A$ and $g:C\rightarrow B$ in $\mathcal{C}^{\rm op}$. In $\mathcal{C}$ the composition $g\circ f:A\rightarrow C$ is defined, but in $\mathcal{C}^{\rm op}$ the composition $f\circ g:C\rightarrow A$ is defined. Most definitions we give will automatically come with a “dual” definition, which we get by reversing all the arrows like this. For example, monos and epis are dual notions, as are subobjects and quotient objects. Just write down one definition in terms of morphisms, reverse all the morphisms (and the order of composition), and you get the other. Theorems work this way too. If you dualize the hypotheses and the conclusions, then you can dualize each step of the proof to prove the dual theorem. I can prove that any injection is monic, so it follows immediately by duality that any surjection is epic. Many texts actually completely omit even the statements of dual notions and theorems once they define the opposite category, but I’ll try to be explicit about the duals (though of course I won’t need to give the proofs). Another place duality comes up is in defining “contravariant” functors. This is just a functor $F:\mathcal{C}^{\rm op}\rightarrow\mathcal{D}$. It sends each object of $\mathcal{C}$ to an object of $\mathcal{D}$, and sends a morphism $f:A\rightarrow B$ in $\mathcal{C}$ to a morphism $F(f):F(B)\rightarrow F(A)$ in $\mathcal{D}$. See how the direction of the image morphism flipped? Early on, contravariant and regular (“covariant”) functors were treated somewhat separately, but really they’re just the same thing once you take the opposite category into account. Sometimes, though, it’s easier to think in terms of contravariant functors rather than mentally flipping all the arrows. I’ll close with an example of a contravariant functor we’ve seen before. Consider a ring $R$ with unit and a left module $M$ over $R$. That is, $M$ is an object in the category $R-{\bf mod}$. We can construct the dual module $M^*$, which is now an object in the category ${\bf mod}-R$ of right $R$-modules. I say that this is a contravariant functor. We’ve specified how the dual module construction behaves on objects, but we need to see how it behaves on morphisms. This is what makes it functorial. So let’s say we have two left $R$-modules $M$ and $N$, and an $R$-module homomorphism $f:M\rightarrow N$. Since we want this to be a contravariant functor we need to find a morphism $f^*:N^*\rightarrow M^*$. But notice that $M^*=\hom_{R-{\bf mod}}(M,R)$, and similarly for $N$. 
Then we have the composition of $R$-module homomorphisms $\circ:\hom_{R-{\bf mod}}(N,R)\otimes\hom_{R-{\bf mod}}(M,N)\rightarrow\hom_{R-{\bf mod}}(M,R)$. If $\nu$ is a linear functional on $N$, then we get $\nu\circ f:M\rightarrow R$ as a linear functional on $M$. We can define $f^*(\nu)=\nu\circ f$.

Now, is this construction functorial? We have to check that it preserves identities and compositions. For identities it’s simple: $1_N^*(\nu)=\nu\circ1_N=\nu$, so every linear functional on $N$ gets sent back to itself. For compositions we have to be careful. The order has to switch around because this is a contravariant functor. We take $f:M\rightarrow N$ and $g:N\rightarrow L$ and check $(g\circ f)^*(\lambda)=\lambda\circ g\circ f=g^*(\lambda)\circ f=f^*(g^*(\lambda))=(f^*\circ g^*)(\lambda)$, as it should.

Posted by John Armstrong | Category theory

## 7 Comments »

1. [...] I should have mentioned this before, but usually dual notions are marked by the prefix “co-”. As an example, we have [...] Pingback by | May 31, 2007 | Reply

2. There’s something that disturbs me with dual categories. Consider the category of sets, with functions as arrows. What are arrows in the dual category? For instance, let f be the absolute value function, mapping from Z to N. What is f^op? The _same_ function, but with its arrow written backwards? Or a relation from N to Z? But in this case, which one is it? Indeed, to a relation from Z to N, one can associate many relations from N to Z… There is a “natural” one (obtained by reversing individual mappings), but why would f^op be this one? Which axiom of category theory states that? Thanks in advance if you can give me a hint. It would really help me understand category theory. Comment by ChrisJ | January 14, 2009 | Reply

3. Well, usually we don’t talk about “the opposite arrow”, because the arrows in the opposite category are exactly the same. Look at it this way. You’re talking about a certain $f\in\hom_\mathbf{Set}(\mathbb{Z},\mathbb{N})$. What does this mean? It means we have $f\in\mathrm{Mor}(\mathbf{Set})$ with $s(f)=\mathbb{Z}$ and $t(f)=\mathbb{N}$. Now when we consider the opposite category, $\mathrm{Mor}(\mathbf{Set}^\mathrm{op})=\mathrm{Mor}(\mathbf{Set})$. The collection of all morphisms is the same as in $\mathbf{Set}$. And so we still have $f\in\mathrm{Mor}(\mathbf{Set}^\mathrm{op})$. So what’s the difference? Now we have $s(f)=\mathbb{N}$ and $t(f)=\mathbb{Z}$. We just swap which set we consider to be the source and which we consider to be the target. Comment by | January 14, 2009 | Reply

4. Thanks for the answer. Do you mean that in Set^op, we have s(f) = N and t(f) = Z (that is, with arrow notation, f: N -> Z), BUT f is still the same function from Z to N (that is, with functional notation, f: Z -> N)? Comment by ChrisJ | January 14, 2009 | Reply

5. Yes, that’s it. It’s the exact same function (morphism), but we swap which set (object) we think of as its source and target. Comment by | January 14, 2009 | Reply

6. Okay, so I was confused by the similarity of the “arrow” and “functional” notations. Thanks a lot! Comment by ChrisJ | January 14, 2009 | Reply

7. I was wondering about the question of finding a category equivalent to Set^op, but with its own meaning for the morphisms.
From Johnstone’s Topos Theory, Set^op is equivalent to the category of complete atomic boolean algebras, abbreviated CABA, where the objects are the sets of Set and the morphisms are boolean homomorphisms that have the additional property that they preserve arbitrary meets and joins, called complete boolean homomorphisms. All this works because Set is a topos, and Set^op is therefore equivalent to the category of algebras over the Set’s powerset monad. I don’t know yet how the sub-structure of mappings in Set translates to describe the sub-structure of mappings in CABA, but it seems like an interesting approach for studying CABA. Comment by | August 22, 2011 | Reply
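The post's running example is the contravariant functor $M\mapsto M^*=\hom(M,R)$ acting by precomposition, and that shape has a direct Haskell shadow. The following is only an illustration of the same pattern, not of module theory; the class is hand-rolled to keep the sketch self-contained (GHC's base ships an equivalent `Data.Functor.Contravariant`), and the example names are made up.

```
-- Contravariant functors: the image of f : a -> b points "backwards".
class Contravariant f where
  contramap :: (a -> b) -> f b -> f a

-- Dual r a plays the role of hom(a, r): "functionals on a with values in r".
newtype Dual r a = Dual { runDual :: a -> r }

instance Contravariant (Dual r) where
  -- precomposition, mirroring f*(nu) = nu . f from the post
  contramap f (Dual nu) = Dual (nu . f)

-- A functional on pairs, pulled back along the diagonal Int -> (Int, Int).
sumBoth :: Dual Int (Int, Int)
sumBoth = Dual (\(x, y) -> x + y)

diag :: Int -> (Int, Int)
diag n = (n, n)

main :: IO ()
main = print (runDual (contramap diag sumBoth) 3)  -- 6
```

The identity and composition checks carried out in the post correspond exactly to the (here unchecked) laws `contramap id = id` and `contramap (g . f) = contramap f . contramap g`.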
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 63, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9426491856575012, "perplexity_flag": "head"}
http://www.haskell.org/haskellwiki/index.php?title=User:Michiexile/MATH198/Lecture_9&direction=prev&oldid=31624
# User:Michiexile/MATH198/Lecture 9

### From HaskellWiki

Revision as of 18:40, 17 November 2009 by Michiexile.

IMPORTANT NOTE: THESE NOTES ARE STILL UNDER DEVELOPMENT. PLEASE WAIT UNTIL AFTER THE LECTURE WITH HANDING ANYTHING IN, OR TREATING THE NOTES AS READY TO READ.

### 1 Recursion patterns

Meijer, Fokkinga & Paterson identified in the paper Functional programming with bananas, lenses, envelopes and barbed wire a number of generic patterns for recursive programming that they had observed, catalogued and systematized. The aim of that paper is to establish a number of rules for modifying and rewriting expressions involving these generic recursion patterns.

As it turns out, these patterns are instances of the same phenomenon we saw last lecture: the recursion comes from specifying a different algebra, and then taking the uniquely existing morphism induced by initiality (or, as we shall see, finality).

Before we go through the recursion patterns, we need to establish a few pieces of theoretical language, dualizing the Eilenberg-Moore algebra constructions from the last lecture.

#### 1.1 Coalgebras for endofunctors

Definition If $P: C\to C$ is an endofunctor, then a P-coalgebra on A is a morphism $a: A\to PA$. A morphism of coalgebras $f: a\to b$ is some $f: A\to B$ such that the diagram commutes, i.e. such that $Pf \circ a = b \circ f$.

Just as with algebras, we get a category of coalgebras. And the interesting objects here are the final coalgebras. Just as with algebras, we have

Lemma (Lambek) If $a: A\to PA$ is a final coalgebra, it is an isomorphism.

Finally, one thing that makes us care highly about these entities: in an appropriate category (such as $\omega$-CPO), initial algebras and final coalgebras coincide, with the correspondence given by inverting the algebra/coalgebra morphism. In Haskell this is not quite true (specifically, the final coalgebra for the lists functor gives us streams...).

Onwards to recursion schemes! We shall define a few specific morphisms we'll use repeatedly. This notation, introduced here, occurs all over the place in these corners of the literature, and is good to be aware of in general:

• If $a: TA\to A$ is an initial algebra for T, we denote $a = \mathrm{in}_A$.
• If $a: A\to TA$ is a final coalgebra for T, we denote $a = \mathrm{out}_A$.
• We write $\mu f$ for the fixed point operator

```
mu f = x where x = f x
```

We note that in the situation considered by MFP, initial algebras and final coalgebras coincide, and thus $\mathrm{in}_A$, $\mathrm{out}_A$ are the pair of mutually inverse isomorphisms induced by either the initial algebra- or the final coalgebra-structure.

#### 1.2 Catamorphisms

A catamorphism is the uniquely existing morphism from an initial algebra to a different algebra. We have to define maps down to the return value type for each of the constructors of the complex data type we're recursing over, and the catamorphism will deconstruct the structure (trees, lists, ...) and do a generalized fold over the structure at hand before returning the final value. The intuition is that for catamorphisms we start essentially structured, and dismantle the structure.

Example: the length function from last lecture. This is the catamorphism for the functor $P_A(X) = 1 + A\times X$ given by the maps

```
u :: Int
u = 0

m :: (a, Int) -> Int
m (a, n) = n + 1
```

MFP define the catamorphism by, supposing T is initial for the functor F:

```
cata :: (F a b -> b) -> T a -> b
cata phi = mu (\x -> phi . fmap x . outT)
```

We can reframe the example above as a catamorphism by observing that here,

```
data F a b = Nil | Cons a b deriving (Eq, Show)
type T a = [a]

instance Functor (F a) where
  fmap _ Nil        = Nil
  fmap f (Cons n a) = Cons n (f a)

outT :: T a -> F a (T a)
outT []     = Nil
outT (a:as) = Cons a as

lphi :: F a Int -> Int
lphi Nil        = 0
lphi (Cons a n) = n + 1

l = cata lphi
```

where we observe that mu has a global definition for everything we do and out is defined once we settle on the functor F and its initial algebra. Thus, the definition of phi really is the only place that the recursion data shows up.

#### 1.3 Anamorphisms

An anamorphism is the categorical dual to the catamorphism. It is the canonical morphism from a coalgebra to the final coalgebra for that endofunctor. Here, we start unstructured, and erect a structure, induced by the coalgebra structures involved.

Example: we can write a recursive function

```
first :: Int -> [Int]
first 1 = [1]
first n = n : first (n - 1)
```

This is an anamorphism from the coalgebra for $P_{\mathbb N}(X) = 1 + \mathbb N\times X$ on $\mathbb N$ generated by the two maps

```
c 0 = Left ()
c n = Right (n, n-1)
```

and we observe that we can chase through the diagram to conclude that therefore

```
f 0 = []
f n = n : f (n - 1)
```

which is exactly the recursion we wrote to begin with. MFP define the anamorphism by a fixpoint as well, namely:

```
ana :: (b -> F a b) -> b -> T a
ana psi = mu (\x -> inT . fmap x . psi)
```

We can, again, recast our illustration above into a structural anamorphism, by:

```
-- Reuse mu, F, T from above
inT :: F a (T a) -> T a
inT Nil         = []
inT (Cons a as) = a:as

fpsi :: Int -> F Int Int
fpsi 0 = Nil
fpsi n = Cons n (n-1)
```

Again, we can note that the implementation of fpsi here is exactly the c above, and the resulting function will - as we can verify by compiling and running - give us the same kind of reversed list of the n first integers as the first function above would.

#### 1.4 Hylomorphisms

The hylomorphisms capture one of the two possible compositions of anamorphisms and catamorphisms. Parametrized over an algebra $\phi: T A\to A$ and a coalgebra $\psi: B \to T B$, the hylomorphism is a recursion pattern that computes a value in A from a value in B by generating some sort of intermediate structure and then collapsing it again. It is thus the composition of the uniquely existing morphism from a coalgebra to the final coalgebra for an endofunctor, followed by the uniquely existing morphism from the initial algebra to some other algebra.

MFP define it, again, as a fix point:

```
hylo :: (F a b2 -> b2) -> (b1 -> F a b1) -> b1 -> b2
hylo phi psi = mu (\x -> phi . fmap x . psi)
```

First off, we can observe that by picking one or the other of $\mathrm{in}_A$, $\mathrm{out}_A$ as a parameter, we can recover both the anamorphisms and the catamorphisms as hylomorphisms.

As an example, we'll compute the factorial function using a hylomorphism:

```
phi :: F Int Int -> Int
phi Nil        = 1
phi (Cons n m) = n*m

psi :: Int -> F Int Int
psi 0 = Nil
psi n = Cons n (n-1)

factorial = hylo phi psi
```
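The snippets above are spread over several fragments; the following sketch assembles them into one self-contained file that can be loaded into GHC and run directly. It uses the same definitions as the notes, with the example algebra and coalgebras given the ad hoc names lphi, countdown and facPhi (countdown is the fpsi/psi coalgebra shared by the anamorphism and factorial examples). The expected outputs in the comments are just what the discussion above predicts.

```
-- Self-contained assembly of the list-functor examples from this lecture.
data F a b = Nil | Cons a b deriving (Eq, Show)
type T a = [a]

instance Functor (F a) where
  fmap _ Nil        = Nil
  fmap f (Cons a b) = Cons a (f b)

-- the fixed point operator mu from above
mu :: (a -> a) -> a
mu f = x where x = f x

outT :: T a -> F a (T a)
outT []     = Nil
outT (a:as) = Cons a as

inT :: F a (T a) -> T a
inT Nil         = []
inT (Cons a as) = a : as

cata :: (F a b -> b) -> T a -> b
cata phi = mu (\x -> phi . fmap x . outT)

ana :: (b -> F a b) -> b -> T a
ana psi = mu (\x -> inT . fmap x . psi)

hylo :: (F a c -> c) -> (b -> F a b) -> b -> c
hylo phi psi = mu (\x -> phi . fmap x . psi)

-- the length algebra, and the countdown coalgebra used twice above
lphi :: F a Int -> Int
lphi Nil        = 0
lphi (Cons _ n) = n + 1

countdown :: Int -> F Int Int
countdown 0 = Nil
countdown n = Cons n (n - 1)

facPhi :: F Int Int -> Int
facPhi Nil        = 1
facPhi (Cons n m) = n * m

main :: IO ()
main = do
  print (cata lphi "scheme")       -- 6: length as a catamorphism
  print (ana countdown 5)          -- [5,4,3,2,1]: the anamorphism example
  print (hylo facPhi countdown 5)  -- 120: factorial as a hylomorphism
```

Nothing here goes beyond what the notes already define; the point is that the choice of phi/psi really is the only recursion-specific data, exactly as claimed.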
#### 1.5 Metamorphisms

The metamorphism is the other composition of an anamorphism with a catamorphism. It takes some structure, deconstructs it, and then reconstructs a new structure from it. As a recursion pattern, it's kinda boring - it'll take an interesting structure, deconstruct it into a scalar value, and then reconstruct some structure from that scalar. As such, it won't even capture the richness of $\hom(Fx,Gy)$, since any morphism expressed as a metamorphism will factor through a map $x\to y$.

#### 1.6 Paramorphisms

Paramorphisms were discussed in the MFP paper as a way to extend the catamorphisms so that the operating function can access its arguments in computation as well as in recursion. We gave the factorial above as a hylomorphism instead of a catamorphism precisely because no simple enough catamorphic structure exists.

#### 1.7 Apomorphisms

The apomorphism is the dual of the paramorphism - it does with retention of values along the way what anamorphisms do compared to catamorphisms.

### 2 Further reading

• Erik Meijer, Maarten Fokkinga, Ross Paterson: Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire [1]
• L. Augusteijn: Sorting morphisms [2]

### 3 Further properties of adjunctions

#### 3.1 RAPL

Proposition If F is a right adjoint, that is, if F has a left adjoint, then F preserves limits in the sense that $F(\lim_{\leftarrow} A_i) = \lim_{\leftarrow} F(A_i)$.

Example: $(\lim_{\leftarrow_i} A_i)\times X = \lim_{\leftarrow_i} (A_i\times X)$.

We can use this to prove that things cannot be adjoints - since all right adjoints preserve limits, if a functor G doesn't preserve limits, then it doesn't have a left adjoint. Similarly, and dually, left adjoints preserve colimits. Thus if a functor doesn't preserve colimits, it cannot be a left adjoint, thus cannot have a right adjoint.

The proof of these statements builds on the Yoneda lemma:

Lemma If C is a locally small category (i.e. all hom-sets are sets), then for any $c\in C_0$ and any functor $F: C^{op}\to Sets$ there is an isomorphism $\hom_{Sets^{C^{op}}}(yc, F) = Fc$, where we define $yc = d\mapsto \hom_C(d,c) : C^{op}\to Sets$.

The Yoneda lemma has one important corollary:

Corollary If $yA = yB$ then $A = B$.

Which, in turn, has a number of important corollaries:

Corollary $(A^B)^C = A^{B\times C}$

Corollary Adjoints are unique up to isomorphism - in particular, if $F: C\to D$ is a functor with right adjoints $U, V: D\to C$, then $U = V$.

Proof $\hom_C(c,UD) = \hom_D(Fc,D) = \hom_C(c,VD)$, and thus by the corollary to the Yoneda lemma, $UD = VD$, natural in D.

#### 3.2 Functors that are adjoints

• The functor $X\mapsto X\times A$ has right adjoint $Y\mapsto Y^A$. The universal mapping property of the exponentials follows from the adjointness property.
• The functor $\Delta: C\to C\times C, c\mapsto (c,c)$ has a left adjoint given by the coproduct $(X,Y)\mapsto X + Y$ and right adjoint the product $(X,Y)\mapsto X\times Y$.
• More generally, the functor $C\to C^J$ that takes $c$ to the constant functor $\mathrm{const}_c(j) = c$, $\mathrm{const}_c(f) = 1_c$, has left and right adjoints given by colimits and limits: $\lim_\rightarrow \dashv \Delta \dashv \lim_\leftarrow$
• Pointed rings are pairs $(R, r\in R)$ of rings and one element singled out for attention. Homomorphisms of pointed rings need to take the distinguished point to the distinguished point. There is an obvious forgetful functor $U: Rings_* \to Rings$, and this has a left adjoint - a free ring functor that adjoins a new indeterminate $R\mapsto (R[x], x)$. This gives a formal definition of what we mean by formal polynomial expressions etc.
• Given sets A,B, we can consider the powersets P(A),P(B) containing, as elements, all subsets of A,B respectively. Suppose $f:A\to B$ is a function, then $f^{-1}: P(B)\to P(A)$ takes subsets of B to subsets of A. Viewing P(A) and P(B) as partially ordered sets by the inclusion operations, and then as categories induced by the partial order, $f^{-1}$ turns into a functor between partial orders. And it turns out $f^{-1}$ has a left adjoint given by the operation $\mathrm{im}(f)$ taking a subset to the set of images under the function f. And it has a right adjoint $f_*(U) = \{b\in B: f^{-1}(b)\subseteq U\}$.
• We can introduce a categorical structure to logic. We let L be a formal language, say of predicate logic. Then for any list $x = x_1,x_2,\ldots,x_n$ of variables, we have a preorder Form(x) of formulas with no free variables not occurring in x. The preorder on Form(x) comes from the entailment operation - $f \vdash g$ if in every interpretation of the language, $f \Rightarrow g$. We can build an operation on these preorders - a functor on the underlying categories - by adjoining a single new variable: $*: Form(x) \to Form(x, y)$, sending each formula to itself. Obviously, if $f \vdash g$ with x the source of free variables, and we introduce a new allowable free variable but don't actually change the formulas, the entailment stays the same. It turns out that there is a right adjoint to * given by $f\mapsto \forall y. f$. And a left adjoint to * given by $f\mapsto \exists y. f$. Adjointness properties give us classical deduction rules from logic.
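The first bullet in 3.2 has a familiar Haskell witness that may help make the hom-set bijection concrete: the adjunction $\hom(X\times A, Y)\cong\hom(X, Y^A)$ is realized by curry and uncurry from the Prelude. This is a standard observation rather than part of the lecture notes; a minimal sketch:

```
-- The product/exponential adjunction, witnessed by curry and uncurry.
toExp :: ((x, a) -> y) -> (x -> (a -> y))    -- hom(X x A, Y) -> hom(X, Y^A)
toExp = curry

fromExp :: (x -> (a -> y)) -> ((x, a) -> y)  -- hom(X, Y^A) -> hom(X x A, Y)
fromExp = uncurry

main :: IO ()
main = do
  let f :: (Int, Int) -> Int
      f (x, a) = x + a
  print (toExp f 2 3)               -- 5
  print (fromExp (toExp f) (2, 3))  -- 5: round-tripping through the bijection
```

Naturality of this bijection is precisely the universal mapping property of the exponential mentioned above.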
### 4 Homework

1. Write a fold for the data type `data T a = L a | B a a | C a a a` and demonstrate how this can be written as a catamorphism by giving the algebra it maps to.
2. Write the fibonacci function as a hylomorphism.
3. Write the Towers of Hanoi as a hylomorphism. You'll probably want to use binary trees as the intermediate data structure.
4. Write a prime numbers generator as an anamorphism.
5. * The integers have a partial order induced by the divisibility relation. We can thus take any integer and arrange all its divisors in a tree by having an edge $n \to d$ if d | n and d doesn't divide any other proper divisor of n. Write an anamorphic function that will generate this tree for a given starting integer. Demonstrate how this function is an anamorphism by giving the coalgebra it maps from. Hint: You will be helped by having a function to generate a list of all primes. One suggestion is:

```
primes :: [Integer]
primes = sieve [2..]
  where sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p > 0]
```

Hint: A good data structure to use is the following, with the expected output of running the algorithm on 60:

```
data Tree = Leaf Integer | Node Integer [Tree]

divisionTree 60 =
  Node 60 [ Node 30 [ Node 15 [ Leaf 5, Leaf 3 ]
                    , Node 10 [ Leaf 5, Leaf 2 ]
                    , Node 6  [ Leaf 3, Leaf 2 ] ]
          , Node 20 [ Node 10 [ Leaf 5, Leaf 2 ]
                    , Node 4  [ Leaf 2 ] ]
          , Node 12 [ Node 6  [ Leaf 3, Leaf 2 ]
                    , Node 4  [ Leaf 2 ] ] ]
```
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 40, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8919119238853455, "perplexity_flag": "middle"}
http://terrytao.wordpress.com/2009/02/16/a-sharp-inverse-littlewood-offord-theorem/
What’s new

Updates on my research and expository papers, discussion of open problems, and other maths-related topics. By Terence Tao

# A sharp inverse Littlewood-Offord theorem

16 February, 2009 in math.CO, math.PR, paper | Tags: generalised arithmetic progressions, Littlewood-Offord theorems, random walks, Van Vu

Van Vu and I have just uploaded to the arXiv our preprint “A sharp inverse Littlewood-Offord theorem“, which we have submitted to Random Structures and Algorithms.  This paper gives a solution to the (inverse) Littlewood-Offord problem of understanding when random walks are concentrated in the case when the concentration is of polynomial size in the length $n$ of the walk; our description is sharp up to epsilon powers of $n$.  The theory of inverse Littlewood-Offord problems and related topics has been of importance in recent developments in the spectral theory of discrete random matrices (e.g. a “robust” variant of these theorems was crucial in our work on the circular law).

For simplicity I will restrict attention to the Bernoulli random walk.  Given $n$ real numbers $v_1,\ldots,v_n$, one can form the random variable $S := \epsilon_1 v_1 + \ldots + \epsilon_n v_n$ where $\epsilon_1,\ldots,\epsilon_n \in \{-1,+1\}$ are iid random signs (with either sign +1, -1 chosen with probability 1/2).  This is a discrete random variable which typically takes $2^n$ values.  However, if there are various arithmetic relations between the step sizes $v_1,\ldots,v_n$, then many of the $2^n$ possible sums collide, and certain values may then arise with much higher probability.  To measure this, define the concentration probability $p(v_1,\ldots,v_n)$ by the formula

$p(v_1,\ldots,v_n) = \sup_x {\Bbb P}(S=x)$.

Intuitively, this probability measures the amount of additive structure present between the $v_1,\ldots,v_n$.  There are two (opposing) problems in the subject:

• (Forward Littlewood-Offord problem) Given some structural assumptions on $v_1,\ldots,v_n$, what bounds can one place on $p(v_1,\ldots,v_n)$?
• (Inverse Littlewood-Offord problem) Given some bounds on $p(v_1,\ldots,v_n)$, what structural assumptions can one then conclude about $v_1,\ldots,v_n$?

Ideally one would like answers to both of these problems which come close to inverting each other, and this is the guiding motivation for our paper.

One of the first forward Littlewood-Offord theorems was by Erdős, who showed

Theorem 1. If at least k of the $v_1,\ldots,v_n$ are non-zero, then $p(v_1,\ldots,v_n) \leq O(k^{-1/2})$.

In fact the sharp bound was computed by Erdős as $\binom{k}{\lfloor k/2\rfloor}/2^k$; the proof relies, incidentally, on Sperner’s theorem, which is of relevance to the ongoing polymath1 project.  (An earlier result of Littlewood and Offord gave a weaker bound of $O( k^{-1/2} \log k )$.)  Taking contrapositives, we obtain an inverse Littlewood-Offord theorem:

Corollary 1. If $p(v_1,\ldots,v_n) \geq p$, then at most $O(1/p^2)$ of the $v_1,\ldots,v_n$ are non-zero.

The bound is sharp in the sense that it is attained in the case when all the $v_i$ are equal.  However, the theorem is far from sharp in other cases; if k of the $v_1,\ldots,v_n$ are non-zero, then $p(v_1,\ldots,v_n)$ can be as small as $2^{-k}$.

Another forward Littlewood-Offord theorem in a similar spirit is by Sárközy and Szemerédi, who showed

Theorem 2. If at least k of the $v_1,\ldots,v_n$ are distinct, then $p(v_1,\ldots,v_n) \leq O(k^{-3/2})$.

(The slightly weaker bound of $O( k^{-3/2} \log k )$ was obtained by Erdős and Moser.)
Again, it has a contrapositive:

Corollary 2. If $p(v_1,\ldots,v_n) \geq p$, then the $v_1,\ldots,v_n$ take on at most $O( p^{-2/3} )$ distinct values.

Again, Theorem 2 is optimal in the sense that the bound is attained when $v_1,\ldots,v_n$ lie in an arithmetic progression (e.g. $v_i = i$), but is not always sharp otherwise.  There are many further forward and inverse Littlewood-Offord results; see our paper for a discussion of some of these.

In recent years it has become clearer that the concentration probability is connected to the extent to which the $v_1,\ldots,v_n$ lie in a generalised arithmetic progression (GAP)

$P = \{ n_1 w_1 + \ldots + n_d w_d: -N_i \leq n_i \leq N_i \hbox{ for } i=1,\ldots,d \}$;

the quantity d is known as the rank of the GAP.  For instance, an easy computation gives

Theorem 3. (Forward Littlewood-Offord)  If $v_1,\ldots,v_n$ lie in a GAP P of rank d, then $p(v_1,\ldots,v_n) \gg_d n^{-d/2} |P|^{-1}$.

Intuitively, a random sum of n vectors in P should lie in a dilate $O(\sqrt{n}) P$ of P, which has size about $n^{d/2} |P|$, which leads to the stated concentration result.

One of our main results is a sort of converse to the above claim, in the regime where the concentration probability is of polynomial size:

Theorem 4. (Inverse Littlewood-Offord)  If $p(v_1,\ldots,v_n) \geq n^{-A}$, and $\varepsilon > 0$, then there exists a GAP P of rank d at most 2A that contains all but $O_{A,\varepsilon}(n^{1-\varepsilon})$ of the $v_1,\ldots,v_n$, and such that $p(v_1,\ldots,v_n) \ll_{A,\varepsilon} n^{-d/2+\varepsilon} |P|^{-1}$.

Thus, Theorem 3 is sharp up to epsilon losses, and up to throwing away a small number of vectors.  One can use this theorem to deduce several earlier theorems, such as Theorem 1 and Theorem 2, except for some epsilon losses.  (It is certainly of interest to try to see how one can remove these epsilon losses; I know this problem is currently being looked at.)

In an earlier paper we had proven a weaker version of Theorem 4, in which the rank d was only bounded by $O_A(1)$ instead of the optimal value of 2A, and the nearly-sharp $n^{-d/2+\varepsilon}$ factor was replaced by $n^{-O_A(1)}$.

Our methods are purely combinatorial, based on “growing” the GAP P by a greedy algorithm.  Roughly speaking, the idea is as follows:

1. Start with the trivial progression $Q = \{0\}$.  Also, pick an integer k a little bit smaller than $\sqrt{n}$.
2. Call an element x “bad” if adding the progression $\{ -kx, \ldots, -x,0,x,\ldots, kx\}$ to Q significantly increases the size of Q, and “good” otherwise.
3. If only a few of the $v_1,\ldots,v_n$ are bad, STOP.
4. Otherwise, if there are a lot of bad $v_i$, there is a way to use Hölder’s inequality to find a bad x with the property that S is much more likely to fall into a translate of $Q + \{-kx,\ldots,kx\}$ than into Q.    Replace Q with this larger GAP and return to step 2.

This algorithm turns out to terminate in a bounded number of steps if the concentration probability of S was initially of polynomial size, and will trap most of the $v_1,\ldots,v_n$ in the set of good points relating to Q, which turns out to essentially be a GAP P of size about $n^{d/2}$ times bigger than that of Q, where d is the rank of Q.
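As a quick illustration of the definitions (this is a throwaway sketch, not from the paper): for small $n$ the concentration probability $p(v_1,\ldots,v_n)$ can be computed by brute force over all $2^n$ sign patterns, which makes the contrast between highly structured and dissociated step sizes visible.

```
import Data.List (group, sort)
import Data.Ratio ((%))

-- All 2^n signed sums e_1 v_1 + ... + e_n v_n with e_i in {-1,+1}.
signedSums :: [Integer] -> [Integer]
signedSums = foldr (\v sums -> [ e * v + s | e <- [-1, 1], s <- sums ]) [0]

-- Concentration probability p(v_1,...,v_n) = sup_x P(S = x).
concentration :: [Integer] -> Rational
concentration vs = maxCount % (2 ^ length vs)
  where maxCount = maximum (map (toInteger . length) (group (sort (signedSums vs))))

main :: IO ()
main = do
  print (concentration [1, 1, 1, 1])  -- 3 % 8: all steps equal, heavy collisions
  print (concentration [1, 2, 4, 8])  -- 1 % 16: no collisions at all
```

With all steps equal the largest atom is $\binom{4}{2}/2^4 = 3/8$, matching the Erdős bound, while for dissociated steps such as powers of two every signed sum is distinct.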
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 56, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9144031405448914, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/119142/tubular-neighborhoods-of-chains/119234
## Tubular neighborhoods of chains

A positive answer to the following question would be very helpful in understanding the evaluation of differential cohomology classes on chains.

Let $M$ be a smooth manifold and $c$ be a smooth $p$-chain in $M$, i.e. a finite linear combination of smoothly parametrized singular $p$-simplices. Let $|c|\subseteq M$ denote the support of the chain, defined as the union of the simplices. Here is the question:

Does there exist an open subset $U\subseteq M$ such that $|c|\subseteq U$ and $H^{q}(U,\mathbb{Z})=0$ for all $q\ge p+1$ (real coefficients would be sufficient in the application)?

- Is a "smoothly parametrized singular $p$ simplex" the same as a smooth map $\Delta^p\to M$? – Mark Grant Jan 17 at 8:07
- This is exactly what I have in mind. – ubunke Jan 17 at 9:50
- @Ulrich: is it enough for your purposes to have this for a smooth chain $c'$ that is homologous to your given chain $c$? – John Klein Jan 18 at 3:46
- I need the property for the given chain. – ubunke Jan 18 at 22:17
- I might have totally misunderstood your question, but doesn't Fact 2.1 in the paper by Simons and Sullivan (arxiv.org/abs/math/0701077) prove precisely what you want? – Dmitri Pavlov Jan 25 at 8:38

## 5 Answers

Here is an excerpt from a paper by Simons and Sullivan (Axiomatic Characterization of Ordinary Differential Cohomology), which seems to answer the question:

Fact 2.1: Let K in M denote the compact image of a smooth singular k-chain in M. Then every neighborhood of K contains a smaller neighborhood whose integral cohomology vanishes above k. (We call these k-good neighborhoods.) -

Here's an approach which might work (I'm not sure about the correctness of this.)

1) Assume $M$ is closed. Choose a triangulation $T$ of $M$. If the support of $c$ is contained inside the $p$-skeleton of this triangulation, then we take $U$ to be a small regular neighborhood of the $p$-skeleton. This will do the job in this case.

2) If not, let's consider the dual triangulation $T^\ast$. We can ask whether or not $c$ meets the $(n-p-1)$-skeleton of $T^\ast$. If it doesn't, we can let $U$ be the effect of deleting the $(n-p-1)$-skeleton of $T^\ast$ from $M$. Then it seems to me that $U$ has the correct property in this case.

3) More generally, we can ask whether there exists a triangulation $T$ satisfying (2). It seems to me that it should be possible to slightly modify $T$, say by general position, so that (2) holds.

Remark: I originally conceived of a version of this using Morse theory, but then realized that I had to retract it because I got confused. Perhaps it's possible to find a Morse function $f\colon M \to \Bbb R$ such that $c$ misses the $(n-p-1)$-skeleton of the handlebody defined by $-f$? If so, we can define $U$ to be $M$ with this skeleton removed. -

As the question was asked the answer seems to be: no. Consider $\mathbb{R}^n$ and a sequence of non-zero points converging to zero. Choose around each point in this sequence a small ball which does not meet the other points in the sequence and remove these closed balls. This is our manifold $M$. Now consider the 0-cycle given by the point 0. All neighbourhoods of 0 contain infinitely many holes and so $H^{n-1}(M)$ is non-zero.
If you ask the question, whether a given homology class has a representative with this property I agree with John that the answer should be yes. But I have not thought about a detailed argument. Matthias Kreck - Matthias, $M$ in your example is not a manifold. There is no chart at zero. – ubunke Jan 18 at 22:10 Have you looked at the deformation theorem for rectifiable currents? This essentially states that any integral current $S$ can be approximated by a polyhedral current situated not very far from $S$. Your smooth chain defines an integral current. A good place to look for more details is Frank Morgan's Geometric Measure theory. A Beginner's Guide, Section 5.1. I believe that the strategy used in the proof of the deformation theorem could be useful for your problem too, or at least the weaker version suggested by Matthias Kreck. More precisely, the deformation theorem indicates that your chain $c$ can be approximated (in various norms on the space of currents) by a nice polyhedral chain $c'$, whose support can be chosen in an arbitrarily small neighborhood of the support of $c$. In particular $c'$ is homologous to $c$, if $c$ is closed. - I tried similar ideas. Of course I could approximate my $c$ by a very regular chain or current $c^{\prime}$ which then has a neighbourhood as required. But this might be very small and may not contain the original $c$. – ubunke Jan 18 at 22:14 The homology analogue of this question is answered affirmatively by Proposition 2.1 in math/0605535. The proof provides $U$ with $H_p(U;{\mathbb Z})$ torsion-free and so implies the affirmative answer to the original question by the Universal Coefficient Theorem. The proof of Proposition 2.1 is a version of the argument John Klein described, phrased in terms of transverse triangulations of Lemma 2.3 (which is proved in math/1012.3979, after Matthias Kreck had pointed out that this had not been in the literature). -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 54, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9353458285331726, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/57631/list
## Return to Question

3 added 532 characters in body

A naive question. Let $S$ be a set and let $[0,1]^S$ be the set of functions from $S$ to the closed interval $[0,1]$. Suppose given some function $P \colon [0,1]^S \to [0,1]$ satisfying the following three conditions:

1. If $f \geq g$ everywhere on $S$, then $P(f) \geq P(g)$;
2. $P(\min(f,g)) \geq \min(P(f),P(g))$;
3. $P(1-f) = 1 - P(f)$.

This is supposed to model a situation in which each point in $S$ has a "degree of belief" in some proposition, which yields a function $f$ in $[0,1]^S$; then $P$ is a process which takes all these degrees of belief and aggregates them into a "consensus" degree of belief $P(f)$.

Of course, this is meant to mimic the definition of an ultrafilter, which I think is given by the above definition with [0,1] replaced by {0,1}. Certainly you have "principal" $P$, which just evaluate $f$ at some point $s$ of $S$. I suppose you could get other $P$ by sending $f$ to its limit with respect to some non-principal ultrafilter. Is that it?

Added: Actually, the second condition above is perhaps too strong. I don't see an option for "hide question until I've thought about a bit more about what the best version of the question is" so I will just append this remark.

Added: Thanks, guys, for all the great answers. I now think the formulation of (2) was misguided (at least if the definition is meant to model consensus about degrees of belief) and I don't know what the "right" formulation is. One might well, for instance, want P to behave well when f and g refer to independent propositions; that would ask that P(fg) = P(f)P(g), which in the case of {0,1}-valued functions again agrees with the ultrafilter definition. This rules out averages but leaves in evaluation at ultrafilters.

2 texified

A naive question. Let $S$ be a set and let $[0,1]^S$ be the set of functions from $S$ to the closed interval $[0,1]$. Suppose given some function $P \colon [0,1]^S \to [0,1]$ satisfying the following three conditions:

1. If $f \geq g$ everywhere on $S$, then $P(f) \geq P(g)$;
2. $P(\min(f,g)) \geq \min(P(f),P(g))$;
3. $P(1-f) = 1 - P(f)$.

This is supposed to model a situation in which each point in $S$ has a "degree of belief" in some proposition, which yields a function $f$ in $[0,1]^S$; then $P$ is a process which takes all these degrees of belief and aggregates them into a "consensus" degree of belief $P(f)$. Of course, this is meant to mimic the definition of an ultrafilter, which I think is given by the above definition with [0,1] replaced by {0,1}. Certainly you have "principal" $P$, which just evaluate $f$ at some point $s$ of $S$. I suppose you could get other $P$ by sending $f$ to its limit with respect to some non-principal ultrafilter. Is that it?

Added: Actually, the second condition above is perhaps too strong. I don't see an option for "hide question until I've thought about a bit more about what the best version of the question is" so I will just append this remark.

1

# "Probabilistic ultrafilters?"

A naive question. Let S be a set and let [0,1]^S the set of functions from S to the closed interval [0,1]. Suppose given some function P: [0,1]^S -> [0,1] satisfying the following three conditions:

1. If f >= g everywhere on S, then P(f) >= P(g);
2. P(min(f,g)) >= min(P(f),P(g));
3. P(1-f) = 1 - P(f).
This is supposed to model a situation each point in S has a "degree of belief" in some proposition, which yields a function f in [0,1]^S; then P is a process which takes all these degrees of belief and aggregates them into a "consensus" degree of belief P(f). Of course, this is meant to mimic the definition of an ultrafilter, which I think is given by the above definition with [0,1] replaced by {0,1}. Certainly you have "principal" P, which just evaluate f at some point s of S. I suppose you could get other P by sending f to its limit with respect to some non-principal ultrafilter. Is that it? Added: Actually, the second condition above is perhaps too strong. I don't see an option for "hide question until I've thought about a bit more about what the best version of the question is" so I will just append this remark.
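A toy computation (not part of the question) may make condition (2) feel less innocuous: if $S$ is finite and an aggregator is modelled as a function from belief vectors to $[0,1]$, then evaluation at a fixed point satisfies (1)-(3), while the plain average, the most tempting "consensus" rule, already fails (2) on a two-point set. All names below are ad hoc.

```
-- Beliefs on a finite S are lists of values in [0,1]; an aggregator maps them to [0,1].
type Belief     = [Double]
type Aggregator = Belief -> Double

-- "Principal" aggregator: evaluate at a fixed point s, as in the question.
principal :: Int -> Aggregator
principal s f = f !! s

-- The plain average, for comparison.
average :: Aggregator
average f = sum f / fromIntegral (length f)

-- Condition (2): P(min(f,g)) >= min(P f, P g), tested on one concrete pair.
cond2 :: Aggregator -> Belief -> Belief -> Bool
cond2 p f g = p (zipWith min f g) >= min (p f) (p g)

main :: IO ()
main = do
  let f = [1, 0]  -- point 0 believes fully, point 1 not at all
      g = [0, 1]  -- the reverse
  print (cond2 (principal 0) f g)  -- True: evaluation at a point always satisfies (2)
  print (cond2 average f g)        -- False: average of min(f,g) is 0, but both averages are 1/2
```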
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 42, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9623385071754456, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/10496/what-can-the-d-wave-quantum-computer-do/14604
# What can the D-Wave quantum computer do?

The media are reporting the commercially sold 128-bit quantum computer from D-Wave, which of course sounds amazing. The gadget is described as something capable of doing quantum annealing http://en.wikipedia.org/wiki/Quantum_annealing which looks less convincing.

I want to ask you what classes of problems the D-Wave computer can actually solve or perform. It can't run Shor's algorithm on 128 qubits, can it? -

- 1 – Mark Betnel May 27 '11 at 19:13
- this means this is the end of all existing public key cryptography as forecasted? – lurscher May 27 '11 at 19:32
- 5 – Lagerbaer May 27 '11 at 20:26
- Oh, and it's definitely not a universal quantum computer, i.e. it won't run Shor's algorithm afaik – Lagerbaer May 27 '11 at 20:27
- 1 – Alex 'qubeat' May 28 '11 at 12:50

## 2 Answers

The D-Wave machine stirred up quite an amount of controversy in the community when it was first announced. The machine basically attempts to solve an NP-complete optimization problem (MAX-2SAT) by encoding it as the ground state of a Hamiltonian, and tries to reach this ground state by moving adiabatically to it from the ground state of an efficiently coolable Hamiltonian.

In general, the adiabatic algorithm is not known to be able to find ground states efficiently, as the proximity of low-lying levels to the ground state means that the transition between Hamiltonians has to be performed slowly, and the speed at which this can occur is governed by the gap between the ground state and the lowest excited levels. It is commonly believed within the community, but not proved, that no quantum algorithm can efficiently solve NP-complete problems.

In general the ground state of a Hamiltonian can be used to encode a wider variety of problems than NP (known as QMA-complete problems), and so the decision to focus on NP optimization problems has led to restrictions which prevent the device from being used for general purpose quantum computing (even if noise was not an issue). Thus you can't run Shor's algorithm on the device. Further, you *can* factor any number that you could fit on a 128-qubit device by classical means. The general number field sieve puts 128 bits within reach of modern personal computers.

Noise is a real issue with D-Wave's device, and although there have been a number of technical papers from them playing down the issue and trying to demonstrate quantum effects, the coherence times for the individual qubits are much shorter than the time scale for the algorithm. Thus the common view within the community seems to be that this is basically an expensive classical special purpose computer.

There is an interesting subtlety as regards noise: If you add noise to the adiabatic algorithm, it degrades gracefully into one of the best classical algorithms for the same problem. Thus you can obtain the same result either way, and the only difference is in the asymptotics for large systems (which are obviously not observable). Thus even if they produce a valid answer for every problem you throw at such a device, this is not enough information to determine if you are actually performing quantum computation.

Let me add that the adiabatic model can encode universal quantum computation; however, the limitations of D-Wave's implementation mean that this specific machine cannot. -

- 2 AFAIK Factoring is not NP complete – Lagerbaer Sep 12 '11 at 22:24
- 3 @Lagerbaer: Factoring is not known to be NP-complete, but this can't be proven without first proving that P$\neq$NP. – Joe Fitzsimons Sep 13 '11 at 3:18
- 2 @Jus12: Solving an NP-complete problem would mean that it could solve any problem in NP (drop the -complete), and as factoring is in NP, you are correct that it could solve it. However, you will notice that nowhere in my answer do I say it could solve NP-complete problems in polynomial time, and D-Wave has backed away from making such claims. Thus, even if it works as advertised, there is no reason to believe it would be good at factoring. There is a generic polynomial speed-up from quantum mechanics, and that is what they are counting on, even for exponential time algorithms. – Joe Fitzsimons Sep 13 '11 at 3:23
- 1 @Joe Fitzsimons: true. My bad. I meant NP. Though, the first part of the claim is not entirely wrong. NP complete is part of NP. – Jus12 Sep 13 '11 at 4:26
- 1 @Jus12: Yes, I know. I just thought I should point out that the statement needed is slightly stronger. – Joe Fitzsimons Sep 13 '11 at 4:30

It mentions: finding a global minimum, where the process of jumping from one minimum to another is handled by quantum tunneling. I have the feeling that in order to just get an idea of what it can practically do, one could look at the mentioned example of spin glasses. In other words, the spin coupling physics close to the actual hardware implementation itself http://en.wikipedia.org/wiki/Spin_glass.

Relevant may be the work of Giorgio Parisi (yes, the one of the Altarelli–Parisi parton evolution equations) and his co-workers Mezard and Virasoro. See the text of the Boltzmann 1992 Medal:

Parisi's deepest contribution concerns the solution of the Sherrington-Kirkpatrick mean field model for spin glasses. After the crisis caused by the unacceptable properties of the simple solutions, which used the "replication trick", Parisi proposed his replica symmetry breaking solution, which seems to be exact, although much more complex than anticipated. Later, Parisi and co-workers Mezard and Virasoro clarified greatly the physical meaning of the mysterious mathematics involved in this scheme, in terms of the probability distribution of overlaps and the ultrametric structure of the configuration space. This achievement forms one of the most important breakthroughs in the history of disordered systems. This discovery opened the doors to vast areas of application. e.g., in optimization problems and in neural network theories.

http://en.wikipedia.org/wiki/Giorgio_Parisi#Awards -
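To make the optimization problem from the first answer concrete: classically it is just a discrete minimization of an Ising-type energy over spin assignments. The sketch below is emphatically not D-Wave's algorithm or API, only the problem statement made executable by brute force; the couplings and fields in the example are made up for illustration.

```
import Data.List (minimumBy)
import Data.Ord (comparing)

-- Energy of a spin assignment s (each s_i in {-1,+1}) for an Ising-type cost
-- E(s) = sum_{(i,j)} J_ij s_i s_j + sum_i h_i s_i.  Finding the minimizing s is the
-- ground-state / MAX-2SAT-style optimization discussed in the first answer.
type Coupling = ((Int, Int), Double)
type Field    = (Int, Double)

energy :: [Coupling] -> [Field] -> [Int] -> Double
energy js hs s = sum [ j * fromIntegral (s !! i * s !! k) | ((i, k), j) <- js ]
               + sum [ h * fromIntegral (s !! i)          | (i, h)      <- hs ]

-- Exhaustive search over all 2^n spin assignments (toy instances only).
groundState :: Int -> [Coupling] -> [Field] -> ([Int], Double)
groundState n js hs =
  minimumBy (comparing snd) [ (s, energy js hs s) | s <- mapM (const [-1, 1]) [1 .. n] ]

main :: IO ()
main = print (groundState 3 [((0, 1), 1), ((1, 2), 1)] [(0, -0.5)])
  -- ([1,-1,1],-2.5): the two positive bonds prefer alternating spins, and the
  -- small field on spin 0 picks out one of the two alternating patterns.
```

The exponential cost of this enumeration is exactly what the adiabatic approach hopes to improve on, and, per the answer above, the extent to which the hardware actually does so is the contested point.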
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9474555253982544, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/52101/longest-coinciding-pair-of-integer-sequences-known/52121
## Longest coinciding pair of integer sequences known

There are arbitrarily many pairs of integer sequences (of arbitrary origins) that coincide up to an $N$ but differ for an $n > N$. I assume the coincidence will then be considered accidental by default, but I may be mistaken about that. One is advised not to draw any conclusions from coincidences of integer sequences unless it is proven that they coincide for all $n$. (Even then there may be no sensible conclusions, as I have learned here: Equivalence of families of objects with the same counting function.) In any case, it is hard not to be tempted to draw a conclusion when $N$ is very large. But what is "very large"?

Thus my question: What is the largest $N$ with two known integer sequences coinciding up to $N$ but differing for an $n > N$? (Can this information be captured from OEIS by an intelligent query?)

(I am aware of the fact that one can trivially define pairs of integer sequences which coincide for all $n$ but a single and arbitrarily large one. It should be clear that I am not interested in those but in pairs that are not adjusted to each other this way.)

- 3 Your question strongly reminds me of a very similar earlier question on the phenomenon of eventual counterexamples. mathoverflow.net/questions/15444/… – Willie Wong Jan 14 2011 at 19:13
- I was also thinking of the eventual counterexamples thread, but couldn't recall the name. – S. Sra Jan 14 2011 at 19:27
- 1 It's actually a trivial variant on the same question isn't it? – Daniel Mehkeri Jan 16 2011 at 7:08

## 11 Answers

The number of divisions of $\mathbb{R}^3$ by $k \ge 0$ planes in general position starts 1, 2, 4, 8, then 15, etc. For $\mathbb{R}^6$ it is 1, 2, 4, 8, 16, 32, 64, then 127. In general for $\mathbb{R}^N$ it is the sum of the binomial coefficients from $\binom{k}{0}$ up to $\binom{k}{N}$, and hence it agrees with $2^k$ for terms 0, 1, 2, up to N before starting to fall off.

Other answers: Of course for prime p, $2^{p-1}=1 \mod{p}$, but there are only 2 known cases, $p=1093$ and $3511$, where $2^{p-1}=1 \mod{p^2}$. So primes and primes with $2^{p-1} \ne 1 \mod{ p^2}$ agree for the first 182 primes.

For "listed in the OEIS" there are a couple which go from 1 to 99 then skip 100: undulating numbers in base 10, and cents you can have in US coins without having change for a dollar (the latter being 1-99 along with $105, 106, 107, 108, 109, 115, 116, 117, 118, 119$.) -

The positive odd integers $n$ which pass the Euler-Jacobi primality test to base $2$,

$2^{(n-1)/2} \equiv \big(\frac2n\big) \mod n$

where the RHS is the Jacobi symbol, agree with the odd primes up to the inclusion of $561$. So, these sequences $3, 5, 7, 11, 13, ..., 557, 561, 563, ...$ and $3, 5, 7, 11, 13, ..., 557, 563, ...$ agree for $101$ terms. -

- 3 Hans, I don't understand what criteria you are using to judge these answers. I could turn the observations in math.sjsu.edu/~hsu/courses/126/… into answers where the first differing index is huge, e.g. #13 leads to two sequences where the first index which differs is n = 4700063497.
– Qiaochu Yuan Jan 15 2011 at 1:39

There are many natural examples of a sequence $a_{n,k}$ of two parameters such that $a_{n,k}$ approximates a sequence $a_n$ as $k \to \infty$ in the sense that the first $k$ terms of $a_n$ and $a_{n,k}$ agree. Aaron Meyerowitz gives a good one; another example is the "partial Catalan" sequence $C_{n,k}$ of all parenthesizations using $n$ pairs of parentheses with parenthetical depth at most $k$. So I don't think this is quite the question you meant to ask. (A nice commonality between Aaron Meyerowitz's example and this one is that for fixed $k$ the approximating sequences $a_{n,k}$ are regular, so their generating functions are rational. So one can think of these generating functions as "rational approximations" to the generating function of $a_n$, which can in some cases be obtained by truncating a continued fraction expansion. This is the case with my example; see this blog post.) -

For what it's worth, the OEIS has 99 sequences containing the string 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35, which is all I had the patience to type in. A153671, Minimal exponents m such that the fractional part of $(101/100)^m$ obtains a maximum (when starting with $m=1$), continues the pattern up to 69, then goes 110, 180, .... -

Thanks for the patience! – Hans Stricker Jan 14 2011 at 21:03

It is a known example: the sequence 1,2,3,5,7,11,$\dots$ of non-composite numbers coincides with the sequence of orders of finite simple groups until 60 appears (in this last sequence). -

Not really the most natural sequences, but the sequences $a_n^{(k)}$ of positive integers which have at most $k$ distinct prime factors coincide a lot (among themselves and with the integers). The first term not in $a_n^{(k)}$ is $p_1\cdot p_2 \cdots p_{k+1}$. -

+1. How many reps do you need? For the future, your comment could be simply turned into an answer by giving the explicit coincidence interval for some $k$ which beats others' records. $k=4$ would be more than sufficient. :-) – Wadim Zudilin Jan 14 2011 at 23:09 true, but once $k=4$ becomes a record, $k=5$ beats it and so on :D – N S Jan 15 2011 at 2:45

If you define the function $f$ of $h$ and $x$ by $f(1,x) = 1+x$ and $f(h+1,x) = (1+x)^{f(h,x)}$, then the leading coefficients of the powers of $x$ in the formal power series agree up to index $k = 1, \dots, h$ for $f(h+j,x)$ with $j>0$:

f(0,x) = 1 + x
f(1,x) = 1 + x + x^2 + 1/2*x^3 + 1/3*x^4 + 1/12*x^5 + 3/40*x^6 - ...
f(2,x) = 1 + x + x^2 + 3/2*x^3 + 4/3*x^4 + 3/2*x^5 + 53/40*x^6 + ...
f(3,x) = 1 + x + x^2 + 3/2*x^3 + 7/3*x^4 + 3*x^5 + 163/40*x^6 + ...
f(4,x) = 1 + x + x^2 + 3/2*x^3 + 7/3*x^4 + 4*x^5 + 243/40*x^6 + ...

So if we use the coefficients of the power series of f(N,x) and f(N+1,x), the sought N can be arbitrarily high. -

@Gottfried, isn't it a subanswer of Qiaochu's answer? – Wadim Zudilin Jan 15 2011 at 6:00 @Wadim: true, now that you mention this. I didn't have my example in mind in terms of general sequences, but rather just as an isolated, curious observation in the context of iteration of functions. Sorry - didn't decode the more general formulation of Qiaochu in time... – Gottfried Helms Jan 15 2011 at 6:41 @Gottfried, +1 for your honesty! – Wadim Zudilin Jan 15 2011 at 7:51 @Wadim: thanks, Wadim, that's very kind. And with this... I've got some vague idea to begin to understand better one of the rationales of the scheme of this site...
Not a bad idea, indeed ;-) – Gottfried Helms Jan 16 2011 at 0:03 By searching OEIS for the list 1,2,4,8,16, I found the following examples of sequences which start out as 1, 2, 4, 8, 16, but do not equal the sequence of powers of 2. 1. For $n \geq 1$, mark $n$ equally spaced points around a circle and draw a line connecting each of those points to all the rest. Consider the number of regions thus formed inside the circle. This sequence begins $$1, 2, 4, 8, 16, 30, 57, 88, 163, 230$$ and is Sloane's A006533. (A general formula for the $n$th term is given in Poonen and Rubinstein's paper "The number of intersection points made by the diagonal of a regular polygon" and depends on $n$ mod 2520. 2. For $n \geq 1$, mark $n$ points around a circle in general position and draw a line connecting each of those points to all the rest. The number of regions thus formed inside the circle. begins $$1, 2, 4, 8, 16, 31, 57, 99, 163, 256$$ and is Sloane's A000127. A general formula for the $n$th term is $1 + \binom{n}{2} + \binom{n}{4}$. 3. The number of positive divisors of $n!$ for $n \geq 1$ begins $$1, 2, 4, 8, 16, 30, 60, 96, 160, 270$$ and is Sloane's A027423. 4. The set of $n \geq 1$ such that $3^n \equiv 1 \bmod n$ begins $$1, 2, 4, 8, 16, 20, 32, 40, 64, 80$$ and is Sloane's A067945. The powers of 2 are a subsequence. 5. For $n \geq 0$, the smallest positive integer that needs $n$ steps to reach 1 in the $3x+1$ problem begins $$1, 2, 4, 8, 16, 5, 10, 3, 6, 12$$ and is Sloane's A033491. Here we need to be careful to call this a sequence and not a set since it is not increasing. (I think everyone understands what I am trying to say in the previous sentence.) 6. For $n \geq 1$, the number of different products of distinct numbers in ${1,2,\dots,n}$ begins $$1, 2, 4, 8, 16, 26, 52, 88, 152, 238$$ and is Sloane's A060957. The reason we don't get 32 different products when $n = 6$ is due to duplicate products like $2\cdot 3 = 6$ and $2 \cdot 6 = 3 \cdot 4$. 7. For each odd integer $n \geq 1$ (admittedly restricting to odd $n$ may make the result look rigged) the number of partitions of $n$ into an odd number of parts (e.g., 5 can be written in 4 such ways, with 1 part as 5, with 3 parts as 1+2+2 and 1+1+3, and with 5 parts as 1+1+1+1+1) begins $$1, 2, 4, 8, 16, 29, 52, 90, 151, 248$$ and is Sloane's A160786. I realize this doesn't strictly answer the original question (give very large $N$ where two sequences differ for the first time), but it seems close in spirit (giving many sequences which start out with the same first 5 terms and eventually look different). If someone knows a better MO question for which this would be a good answer, make a comment. I had known about the first two examples above for quite a few years and about a month or so ago some answer on MO led me to learn about the last example in a paper by Arnold. Then I just typed 1,2,4,8,16 into OEIS and found the other examples. There are more 1,2,4,8,16 examples in OEIS but many of them seemed much less interesting to me. If you know how to do web links in MO answers, feel free to make a link for each Sloane number above to the corresponding page on OEIS and then delete this sentence. - I know I am cheating :-) A) $a_n = n + C \lfloor \frac{n}{N}\rfloor$ B) Integers of form $x+\prod_{1 \leq k \leq N}{(x-k)}$ EDIT: up to $N$ A and B coincide with $\mathbb{N}$ so it is a triple in a sense. 
- There is also, of course, what comes out of the answer which was given to this question: http://mathoverflow.net/questions/11517/computer-algebra-errors - 1 @Laurent, The answer? 23 answers were given to that question. Which one did you have in mind? – Gerry Myerson Jul 3 2011 at 23:36 I meant the first one, sorry! – Laurent Berger Jul 4 2011 at 7:01 or another cheating example: positive integers and remainders of positive integers modulo 100000000. -
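A quick numerical check of the first answer above, as a hedged sketch (the function name is mine): the number of regions into which $k$ hyperplanes in general position cut $\mathbb{R}^N$ is $\binom{k}{0}+\binom{k}{1}+\cdots+\binom{k}{N}$, which agrees with $2^k$ exactly as long as $k \le N$.

```python
from math import comb

def regions(N, k):
    """Regions of R^N cut by k hyperplanes in general position:
    the sum of binomial coefficients C(k, 0) + ... + C(k, N)."""
    return sum(comb(k, i) for i in range(N + 1))

N = 6
for k in range(10):
    print(k, regions(N, k), 2**k)
# The two columns agree for k = 0..6 and first differ at k = 7 (127 vs 128).
```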
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 85, "mathjax_display_tex": 7, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9374672770500183, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?t=276515
## atom self capacitance

Can electron energy levels just be considered the self capacitance of an atom?

Looking at the definition of capacitance, the ratio between the charge and the potential in a system of charges, I would say that in some sense your guess is right. If I understand correctly, the capacitance gives you the charge that you have to introduce in a conductor to increase the voltage up to 1 V. In the case of self capacitance the reference for that voltage is a sphere of infinite radius. But this is a classical definition, and it is usually used in macroscopic systems because the energy of the conductor is given by this easy relation $$E=\frac{Q^2}{2C}$$ But in a quantum system this rule is not true. So I would say that you can look at the energy levels as a self capacitance, but this is not useful because all the information is already in the energy levels, so why introduce another parameter? Hope this helps.

I just thought it was an interesting concept, and was curious if it was useful... Using the Bohr hydrogen model with the capacitance equations: C = 4 * pi * electric_constant * bohr_radius = 5.8878e-21 F $$E=\frac{Q^2}{2C}$$ E = (1.60217646e-19 C)^2 / (2 * 5.8878e-21 F) = 13.60 eV

## atom self capacitance

Seems to work with inductance and LC resonance as well.

I have to say that this coincidence in the energies is strange. Anyway, to consider a single hydrogen nucleus as a conducting sphere is a very rough approximation, isn't it? Another point is that assuming your arguments as valid would imply that to introduce a second electron in the hydrogen atom you have to provide again the same energy; this is not correct, as far as I know. Regarding your second post, I don't understand what you try to say.

I'm not sure if this is correct. I used the following equation to find the inductance of ground state H: ((bohr_radius^2) * electron_mass) / (elementary_charge^2) = 9.93734e-14 H The units seem to work. Then I tried using the L and C variables with the LC resonance equation: w = sqrt(1/LC) To get w = 4.1341e16 rad/s or f = 6.57968e15 Hz Then checked the "orbital frequency" of hydrogen with: f = v / wavelength Assumed the wavelength was equal to (2*pi*bohr_radius), and velocity of hydrogen electron (a * c) f = (a*c) / (2*pi*bohr_radius) = 6.57968e15 Hz

Mentor Quote by cubeleg I have to say that this coincidence in the energies is strange. It's not a coincidence. You put it in to get the Bohr radius, and then you get it out again.

Recognitions: Gold Member Quote by nuby I'm not sure if this is correct. I used the following equation to find the inductance of ground state H: ((bohr_radius^2) * electron_mass) / (elementary_charge^2) = 9.93734e-14 H The units seem to work. Then I tried using the L and C variables with the LC resonance equation: w = sqrt(1/LC) To get w = 4.1341e16 rad/s or f = 6.57968e15 Hz Then checked the "orbital frequency" of hydrogen with: f = v / wavelength Assumed the wavelength was equal to (2*pi*bohr_radius), and velocity of hydrogen electron (a * c) f = (a*c) / (2*pi*bohr_radius) = 6.57968e15 Hz Interesting topic, hope this isn't a dumb question, but how do you know the "velocity of hydrogen electron (a * c)" and what are a and c?
a = fine structure constant, c = speed of light. (a*c) is the velocity of a ground state hydrogen electron according to the Bohr model.

Vanadium 50, or anyone else, What do you make of this?

Mentor Like a capacitor, an atom stores energy in electric fields, and I suppose one can calculate an "equivalent capacitance". I'm not sure there's much physical insight to be gained here, as you're not going to plug one into a circuit. Like an inductor, some atoms also store energy in magnetic fields, and I suppose one can calculate an "equivalent inductance". Here, though, you've gone astray and assumed all of the energy is stored in the magnetic field. That's not the case. An LC circuit moves energy back and forth between the capacitor and the inductor. This is not what happens in the atom. The reason why you got the Rydberg constant out was that you put it in, in the form of the Bohr radius.

Quote by Vanadium 50 Like a capacitor, an atom stores energy in electric fields, and I suppose one can calculate an "equivalent capacitance". I'm not sure there's much physical insight to be gained here, as you're not going to plug one into a circuit. Like an inductor, some atoms also store energy in magnetic fields, and I suppose one can calculate an "equivalent inductance". Here, though, you've gone astray and assumed all of the energy is stored in the magnetic field. That's not the case. An LC circuit moves energy back and forth between the capacitor and the inductor. This is not what happens in the atom. The reason why you got the Rydberg constant out was that you put it in, in the form of the Bohr radius. I have to agree with you that the coincidence is an artifact introduced in the "model". About the physical sense of the capacitance, that is exactly what I try to say in my first post. Regarding the LC, in my opinion, although the analogy is not very useful, the Bohr model obtains those numbers assuming that the speed and potential energy are equilibrated in fixed levels. This can be seen as a current-voltage exchange, which essentially is what you see in the LC circuit. But I insist that this is a quite artificial point of view and all the numbers are there, as Vanadium 50 correctly said, so it is not surprising that it "works".

Recognitions: Gold Member Quote by nuby Vanadium 50, or anyone else, What do you make of this? The picture of a proton that this draws for me is an empty 3d shell 53 picometers in radius, much like you see in molecule pictures using the space filling options. There is no center to the proton, it is just a shell of charge, much like a Van de Graaff generator. The "Shell theorem" from Newton's time suggests that the electron would feel no forces inside this shell. It also means protons and electrons don't crash into each other as electrons can simply pass right through protons (the proton is not a point charge)... and lots of other neat things...

Recognitions: Gold Member Quote by cubeleg I have to say that this coincidence in the energies is strange. Anyway, to consider a single hydrogen nucleus as a conducting sphere is a very rough approximation, isn't it? Another point is that assuming your arguments as valid would imply that to introduce a second electron in the hydrogen atom you have to provide again the same energy; this is not correct, as far as I know. Regarding your second post, I don't understand what you try to say. In this kind of model, less energy would be needed for the second electron as one electron is inside the sphere providing a push against the second electron.
I just found a document regarding the "LC Bohr atom model", this guy did the same thing but took it a few steps further. http://www.scielo.cl/pdf/ingeniare/v...cial/art03.pdf

Mentor That "model" is wrong. Equation 6 has him saying a given quantity of energy is in two places at once: in kinetic energy and in a magnetic field. The rest of the paper has him rediscovering the Bohr-Sommerfeld model of the atom, albeit with less rigor, less generality, less motivation and less clarity, but keeping all the problems. Had he written it in 1911, it might have been interesting.

Oh well.. It was the closest thing I could find to the above so I figured I'd post it. I had a feeling it would have some issues. Thanks for checking it out.
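For readers who want to re-run the arithmetic from this thread, here is a minimal sketch (the constant values are CODATA-style approximations and the variable names are mine); it only reproduces the numbers quoted above and is not an endorsement of the LC picture:

```python
import math

# Constants in SI units
e     = 1.602176634e-19     # elementary charge, C
eps0  = 8.8541878128e-12    # vacuum permittivity, F/m
a0    = 5.29177210903e-11   # Bohr radius, m
m_e   = 9.1093837015e-31    # electron mass, kg
c     = 2.99792458e8        # speed of light, m/s
alpha = 7.2973525693e-3     # fine structure constant

# "Self capacitance" of a sphere with the Bohr radius
C = 4 * math.pi * eps0 * a0           # ~5.89e-21 F
E = e**2 / (2 * C)                    # stored energy, J
print("E       =", E / e, "eV")       # ~13.6 eV

# The ad hoc "inductance" used in the thread, and the LC resonance
L = (a0**2 * m_e) / e**2              # ~9.94e-14 H
w = 1 / math.sqrt(L * C)              # ~4.13e16 rad/s
print("f_LC    =", w / (2 * math.pi), "Hz")

# Bohr-model orbital frequency: speed alpha*c over a circumference 2*pi*a0
print("f_orbit =", alpha * c / (2 * math.pi * a0), "Hz")
```

Both frequencies come out to about 6.58e15 Hz, which is the coincidence (or, as Vanadium 50 argues, the built-in circularity) discussed above.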
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9445552229881287, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/1721/what-do-gerbes-and-complex-powers-of-line-bundles-have-to-do-with-each-other/1728
## What do gerbes and complex powers of line bundles have to do with each other?

We all know how to take integer tensor powers of line bundles. I claim that one should be able to also take fractional or even complex powers of line bundles. These might not be line bundles, but they have some geometric life. They have Chern classes, and one can twist differential operators by them. How should I think about these? What do they have to do with gerbes? -

## 5 Answers

Complex powers of line bundles are classes in $H^{1,1}$, or equivalently sheaves of twisted differential operators (TDO) (let's work in the complex topology). This maps to $H^2$ with $\mathbb{C}$ coefficients, or modding out by $\mathbb{Z}$-cohomology, to $H^2$ with $\mathbb{C}^\times$ coefficients. The latter classifies $\mathbb{C}^\times$ gerbes, ie gerbes with a flat connection (usual gerbes can be described by $H^2(X,\mathcal{O}^\times)$). Note that honest line bundles give the trivial gerbe.

In fact the category of modules over a TDO only depends on the TDO up to tensoring with line bundles --- ie it only depends on the underlying gerbe, and can be described as ordinary $\mathcal{D}$-modules on the gerbe. Or if you prefer, regular holonomic modules over a TDO are the same as perverse sheaves on the underlying gerbe. This is explained eg in the encyclopedic Chapter 7 of Beilinson-Drinfeld's Quantization of Hitchin Hamiltonians document, or I think also in a paper of Kashiwara eg in the 3-volume Asterisque on singularities and rep theory (and maybe even his recent $\mathcal{D}$-modules book). B&D talk in terms of crystalline $\mathcal{O}^\times$ gerbes rather than $\mathbb{C}^\times$ gerbes but the story is the same. -

(for the BD reference, you want 7.10 I think, the crystalline theory of D-modules on stacks). – David Ben-Zvi Oct 21 2009 at 21:44 4 I took the liberty of TeXifying this answer. – José Figueroa-O'Farrill Jul 13 2010 at 17:32

To any homomorphism of Lie groups $\phi: G \to H$, and any principal $H$-bundle $P$ over a space $X$, you can associate what I like to call the "gerbe of liftings," namely the stack over $X$ whose objects over $f: Y \to X$ consist of a principal $G$-bundle $Q$ together with an isomorphism $Q_\phi \cong f^* P$. Here $Q_\phi$ denotes the associated $H$-bundle. This stack is nothing but $BG \times_{BH} X$. (Note that it carries a tautological $G$-bundle, pulled back from $BG$.) If $\phi$ is surjective with kernel $K$, this is a $K$-gerbe with trivial band. Morally speaking, its isomorphism class comes from the image under the connecting homomorphism $H^1(X,H) \to H^2(X,K)$, and this is literally true if $K$ is abelian.

To relate this to the question, consider the case where $G = H = {\bf C}^\times$ and $\phi$ takes the $k$th power. Then $K = \mu_k$, a cyclic group of order $k$. The gerbe of liftings measures the failure of a given line bundle on $X$ to be a $k$th power. (For example, if it is not, then there will be no objects over $f =$ id.) It coincides with the root stack introduced by Cadman. It is tempting to regard the $k$th root gerbe $B$ of a line bundle $L$ as $L^{1/k}$. But it is a gerbe with structure group $\mu_k$, not a bundle with structure group ${\bf C}^\times$.
Topologists should think of it as like an orbifold over $X$, but with orbifold structure spread out all over. How, then, is $B$ related to a $k$th root of $L$? As noted above, $B$ carries a tautological line bundle --- and its $k$th power is the pullback of $L$ from $X$! Moral: fractional line bundles on $X$ can be realized as bona fide line bundles on $\mu_k$-gerbes over $X$. - If L is any line bundle on a space (scheme, whatever) X, A is any (additive) abelian group, and a an element of A, there is a natural construction of an A-gerbe $L^a$ as follows. By definition, $L^a$ should be a "sheaf of categories", or stack (not algebraic) on X, and here are its categories of sections. Identify L with its total space, which is a $\mathbb{G}_m$-bundle on X, and for any open set U in X, let $L^a(U)$ be the category of all A-torsors on $L|_U$ whose monodromy about each fiber of $L|_U \to U$ is a. One can check that this really is a gerbe: it is locally nonempty, since if L is trivial over U, you can write $L|_U = \mathbb{G}_m \times U$ and then pull back the unique A-torsor on $\mathbb{G}_m$ with monodromy a. It has a natural action of A-torsors on X, given by pulling up along the bundle map $L \to X$ and tensoring. And this action is free and transitive, since the difference of two a-monodromic torsors on $L|_U$ has trivial monodromy on each fiber and therefore descends to X. Why do I call this $L^a$? Suppose that $L = \mathcal{O}_X(D)$ for a divisor D, where for simplicity let's say that D is irreducible of degree n; then L gets a natural trivialization on $U = X \setminus D$ having a pole of order n along D. As shown above, this induces a trivialization $\phi$ of $L^a$ on U, and if we pick a small open set V intersecting D and such that D is actually defined by an equation f of degree n, then we get a second (noncanonical) trivialization $\psi$ of $L^a$ on V. You can check that the difference $\psi^{-1} \phi$, which is an automorphism of the trivial gerbe on $U \cap V$, is in fact described by the A-torsor $\mathcal{T} = f^{-1}(\mathcal{L}_a)$, where $f \colon U \cap V \to \mathbb{G}_m$ and $\mathcal{L}_a$ is the A-torsor of monodromy a. Since f has degree n, $\mathcal{T}$ has monodromy na about D. Thus, it is only reasonable to say that the natural trivialization $\phi$ has a pole of order na, which is consistent with the behavior of the trivialization of L itself on U, when raising to integer powers. What does this have to do with twisting of differential operators? Suppose we have some kind of sheaves (D-modules, locally constant sheaves, perverse sheaves; technically, they should form a stack admitting an action of A-torsors). On the one hand, one could mimic the above construction of $L^a$ to describe a-monodromic sheaves on L, and this is what is often called twisting. On the other hand, there is a natural way to directly twist sheaves by the gerbe $L^a$ without mentioning L at all (that is, you can twist by any A-gerbe). The procedure is as follows: a twisted sheaf is the assignment, to every open set U in X, of a collection of sheaves on U parameterized by the sections of $L^a(U)$, and compatible with tensoring by A-torsors. Of course, since if $L^a(U)$ is nonempty this is the same as giving just one sheaf, this is sort of overkill, but the choice of just one such sheaf is noncanonical whereas this description is canonical. These collections should be compatible with the restriction functors $L^a(U) \to L^a(V)$ when $V \subset U$. 
It is an exercise to the reader to check that this is the same as the other definition of twisting :) -

1 If I'm not mistaken this gerbe is the pushforward of the Chern class $c_1(L)\in H^2(X,Z)$ to $H^2(X,A)$ under the homomorphism $a:Z\to A$. – David Ben-Zvi Jul 13 2010 at 19:21 Yes, that's right. There are a couple of ways to describe it but this one is the most directly related to the definition (that I knew) of twisting. – Ryan Reich Jul 13 2010 at 21:34 Where by "at the right time" you mean eight months ago? Really, it's Michael Thaddeus who added a new answer at the right time. – Ben Webster♦ Jul 13 2010 at 23:57 1 Ah, I see. Actually, eight months ago was good too :) – Ryan Reich Jul 14 2010 at 2:24

Is this anything more than thinking about the Chern class of the line bundle in H^1(\Omega^1), which can of course be multiplied by any complex number? For example, if you have an elliptic curve E, a complex number alpha and two line bundles L and L' of the same degree, is there any difference between alpha x L and alpha x L'? -

Cadman defined a root stack for line bundles on a scheme, and variations on that theme such as Cartier divisors, in Cadman, Charles, Using stacks to impose tangency conditions on curves. Amer. J. Math. 129 (2007), no. 2, 405--427. Maybe this construction was known as folklore beforehand; in any case, for a line bundle it gives you a flat gerbe, and might do the job for you for rational powers. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 89, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9419723153114319, "perplexity_flag": "head"}
http://mathoverflow.net/questions/13305?sort=votes
## Hilbert scheme of points on a complex surface

I don't know about schemes, and every definition of a Hilbert scheme (quite naturally!) involves schemes. But the Hilbert scheme of points on a complex surface is known to be smooth (Fogarty). So is there a concrete description of it as a complex manifold? (For instance, in the case of n=2 it is a blowup of $X \times X$ along the diagonal.) -

## 4 Answers

Given a codimension $d$ ideal $I$ in $R = {\mathbb C}[x,y]$, the quotient ring $R/I$ can be thought of as a $d$-dimensional vector space with actions of two commuting operators $x,y$ and a "cyclic" vector $1$ that generates it as an $R$-module. Consequently, if your surface is the plane you can think of the Hilbert scheme as the space of pairs of commuting matrices on $d$-space, but then take the (open) set in there of pairs $(x,y)$ that admit a cyclic vector, and then divide that variety by the conjugation action of $GL(d)$. To see smoothness, you might first show this open set is smooth, then that the $GL(d)$ action is free and proper. All this is spelled out in Nakajima's book (which I'm guessing is the one Andrea Ferretti meant to reference), except I think he shows smoothness by analyzing the tangent spaces. -

Thanks. The description in terms of matrices was concrete indeed. But how does one describe the Hilbert scheme of X where X is a general complex 2-manifold? I mean, in Gottsche's slides (for a talk) it is written that at least as a set it is the collection {(x1,Q1),(x2,Q2),....(xk,Qk)} where xk is a point on X and Qk is "the quotient ring of holomorphic functions at xk" (any idea as to what this means?). – Vamsi Jan 29 2010 at 5:36 I'm guessing that the n points are sitting at k spots, so one can think of it as a disjoint union of k fat points, and Q_i is the coordinate ring of the ith fat point. So yeah, that's as a set; it would be some work from that definition even to understand the topology of what happens when points collide. – Allen Knutson Jan 29 2010 at 12:23 Is there a natural map from Hilb_n(C^2) to $(\mathbb{C}^2)^n/S_n$ (or maybe to $((\mathbb{C}^\times)^2)^n/S_n$)? Something like (pair of matrices) goes to (their eigenvalues)? I've heard there's a map like this that's supposed to be a resolution of singularities, but I don't know the details, or what this means exactly... – Peter Samuelson Feb 23 2010 at 19:17 @Peter: It's called the "Hilbert-Chow morphism", and more generally goes from Hilbert schemes to Chow varieties. This particular instance is described in detail in e.g. Brion & Kumar's book on Frobenius splitting, where they use the fact that it's a crepant resolution of singularities to show that the Hilbert scheme is Frobenius split. – Allen Knutson Feb 24 2010 at 3:30

Here is a geometric description in the case of $H_n(\mathbb{C}^2)$. This is meant to be a geometric rewrite of Proposition 2.6 in Mark Haiman's "(t,q)-Catalan numbers and the Hilbert scheme", Discrete Math. 193 (1998), 201-224. Let $S= (\mathbb{C}^2)^n/S_n$; notice that this is an orbifold. Let $S_0$ be the open dense set where the $n$ points are distinct.
For $D$ an $n$-element subset of `$\mathbb{Z}_{\geq 0}^2$`, let $A_{D}$ be the polynomial $\det( x_i^{a} y_i^{b})$, where $(a, b)$ ranges over the elements of $D$ and $i$ runs from $1$ to $n$. For any $D$ and $D'$, the ratio $A_D/A_{D'}$ is a meromorphic function on $S$, and is well defined on $S_0$. Map $S_0$ into $S_0 \times \mathbb{CP}^{\infty}$ where the homogenous coordinates on $\mathbb{CP}^{\infty}$ are the $A_{D}$'s. (Only finitely many of the $A_D$'s are needed, but it would be a little time consuming to say which ones.) The Hilbert scheme is the closure of $S_0$ in $S \times \mathbb{CP}^{\infty}$. Algebraically, we can describe this as the blow up of $S$ along the ideal generated by all products $A_D A_{D'}$. Haiman points out that the reduction of this ideal is the locus where two of the points collide and speculates that this ideal may be reduced. If his speculation is correct, then we can describe $H_n(\mathbb{C}^2)$ geometrically as the blow up of $(\mathbb{C}^2)^n/S_n$ along the reduced locus where at least two of the points are equal. - I just made some edits to this answer for clarity. In particular, I added the last sentence. – David Speyer Feb 23 2010 at 13:57 1 Haiman proves, in his mammoth paper arxiv.org/abs/math.AG/0010246 , that this speculation is correct. – David Speyer Mar 14 2010 at 20:43 Actually, the construction of the Hilbert scheme doesn't have to involve schemes at all. For $\mathbb{C}^2$, it's just the space of ideals of codimension $d$ in $\mathbb{C}[x,y]$. That's enough to say what it is as a topological space as a subspace of all codimension $d$ subspaces. - There is a "concrete" description of the Hilbert scheme of points on surfaces in the book of Nakajima. As far as I remember it doesn't really use the general machinery of Hilbert schemes. -
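To make the commuting-matrices description in the first answer concrete, here is a small toy check (my own construction, in Python) for the codimension-3 ideal $I=(x^2,xy,y^2)$ in $\mathbb{C}[x,y]$: on the monomial basis $\{1,x,y\}$ of $R/I$, multiplication by $x$ and $y$ gives two commuting matrices, and the class of $1$ is a cyclic vector.

```python
import numpy as np

# Basis of C[x,y]/(x^2, xy, y^2): [1, x, y].
# Multiplication by x sends 1 -> x, x -> 0, y -> 0; similarly for y.
X = np.array([[0, 0, 0],
              [1, 0, 0],
              [0, 0, 0]], dtype=float)
Y = np.array([[0, 0, 0],
              [0, 0, 0],
              [1, 0, 0]], dtype=float)
v = np.array([1.0, 0.0, 0.0])           # the class of 1

print(np.allclose(X @ Y, Y @ X))        # True: the two operators commute
# v, X v, Y v span the whole 3-dimensional space, so v is a cyclic vector:
print(np.linalg.matrix_rank(np.stack([v, X @ v, Y @ v])))   # 3
```

This point of the Hilbert scheme is the "fat point" of length 3 supported at the origin; conjugating the pair (X, Y) by GL(3) does not change the ideal it encodes, which is why the quotient by GL(d) appears in the answer above.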
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 47, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.944175124168396, "perplexity_flag": "head"}
http://crypto.stackexchange.com/questions/1638/compressing-ec-private-keys
# Compressing EC private keys For reasonable security, EC private keys are typically 256-bits. Shorter EC private keys are not sufficiently secure. However, shorter symmetric keys (128-bits, for example) are comparably secure. I have a case where I need to regenerate an EC private key (that can be constructed by a special method) with as little stored information as possible. Fewer than 128-bits is not possible without compromising security against an exhaustive search. I'm curious if I can use the following method: 1) Generate a random 128-bit value. This is the value I would store to recreate the private key. 2) Use a 256-bit hash, say SHA256, of the random value as the private key. (I have a method to do this that doesn't bias any keys over any others.) 3) The corresponding public key would be made public. Thus, I can regenerate the private key from the 128-bit value. It seems to me that the private key and 128-bit value should be just as secure as using a random 256-bit value for the private key and storing that. Exhaustive search is clearly impractical, and the properties of the EC key shouldn't be capable of being walked backwards either from the public key or through the hash. In principle, the EC search space is halved, but it doesn't seem like there would be any practical way to take advantage of this. Is there anything I'm missing? Is there any reason this wouldn't be just as secure as storing the full 256-bit key? Assume the public key and the algorithm are known to potential adversaries. The adversary's goal is to get the private key. It seems to me that this is obviously as secure as the underlying algorithms, but I know enough not to trust myself. - – fgrieu Jan 17 '12 at 10:03 ## 1 Answer What you suggest is valid. Here is a way to show it: In a fully implemented signature system (things are similar for asymmetric encryption), there are three modules: • a key pair generator, which produces a pseudo-random key pair; • a signature generator, which uses the private key to produce a signature over some piece of data; • a signature verifier, which verifies a signature over some piece of data, using the public key. It is acceptable that the key pair generator is a deterministic algorithm, provided that it is "cryptographically strong" and works over a "long enough" secret seed. "Long enough" means: a string of length at least $t$ bits if a security level of $2^t$ must be achieved. What you suggest is simply storing the PRNG seed instead of storing the output of the key pair generator. You run the PRNG again each time you want to use the private key. Since the PRNG is deterministic, this yields the exact same signatures that you would have obtained if you had stored the private key, so, from the outside, this is indistinguishable from the "normal" setup. Hence the security. The concept is applicable to any asymmetric algorithm, not just EC-based algorithms. You could do so with, e.g., RSA, using the PRNG to regenerates the primes $p$ and $q$. However, for RSA, the cost would be high (generating a private key is considerably more expensive than actually computing a signature) and the generation process is susceptible to partial leakage through timing attacks, so this would be a bit delicate. For algorithms such as DSA, Diffie-Hellman or ElGamal, or their EC variants, a private key is just a random value modulo a given $q$ (the group order), so that's fast. 
The only tricky point is to show that "one SHA-256 invocation" is a suitable, cryptographically strong PRNG, when you only want 256 bits of pseudo-alea. In the random oracle model, no problem. In a practical world of standard compliance and administrative acceptance, you might want to use an Approved PRNG, in particular HMAC_DRBG (that's the one NIST considers to be "the strongest"). Note: it is not strictly necessary to have an unbiased selection of the private key. For (EC)DSA, there is a private key $x$, and, for each signature, a new random $k$ modulo $q$ must be generated. It is crucial that $k$ is negligibly biased; but for $x$, you can be a bit more lenient. For instance, for a curve where the subgroup order $q$ has size 256 bits or more, it suffices to generate a single 256-bit integer, and reduce it modulo $q$. Some values may have a twice higher chance of being selected than others, or, if the size of $q$ is greater than 256 bits, some values have probability zero of being selected; but this is not an issue for $x$. For $k$, this would be a very serious problem. - What is "pseudo-alea"? I've seen you use the term before but haven't encountered it used otherwise. – PulpSpy Jan 12 '12 at 16:41 @PulpSpy: it might be a gallicism (i.e. a French term which I did not care to translate to English). "Alea" itself is latin for "fate" or "randomness". "Pseudo-alea" means "looks like random bits, but are generated through a deterministic process with an unknown seed". – Thomas Pornin Jan 12 '12 at 17:06 1 The standard English term would be "pseudorandom". – David Schwartz Jan 12 '12 at 19:55
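A minimal sketch of the seed-to-key derivation discussed above, in Python. The curve order used below is secp256k1's (my choice for illustration; the thread does not fix a curve), and a standards-compliant implementation would use an approved DRBG such as HMAC_DRBG rather than a single SHA-256 call.

```python
import hashlib
import secrets

# Group order n of secp256k1 (illustrative choice of curve).
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def private_key_from_seed(seed: bytes) -> int:
    """Deterministically expand a 128-bit seed into an EC private key.

    The 256-bit SHA-256 digest is reduced modulo the group order; the
    tiny bias this introduces is acceptable for the long-term key x
    (unlike the per-signature nonce k, as the answer notes)."""
    digest = hashlib.sha256(seed).digest()
    d = int.from_bytes(digest, "big") % N
    return d if d != 0 else 1       # avoid the (astronomically unlikely) zero key

seed = secrets.token_bytes(16)      # the only value that needs to be stored
print(hex(private_key_from_seed(seed)))
# Re-running with the same seed reproduces the same private key, so signatures
# made with it are indistinguishable from those of the "normal" setup.
```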
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9358690977096558, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Permutation
# Permutation

For other uses, see Permutation (disambiguation).

The 6 permutations of 3 balls

In mathematics, the notion of permutation is used with several slightly different meanings, all related to the act of permuting (rearranging) objects or values. Informally, a permutation of a set of objects is an arrangement of those objects into a particular order. For example, there are six permutations of the set {1,2,3}, namely (1,2,3), (1,3,2), (2,1,3), (2,3,1), (3,1,2), and (3,2,1). Likewise, an anagram of a word is a permutation of its letters. The study of permutations in this sense generally belongs to the field of combinatorics.

The number of permutations of n distinct objects is n×(n − 1)×(n − 2)×⋯×2×1, which is commonly denoted as "n factorial" and written "n!".

Permutations occur, in more or less prominent ways, in almost every domain of mathematics. They often arise when different orderings on certain finite sets are considered, possibly only because one wants to ignore such orderings and needs to know how many configurations are thus identified. For similar reasons permutations arise in the study of sorting algorithms in computer science.

In algebra and particularly in group theory, a permutation of a set S is defined as a bijection f from S to itself (i.e., a map S → S for which every element of S occurs exactly once as image value). This is related to the rearrangement of S in which each element s takes the place of the corresponding f(s). The collection of such permutations forms a symmetric group. The key to its structure is the possibility to compose permutations: performing two given rearrangements in succession defines a third rearrangement, the composition. Permutations may act on composite objects by rearranging their components, or by certain replacements (substitutions) of symbols.

In elementary combinatorics, the k-permutations, or partial permutations, are the sequences of k distinct elements selected from a set. When k is equal to the size of the set, these are the permutations of the set.

## History

The rule to determine the number of permutations of n objects was known in Indian culture at least as early as around 1150: the Lilavati by the Indian mathematician Bhaskara II contains a passage that translates to "The product of multiplication of the arithmetical series beginning and increasing by unity and continued to the number of places, will be the variations of number with specific figures."[1]

A first case in which seemingly unrelated mathematical questions were studied with the help of permutations occurred around 1770, when Joseph Louis Lagrange, in the study of polynomial equations, observed that properties of the permutations of the roots of an equation are related to the possibilities to solve it. This line of work ultimately resulted, through the work of Évariste Galois, in Galois theory, which gives a complete description of what is possible and impossible with respect to solving polynomial equations (in one unknown) by radicals. In modern mathematics there are many similar situations in which understanding a problem requires studying certain permutations related to it.

## Generalities

The notion of permutation is used in the following contexts.

### In group theory

In group theory and related areas, one considers permutations of arbitrary sets, even infinite ones. A permutation of a set S is a bijection from S to itself. This allows for permutations to be composed, which allows the definition of groups of permutations.
If S is a finite set of n elements, then there are n! permutations of S.

### In combinatorics

Permutations of multisets

In combinatorics, a permutation is usually understood to be a sequence containing each element from a finite set once, and only once. The concept of sequence is distinct from that of a set, in that the elements of a sequence appear in some order: the sequence has a first element (unless it is empty), a second element (unless its length is less than 2), and so on. In contrast, the elements in a set have no order; {1, 2, 3} and {3, 2, 1} are different ways to denote the same set. In this sense a permutation of a finite set S of n elements is equivalent to a bijection from {1, 2, ..., n} to S (in which any i is mapped to the i-th element of the sequence), or to a choice of a total ordering on S (for which x < y if x comes before y in the sequence). There are n! permutations of S.

There is also a weaker meaning of the term "permutation" that is sometimes used in elementary combinatorics texts, designating those sequences in which no element occurs more than once, but without the requirement to use all elements from a given set. Indeed this use often involves considering sequences of a fixed length k of elements taken from a given set of size n. These objects are also known as partial permutations or as sequences without repetition, terms that avoid confusion with the other, more common, meanings of "permutation". The number of such k-permutations of n is denoted variously by such symbols as n Pk, nPk, Pn,k, or P(n,k), and its value is given by the product[2] $n\cdot(n-1)\cdot(n-2)\cdots(n-k+1)$ which is 0 when k > n, and otherwise is equal to $\frac{n!}{(n-k)!}.$ The product is well defined without the assumption that n is a non-negative integer and is of importance outside combinatorics as well; it is known as the Pochhammer symbol $(n)_k$ or as the k-th falling factorial power $n^{\underline{k}}$ of n.

If M is a finite multiset, then a multiset permutation is a sequence of elements of M in which each element appears exactly as often as is its multiplicity in M. If the multiplicities of the elements of M (taken in some order) are $m_1$, $m_2$, ..., $m_l$ and their sum (i.e., the size of M) is n, then the number of multiset permutations of M is given by the multinomial coefficient ${n \choose m_1,m_2,\ldots,m_l} = \frac{n!}{m_1!\,m_2!\, \cdots\,m_l!}.$

## Permutations in group theory

Main article: Symmetric group

In group theory, the term permutation of a set means a bijective map, or bijection, from that set onto itself. The set of all permutations of any given set S forms a group, with composition of maps as product and the identity as neutral element. This is the symmetric group of S. Up to isomorphism, this symmetric group only depends on the cardinality of the set, so the nature of elements of S is irrelevant for the structure of the group. Symmetric groups have been studied most in the case of finite sets, in which case one can assume without loss of generality that S={1,2,...,n} for some natural number n, which defines the symmetric group of degree n, written Sn.

Any subgroup of a symmetric group is called a permutation group. In fact by Cayley's theorem any group is isomorphic to some permutation group, and every finite group to a subgroup of some finite symmetric group.
However, permutation groups have more structure than abstract groups, allowing for instance to define the cycle type of an element of a permutation group; different realizations of a group as a permutation group need not be equivalent for this additional structure. For instance S3 is naturally a permutation group, in which any transposition has cycle type (2,1), but the proof of Cayley's theorem realizes S3 as a subgroup of S6 (namely the permutations of the 6 elements of S3 itself), in which permutation group transpositions get cycle type (2,2,2). So in spite of Cayley's theorem, the study of permutation groups differs from the study of abstract groups.

### Notation

There are three main notations for permutations of a finite set S. In Cauchy's two-line notation,[3] one lists the elements of S in the first row, and for each one its image under the permutation below it in the second row. For instance, a particular permutation of the set {1,2,3,4,5} can be written as: $\sigma=\begin{pmatrix} 1 & 2 & 3 & 4 & 5 \\ 2 & 5 & 4 & 3 & 1\end{pmatrix};$ this means that σ satisfies σ(1)=2, σ(2)=5, σ(3)=4, σ(4)=3, and σ(5)=1.

In one-line notation, one gives only the second row of this array, so the one-line notation for the permutation above is 25431. (It is typical to use commas to separate these entries only if some have two or more digits.)

Cycle notation, the third method of notation, focuses on the effect of successively applying the permutation. It expresses the permutation as a product of cycles corresponding to the orbits (with at least two elements) of the permutation; since distinct orbits are disjoint, this is loosely referred to as "the decomposition into disjoint cycles" of the permutation. It works as follows: starting from some element x of S with σ(x) ≠ x, one writes the sequence (x σ(x) σ(σ(x)) ...) of successive images under σ, until the image would be x, at which point one instead closes the parenthesis. The set of values written down forms the orbit (under σ) of x, and the parenthesized expression gives the corresponding cycle of σ. One then continues choosing an element y of S that is not in the orbit already written down, and such that σ(y) ≠ y, and writes down the corresponding cycle, and so on until all elements of S either belong to a cycle written down or are fixed points of σ. Since for every new cycle the starting point can be chosen in different ways, there are in general many different cycle notations for the same permutation; for the example above one has for instance $\begin{pmatrix} 1 & 2 & 3 & 4 & 5 \\ 2 & 5 & 4 & 3 & 1\end{pmatrix}=\begin{pmatrix}1 & 2 & 5 \end{pmatrix} \begin{pmatrix}3 & 4 \end{pmatrix} = \begin{pmatrix}3 & 4 \end{pmatrix} \begin{pmatrix}1 & 2 & 5 \end{pmatrix} = \begin{pmatrix}3 & 4 \end{pmatrix} \begin{pmatrix}5 & 1 & 2 \end{pmatrix}.$

Each cycle (x1 x2 ... xl) of σ denotes a permutation in its own right, namely the one that takes the same values as σ on this orbit (so it maps xi to xi+1 for i < l, and xl to x1), while mapping all other elements of S to themselves. The size l of the orbit is called the length of the cycle. Distinct orbits of σ are by definition disjoint, so the corresponding cycles are easily seen to commute, and σ is the product of its cycles (taken in any order). Therefore the concatenation of cycles in the cycle notation can be interpreted as denoting composition of permutations, whence the name "decomposition" of the permutation.
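A short sketch (the helper name is mine, in Python) that carries out this procedure, decomposing a permutation given in one-line notation into its disjoint cycles and reproducing the example above:

```python
def cycles(perm):
    """Disjoint cycles (of length >= 2) of a permutation.

    perm is given in one-line notation as a tuple: perm[i] is the image
    of i + 1, so (2, 5, 4, 3, 1) is the permutation sigma in the text."""
    n = len(perm)
    seen = [False] * n
    result = []
    for start in range(1, n + 1):
        if seen[start - 1] or perm[start - 1] == start:
            continue                      # already visited, or a fixed point
        cycle, x = [], start
        while not seen[x - 1]:
            seen[x - 1] = True
            cycle.append(x)
            x = perm[x - 1]
        result.append(tuple(cycle))
    return result

print(cycles((2, 5, 4, 3, 1)))   # [(1, 2, 5), (3, 4)]
```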
This decomposition is essentially unique: apart from reordering the cycles in the product, there are no other ways to write σ as a product of cycles (possibly unrelated to the cycles of σ) that have disjoint orbits. The cycle notation is less unique, since each individual cycle can be written in different ways, as in the example above where (5 1 2) denotes the same cycle as (1 2 5) (but (5 2 1) would denote a different permutation).

An orbit of size 1 (a fixed point x in S) has no corresponding cycle, since that permutation would fix x as well as every other element of S, in other words it would be the identity, independently of x. It is possible to include (x) in the cycle notation for σ to stress that σ fixes x (and this is even standard in combinatorics, as described in cycles and fixed points), but this does not correspond to a factor in the (group theoretic) decomposition of σ. If the notion of "cycle" were taken to include the identity permutation, then this would spoil the uniqueness (up to order) of the decomposition of a permutation into disjoint cycles. The decomposition into disjoint cycles of the identity permutation is an empty product; its cycle notation would be empty, so some other notation like e is usually used instead.

Cycles of length two are called transpositions; such permutations merely exchange the place of two elements.

### Group structure

Main article: Symmetric group

#### Product and inverse

The product of two permutations is defined as their composition as functions, in other words σ·π is the function that maps any element x of the set to σ(π(x)). Note that the rightmost permutation is applied to the argument first, because of the way function application is written. Some authors prefer the leftmost factor acting first, but to that end permutations must be written to the right of their argument, for instance as an exponent, where σ acting on x is written xσ; then the product is defined by xσ·π=(xσ)π. However this gives a different rule for multiplying permutations; this article uses the definition where the rightmost permutation is applied first.

Since the composition of two bijections always gives another bijection, the product of two permutations is again a permutation. Since function composition is associative, so is the product operation on permutations: (σ·π)·ρ=σ·(π·ρ). Therefore, products of more than two permutations are usually written without adding parentheses to express grouping; they are also usually written without a dot or other sign to indicate multiplication.

The identity permutation, which maps every element of the set to itself, is the neutral element for this product. In two-line notation, the identity is $\begin{pmatrix}1 & 2 & 3 & \cdots & n \\ 1 & 2 & 3 & \cdots & n\end{pmatrix}.$

Since bijections have inverses, so do permutations, and the inverse σ−1 of σ is again a permutation. Explicitly, whenever σ(x)=y one also has σ−1(y)=x. In two-line notation the inverse can be obtained by interchanging the two lines (and sorting the columns if one wishes the first line to be in a given order). For instance $\begin{pmatrix}1 & 2 & 3 & 4 & 5 \\ 2 & 5 & 4 & 3 & 1\end{pmatrix}^{-1} =\begin{pmatrix}2 & 5 & 4 & 3 & 1\\ 1 & 2 & 3 & 4 & 5 \end{pmatrix} =\begin{pmatrix}1 & 2 & 3 & 4 & 5 \\ 5 & 1 & 4 & 3 & 2\end{pmatrix}.$ In cycle notation one can reverse the order of the elements in each cycle to obtain a cycle notation for its inverse.
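Composition and inversion in one-line notation follow directly from these definitions; a small sketch (helper names are mine), using the convention above that the rightmost factor acts first:

```python
def compose(sigma, pi):
    """(sigma * pi)(x) = sigma(pi(x)); one-line notation with values 1..n."""
    return tuple(sigma[pi[i] - 1] for i in range(len(pi)))

def inverse(sigma):
    """sigma^{-1}(y) = x whenever sigma(x) = y."""
    inv = [0] * len(sigma)
    for x, y in enumerate(sigma, start=1):
        inv[y - 1] = x
    return tuple(inv)

sigma = (2, 5, 4, 3, 1)
print(inverse(sigma))                  # (5, 1, 4, 3, 2), as in the text
print(compose(sigma, inverse(sigma)))  # (1, 2, 3, 4, 5), the identity
```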
Having an associative product, a neutral element, and inverses for all its elements, makes the set of all permutations of S into a group, called the symmetric group of S.

#### Properties

Every permutation of a finite set can be expressed as the product of transpositions. Moreover, although many such expressions for a given permutation may exist, there can never be among them both expressions with an even number and expressions with an odd number of transpositions. All permutations are then classified as even or odd, according to the parity of the transpositions in any such expression.

Composition of permutations corresponds to multiplication of permutation matrices. Multiplying permutations written in cycle notation follows no easily described pattern, and the cycles of the product can be entirely different from those of the permutations being composed. However the cycle structure is preserved in the special case of conjugating a permutation σ by another permutation π, which means forming the product π·σ·π−1. Here the cycle notation of the result can be obtained by taking the cycle notation for σ and applying π to all the entries in it.[4]

#### Matrix representation

One can represent a permutation of {1, 2, ..., n} as an n×n matrix. There are two natural ways to do so, but only one for which multiplication of matrices corresponds to multiplication of permutations in the same order: this is the one that associates to σ the matrix M whose entry Mi,j is 1 if i = σ(j), and 0 otherwise. The resulting matrix has exactly one entry 1 in each column and in each row, and is called a permutation matrix.

#### Permutation of components of a sequence

As with any group, one can consider actions of a symmetric group on a set, and there are many ways in which such an action can be defined. For the symmetric group of {1, 2, ..., n} there is one particularly natural action, namely the action by permutation on the set Xn of sequences of n symbols taken from some set X. As for the matrix representation, there are two natural ways in which the result of permuting a sequence (x1,x2,...,xn) by σ can be defined, but only one is compatible with the multiplication of permutations (so as to give a left action of the symmetric group on Xn); with the multiplication rule used in this article this is the one given by $\sigma\cdot(x_1,\ldots,x_n) = (x_{\sigma^{-1}(1)},\ldots,x_{\sigma^{-1}(n)}).$ This means that each component xi ends up at position σ(i) in the sequence permuted by σ.

## Permutations in combinatorics

In combinatorics a permutation of a set S with n elements is a listing of the elements of S in some order (each element occurring exactly once). This can be defined formally as a bijection from the set { 1, 2, ..., n } to S. Note that if S equals { 1, 2, ..., n }, then this definition coincides with the definition in group theory. More generally one could use instead of { 1, 2, ..., n } any set equipped with a total ordering of its elements.

One combinatorial property that is related to the group theoretic interpretation of permutations, and can be defined without using a total ordering of S, is the cycle structure of a permutation σ. It is the partition of n describing the lengths of the cycles of σ. Here there is a part "1" in the partition for every fixed point of σ. A permutation that has no fixed point is called a derangement.
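Returning to the matrix representation and the sequence action defined above, here is a sketch of the two constructions (my own code): the permutation matrix with entry 1 exactly where i = σ(j), and the left action that moves the entry xi to position σ(i).

```python
def permutation_matrix(sigma):
    """n x n matrix with M[i][j] = 1 if i = sigma(j) (1-indexed), else 0."""
    n = len(sigma)
    return [[1 if i + 1 == sigma[j] else 0 for j in range(n)]
            for i in range(n)]

def act(sigma, xs):
    """sigma . (x_1,...,x_n) = (x_{sigma^-1(1)}, ..., x_{sigma^-1(n)}):
    the entry x_i ends up at position sigma(i)."""
    out = [None] * len(xs)
    for i, x in enumerate(xs):
        out[sigma[i] - 1] = x
    return tuple(out)

sigma = (2, 5, 4, 3, 1)
for row in permutation_matrix(sigma):
    print(row)
print(act(sigma, ("a", "b", "c", "d", "e")))   # ('e', 'a', 'd', 'c', 'b')
```

As a consistency check, the matrix of a product of two permutations equals the matrix product of their permutation matrices, which is the compatibility property referred to above.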
Other combinatorial properties however are directly related to the ordering of S, and to the way the permutation relates to it. Here are a number of such properties.

### Ascents, descents and runs

An ascent of a permutation σ of n is any position i < n where the following value is bigger than the current one. That is, if σ = σ1σ2...σn, then i is an ascent if σi < σi+1. For example, the permutation 3452167 has ascents (at positions) 1,2,5,6.

Similarly, a descent is a position i < n with σi > σi+1, so every i with $1\leq i<n$ either is an ascent or is a descent of σ. The number of permutations of n with k ascents is the Eulerian number $\textstyle\left\langle{n\atop k}\right\rangle$; this is also the number of permutations of n with k descents.[5]

An ascending run of a permutation is a nonempty increasing contiguous subsequence of the permutation that cannot be extended at either end; it corresponds to a maximal sequence of successive ascents (the latter may be empty: between two successive descents there is still an ascending run of length 1). By contrast an increasing subsequence of a permutation is not necessarily contiguous: it is an increasing sequence of elements obtained from the permutation by omitting the values at some positions. For example, the permutation 2453167 has the ascending runs 245, 3, and 167, while it has an increasing subsequence 2367. If a permutation has k − 1 descents, then it must be the union of k ascending runs. Hence, the number of permutations of n with k ascending runs is the same as the number $\textstyle\left\langle{n\atop k-1}\right\rangle$ of permutations with k − 1 descents.[6]

### Inversions

Main article: Inversion (discrete mathematics)

An inversion of a permutation σ is a pair (i,j) of positions where the entries of a permutation are in the opposite order: $i<j$ and $\sigma_i>\sigma_j$.[7] So a descent is just an inversion at two adjacent positions. For example, the permutation σ = 23154 has three inversions: (1,3), (2,3), (4,5), for the pairs of entries (2,1), (3,1), (5,4). Sometimes an inversion is defined as the pair of values (σi,σj) itself whose order is reversed; this makes no difference for the number of inversions, and this pair (reversed) is also an inversion in the above sense for the inverse permutation σ−1. The number of inversions is an important measure for the degree to which the entries of a permutation are out of order; it is the same for σ and for σ−1.

To bring a permutation with k inversions into order (i.e., transform it into the identity permutation), by successively applying (right-multiplication by) adjacent transpositions, is always possible and requires a sequence of k such operations. Moreover any reasonable choice for the adjacent transpositions will work: it suffices to choose at each step a transposition of i and i + 1 where i is a descent of the permutation as modified so far (so that the transposition will remove this particular descent, although it might create other descents). This is so because applying such a transposition reduces the number of inversions by 1; also note that as long as this number is not zero, the permutation is not the identity, so it has at least one descent. Bubble sort and insertion sort can be interpreted as particular instances of this procedure to put a sequence into order. Incidentally this procedure proves that any permutation σ can be written as a product of adjacent transpositions; for this one may simply reverse any sequence of such transpositions that transforms σ into the identity.
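These statistics can be read off directly from the definitions; a small sketch (function names are mine), checked against the examples in the text:

```python
def ascents(perm):
    """Positions i (1-indexed, i < n) with perm[i] < perm[i+1]."""
    return [i + 1 for i in range(len(perm) - 1) if perm[i] < perm[i + 1]]

def descents(perm):
    return [i + 1 for i in range(len(perm) - 1) if perm[i] > perm[i + 1]]

def inversions(perm):
    """Pairs of positions (i, j) with i < j and perm[i] > perm[j]."""
    n = len(perm)
    return [(i + 1, j + 1) for i in range(n) for j in range(i + 1, n)
            if perm[i] > perm[j]]

print(ascents((3, 4, 5, 2, 1, 6, 7)))   # [1, 2, 5, 6]
print(inversions((2, 3, 1, 5, 4)))      # [(1, 3), (2, 3), (4, 5)]
```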
In fact, by enumerating all sequences of adjacent transpositions that would transform σ into the identity, one obtains (after reversal) a complete list of all expressions of minimal length writing σ as a product of adjacent transpositions. The number of permutations of n with k inversions is expressed by a Mahonian number;[8] it is the coefficient of $X^k$ in the expansion of the product $\prod_{m=1}^n\sum_{i=0}^{m-1}X^i=1(1+X)(1+X+X^2)\cdots(1+X+X^2+\cdots+X^{n-1}),$ which is also known (with q substituted for X) as the q-factorial $[n]_q!$. The expansion of the product appears in Necklace (combinatorics).

### Counting sequences without repetition

In this section, a k-permutation of a set S is an ordered sequence of k distinct elements of S. For example, given the set of letters {C, E, G, I, N, R}, the sequence ICE is a 3-permutation, RING and RICE are 4-permutations, NICER and REIGN are 5-permutations, and CRINGE is a 6-permutation; since the latter uses all letters, it is a permutation of the given set in the ordinary combinatorial sense. ENGINE on the other hand is not a permutation, because of the repetitions: it uses the elements E and N twice.

Let n be the size of S, the number of elements available for selection. In constructing a k-permutation, there are n possible choices for the first element of the sequence, and this is the number of 1-permutations. Once it has been chosen, there are n − 1 elements of S left to choose from, so a second element can be chosen in n − 1 ways, giving a total of n × (n − 1) possible 2-permutations. For each successive element of the sequence, the number of possibilities decreases by 1, which leads to n × (n − 1) × (n − 2) × ... × (n − k + 1) possible k-permutations. This gives in particular the number of n-permutations (which contain all elements of S once, and are therefore simply permutations of S): n × (n − 1) × (n − 2) × ... × 2 × 1, a number that occurs so frequently in mathematics that it is given a compact notation "n!", and is called "n factorial". These n-permutations are the longest sequences without repetition of elements of S, which is reflected by the fact that the above formula for the number of k-permutations gives zero whenever k > n.

The number of k-permutations of a set of n elements is sometimes denoted by P(n,k) or a similar notation (usually accompanied by a notation for the number of k-combinations of a set of n elements in which the "P" is replaced by "C"). That notation is rarely used in other contexts than that of counting k-permutations, but the expression for the number does arise in many other situations. Being a product of k factors starting at n and decreasing by unit steps, it is called the k-th falling factorial power of n: $n^{\underline k}=n\times(n-1)\times(n-2)\times\cdots\times(n-k+1),$ though many other names and notations are in use, as detailed at Pochhammer symbol. When k ≤ n the factorial power can be completed by additional factors: $n^{\underline k} \times (n - k)! = n!$, which allows writing $n^{\underline k}=\frac{n!}{(n-k)!}.$ The right hand side is often given as an expression for the number of k-permutations, but its main merit is using the compact factorial notation. Expressing a product of k factors as a quotient of potentially much larger products, where all factors in the denominator are also explicitly present in the numerator, is not particularly efficient; as a method of computation there is the additional danger of overflow or rounding errors.
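As a concrete note on that computational remark, the falling factorial can be evaluated directly as a product of k factors instead of forming the two larger factorials. The short Python sketch below is only an illustration of that point.

```python
# Illustrative sketch: count k-permutations as the falling factorial n(n-1)...(n-k+1),
# avoiding the much larger intermediate values n! and (n-k)!.
from math import factorial

def falling_factorial(n, k):
    result = 1
    for i in range(k):
        result *= n - i
    return result                 # a factor reaches 0 when k > n, so the count is 0, matching the text

n, k = 10, 4
assert falling_factorial(n, k) == factorial(n) // factorial(n - k) == 5040
```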
It should also be noted that the expression is undefined when k > n, whereas in those cases the number $n^{\underline k}$ of k-permutations is just 0.

## Permutations in computing

### Numbering permutations

One way to represent permutations of n is by an integer N with 0 ≤ N < n!, provided convenient methods are given to convert between the number and the usual representation of a permutation as a sequence. This gives the most compact representation of arbitrary permutations, and in computing is particularly attractive when n is small enough that N can be held in a machine word; for 32-bit words this means n ≤ 12, and for 64-bit words this means n ≤ 20. The conversion can be done via the intermediate form of a sequence of numbers dn, dn−1, ..., d2, d1, where di is a non-negative integer less than i (one may omit d1, as it is always 0, but its presence makes the subsequent conversion to a permutation easier to describe). The first step then is simply the expression of N in the factorial number system, which is just a particular mixed radix representation, where for numbers up to n! the bases for successive digits are n, n − 1, ..., 2, 1. The second step interprets this sequence as a Lehmer code or (almost equivalently) as an inversion table.

Rothe diagram for $\sigma=(6,3,8,1,4,9,7,2,5)$:

| i \ σi | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | Lehmer code |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | × | × | × | × | × | • |   |   |   | d9 = 5 |
| 2 | × | × | • |   |   |   |   |   |   | d8 = 2 |
| 3 | × | × |   | × | × |   | × | • |   | d7 = 5 |
| 4 | • |   |   |   |   |   |   |   |   | d6 = 0 |
| 5 |   | × |   | • |   |   |   |   |   | d5 = 1 |
| 6 |   | × |   |   | × |   | × |   | • | d4 = 3 |
| 7 |   | × |   |   | × |   | • |   |   | d3 = 2 |
| 8 |   | • |   |   |   |   |   |   |   | d2 = 0 |
| 9 |   |   |   |   | • |   |   |   |   | d1 = 0 |
| inversion table | 3 | 6 | 1 | 2 | 4 | 0 | 2 | 0 | 0 |   |

In the Lehmer code for a permutation σ, the number dn represents the choice made for the first term σ1, the number dn−1 represents the choice made for the second term σ2 among the remaining n − 1 elements of the set, and so forth. More precisely, each dn+1−i gives the number of remaining elements strictly less than the term σi. Since those remaining elements are bound to turn up as some later term σj, the digit dn+1−i counts the inversions (i,j) involving i as smaller index (the number of values j for which i < j and σi > σj). The inversion table for σ is quite similar, but here dn+1−k counts the number of inversions (i,j) where k = σj occurs as the smaller of the two values appearing in inverted order.[9] Both encodings can be visualized by an n by n Rothe diagram[10] (named after Heinrich August Rothe) in which dots at (i,σi) mark the entries of the permutation, and a cross at (i,σj) marks the inversion (i,j); by the definition of inversions a cross appears in any square that comes both before the dot (j,σj) in its column, and before the dot (i,σi) in its row. The Lehmer code lists the numbers of crosses in successive rows, while the inversion table lists the numbers of crosses in successive columns; it is just the Lehmer code for the inverse permutation, and vice versa.

To effectively convert a Lehmer code dn, dn−1, ..., d2, d1 into a permutation of an ordered set S, one can start with a list of the elements of S in increasing order, and for i increasing from 1 to n set σi to the element in the list that is preceded by dn+1−i other ones, and remove that element from the list. To convert an inversion table dn, dn−1, ..., d2, d1 into the corresponding permutation, one can traverse the numbers from d1 to dn while inserting the elements of S from largest to smallest into an initially empty sequence; at the step using the number d from the inversion table, the element from S is inserted into the sequence at the point where it is preceded by d elements already present.
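Both conversion steps can be made concrete with a short sketch. The Python below is illustrative (the helper names are not standard), and it reproduces the permutation of the Rothe diagram above from its Lehmer code.

```python
def factorial_digits(N, n):
    """Digits d_n, d_{n-1}, ..., d_1 of N in the factorial number system (0 <= d_i < i)."""
    digits = []
    for base in range(1, n + 1):     # produce d_1 first ...
        digits.append(N % base)
        N //= base
    return digits[::-1]              # ... then reverse to d_n, ..., d_1

def lehmer_to_permutation(digits, items):
    """Read d_n, ..., d_1 as a Lehmer code: pick the element preceded by d others, then remove it."""
    pool = sorted(items)
    return [pool.pop(d) for d in digits]

# The Lehmer code (5,2,5,0,1,3,2,0,0) of the example gives sigma = (6,3,8,1,4,9,7,2,5).
assert lehmer_to_permutation([5, 2, 5, 0, 1, 3, 2, 0, 0], range(1, 10)) == [6, 3, 8, 1, 4, 9, 7, 2, 5]
# N = 0 has all factorial digits 0, so it encodes the identity arrangement.
assert lehmer_to_permutation(factorial_digits(0, 4), "abcd") == list("abcd")
```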
Alternatively one could process the numbers from the inversion table and the elements of S both in the opposite order, starting with a row of n empty slots, and at each step place the element from S into the empty slot that is preceded by d other empty slots. Converting successive natural numbers to the factorial number system produces those sequences in lexicographic order (as is the case with any mixed radix number system), and further converting them to permutations preserves the lexicographic ordering, provided the Lehmer code interpretation is used (using inversion tables, one gets a different ordering, where one starts by comparing permutations by the place of their entries 1 rather than by the value of their first entries). The sum of the numbers in the factorial number system representation gives the number of inversions of the permutation, and the parity of that sum gives the signature of the permutation. Moreover the positions of the zeroes in the inversion table give the values of left-to-right maxima of the permutation (in the example 6, 8, 9) while the positions of the zeroes in the Lehmer code are the positions of the right-to-left minima (in the example positions the 4, 8, 9 of the values 1, 2, 5); this allows computing the distribution of such extrema among all permutations. A permutation with Lehmer code dn, dn−1, ..., d2, d1 has an ascent n − i if and only if di ≥ di+1. ### Algorithms to generate permutations In computing it may be required to generate permutations of a given sequence of values. The methods best adapted to do this depend on whether one wants some randomly chosen permutations, or all permutations, and in the latter case if a specific ordering is required. Another question is whether possible equality among entries in the given sequence is to be taken into account; if so, one should only generate distinct multiset permutations of the sequence. An obvious way to generate permutations of n is to generate values for the Lehmer code (possibly using the factorial number system representation of integers up to n!), and convert those into the corresponding permutations. However, the latter step, while straightforward, is hard to implement efficiently, because it requires n operations each of selection from a sequence and deletion from it, at an arbitrary position; of the obvious representations of the sequence as an array or a linked list, both require (for different reasons) about n2/4 operations to perform the conversion. With n likely to be rather small (especially if generation of all permutations is needed) that is not too much of a problem, but it turns out that both for random and for systematic generation there are simple alternatives that do considerably better. For this reason it does not seem useful, although certainly possible, to employ a special data structure that would allow performing the conversion from Lehmer code to permutation in O(n log n) time. #### Random generation of permutations Main article: Fisher–Yates shuffle For generating random permutations of a given sequence of n values, it makes no difference whether one means apply a randomly selected permutation of n to the sequence, or choose a random element from the set of distinct (multiset) permutations of the sequence. This is because, even though in case of repeated values there can be many distinct permutations of n that result in the same permuted sequence, the number of such permutations is the same for each possible result. 
Unlike for systematic generation, which becomes unfeasible for large n due to the growth of the number n!, there is no reason to assume that n will be small for random generation. The basic idea to generate a random permutation is to generate at random one of the n! sequences of integers d1,d2,...,dn satisfying 0 ≤ di < i (since d1 is always zero it may be omitted) and to convert it to a permutation through a bijective correspondence. For the latter correspondence one could interpret the (reverse) sequence as a Lehmer code, and this gives a generation method first published in 1938 by Ronald A. Fisher and Frank Yates.[11] While at the time computer implementation was not an issue, this method suffers from the difficulty sketched above to convert from Lehmer code to permutation efficiently. This can be remedied by using a different bijective correspondence: after using di to select an element among i remaining elements of the sequence (for decreasing values of i), rather than removing the element and compacting the sequence by shifting down further elements one place, one swaps the element with the final remaining element. Thus the elements remaining for selection form a consecutive range at each point in time, even though they may not occur in the same order as they did in the original sequence. The mapping from sequence of integers to permutations is somewhat complicated, but it can be seen to produce each permutation in exactly one way, by an immediate induction. When the selected element happens to be the final remaining element, the swap operation can be omitted. This does not occur sufficiently often to warrant testing for the condition, but the final element must be included among the candidates of the selection, to guarantee that all permutations can be generated. The resulting algorithm for generating a random permutation of a[0], a[1], ..., a[n − 1] can be described as follows in pseudocode: for i from n downto 2 do   di ← random element of { 0, ..., i − 1 } swap a[di] and a[i − 1] This can be combined with the initialization of the array a[i] = i as follows: for i from 0 to n−1 do   di+1 ← random element of { 0, ..., i } a[i] ← a[di+1] a[di+1] ← i If di+1 = i, the first assignment will copy an uninitialized value, but the second will overwrite it with the correct value i. #### Generation in lexicographic order There are many ways to systematically generate all permutations of a given sequence.[12] One classical algorithm, which is both simple and flexible, is based on finding the next permutation in lexicographic ordering, if it exists. It can handle repeated values, for which case it generates the distinct multiset permutations each once. Even for ordinary permutations it is significantly more efficient than generating values for the Lehmer code in lexicographic order (possibly using the factorial number system) and converting those to permutations. To use it, one starts by sorting the sequence in (weakly) increasing order (which gives its lexicographically minimal permutation), and then repeats advancing to the next permutation as long as one is found. The method goes back to Narayana Pandita in 14th century India, and has been frequently rediscovered ever since.[13] The following algorithm generates the next permutation lexicographically after a given permutation. It changes the given permutation in-place. 1. Find the largest index k such that a[k] < a[k + 1]. If no such index exists, the permutation is the last permutation. 2. Find the largest index l such that a[k] < a[l]. 
Since k + 1 is such an index, l is well defined and satisfies k < l. 3. Swap a[k] with a[l]. 4. Reverse the sequence from a[k + 1] up to and including the final element a[n]. After step 1, one knows that all of the elements strictly after position k form a weakly decreasing sequence, so no permutation of these elements will make it advance in lexicographic order; to advance one must increase a[k]. Step 2 finds the smallest value a[l] to replace a[k] by, and swapping them in step 3 leaves the sequence after position k in weakly decreasing order. Reversing this sequence in step 4 then produces its lexicographically minimal permutation, and the lexicographic successor of the initial state for the whole sequence. #### Generation with minimal changes Main article: Steinhaus–Johnson–Trotter algorithm An alternative to the above algorithm, the Steinhaus–Johnson–Trotter algorithm, generates an ordering on all the permutations of a given sequence with the property that any two consecutive permutations in its output differ by swapping two adjacent values. This ordering on the permutations was known to 17th-century English bell ringers, among whom it was known as "plain changes". One advantage of this method is that the small amount of change from one permutation to the next allows the method to be implemented in constant time per permutation. The same can also easily generate the subset of even permutations, again in constant time per permutation, by skipping every other output permutation.[13] ### Software implementations #### Calculator functions Many scientific calculators and computing software have a built-in function for calculating the number of k-permutations of n. • Casio and TI calculators: nPr • HP calculators: PERM[14] • Mathematica: FallingFactorial #### Spreadsheet functions Most spreadsheet software also provides a built-in function for calculating the number of k-permutations of n, called PERMUT in many popular spreadsheets. ### Applications Permutations are used in the interleaver component of the error detection and correction algorithms, such as turbo codes, for example 3GPP Long Term Evolution mobile telecommunication standard uses these ideas (see 3GPP technical specification 36.212 [15]). Such applications raise the question of fast generation of permutations satisfying certain desirable properties. One of the methods is based on the permutation polynomials. ## Notes 1. N. L. Biggs, The roots of combinatorics, Historia Math. 6 (1979) 109−136 2. Charalambides, Ch A. (2002). Enumerative Combinatorics. CRC Press. p. 42. ISBN 978-1-58488-290-9. 3. Wussing, Hans (2007). The Genesis of the Abstract Group Concept: A Contribution to the History of the Origin of Abstract Group Theory. Courier Dover Publications. p. 94. ISBN 9780486458687. "Cauchy used his permutation notation—in which the arrangements are written one below the other and both are enclosed in parentheses—for the first time in 1815" . 4. ^ a b D. E. Knuth, The Art of Computer Programming, Vol 3, Sorting and Searching, Addison-Wesley (1973), p. 12. This book mentions the Lehmer code (without using that name) as a variant C1,...,Cn of inversion tables in exercise 5.1.1−7 (p. 19), together with two other variants. 5. H. A. Rothe, Sammlung combinatorisch-analytischer Abhandlungen 2 (Leipzig, 1800), 263−305. Cited in, p. 14. 6. Fisher, R.A.; Yates, F. (1948) [1938]. Statistical tables for biological, agricultural and medical research (3rd ed.). London: Oliver & Boyd. pp. 26–27. OCLC 14222135. 7. Sedgewick, R (1977). 
"Permutation generation methods". Computing Surveys 9: 137–164. 8. ^ a b Knuth, D. E. (2005). "Generating All Tuples and Permutations". The Art of Computer Programming. 4, Fascicle 2. Addison-Wesley. pp. 1–26. ISBN 0-201-85393-0. ## References • Miklós Bóna. "Combinatorics of Permutations", Chapman Hall-CRC, 2004. ISBN 1-58488-434-7. • Donald Knuth. The Art of Computer Programming, Volume 4: Generating All Tuples and Permutations, Fascicle 2, first printing. Addison-Wesley, 2005. ISBN 0-201-85393-0. • Donald Knuth. The Art of Computer Programming, Volume 3: Sorting and Searching, Second Edition. Addison-Wesley, 1998. ISBN 0-201-89685-0. Section 5.1: Combinatorial Properties of Permutations, pp. 11–72. • Humphreys, J. F.. A course in group theory. Oxford University Press, 1996. ISBN 978-0-19-853459-4
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 20, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9066581130027771, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/39017/list
An expansion on Timothy Chow's example of Grandi's series $1 - 1 + 1 - 1 \pm ... = \frac{1}{2}$. It is possible to interpret the left hand side as computing the Euler characteristic of infinite real projective space $\mathbb{R}P^{\infty}$, which is a $K(\mathbb{Z}/2\mathbb{Z}, 1)$ and therefore rightfully has orbifold Euler characteristic $\frac{1}{2}$! I think I learned this example from somewhere on Wikipedia.
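For concreteness, here is a brief sketch of the standard bookkeeping behind that remark (assuming the usual CW structure on $\mathbb{R}P^{\infty}$ with exactly one cell in each dimension, and that $S^{\infty}$ is contractible, so $\chi(S^{\infty})=1$ and the $\mathbb{Z}/2\mathbb{Z}$ action on it is free):

$$\chi(\mathbb{R}P^{\infty}) \;=\; \sum_{k\ge 0}(-1)^k\,\#\{k\text{-cells}\} \;=\; 1 - 1 + 1 - 1 \pm \cdots, \qquad \chi_{\mathrm{orb}}(\mathbb{R}P^{\infty}) \;=\; \frac{\chi(S^{\infty})}{\lvert \mathbb{Z}/2\mathbb{Z}\rvert} \;=\; \frac{1}{2}.$$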
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9160142540931702, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/20365/what-are-the-odds-of-never-losing-in-a-loaded-coin-game
# What are the odds of never losing in a loaded coin game? Let's consider this simple dice game: A coin is faked so it has `p` chance to land on heads, and `1-p` chance to land on tails. Every round costs `$1`, and gives you `$2` if you win (for a total of `+$1`). Assume you're starting with `$n`. What are your odds to "go infinite" - be able to play the game forever? This sounds like Markov Chains 101, it's just been ages since I read anything about Markov Chains. Also - given any constant `m`, what are the odds of ever reaching `$m` in this game? - I can answer the second question: if $m < n$ then the probability is between $0$ and $1$, if $m \geq n$ then the probability is $1$. – Yuval Filmus Feb 4 '11 at 7:26 @Yuval: The probability is most certainly not 1. Imagine you start out with \$1 and lose the first round... – BlueRaja - Danny Pflughoeft Jun 24 '11 at 23:23 ## 1 Answer Define $f(n)$ as the probability of playing forever when starting out with n coins. Also, assume that the probability p of winning a round is bigger than $\frac{1}{2}$ (otherwise, the probability of playing forever is 0). Then, we get the recurrence relation $$f(n) = p f(n + 1) + (1-p) f(n-1)$$ with the boundary conditions $f(0) = 0$ and $\lim_{n \to \infty} f(n) = 1$. The general solution of the recurrence relation is $$f(n) = a + b \left( \frac{1-p}{p} \right)^n$$ and from the boundary conditions we get $a = -b = 1$. So, the probability to play infinitely is $$f(n) = 1 - \left( \frac{1-p}{p} \right)^n .$$ The second question can be answered just as the first one, but with different boundary conditions: $f(0) = 0$ and $f(m) = 1$. This leads to the probability of reaching m as $$\frac{1 - \left( \frac{1-p}{p} \right)^n}{1 - \left( \frac{1-p}{p} \right)^m} .$$ By the way, the keyword for this kind of problems is 'Random Walk'. -
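The closed forms above are easy to sanity-check by simulation. The short Python sketch below is purely illustrative (the values p = 0.6, n = 3 and m = 10 are arbitrary choices, not taken from the question); it compares an empirical estimate of the chance of reaching $m before ruin with $(1-r^n)/(1-r^m)$, where $r=(1-p)/p$.

```python
import random

def hit_m_before_ruin(n, m, p):
    """One play of the random walk: +1 with probability p, -1 otherwise, until 0 or m."""
    x = n
    while 0 < x < m:
        x += 1 if random.random() < p else -1
    return x == m

def estimate(n, m, p, trials=100_000):
    return sum(hit_m_before_ruin(n, m, p) for _ in range(trials)) / trials

p, n, m = 0.6, 3, 10
r = (1 - p) / p
closed_form = (1 - r**n) / (1 - r**m)      # about 0.716 for these values
print(estimate(n, m, p), closed_form)
```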
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9509855508804321, "perplexity_flag": "middle"}
http://mathhelpforum.com/differential-geometry/158720-geometric-construction.html
# Thread: 1. ## Geometric construction Suppose that $a,b,c \in \mathbb{R}$ satisfy: $0<a<b+c \ \ \ \ \ 0<b<a+c \ \ \ \ \ 0<c<a+b$ i). Give a geometric construction to prove there exists a triangle in the Euclidean plane $\mathbb{E}^2$ (that is $\mathbb{R}^2$ with the usual metric) with sides of length equal to $a,b,c$. I'm not really sure if I got this, but I chose a 5,12,13 triangle. It satisfies the inequalities, but i'm not sure if they want a more general answer. ii). What special thing happens in your construction if one of the inequalities becomes equality, say, $a=b+c$? With equality (I chose b=c), we get $0<a<2c$ and $0<c<a+c$. The second inequality seems a bit redundant since $a>0$. I don't really see the special thing that happens. My 5,12,13 triangle still works! 2. It seems to me that you have to prove that a triangle exists for arbitrary a, b, and c (subject to the constraints). Maybe you can consider a line segment of length a and two circles whose centers are the ends of the segment and whose radiii are b and c. Then show that the circles intersect when the inequalities hold.
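Following the suggestion in the reply, here is a sketch of the computation (the coordinates are chosen for convenience; they are not given in the thread). Place the segment of length $a$ from $(0,0)$ to $(a,0)$ and intersect the circles $x^2+y^2=b^2$ and $(x-a)^2+y^2=c^2$. Subtracting the two equations gives the $x$-coordinate of any intersection point, and the three inequalities are exactly what make the corresponding $y^2$ positive:

$$x=\frac{a^2+b^2-c^2}{2a},\qquad y^2=b^2-x^2=\frac{(a+b+c)(a+b-c)(c+a-b)(c-a+b)}{4a^2}>0.$$

If one inequality becomes an equality, say $a=b+c$, then the factor $c-a+b$ vanishes, so $y=0$: the circles are tangent and the "triangle" degenerates to a segment.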
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9433946013450623, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/18801/why-doesnt-an-electron-accelerate-in-a-circuit
# Why doesn't an electron accelerate in a circuit? Why don't electrons accelerate when a voltage is applied between two points in in a circuit? All the textbooks I've referred conveyed the meaning that when an electron traveled from negative potential to positive potential, the velocity of the electron is a constant. - You are asking why is there no force on an electron from a voltage difference, i.e. from the electrostatic potential between the two terminals of your circuit. Any resulting force is proportional to the gradient of the potential. – Chris Gerig Dec 27 '11 at 4:22 ## 5 Answers Electrons are accelerated by the constant applied electric field that comes from the external potential difference between two points, but are decelerated by the intense internal electric fields from the material atoms that makes up the circuit. This effect is modeled as resistance. - Yes, the electron is accelerated by the external electric field $E$, but at the same time it is "decelerated" with collisions with obstacles. These collisions are modelled as a "friction" force proportional to the electron velocity, something like this: $$m_e\frac{dv}{dt} = eE-r\cdot v$$ This equation has a quasi-stationary solution when the dragging force cannot exceed the resistance force: $$eE=r\cdot v$$ This gives a constant (average or drift) velocity. This picture is literally applicable to the gas discharge (current in a gas) where the electrons are particles accelerated between collisions with atoms. - And one can add that they are accelerated if the circuit element and the voltage difference is the one applied on a vacuum tube, the simplest particle accelerator. In the vacuum there is no resistance and statistical transfer of energy to other electrons. - You may consider the relation: $$i=nAev_d$$ $$=>v_d=\frac{i}{nAe}$$ $A$ : Cross -Sectional Area $n$ : Concentarion of electrons $v_d$ : Drift velocity $i$ : Current If a constant current flows through a conductor of varying cross section the drift velocity will change In fact we have the relation:$$j=\sigma E$$ If the cross section changes[current remaining constant ] the current density,$j$ will change. Consequently E will change if the conductivity $\sigma$ is constant[for a homogeneous material with constant values for n and $\sigma$]. - At the classical level the explanation provided in the previous answers is known as the Drude model. There is additional info on Wikipedia: http://en.wikipedia.org/wiki/Drude_model -
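To get a feel for the sizes involved in the relation $i=nAev_d$, here is a rough numerical illustration; the material constants are typical textbook values assumed for the example rather than quantities taken from the thread.

```python
# Steady-state ("drift") speed from i = n A e v_d for an everyday current in a copper wire.
e = 1.602e-19      # elementary charge, C
n = 8.5e28         # free-electron density of copper, m^-3 (assumed textbook value)
A = 1.0e-6         # wire cross-section, m^2 (1 mm^2, assumed)
i = 1.0            # current, A

v_drift = i / (n * A * e)
print(f"{v_drift:.2e} m/s")   # roughly 7e-5 m/s, a small fraction of a millimetre per second
```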
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9118624925613403, "perplexity_flag": "middle"}
http://mathematica.stackexchange.com/questions/tagged/diophantine-equations?sort=faq&pagesize=15
# Tagged Questions Questions on the use of Mathematica to find integer/rational solutions to equations. 2answers 323 views ### Solving/Reducing equations in $\mathbb{Z}/p\mathbb{Z}$ I was trying to find all the numbers $n$ for which $2^n=n\mod 10^k$ using Mathematica. My first try: Reduce[2^n == n, n, Modulus -> 100] However, I receive ... 3answers 530 views ### Finding the number of solutions to a diophantine equation I want to count total number of the natural solutions (different from 0) of the equation $2x + 3y + z = 100$, but don't know how. How can I calculate it using Mathematica? I tried: ... 2answers 227 views ### Is there a way to use functions like Prime[n] within Solve[]? I'm trying to see if a number can be written as the sum of two prime numbers. Ideally, I would like to use Solve[Prime[n] + Prime[m] == 100, {n, m}] But that ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9415910243988037, "perplexity_flag": "middle"}
http://mathematica.stackexchange.com/questions/tagged/probability?page=1&sort=votes&pagesize=30
Tagged Questions Questions about performing probabilistic calculations, especially those concerned with the Mathematica commands Probability, Expectation and the various functions related to distributions such as NormalDistribution, CDF, PDF etc. 1answer 707 views RandomVariate from 2-dimensional probability distribution A probability distribution can be created in Mathematica (I am using 8.0.1) with e.g. ... 6answers 968 views Efficient way to generate random points with a predefined lower bound on their pairwise Euclidean distance Using Mathematica what is an efficient way to generate a list of $n$ random two dimensional points $\{x_i,y_i\}$ where $i=1,...,n$ so that no two points $p_1$ and $p_2$ in the list has an Euclidean ... 1answer 288 views RandomVariate returns values outside the support of a PDF Let $X$ be a random variable with pdf: dist = ProbabilityDistribution[1/(Abs[x]*Log[Abs[x]]^2), {x, -E^-2, E^-2}] Here are some pseudo-random drawings from it: ... 3answers 585 views What is the average of rolling two dice and only taking the value of the higher dice roll? I am referencing to this question on mathunderflow: What is the average of rolling two dice and only taking the value of the higher dice roll? Now I tried: ... 3answers 300 views Simulate a simple spinner I'm teaching some simple ideas in probability to students in grade 7. The "spinner" below works, but I'm wondering how I could make it just a bit more realistic by having it actually "spin" around a ... 1answer 386 views How to compute the inverse CDF properly? I want to compute the CDF and inverse CDF of the hyperbolic distribution: α = 2; β = 3/2; x = -3; u = N[CDF[HyperbolicDistribution[α, β, 1, 0], x], 30] The ... 2answers 506 views Obtaining joint distributions and conditional distributions using Mathematica I have two multivariate Gaussian distributions $p(x)$ and $p(z)$ with mean vectors $m_x$ and $m_z$, and covariance matrices $\Sigma_x$ and $\Sigma_z$. my model is a simple linear model $x = W z+n$ ... 6answers 853 views 2D Gaussian distribution of squares coordinates I would like to imitate the structure of this great painting from Ellsworth Kelly in Mathematica. Yet with all the colored squares in Black and the beige one in white. Below is what I have wrote to ... 4answers 366 views How do I solve this probability problem with Probability? Consider a stick of length 1. Pick two points uniformly at random on the stick, and break the stick at those points. What is the probability that the three segments obtained in this way form a ... 4answers 292 views Extracting equations from Piecewise expressions Say I have a PDF: PDF[LogNormalDistribution[1.75, 0.65], x] Calculating it, Mathematica gives me an expression that looks like this: I want to extract the ... 2answers 815 views How to calculate the mode of a probability distribution I was wondering if there is any command to get the mode of the probability distributions. Calculating manually, the mode is (alpha-1)/(alpha+beta-2) While the mean, median or variance of the beta ... 1answer 126 views NExpectation behaves oddly with EmpiricalDistribution My question concerns the usage of NExpectation and Expectation and why I see the behavior I see in the following example. First ... 3answers 614 views How to generate a RandomVariate of a custom distribution? I'm trying to generate a pseudorandom variate out of a custom distribution. Suppose I want define a custom distribution, and for the sake of simplicity I define a Poisson distribution (the ... 
1answer 331 views Recommended book on random processes to understand new functionality in Mathematica 9? I am interested in exploring the new functionality on random processes available in Mathematica 9, but I am not familiar with all of the underlying mathematics. Could you recommend a book that ... 3answers 450 views Probability problem — Rube Goldberg solution? A user posted this question on StackOverflow which was closed as off topic: 3 people are playing a game with a standard 52 card deck. Each player is given 2 cards each, possible cards and their ... 2answers 337 views Expected Value - Mathematica I would appreciate if one could help me with the following Mathematica problem: Assume $X$ is an exponential random variable with unit mean ($f_X(x)=e^{-x}$, $x>0$). I want to calculate expected ... 1answer 302 views Probability and distribution from actual data Let's say I have some data from a real world system: ... 1answer 235 views Random variables with transformed discrete distributions cannot be applied to Probability[] function? I have two discrete uniform distributions over same support: Dx = DiscreteUniformDistribution[{-8, 8}]; Dy = DiscreteUniformDistribution[{-8, 8}]; I use ... 1answer 333 views How do I calculate the probability of reaching mean residual life I need to understand the Survival Analysis concept of "mean residual life (MRL)" and calculating probabilities for reaching it. From another discussion: The MRL at time t is the mean additional ... 2answers 1k views Function to compute the probability of exactly one event occurring out of N independent events Is there a built in function (or a function in one of the standard packages) that allows you to compute the probability of exactly one event occurring out of some known set of probabilities for N ... 1answer 352 views Mathematica Package for Bayesian Networks Are there any packages that allow the simulation of Bayesian Networks with Mathematica? I found what seemed to be a promising package (Dynamics) on a Brown University URL, ... 4answers 248 views Probability: proportion of 1000 random lists for x that contain the same nrs Give a function which calculates for each x a random list of birthdays (which can be x=1 to x=365) birthdays[x_] := RandomInteger[{1, 365}, x] is my answer. ... 2answers 130 views How to find the variance under the assumption that x follows some probability distribution I am aware that we can find the expectation under the assumption that x follows some probability distribution, something like this: ... 3answers 290 views Graphical Plots of PDF and CDF I have a joint density and distribution function that I want to plot in a meaningful way, (i.e., want to be able to see how the functions behaves as changes in x and y happen simultaneously. If ... 1answer 151 views Reproducing a graphic on Jensen's inequality for the probabilistic case I am trying to reproduce the following graph on Jensen's inequality for the probabilistic case (source: wikipedia) but with the exponential function as Y, the normal distribution on the X-axis and ... 1answer 116 views Combinations which do not have elements in common I can choose 2 letters from the four letters $\{A,B,C,D\}$ in 6 combinations using the combination formula $$\frac{n!}{ r! (n-r)!}$$ ... 1answer 224 views Different results for MaximumLikelihood depending on method? When I used this command: ... 
0answers 95 views Speeding up multilinear PRA branch-and-bound algorithm with worst-case exponential time scenario with respect to basic events The algorithm is a branch-and-bound algorithm that calculates dominances for PRA, probalistic risk assessment. The task was to find faster ways to do it in numerical software such as Matlab but we ... 2answers 153 views Empirical Cumulative Distribution Function I have a data set of the form $d=\{(y_1,x_1),(y_2,x_2)...(y_n,x_n)\}$ for a large $n$. A non-parametric plot of this data (a scatter plot where all observations are sorted along $x$ and joined with a ... 1answer 118 views Expectation of CauchyDistribution I've noticed this strange behavior and I'm wondering if it's a bug. I define a Cauchy distribution: c = CauchyDistribution[0, 1]; If I evaluate ... 0answers 83 views shortest paths in probabilistically weighted edges? [duplicate] I understand Mathematica can find shortest paths. However, is it also able to find the most probable paths? I mean this is a case where edges are given probabilistic wights (from 0 to 1) and the most ... 0answers 85 views ProbabilityDistribution PDF on boundary points I would like to evaluate the PDF of a custom ProbabilityDistribution at the min and max values. However, PDF[...] returns 0 at ... 4answers 120 views Descriptive statistics of two events I am trying to use the descriptive statistics feature of Mathematica to answer the following question: suppose I have two events, A and B, whose occurrence is described by a normal distribution around ... 3answers 411 views Some questions about random numbers How can I get non-repeating random numbers from Mathematica 8? How can I know which distribution the numbers I get are? Can I choose the distribution I want together with the non-repeating random ... 1answer 123 views My Data has negative and positive numbers, can I still use it to get the PDF? I have data that reflects the arrival time of a flight. {3, -2, 16, 13, -6, 5, 4, -7, 7, 0……..n} this is for 8:45am and has 150 elements "arrivals" The positive values represent the time in minutes ... 2answers 295 views Total Variation Distance of probability matrix How can I calculate the Total Variation Distance of a transition Matrix? is there any built in function? I've searched all documentation and haven;t found anything. ** More information: Let me try ... 1answer 143 views Calculate variance of random walk? How can I symbolically calculate the variance of the following random walk in Mathematica? Given several discrete random variables such that $p(Z_i=1-2k)=p$, where $k$ is a small real number, and ... 1answer 98 views Safe values of $\mu$ and $\sigma$ when randomly sampling from a Log-Normal Distribution? I believe I'm obtaining overflow errors when randomly sampling from a log-normal distribution with the command: RandomVariate[LogNormalDistribution[μ, σ], 1] ... 1answer 99 views PDF on TransformedDistribution of two BinomialDistribution too slow I'd been doing my own convolutions of distributions for some calculations, decided to use built-ins. With ... 0answers 53 views integral of inverse distribution function Can Mathematica solve the following problem? The parameter of interests is $\alpha$. Let $G(y)$ be the distribution function of $y$. (e.g. $G$ is a lognormal distribution function with given ... 1answer 155 views Trying to plot this probability Can anyone help me plot this? log P(X >= x) = alpha logx x=0.001 + k(0.001) k= 0, ..., 100 I can't figure out the coding for this.. 
I've been trying this for a while, and can't seem to figure it ... 1answer 218 views Solving derivative of cumulative normal distribution log likelihood I am a newbie and I'm trying to use Mathematica to obtain the symbolic Maximum Likelihood Estimation for a cumulative normal distribution. So far I have reached the step where I have the derivative of ... 1answer 218 views How can an InverseQuantile or an interval valued Quantile function be implemented in Mathematica? Quantile[] is the workhorse method in robust data analysis and statistics (see, eg Koenker's Quantile Regression). However, it should be complemented by an ... 0answers 22 views Probability of a deck of cards of 8 blue and 5 white [migrated] A deck of cards consists of 8 blue cards and 5 white cards. A simple random sample (random draws without replacement) of 6 cards is selected. What is the chance that one of the colors appears twice as ... 2answers 174 views Calculate one-tailed critical point from a probability distribution I would like to calculate the numerical value of $\xi\in\mathbb{R}_{\geq0}$$$P(|U|<\xi)=0.95$$ where$U\$ is standard-normal distributed. How may I do that in Mathematica?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 29, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9004145860671997, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/41473?sort=newest
## Bounding the roots of the sum of two polynomials

Suppose I have two polynomials with real coefficients. Suppose I can perform any sort of preprocessing on them I want. I want to be able to pre-emptively say that the sum of the polynomials doesn't have any roots inside a given interval without doing any explicit calculations on the sum itself. False positives (that is, saying there aren't any roots when there are some) would be deal-breaking, but false negatives (reporting there might be roots when there aren't) would be acceptable. Or to put it more explicitly: All functions $p_x(t)$ have a form like: $p_x(t) = a_{n,x} * t^n + a_{n-1, x} * t^{n-1} + ... + a_{1,x} * t + a_{0,x}$ We can define $p_3(t) = p_1(t) + p_2(t)$ I want to determine if $p_3(t)$ might have any roots inside a given interval $[t_{min}, t_{max}]$. But I want to do it only using properties of $p_1(t)$ and $p_2(t)$, their roots, etc. and not anything that would need me to calculate anything for $p_3(t)$, its roots, etc. Any ideas on how to approach the problem?

EDIT: So some motivation of what I'm doing: I have a large set of polynomials that are related to the path of a point through space over time. I want to find polynomials that intersect sometime in the "near" future, but I don't want to have to do all $\frac{n*(n-1)}{2}$ polynomial-to-polynomial evaluations. So I'm trying to build a "broad phase" that only offers up pairs of polynomials to be solved in a "narrow phase" (ie: actual root finding) if they're "pretty close" to colliding. Whatever the algorithm for the broad phase is, it can't involve iterating over all the polynomial pairs or it defeats the point. One sort of square-peg-round-hole solution would be to use something like bounding boxes around the polynomials and use a spatial partitioning tree to find where boxes overlap, and then do the root finding on those. But it doesn't handle cases very well where the time interval of interest is quite large, or especially if one of the interval ends is infinity or negative infinity. So I wanted to explore it from another direction and see if I can come up with something that works better.

- Just to echo the comments in the answers of Thierry Zell and drvitek: this seems at first glance like an odd question, and so some hint of the motivation might help people to see where you are coming from – Yemon Choi Oct 8 2010 at 2:11
- Yes, your problem is a tad too general. Isn't there anything "special" about your polynomials? – J. M. Oct 8 2010 at 2:25
- A somewhat related question mathoverflow.net/questions/30072/… – jc Oct 8 2010 at 2:41

## 6 Answers

drvitek's solution actually answers your question, but the computations might not be as simple as you hope. All you have to do is find all the roots $x_1,...,x_n$ of the derivative $p'_1$ and the roots $y_1,.., y_m$ of $p_2'$. If the set $\{ p_1(x_i)+p_2(y_j) \}$ takes both positive and negative values there could be roots (could be false positive). If the set $\{ p_1(x_i)+p_2(y_j) \}$ takes only positive or only negative values there cannot be any root. But the problem is too vague, you need to provide more details. By the way the following "solution" actually satisfies all your requirements (but it is definitely not what you are looking for):

> Solution: No matter what $p_1, p_2$ are, always report "there might be roots".
This is either true, or a false negative.

- Your blockquote solution would be the naive way, yeah. Anything to improve on that would be good. I'll play with the non-blockquote solution you gave and see if it works for what I have in mind. – Jay Lemmon Oct 8 2010 at 17:18

The situation doesn't seem so perplexing to me. Suppose we have N polynomials $p_1, \dots, p_N$ and we want to find pairs of polynomials $p_i + p_j$ which have roots in the interval. We don't want to test all $O(N^2)$ pairs of polynomials; instead we want a simple criterion which can reject most of these pairs, hopefully in $< N^2$ work. Even if the criterion requires testing all $N^2$ pairs of polynomials, we may at least be able to do something simpler than finding roots. If the test admits false positives, that is OK, because any pairs $p_i + p_j$ which pass the first test may be tested individually.

- Yep, you guessed the motivation. Cookie for you ;) – Jay Lemmon Oct 8 2010 at 17:02

One of the simplest tests to use is to compare the sign (using preprocessed values) of the quantity $p_1(t_{min}) + p_2(t_{min})$ with the sign of the quantity $p_1(t_{max}) + p_2(t_{max})$. If the signs are different, you have a root of $p_3$ in the interval. Variations on this involve essentially lower order approximations to $p_1$ and $p_2$ and essentially yield only true positives, not true negatives. The suggestions of other posters most likely give better guarantees against false negatives. The restrictions as well as the notion of preprocessing on $p_1$ and $p_2$ suggest to me one of the following:

1. This is an interesting homework problem, and you want us to do the hard thinking for you;
2. You are given $p_1$ and $p_2$ to such precision that roundoff error is a significant factor, and thus adding coefficients will introduce too much inaccuracy for you to get a good result;
3. You are going to do this repeatedly, and thus adding the polynomials will be more expensive than encoding them and working with the encodings;
4. You are actually trying to solve a much harder problem, and you think that solving this one will be useful in tackling the harder problem.

There are other scenarios that I could suggest. If you are serious about receiving help on this problem, you should say why you cannot compute the sum and use that to determine the roots, while being allowed presumably unbounded preprocessing on the summands. While we wait for more information, a suggestion offered half in jest. Compute $\exp(p_1)$ and $\exp(p_2)$ and multiply them, then take the log of the result. Or is that what you are actually trying to do, invert some transform?

A sufficient condition for a polynomial $P(t)$ of degree $n$ to have no roots in $[c-a,c+a],$ is that $Q(t):=t^n P(1/t-c)$ has all roots with $|t|<1/a,$ which is ensured by a condition on the coefficients of $Q$ (like here). Of course, if $P=p_1+p_2$ one can easily write the condition in terms of the coefficients of $p_1$ and $p_2.$ However, I share the very same feelings of slight perplexity as the other people who already answered.

- This looks interesting, but I don't follow well enough. Using something like Cauchy bounds you always end up with an upper bound on the absolute value of possible roots $|t|$ that is greater than 1. So it can't ever be $<1/a$. And I definitely don't see where you're building $Q(t)$ from.
– Jay Lemmon Oct 8 2010 at 17:40 You could test if $\min{|p_1(t)|} > \max{|p_2(t)|}$ (or the reverse) over the interval $[t_{min},t_{max}]$, but this looks to be harder than computing $p_3$. This is an odd question... - That's a strange question. The standard procedure for counting real roots in an interval is the Sturm Sequence, which can be performed on $p_1$ and $p_2$ and it will be exact; but it involves derivatives, so if computing $p_3$ is a deal-breaker, I'm not sure if derivatives would be acceptable for you. It would really help if you explained your motivations, because I feel I'm just guessing here. -
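A minimal sketch of the "broad phase" idea discussed in the answers, under the assumptions that the interval is finite, the polynomials are non-constant, and each polynomial's minimum and maximum over the interval are precomputed once per polynomial (the function names and the use of NumPy are illustrative choices, not from the thread). The test can only return false negatives ("might have roots"), never a false positive, because $\min p_1 + \min p_2 \le p_1+p_2 \le \max p_1 + \max p_2$ on the interval.

```python
import numpy as np

def range_on_interval(p, t_min, t_max):
    """Min/max of polynomial p (coefficients, highest degree first) on [t_min, t_max]:
    check the endpoints and the real critical points inside the interval."""
    crit = np.roots(np.polyder(p))
    pts = [t_min, t_max] + [c.real for c in crit
                            if abs(c.imag) < 1e-12 and t_min <= c.real <= t_max]
    vals = np.polyval(p, pts)
    return vals.min(), vals.max()

def may_have_root(range1, range2):
    """Broad-phase test for p1 + p2 using only the precomputed ranges.
    Returns False only when a root is impossible; True means 'might have roots'."""
    lo = range1[0] + range2[0]
    hi = range1[1] + range2[1]
    return not (lo > 0 or hi < 0)

p1 = np.array([1.0, 0.0, -2.0])   # t^2 - 2
p2 = np.array([0.0, 1.0, 4.0])    # t + 4
r1 = range_on_interval(p1, -1.0, 1.0)
r2 = range_on_interval(p2, -1.0, 1.0)
print(may_have_root(r1, r2))      # False: p1 + p2 = t^2 + t + 2 cannot vanish on [-1, 1]
```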
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 40, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9448102712631226, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/253609/a-question-about-lefschetz-fixed-point-theorem
# A question about Lefschetz fixed-point theorem. I know that the Lefschetz number of the identity map is the Euler character. A map $f$ which is isotopic to $id$ will have the same number. And then we use the theorem to find fixed points. My question is whether we can request the orbits of these fixed points under the isotopy to be trivial in $\pi_1(M)$. Denote the isotopy as $f_t$ and let $x$ be a fixed point of $f$, then the orbit of $x$ under this isotopy is the loop $f_t(x),t\in [0,1]$. Thanks to Qiaochu to point this out. - I don't understand what you mean by an orbit of a point under an isotopy. – Qiaochu Yuan Dec 8 '12 at 10:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9258342981338501, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/tagged/reversibility
Tagged Questions The reversibility tag has no wiki summary. 0answers 93 views How can dynamics be reversible if inflation-style baby universe spawning is allowed? I just finished readying Sean Carroll's book, "From Eternity to Here", and have a question about reversibility and inflation: Assume inflation allows random quantum fluctuations to produce high ... 1answer 49 views Reversible gates Is it possible to make any gate reversible merely by retaining the input bits in the output and introducing ancilla bits as necessary? That is, given an irreversible gate with $k$ inputs and $l$ ... 2answers 89 views What is evidence for an irreversible change? Knowing some about thermodynamics and reactions, I do understand how it can be shown that a change is reversible. But irreversible? Why can't it be that a change that was deemed irreversible thousands ... 2answers 119 views What is the nature of the correspondence between unitary operators and reversible change? Why does the formalism of QM represent reversible changes (eg the time evolution operator, quantum gates, etc) with unitary operators? To put it another way, can it be shown that unitary ... 3answers 322 views Thermodynamically reversed black holes, firewalls, Casimir effect, null energy condition violations Scott Aaronson asked a very deep question at Hawking radiation and reversibility about what happens if black hole evolution is reversed thermodynamically. Most of the commenters missed his point ... 2answers 924 views Why do reversible processes not increase the entropy of the universe infinitesimally? The book Commonly Asked Questions in Thermodynamics states: When we refer to the passage of the system through a sequence of internal equilibrium states without the establishment of equilibrium ... 3answers 327 views Hawking radiation and reversibility It's often said that, as long as the information that fell into a black hole comes out eventually in the Hawking radiation (by whatever means), pure states remain pure rather than evolving into mixed ... 2answers 56 views Heuristics for definitions of open and closed quantum dynamics I've been reading some of the literature on "open quantum systems" and it looks like the following physical interpretations are made: Reversible dynamics of a closed quantum system are represented ... 2answers 124 views Is there any scientific evidence that demonstrates why time passes? Is there any scientific evidence that demonstrates why time passes? Or is it just an opened question? 4answers 799 views How slow is a reversible adiabatic expansion of an ideal gas? A truly reversible thermodynamic process needs to be infinitesimally displaced from equilibrium at all times and therefore takes infinite time to complete. However, if I execute the process slowly, I ... 4answers 332 views Irreversible expansion and time reversal symmetry Suppose there are N non-interacting classical particles in a box, so their state can be described by the $\{\mathbf{x}_i(t), \mathbf{p}_i(t) \}$. If the particles are initially at the left of the box, ... 3answers 336 views Intuitively, why is a reversible process one in which the system is always at equilibrium? A process is reversible if and only if it's always at equilibrium during the process. Why? I have heard several specific example of this, such as adding weight gradually to a piston to compress the ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9246282577514648, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/tagged/normal-distribution+pi
# Tagged Questions

8 answers, 2k views

### What do $\pi$ and $e$ stand for in the normal distribution formula?

I'm very much a beginner in mathematics and there is one thing I've been wondering about recently. The formula for the normal distribution is: ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.920565664768219, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-algebra/158798-automorphism-subgroup-abelian-group.html
# Thread: Automorphism of a subgroup of an abelian group

1. Let $A \cong C_n^{m}$ be the abelian group given by the direct product of $m$ copies of $C_n$, for $m \in \mathbb{N} \cup \{\infty\}$, and let $H \leq A$. My question is this: if $\phi$ is an automorphism of $H$, does $\phi$ extend to an automorphism of $A$? If not in general, what if $n$ is prime?
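One way to get a feel for the question is to test small cases exhaustively. Below is a rough brute-force sketch in Python covering only the smallest nontrivial homocyclic case, $A = C_4 \times C_4$; it is a finite experiment, not a proof, it says nothing about prime $n$ in general or about $m = \infty$, and the helper names (`span`, `automorphisms_of`, ...) are ad hoc. It lists the subgroups $H \leq A$, the automorphisms of each $H$, and counts how many of them arise as restrictions of automorphisms of $A$.

```python
from itertools import product

# Brute-force check of the extension property for A = C_4 x C_4 only.
# Elements of A are pairs of integers mod N; every subgroup here is 2-generated.
N = 4
A = [(a, b) for a in range(N) for b in range(N)]

def add(x, y):
    return ((x[0] + y[0]) % N, (x[1] + y[1]) % N)

def scale(k, x):
    return ((k * x[0]) % N, (k * x[1]) % N)

def span(g1, g2):
    return frozenset(add(scale(i, g1), scale(j, g2)) for i in range(N) for j in range(N))

# Automorphisms of A: an endomorphism is determined by the images x, y of (1,0), (0,1),
# and it is an automorphism exactly when it is a bijection.
auts_A = []
for x, y in product(A, repeat=2):
    f = {(a, b): add(scale(a, x), scale(b, y)) for a, b in A}
    if len(set(f.values())) == len(A):
        auts_A.append(f)
# |Aut(C_4 x C_4)| = |GL_2(Z/4)| = 96, so len(auts_A) should be 96.

# One generating pair per subgroup.
gens = {}
for g1, g2 in product(A, repeat=2):
    gens.setdefault(span(g1, g2), (g1, g2))

def automorphisms_of(H, g1, g2):
    """Automorphisms of H = <g1, g2>, found by trying all images (x, y) of the generators."""
    found = []
    for x, y in product(H, repeat=2):
        phi, ok = {}, True
        for i, j in product(range(N), repeat=2):
            src = add(scale(i, g1), scale(j, g2))
            img = add(scale(i, x), scale(j, y))
            if phi.setdefault(src, img) != img:   # the chosen images don't respect the relations
                ok = False
                break
        if ok and set(phi.values()) == set(H):    # surjective on a finite set => bijective
            found.append(phi)
    return found

for H, (g1, g2) in sorted(gens.items(), key=lambda kv: len(kv[0])):
    auts_H = automorphisms_of(H, g1, g2)
    extendable = sum(1 for phi in auts_H
                     if any(all(F[h] == phi[h] for h in H) for F in auts_A))
    print(f"|H| = {len(H):2d}: {extendable}/{len(auts_H)} automorphisms of H extend to A")
```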
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8083115220069885, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2008/01/17/tychonoffs-theorem-i/?like=1&source=post_flair&_wpnonce=78d053c4d8
# The Unapologetic Mathematician ## Tychonoff’s Theorem One of the biggest results in point-set topology is Tychonoff’s Theorem: the fact that the product of any family $\{X_i\}_{i\in\mathcal{I}}$ of compact spaces is again compact. Unsurprisingly, the really tough bit comes in when we look at an infinite product. Our approach will use the dual definition of compactness. Let’s say that a collection $\mathcal{F}$ of closed sets has the finite intersection hypothesis if all finite intersections of members of the collection are nonempty, so compactness says that any collection satisfying the finite intersection hypothesis has nonempty intersection. We can then form the collection $\Omega=\{\mathcal{F}\}$ of all collections of sets satisfying the finite intersection hypothesis. This can be partially ordered by containment — $\mathcal{F}'\leq\mathcal{F}$ if every set in $\mathcal{F}'$ is also in $\mathcal{F}$. Given any particular collection $\mathcal{F}$ we can find a maximal collection containing it by finding the longest increasing chain in $\Omega$ starting at $\mathcal{F}$. Then we simply take the union of all these collections to find the collection at its top. This is almost exactly the same thing as we did back when we showed that every vector space is a free module! And just like then, we need Zorn’s lemma to tell us that we can manage the trick in general, but if we look closely at how we’re going to use it we’ll see that we can get away without Zorn’s lemma for finite products. Anyhow, this maximal collection $\mathcal{F}$ has two nice properties: it contains all of its own finite intersections, and it contains any set which intersects each set in $\mathcal{F}$. These are both true because if $\mathcal{F}$ didn’t contain one of these sets we could throw it in, make $\mathcal{F}$ strictly larger, and still satisfy the finite intersection hypothesis. Now let’s assume that $\mathcal{F}$ is a collection of closed subsets of $\prod\limits_{i\in\mathcal{I}}X_i$ satisfying the finite intersection hypothesis. We can then get a maximal collection $\mathcal{G}$ containing $\mathcal{F}$. Then given an index $i\in\mathcal{I}$ we can consider the collection $\{\overline{\pi_i(G)}\}_{G\in\mathcal{G}}$ of closed subsets of $X_i$ and see that it, too, satisfies the finite intersection hypothesis. Thus by compactness of $X_i$ the intersection of this collection is nonempty. Letting $U_i$ be a closed set containing one of these intersection points $x_i$, we see that the preimage $\pi_i^{-1}(U_i)$ meets every $G\in\mathcal{G}$, and so must itself be in $\mathcal{G}$. Okay, so let’s take the point $x_i$ for each index and consider the point $p$ in $\prod\limits_{i\in\mathcal{I}}X_i$ with $i$-th coordinate $x_i$. Then pick some set $V=\prod\limits_{i\in\mathcal{I}}V_i$ containing $p$ from the base for the product topology. For all but a finite number of the $i$, $V_i=X_i$. For those finite number where it’s smaller, the closure of $V_i$ contains the point $x_i\in X_i$, and so $\pi_i^{-1}(V_i)$ is in $\mathcal{G}$. So their finite intersection must be nonempty, and so is $V$ itself! Now, since $V$ is in $\mathcal{G}$, it must intersect each of the closed sets in the original collection $\mathcal{F}$. Since the only constraint on $V$ is that it contain $p$, this point must be a limit point of each of the sets in $\mathcal{F}$. And because they’re closed, they must contain all of their limit points. Thus the intersection of all the sets in $\mathcal{F}$ is nonempty, and the product space is compact! 
Posted by John Armstrong | Point-Set Topology, Topology

## Comments

1. The two last paragraphs are not much clear to me, in particular V that seems to be defined as a random open set from the base… (March 28, 2008)

2. Oh.. $V$ should be picked to contain the point $p$ (March 29, 2008)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 49, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9339112639427185, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/19167/a-probability-question-about-throwing-an-irregular-fair-die?answertab=oldest
# a probability question about throwing an irregular fair die A and B throw a fair 6-sided die in turn. The die is not regular because the faces are {1,2,2,4,4,6}. The winner is whoever first gets a cumulated sum equal or greater than 10. Ask for the probability that A wins, and the expected number of throws when the game generates a winner. - I recommend using python (or similar) to enumerate all possibilities and then you can calculate anything you like. – Yuval Filmus Jan 27 '11 at 3:15 @Yuval, can you give an example? Also, analytically, are there any convenient method please? thanks. – Qiang Li Jan 27 '11 at 4:57 ## 1 Answer Define the polynomial $P(x)$ by $$P(x) = \frac{x + 2x^2 + 2x^4 + x^6}{6}.$$ This polynomial represents one throw of the die. The probability to get a sum of $s$ after $k$ throws is the coefficient of $x^s$ in $P(x)^k$. Let's denote that by $[P(x)^k]_{x^s}$. The probability that the sum reaches $10$ for the first time at time $t$ is $$w_t = \sum_{s \geq 10} [P(x)^t]_{x^s} - \sum_{s \geq 10} [P(x)^{t-1}]_{x^s};$$ in other words, $$w_t = \sum_{s < 10} [P(x)^{t-1}]_{x^s} - \sum_{s < 10} [P(x)^t]_{x^s}.$$ The probability that the sum doesn't reach $10$ at time $t$ is $$l_t = \sum_{s < 10} [P(x)^t]_{x^s}.$$ Notice that $w_t = l_{t-1} - l_t$. The probability that B wins is $$\sum_{t \geq 1} l_t w_t.$$ The probability that A wins is, similarly, $$\sum_{t \geq 1} w_t l_{t-1}.$$ The expected number of throws is thus $$\sum_{t \geq 1} 2t l_t w_t + (2t-1) w_t l_{t-1}.$$ Since $l_t = 0$ for $t \geq 10$, all these sums are finite. You can compute everything with a CAS. Here are some results from SAGE calculations: • A wins w.p. $64601710707175/101559956668416 \approx 0.636$. • B wins w.p. $36958245961241/101559956668416 \approx 0.364$. • The expected number of throws is $550136643228931/101559956668416 \approx 5.42$. - thank you. What is CAS and SAGE? – Qiang Li Jan 27 '11 at 16:56 – Yuval Filmus Jan 27 '11 at 17:44 thank you! – Qiang Li Jan 27 '11 at 23:19
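For readers without a CAS, the same numbers can be reproduced with exact rational arithmetic in a few lines. The sketch below follows the answer's recipe directly: the lists `l` and `w` hold the $l_t$ and $w_t$ above, with the die faces and the target hard-coded; the answer reports $P(A \text{ wins}) \approx 0.636$ and about $5.42$ expected throws.

```python
from fractions import Fraction
from collections import defaultdict

FACES = [1, 2, 2, 4, 4, 6]          # each face comes up with probability 1/6
TARGET = 10

def step(dist):
    """One throw: convolve the distribution of the running sum with one die."""
    new = defaultdict(Fraction)
    for s, p in dist.items():
        for f in FACES:
            new[s + f] += p / 6
    return new

# l[t] = P(sum after t throws is still < TARGET); w_t = l[t-1] - l[t]
dist = {0: Fraction(1)}
l = [Fraction(1)]
while sum(p for s, p in dist.items() if s < TARGET) > 0:
    dist = step(dist)
    l.append(sum(p for s, p in dist.items() if s < TARGET))
w = [l[t - 1] - l[t] for t in range(1, len(l))]          # w[t-1] corresponds to w_t

p_A = sum(w[t - 1] * l[t - 1] for t in range(1, len(l)))   # A wins on A's t-th throw
p_B = sum(l[t] * w[t - 1] for t in range(1, len(l)))       # B wins on B's t-th throw
E_throws = sum((2 * t - 1) * w[t - 1] * l[t - 1] + 2 * t * l[t] * w[t - 1]
               for t in range(1, len(l)))

print("P(A wins) =", p_A, "≈", float(p_A))      # the answer above gives ≈ 0.636
print("P(B wins) =", p_B, "≈", float(p_B))      # ≈ 0.364
print("E[throws] =", E_throws, "≈", float(E_throws))   # ≈ 5.42
```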
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 7, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9171710014343262, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2007/03/20/lie-groups-lie-algebras-and-representations/?like=1&source=post_flair&_wpnonce=1ba466d084
# The Unapologetic Mathematician ## Lie groups, Lie algebras, and representations I’m eventually going to get comments from Adams, Vogan, and Zuckerman about Kazhdan-Lusztig-Vogan polynomials, but I want to give a brief overview of some of the things I know will be involved. Most of this stuff I’ll cover more thoroughly at some later point. One side note: I’m not going to link to the popular press articles. I’m glad they’re there, but they’re awful as far as the math goes. Science writers have this delusion that people are just incapable of understanding mathematics, and completely dumb things down to the point of breaking them. Either that or they don’t bother to put in even the minimum time to get a sensible handle on the concepts like they do for physics. Okay, so a Lie group is a group of continuous transformations, like rotations of an object in space. The important thing is that the underlying set has the structure of a manifold, which is a space that “locally looks like” regular $n$-dimensional space. The surface of the Earth is curved into a sphere, as we know, but close up it looks flat. That’s what being a manifold is all about. The group structure — the composition and the inversion — have to behave “smoothly” to preserve the manifold structure. One important thing you can do with a Lie group is find subgroups with nice structures. Some of the nicest are the one-dimensional subgroups passing through the identity element. Since close up the group looks like $n$-dimensional space, let’s get really close and stand on the identity. Now we can pick a direction and start walking that way, never turning. As we go, we trace out a path in the group. Let’s say that after $t$ minutes of elapsed time we’re at $g(t)$. If we’ve done things right, we have the extremely nice property that $g(t_1)g(t_2)=g(t_1+t_2)$. That is, we can multiply group elements along our path by adding the time parameters. We call this sort of thing a “1-parameter subgroup”, and there’s one of them for each choice of direction and speed we leave the origin with. So what if we start combining these subgroups? Let’s pick two and call them $g_1(t)$ and $g_2(t)$. In general they won’t commute with each other. To see this, get a ball and put it on the table. Mark the point at the top of the ball so you can keep track of it. Now, roll the ball to the right by 90°, then away from you by 90°, then to your left by 90°, then towards you by 90°. The point isn’t back where it started, it’s pointing right at you! Try it again but make each turn 45°. Again, the point isn’t back at the top of the ball. If you do this for all different angles, you’ll trace out a curve of rotations, which is another 1-parameter subgroup! We can measure how much two subgroups fail to commute by getting a third subgroup out of them. And since 1-parameter subgroups correspond to vectors (directions and speeds) at the identity of the group, we can just calculate on those vectors. The set of vectors equipped with this structure is called a Lie algebra. Given two vectors $\mathbf{v}$ and $\mathbf{w}$ we write the resulting vector as $\left[\mathbf{v},\mathbf{w}\right]$. This satisfies a few properties. 
• $\left[c\mathbf{v}_1+\mathbf{v}_2,\mathbf{w}\right]=c\left[\mathbf{v}_1,\mathbf{w}\right]+\left[\mathbf{v}_2,\mathbf{w}\right]$
• $\left[\mathbf{v},\mathbf{w}\right]=-\left[\mathbf{w},\mathbf{v}\right]$
• $\left[\mathbf{u},\left[\mathbf{v},\mathbf{w}\right]\right]=\left[\left[\mathbf{u},\mathbf{v}\right],\mathbf{w}\right]+\left[\mathbf{v},\left[\mathbf{u},\mathbf{w}\right]\right]$

Lie algebras are what we really want to understand. So now I'm going to skip a bunch and just say that we can put Lie algebras together like we make direct products of groups, only now we call them direct sums. In fact, for many purposes all Lie algebras can be broken into a finite direct sum of a bunch of "simple" Lie algebras that can't be broken up any more. Think about breaking a number into its prime factors. If we understand all the simple Lie algebras, then (in theory) we understand all the "semisimple" Lie algebras, which are sums of simple ones.

And amazingly, we do know all the simple Lie algebras! I'm not remotely going to go into this now, but at a cursory glance the Wikipedia article on root systems seems to be not completely insane. The upshot is that we've got four infinite families of Lie algebras and five weird ones. Of the five weird ones, $E_8$ is the biggest. Its root system (see the Wikipedia article) consists of 240 vectors living in an eight-dimensional space. This is the thing that, projected onto a plane, you've probably seen in all of the popular press articles looking like a big lace circle.

So we already understood $E_8$? What's the big deal now? Well, it's one thing to know it's there, and another thing entirely to know how to work with such a thing. What we'd really like to know is how $E_8$ can act on other mathematical structures. In particular, we'd like to know how it can act as linear transformations on a vector space. Any vector space $V$ comes equipped with a Lie algebra $\mathfrak{end}(V)$: take the vector space of all linear transformations from $V$ to itself and make a bracket by $\left[S,T\right]=ST-TS$ (verify for yourself that this satisfies the requirements of being a Lie algebra). So what we're interested in is functions from $E_8$ to $\mathfrak{end}(V)$ that preserve all the Lie algebra structure.

And this, as I understand it, is where the Kazhdan-Lusztig-Vogan polynomials come in. Root systems and Dynkin diagrams are the essential tools for classifying Lie algebras. These polynomials are essential for understanding the structures of representations of Lie algebras. I'm not ready to go into how they do that, and when I am it will probably be at a somewhat higher level than I usually use here, but hopefully lower than the technical accounts available in Adams' paper and Baez' explanation.

Posted by John Armstrong | Atlas of Lie Groups
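To make the $\mathfrak{end}(V)$ example above concrete: for matrices the bracket is just the commutator, and the three bulleted properties can be spot-checked numerically. Here is a small numpy sketch along those lines (random matrices, nothing specific to $E_8$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
u, v, w = (rng.standard_normal((n, n)) for _ in range(3))
c = 2.7

def bracket(a, b):
    """The Lie bracket on end(V): [a, b] = ab - ba."""
    return a @ b - b @ a

# Bilinearity in the first slot.
assert np.allclose(bracket(c * u + v, w), c * bracket(u, w) + bracket(v, w))
# Antisymmetry.
assert np.allclose(bracket(v, w), -bracket(w, v))
# The Jacobi identity, in the "derivation" form used in the bullets above.
assert np.allclose(bracket(u, bracket(v, w)),
                   bracket(bracket(u, v), w) + bracket(v, bracket(u, w)))
print("the commutator bracket satisfies all three axioms (numerically)")
```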
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 22, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.918333113193512, "perplexity_flag": "head"}
http://www.nag.com/numeric/cl/nagdoc_cl23/html/G03/g03cac.html
NAG Library Function Document
nag_mv_factor (g03cac)

1  Purpose

nag_mv_factor (g03cac) computes the maximum likelihood estimates of the arguments of a factor analysis model. Either the data matrix or a correlation/covariance matrix may be input. Factor loadings, communalities and residual correlations are returned.

2  Specification

#include <nag.h>
#include <nagg03.h>

void nag_mv_factor (Nag_FacMat matrix, Integer n, Integer m, const double x[], Integer tdx, Integer nvar, const Integer isx[], Integer nfac, const double wt[], double e[], double stat[], double com[], double psi[], double res[], double fl[], Integer tdfl, Nag_E04_Opt *options, double eps, NagError *fail)

3  Description

Let $p$ variables, $x_1, x_2, \dots, x_p$, with variance-covariance matrix $\Sigma$ be observed. The aim of factor analysis is to account for the covariances in these $p$ variables in terms of a smaller number, $k$, of hypothetical variables, or factors, $f_1, f_2, \dots, f_k$. These are assumed to be independent and to have unit variance. The relationship between the observed variables and the factors is given by the model:
$$x_i = \sum_{j=1}^{k} \lambda_{ij} f_j + e_i$$
where $\lambda_{ij}$, for $i = 1, 2, \dots, p$ and $j = 1, 2, \dots, k$, are the factor loadings and $e_i$, for $i = 1, 2, \dots, p$, are independent random variables with variances $\psi_i$, for $i = 1, 2, \dots, p$. The $\psi_i$ represent the unique component of the variation of each observed variable. The proportion of variation for each variable accounted for by the factors is known as the communality. For this function it is assumed that both the $k$ factors and the $e_i$'s follow independent Normal distributions.

The model for the variance-covariance matrix, $\Sigma$, can be written as:
$$\Sigma = \Lambda \Lambda^{\mathrm{T}} + \Psi \qquad (1)$$
where $\Lambda$ is the matrix of the factor loadings, $\lambda_{ij}$, and $\Psi$ is a diagonal matrix of unique variances, $\psi_i$, for $i = 1, 2, \dots, p$.

The estimation of the arguments of the model, $\Lambda$ and $\Psi$, by maximum likelihood is described by Lawley and Maxwell (1971). The log likelihood is:
$$-\tfrac{1}{2}(n-1)\log\lvert\Sigma\rvert \;-\; \tfrac{1}{2}(n-1)\,\mathrm{trace}\!\left(S\Sigma^{-1}\right) \;+\; \text{constant},$$
where $n$ is the number of observations, $S$ is the sample variance-covariance matrix or, if weights are used, $S$ is the weighted sample variance-covariance matrix and $n$ is the effective number of observations, that is, the sum of the weights. The constant is independent of the arguments of the model.

A two stage maximization is employed. It makes use of the function $F(\Psi)$, which is, up to a constant, $-2/(n-1)$ times the log likelihood maximized over $\Lambda$. This is then minimized with respect to $\Psi$ to give the estimates, $\hat{\Psi}$, of $\Psi$. The function $F(\Psi)$ can be written as:
$$F(\Psi) = \sum_{j=k+1}^{p} \left(\theta_j - \log\theta_j\right) - (p-k),$$
where the values $\theta_j$, for $j = 1, 2, \dots, p$, are the eigenvalues of the matrix:
$$S^{*} = \Psi^{-1/2}\, S\, \Psi^{-1/2}.$$
The estimates $\hat{\Lambda}$, of $\Lambda$, are then given by scaling the eigenvectors of $S^{*}$, which are denoted by $V$:
$$\hat{\Lambda} = \Psi^{1/2}\, V\, (\Theta - I)^{1/2},$$
where $\Theta$ is the diagonal matrix with elements $\theta_i$, and $I$ is the identity matrix. The minimization of $F(\Psi)$ is performed using nag_opt_bounds_2nd_deriv (e04lbc) which uses a modified Newton algorithm.
The computation of the Hessian matrix is described by Clark (1970). However, instead of using the eigenvalue decomposition of the matrix $S^{*}$ as described above, the singular value decomposition of the matrix $R\Psi^{-1/2}$ is used, where $R$ is obtained either from the $QR$ decomposition of the (scaled) mean-centred data matrix or from the Cholesky decomposition of the correlation/covariance matrix. The function nag_opt_bounds_2nd_deriv (e04lbc) ensures that the values of $\psi_i$ are greater than a given small positive quantity, $\delta$, so that the communality is always less than one. This avoids the so called Heywood cases.

In addition to the values of $\Lambda$, $\Psi$ and the communalities, nag_mv_factor (g03cac) returns the residual correlations, i.e., the off-diagonal elements of $C-\left(\Lambda \Lambda^{\mathrm{T}}+\Psi \right)$ where $C$ is the sample correlation matrix. nag_mv_factor (g03cac) also returns the test statistic:
$$\chi^{2} = \left[\, n - 1 - (2p+5)/6 - 2k/3 \,\right] F(\hat{\Psi})$$
which can be used to test the goodness-of-fit of the model (1), see Lawley and Maxwell (1971) and Morrison (1967).

4  References

Clark M R B (1970) A rapidly convergent method for maximum likelihood factor analysis British J. Math. Statist. Psych.
Hammarling S (1985) The singular value decomposition in multivariate statistics SIGNUM Newsl. 20(3) 2–25
Lawley D N and Maxwell A E (1971) Factor Analysis as a Statistical Method (2nd Edition) Butterworths
Morrison D F (1967) Multivariate Statistical Methods McGraw–Hill

5  Arguments

1: matrix – Nag_FacMat (Input)
On entry: selects the type of matrix on which factor analysis is to be performed.
- matrix = Nag_DataCorr (data input): the data matrix will be input in x and factor analysis will be computed for the correlation matrix.
- matrix = Nag_DataCovar: the data matrix will be input in x and factor analysis will be computed for the covariance matrix, i.e., the results are scaled as described in Section 8.
- matrix = Nag_MatCorr_Covar: the correlation/variance-covariance matrix will be input in x and factor analysis computed for this matrix.
Constraint: matrix = Nag_DataCorr, Nag_DataCovar or Nag_MatCorr_Covar.

2: n – Integer (Input)
On entry: if matrix = Nag_DataCorr or Nag_DataCovar, the number of observations in the data array x; if matrix = Nag_MatCorr_Covar, the (effective) number of observations used in computing the (possibly weighted) correlation/variance-covariance matrix input in x.
Constraint: n > nvar.

3: m – Integer (Input)
On entry: the number of variables in the data/correlation/variance-covariance matrix.
Constraint: m ≥ nvar.

4: x[dim1 × tdx] – const double (Input)
On entry: the input matrix. If matrix = Nag_DataCorr or Nag_DataCovar, x must contain the data matrix, i.e., x[(i−1)×tdx + j−1] must contain the i-th observation for the j-th variable, for i = 1, 2, ..., n and j = 1, 2, ..., m. If matrix = Nag_MatCorr_Covar, x must contain the correlation or variance-covariance matrix; only the upper triangular part is required.

5: tdx – Integer (Input)
On entry: the stride separating matrix column elements in the array x.
Constraint: tdx ≥ m.
6:     nvar – IntegerInput On entry: the number of variables in the factor analysis, $p$. Constraint: ${\mathbf{nvar}}\ge 2$. 7:     isx[m] – const IntegerInput On entry: ${\mathbf{isx}}\left[j-1\right]$ indicates whether or not the $j$th variable is to be included in the factor analysis. If ${\mathbf{isx}}\left[\mathit{j}-1\right]\ge 1$, then the variable represented by the $\mathit{j}$th column of x is included in the analysis; otherwise it is excluded, for $\mathit{j}=1,2,\dots ,{\mathbf{m}}$. Constraint: ${\mathbf{isx}}\left[j-1\right]>0$ for nvar values of $j$. 8:     nfac – IntegerInput On entry: the number of factors, $k$. Constraint: $1\le {\mathbf{nfac}}\le {\mathbf{nvar}}$. 9:     wt[n] – const doubleInput On entry: if ${\mathbf{matrix}}=\mathrm{Nag_DataCorr}$ or $\mathrm{Nag_DataCovar}$ then the elements of wt must contain the weights to be used in the factor analysis. The effective number of observations is the sum of the weights. If ${\mathbf{wt}}\left[i-1\right]=0.0$ then the $i$th observation is not included in the analysis. If ${\mathbf{matrix}}=\mathrm{Nag_MatCorr_Covar}$ or wt is set to the null pointer NULL, i.e., (double *)0, then wt is not referenced and the effective number of observations is $n$. Constraint: if wt is referenced, then ${\mathbf{wt}}\left[i-1\right]\ge 0$ for $i=1,2,\dots ,n$, and the sum of the weights $>{\mathbf{nvar}}$. 10:   e[nvar] – doubleOutput On exit: the eigenvalues ${\theta }_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,p$. 11:   stat[$4$] – doubleOutput On exit: the test statistics. ${\mathbf{stat}}\left[0\right]$ contains the value $F\left(\stackrel{^}{\Psi }\right)$. ${\mathbf{stat}}\left[1\right]$ contains the test statistic, ${\chi }^{2}$. ${\mathbf{stat}}\left[2\right]$ contains the degrees of freedom associated with the test statistic. ${\mathbf{stat}}\left[3\right]$ contains the significance level. 12:   com[nvar] – doubleOutput On exit: the communalities. 13:   psi[nvar] – doubleOutput On exit: the estimates of ${\psi }_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,p$. 14:   res[${\mathbf{nvar}}×\left({\mathbf{nvar}}-1\right)/2$] – doubleOutput On exit: the residual correlations. The residual correlation for the $i$th and $j$th variables is stored in ${\mathbf{res}}\left[\left(j-1\right)\left(j-2\right)/2+i-1\right]$, $i<j$. 15:   fl[${\mathbf{nvar}}×{\mathbf{tdfl}}$] – doubleOutput On exit: the factor loadings. ${\mathbf{fl}}\left[\left(\mathit{i}-1\right)×{\mathbf{tdfl}}+\mathit{j}-1\right]$ contains ${\lambda }_{\mathit{i}\mathit{j}}$, for $\mathit{i}=1,2,\dots ,p$ and $\mathit{j}=1,2,\dots ,k$. 16:   tdfl – IntegerInput On entry: the stride separating matrix column elements in the array fl. Constraint: ${\mathbf{tdfl}}\ge {\mathbf{nfac}}$. 17:   options – Nag_E04_Opt *Input/Output On entry/exit: a pointer to a structure of type Nag_E04_Opt whose members are optional arguments for nag_opt_bounds_2nd_deriv (e04lbc). These structure members offer the means of adjusting some of the argument values of the algorithm. If the optional arguments are not required the NAG defined null pointer, E04_DEFAULT, can be used in the function call. See the document for nag_opt_bounds_2nd_deriv (e04lbc) for further details. 18:   eps – doubleInput On entry: a lower bound for the value of ${\Psi }_{i}$. Constraint: . 19:   fail – NagError *Input/Output The NAG error argument (see Section 3.6 in the Essential Introduction). 
6  Error Indicators and Warnings NE_2_INT_ARG_GT On entry, ${\mathbf{nfac}}=〈\mathit{\text{value}}〉$ while ${\mathbf{nvar}}=〈\mathit{\text{value}}〉$. These arguments must satisfy ${\mathbf{nfac}}\le {\mathbf{nvar}}$. NE_2_INT_ARG_LE On entry, ${\mathbf{n}}=〈\mathit{\text{value}}〉$ while ${\mathbf{nvar}}=〈\mathit{\text{value}}〉$. These arguments must satisfy ${\mathbf{n}}>{\mathbf{nvar}}$. NE_2_INT_ARG_LT On entry, ${\mathbf{m}}=〈\mathit{\text{value}}〉$ while ${\mathbf{nvar}}=〈\mathit{\text{value}}〉$. These arguments must satisfy ${\mathbf{m}}\ge {\mathbf{nvar}}$. On entry, ${\mathbf{tdfl}}=〈\mathit{\text{value}}〉$ while ${\mathbf{nfac}}=〈\mathit{\text{value}}〉$. These arguments must satisfy ${\mathbf{tdfl}}\ge {\mathbf{nfac}}$. On entry, ${\mathbf{tdx}}=〈\mathit{\text{value}}〉$ while ${\mathbf{m}}=〈\mathit{\text{value}}〉$. These arguments must satisfy ${\mathbf{tdx}}\ge {\mathbf{m}}$. NE_2_REAL_ARG_LT On entry, ${\mathbf{step_max}}=〈\mathit{\text{value}}〉$ while ${\mathbf{optim_tol}}=〈\mathit{\text{value}}〉$. These arguments must satisfy ${\mathbf{step_max}}\ge {\mathbf{optim_tol}}$. NE_ALLOC_FAIL Dynamic memory allocation failed. NE_BAD_PARAM On entry, argument matrix had an illegal value. On entry, argument ${\mathbf{print_level}}$ had an illegal value. NE_INT_ARG_LT On entry, ${\mathbf{nfac}}=〈\mathit{\text{value}}〉$. Constraint: ${\mathbf{nfac}}\ge 1$. On entry, ${\mathbf{nvar}}=〈\mathit{\text{value}}〉$. Constraint: ${\mathbf{nvar}}\ge 2$. NE_INTERNAL_ERROR Additional error messages are output if the optimization fails to converge or if the options are set incorrectly. Details of these can be found in the nag_opt_bounds_2nd_deriv (e04lbc) document. An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance. NE_INVALID_INT_RANGE_1 Value $〈\mathit{\text{value}}〉$ given to ${\mathbf{max_iter}}$ is not valid. Correct range is ${\mathbf{max_iter}}\ge 0$. NE_INVALID_REAL_RANGE_EF Value $〈\mathit{\text{value}}〉$ given to eps is not valid. Correct range is machine precision $\le {\mathbf{optim_tol}}<1.0$. NE_INVALID_REAL_RANGE_FF Value $〈\mathit{\text{value}}〉$ given to ${\mathbf{linesearch_tol}}$ is not valid. Correct range is $0.0\le {\mathbf{linesearch_tol}}<1.0$. NE_MAT_RANK On entry, ${\mathbf{matrix}}=\mathrm{Nag_DataCorr}$ or ${\mathbf{matrix}}=\mathrm{Nag_DataCovar}$ and the data matrix is not of full column rank, or ${\mathbf{matrix}}=\mathrm{Nag_MatCorr_Covar}$ and the input correlation/variance-covariance matrix is not positive definite. This exit may also be caused by two of the eigenvalues of ${S}^{*}$ being equal; this is rare (see Lawley and Maxwell (1971)) and may be due to the data/correlation matrix being almost singular. NE_NEG_WEIGHT_ELEMENT On entry, ${\mathbf{wt}}\left[〈\mathit{\text{value}}〉\right]=〈\mathit{\text{value}}〉$. Constraint: when referenced, all elements of wt must be non-negative. NE_NOT_APPEND_FILE Cannot open file $〈\mathit{string}〉$ for appending. NE_NOT_CLOSE_FILE Cannot close file $〈\mathit{string}〉$. NE_OBSERV_LT_VAR With weighted data, the effective number of observations given by the sum of weights $\text{}=〈\mathit{\text{value}}〉$, while the number of variables included in the analysis, ${\mathbf{nvar}}=〈\mathit{\text{value}}〉$. Constraint: effective number of observations $>{\mathbf{nvar}}+1$. NE_OPT_NOT_INIT Options structure not initialized. NE_SVD_NOT_CONV A singular value decomposition has failed to converge. This is a very unlikely error exit. 
NE_VAR_INCL_INDICATED The number of variables, nvar in the analysis $\text{}=〈\mathit{\text{value}}〉$, while number of variables included in the analysis via array ${\mathbf{isx}}=〈\mathit{\text{value}}〉$. Constraint: these two numbers must be the same. NW_COND_MIN The conditions for a minimum have not all been satisfied but a lower point could not be found. Note that in this case all the results are computed. See nag_opt_bounds_2nd_deriv (e04lbc) for further details. NW_TOO_MANY_ITER The maximum number of iterations, $〈\mathit{\text{value}}〉$, have been performed. 7  Accuracy The accuracy achieved is discussed in nag_opt_bounds_2nd_deriv (e04lbc). 8  Further Comments The factor loadings may be orthogonally rotated by using nag_mv_orthomax (g03bac) and factor score coefficients can be computed using nag_mv_fac_score (g03ccc). The maximum likelihood estimators are invariant to a change in scale. This means that the results obtained will be the same (up to a scaling factor) if either the correlation matrix or the variance-covariance matrix is used. As the correlation matrix ensures that all values of ${\psi }_{i}$ are between 0 and 1 it will lead to a more efficient optimization. In the situation when the data matrix is input the results are always computed for the correlation matrix and then scaled if the results for the covariance matrix are required. When you input the covariance/correlation matrix the input matrix itself is used and so you are advised to input the correlation matrix rather than the covariance matrix. 9  Example The example is taken from Lawley and Maxwell (1971). The correlation matrix for nine variables is input and the arguments of a factor analysis model with three factors are estimated and printed. 9.1  Program Text Program Text (g03cace.c) 9.2  Program Data Program Data (g03cace.d) 9.3  Program Results Program Results (g03cace.r)
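For readers without the NAG Library, the two-stage objective described in Section 3 is easy to prototype. The sketch below uses Python/NumPy/SciPy rather than the documented C interface, so it only illustrates the mathematics, not nag_mv_factor itself: it evaluates $F(\Psi)$ through the eigenvalues of $\Psi^{-1/2} S \Psi^{-1/2}$, recovers the loadings from the top $k$ eigenvectors, and hands the minimization over $\Psi$ to a generic bounded optimizer instead of the modified Newton routine e04lbc. The demo data at the bottom is made up.

```python
import numpy as np
from scipy.optimize import minimize

def F_and_loadings(psi, S, k):
    """F(psi) from Section 3 and the loadings Lambda = Psi^{1/2} V (Theta - I)^{1/2}."""
    p = S.shape[0]
    d = 1.0 / np.sqrt(psi)
    S_star = d[:, None] * S * d[None, :]          # Psi^{-1/2} S Psi^{-1/2}
    theta, V = np.linalg.eigh(S_star)             # ascending eigenvalues
    theta, V = theta[::-1], V[:, ::-1]            # sort descending
    F = np.sum(theta[k:] - np.log(theta[k:])) - (p - k)
    # np.maximum guards against theta_j < 1 (Heywood-like situations) in this crude sketch
    load = np.sqrt(psi)[:, None] * V[:, :k] * np.sqrt(np.maximum(theta[:k] - 1.0, 0.0))
    return F, load

def ml_factor(S, k, eps=5e-3):
    """Crude maximum-likelihood factor analysis of a correlation matrix S with k factors."""
    p = S.shape[0]
    res = minimize(lambda psi: F_and_loadings(psi, S, k)[0],
                   x0=np.full(p, 0.5), method="L-BFGS-B", bounds=[(eps, 1.0)] * p)
    F, load = F_and_loadings(res.x, S, k)
    return res.x, load, F

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    true_load = rng.normal(size=(6, 2)) * 0.6
    S = true_load @ true_load.T + np.diag(rng.uniform(0.2, 0.5, 6))
    d = 1.0 / np.sqrt(np.diag(S)); S = d[:, None] * S * d[None, :]   # rescale to a correlation matrix
    psi, load, F = ml_factor(S, k=2)
    print("uniquenesses:", np.round(psi, 3))
    print("F(psi) at optimum:", round(F, 4))
```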
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 171, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.7286524176597595, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/251467/why-is-the-fourier-transform-of-a-levy-process-a-continuous-function-what-about?answertab=oldest
# Why is the Fourier Transform of a Lévy Process a continuous function? What about the inverse? (Bochners Theorem) I was confronted with this question when reading "Stochastic Integration and Differential Equations" by Protter. Just after the definition of a Lévy process he says the following: If $X_t$ is a Lévy-process and we consider the function $f_t(u)=\mathbb{E}(e^{iuX_t})$ where $f_0(u)=1$ and $f_{t+s}(u)=f_t(u)f_s(u)$, and $f_t(u) \neq 0$ for every $(t,u)$. Then, using the right continuity in probability we conclude that there exists a continuous function $\psi$ with $\psi(0)=0$ such that $f_t(u)=\text{exp}(-t\psi(u))$. How can one prove this? (Right) continuity in probability seems a rather weak notion to me for the existence of a fully continuous $\psi(u)$. It would mean that also $f_t(u)$ is continuous right? So what we need is that the Fourier transform of a Lévy process is continuous i think. Any hints on that? (Probably its a well-known fact and I am missing something obvious here) In the same section the so-called "Bochners Theorem" is also mentioned. Could anyone share a resource for me with the details and the sketch of proof? - ## 1 Answer From the fact that $X_t$ is continuous in probability and from $|f_t(u)|\leq1$ you can deduce continuity of $f_t(u)$ (except at point $t=0$ where you only have right continuity). To do it, you only need to use a version of Lebesgue dominated convergence theorem where you have convergence in probability and domination by $1$ which is an integrable function here NB : This version can be deduced from equivalence between convergence of a sequence of integral and (the convergence in probablity + uniform integrability of the sequence of integrands) which is a standard result in measure theory. Now if you have a look at the Poisson process section of Protter's book, he uses also the same fact that the only function with the properties : -right continuity -semigroup property is an exponential or equal to 0. As $f_0(u)=1$ it is an exponential and there is a constant $\psi(u)$ that does the job. I don't have the proof of this "real analysis" theorem. But I believe it's fairly standard, is the sense that this characterizes the exponential function. Edit : wiki gives a closely related proof here : http://en.wikipedia.org/wiki/Characterizations_of_the_exponential_function Best regards - Hi! Ok I get the argument with the semigroup generator $\psi$ but could you elaborate a little bit on the convergence in probability please? I still dont get it. Do you need a result like "in the space ... $L^1$-convergence and convergence in measure are the same"? – vanguard2k Dec 6 '12 at 7:35 1 @Vanguard : You have the following theorem which states equivalence between ( $E[f_n]\to E[f]$) and (convergence in proba + uniform integrability of the sequence). As $f_n$ is bounded by 1, you have uniform integrability so convergence of $E[f_n]\to E[f]$ by the theorem if I am not mistaken. For the rest of the question, you can go along those steps. 1 - Use Kolmogoroff existence theorem to show that there is a processus with the characterisitc functions $e^{t.\psi(u)}$.2 Show indepedence of increments of this process. 3 Show right continuity in proba the hard part the hard part – TheBridge Dec 6 '12 at 8:17 Sounds good to me :-). I wasnt aware of the first result (anymore). I am new to the theory of Lévy processes so now and then I stumble a bit. Thank you! 
– vanguard2k Dec 6 '12 at 10:42 @Vanguard2k : I wonder if Bochner's theorem doesn't give the conclusion directly I should try to find an "instance" of it. For the measure theory theorem about Uniform integrability, it is stated on wiki's page on uniform integrability section "Relation to convergence of random variables". Best Regards. – TheBridge Dec 6 '12 at 16:31
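A cheap sanity check of the semigroup property $f_{t+s}(u) = f_t(u) f_s(u)$, and of the exponential form it forces, is Monte Carlo with a concrete Lévy process. The snippet below (numpy; it assumes a standard Poisson process with rate $\lambda$, for which $\psi(u) = -\lambda(e^{iu}-1)$) compares the empirical characteristic functions at times $t$, $s$ and $t+s$:

```python
import numpy as np

rng = np.random.default_rng(42)
lam, t, s, u = 2.0, 0.7, 1.1, 1.3
n = 200_000

def emp_cf(time):
    """Empirical characteristic function E[exp(i u X_time)] for X a Poisson(lam) process."""
    x = rng.poisson(lam * time, size=n)
    return np.mean(np.exp(1j * u * x))

f_t, f_s, f_ts = emp_cf(t), emp_cf(s), emp_cf(t + s)
exact = np.exp(lam * (t + s) * (np.exp(1j * u) - 1.0))   # exp(-(t+s) psi(u))

print("f_t * f_s      :", f_t * f_s)
print("f_{t+s} (MC)   :", f_ts)
print("f_{t+s} (exact):", exact)
```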
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9354979395866394, "perplexity_flag": "head"}
http://crypto.stackexchange.com/questions/2335/different-inner-and-outer-hash-functions-for-nist-recommended-hmac
# Different inner and outer hash functions for NIST Recommended HMAC? The NIST recommended HMAC uses $$\operatorname{HMAC}_k(text) = H_\mathrm{out}( (k \oplus \mathrm{opad}) \operatorname\| H_\mathrm{in}((k \oplus \mathrm{ipad}) \operatorname\| text) )$$ Is it feasible to analyze the security and efficiency with different hash function implementations for $H_\mathrm{in}$ and $H_\mathrm{out}$ for a single instance? I would like to know whether it makes sense to use different hash functions for the same HMAC. Thanks! - ## 1 Answer You may want to take a look at the original HMAC paper by Bellare, Canetti and Krawczyk (1996), or at the new security proof by Bellare (2006). As far as I can tell at a glance, there's nothing in either of these proofs that would actually rely on the inner and outer hash functions being the same function, as long as both of them satisfy the appropriate security properties. - @limari: Bellare et al. have remarked (Remark 4.6 from the 1996 paper) that the use of combination hash functions.. 1. Can this at least be argued from an efficiency (implementation) point of view, considering the fact that the strength of the security would be unaffected? 2. Also we can argue that collision resistance increases with increase in hash output size. So can there be an analysis from that direction? – Maverickgugu Apr 14 '12 at 9:52 @Maverickgugu: For collision resistance, the minimum of both hash output sizes counts. I don't really see how using two different functions can get faster than using just the faster one of them twice. – Paŭlo Ebermann♦ Apr 14 '12 at 14:56
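For experimentation, the mixed construction is easy to write out from the formula above. Here is a rough Python sketch (hashlib only; it assumes both hashes share the usual 64-byte block size, which holds for SHA-1 and SHA-256, and it handles the key the way RFC 2104 does, hashing over-long keys with the inner hash, which is a choice the question leaves open):

```python
import hashlib

BLOCK = 64  # block size in bytes of both SHA-1 and SHA-256

def hmac_mixed(key: bytes, text: bytes, H_in=hashlib.sha256, H_out=hashlib.sha1) -> bytes:
    """HMAC_k(text) = H_out((k ^ opad) || H_in((k ^ ipad) || text)) with two different hashes."""
    if len(key) > BLOCK:                   # over-long keys are hashed first (here: with the inner hash)
        key = H_in(key).digest()
    key = key.ljust(BLOCK, b"\x00")
    ipad = bytes(b ^ 0x36 for b in key)
    opad = bytes(b ^ 0x5C for b in key)
    inner = H_in(ipad + text).digest()
    return H_out(opad + inner).digest()

print(hmac_mixed(b"secret key", b"attack at dawn").hex())

# Sanity check: with the same hash inside and outside this reduces to ordinary HMAC.
import hmac
assert hmac_mixed(b"k", b"m", hashlib.sha256, hashlib.sha256) == hmac.new(b"k", b"m", hashlib.sha256).digest()
```

With `H_in = H_out` the function reduces to ordinary HMAC, which the final assertion checks against Python's built-in hmac module.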
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9080967307090759, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/39539/other-ways-of-checking-whether-particular-system-result-in-non-locality
Other ways of checking whether particular system result in non-locality In quantum mechanics, when hamiltonian $H$ is constrained ($H = \sqrt{m^2 - \hbar^2 \nabla^2}$) so that it would produce simple "relativistic" model of quantum mechanics, we can show that it results in non-locality (Reference: $\nabla$ and non-locality in simple relativistic model of quantum mechanics ) The question is would Taylor-expanding every constraint equation on some quantity/operator, such as Hamiltonian, show that it will result in non-locality? Or in some case, should we check other expansions/methods? - anyone........? – War Oct 12 '12 at 8:02 any comment...? – War Oct 13 '12 at 3:55 1 Answer In the specific case of the Hamiltonian the non-locality arises because the time evolution depends on values of the field which are arbitrary far away. In the one dimensional case we have \begin{equation} i \frac{\partial}{\partial t}\, \psi ~~~=~~~ \tilde{H}\,\psi ~~=\ \sqrt{~m^2+\mathbf{\tilde{p}}^2_x~}\ \psi\ =\ \nonumber \end{equation} \begin{equation} \sqrt{ ~m^2-\partial_x^2~}~~ \psi ~~=~~ \frac{m}{x} K_1\left(mx\right) ~*~ \psi ~~~~~~~ \end{equation} ( We used $\hbar=c=1$ ). In the last term $*$ denotes a convolution, in this case with a Bessel K function. It is clear that this instantaneous dependency violates the speed of light restriction. See also my stackexchange answer here: $\nabla$ and non-locality in simple relativistic model of quantum mechanics Now in the general case the value of $\psi(x)$ will depend on $\psi(y)$ at other locations in the past an it will depend on other fields such as $A^\mu(y)$ at other locations in the past. Mathematically these dependencies stem from "Taylor-expanded series" of differential operators but as long as you don't violate the speed of light restriction then this is perfectly fine. Hans -
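To get a feel for how far this instantaneous dependence reaches, one can simply evaluate the convolution kernel appearing in the last displayed line. The sketch below (Python with scipy.special; units $\hbar = c = 1$ as in the answer; it just tabulates $(m/x)\,K_1(mx)$ on a grid) shows that the kernel decays only on the Compton scale $1/m$, so $\psi$ at one point is fed by field values a finite distance away at the same instant:

```python
import numpy as np
from scipy.special import k1

m = 1.0                                    # mass, with hbar = c = 1
x = np.linspace(0.05, 10.0, 400)           # distance from the point being updated
kernel = (m / x) * k1(m * x)               # the (m/x) K_1(m x) kernel from the answer

# K_1(mx) ~ sqrt(pi/(2 m x)) exp(-m x) for large x, so the kernel becomes
# exponentially small only beyond the Compton wavelength 1/m.
for d in (0.5, 1.0, 2.0, 5.0):
    i = np.argmin(np.abs(x - d))
    print(f"kernel at x = {d:>4}: {kernel[i]:.3e}")
```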
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8705364465713501, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/84182/shape-of-snowflakes/84186
## Shape of snowflakes

Is there a mathematical theory that explains the shape of a snowflake? Why is it not round?

Update Tree-like metric spaces appear often as limits of sequences of metric spaces (say, asymptotic cones or boundaries of metric spaces). I wonder if similar objects can be obtained as shapes minimizing some kind of energy functional. This may lead to new constructions in geometric group theory. I just saw Igor Rivin's answer which may be what is needed. Perhaps somebody can give a more detailed answer?

- 6 Water crystallizes in a hexagonal lattice, so small snowflakes are just hexagons. For reasons of surface chemistry that I don't really understand, water molecules are more likely to attach at a vertex than in the middle of an edge, so as the hexagons get large the vertices grow faster than the edges, creating a non-convex figure with 12 edges. Iterating this procedure (growing faster at vertices than at edges) gives a snowflake--for a picture of this, see pg. 884 here: its.caltech.edu/~atomic/publist/rpp5_4_R03.pdf (the whole article is pretty interesting. So really, the Koch... – Daniel Litt Dec 23 2011 at 20:12
- ...snowflake with appropriately chosen parameters is not a bad model at all. – Daniel Litt Dec 23 2011 at 20:12
- @Daniel: This may not be what I want since I wanted an explanation, not a description. An example of an explanation could be something like this: "consider the following `energy' functional ..., the shapes that minimize that functional are snowflakes." – Mark Sapir Dec 23 2011 at 20:27
- 4 The review that Daniel Litt linked to explains that the shapes of snowflakes are the outcome of a complicated growth process, involving the competition between the rates of diffusion and attachment of water molecules and heat transport, among other things. In particular, as snowflakes are formed very far from the equilibrium state of water and ice, their overall shapes won't be the minimizers of any natural thermodynamic energy functional, or even randomly perturbed versions of such, in the way that large facets of crystals might be. – jc Dec 24 2011 at 5:24
- 4 I think their shapes are more akin to the random fractals arising from diffusion-limited aggregation, where molecules diffusing in "from infinity" attach to a seed. I think this paper psoup.math.wisc.edu/papers/h2l.pdf by the group that created the pictures that Joseph O'Rourke linked to below is probably the best current explanation for snowflakes, and has a similar philosophy starting from growth rather than minimization - create a random growth model (on a lattice, even) with certain physically motivated properties and one finds shapes that match well the real life snowflakes. – jc Dec 24 2011 at 5:35

## 3 Answers

Yes, there is a quite active theory of crystal formation, in which the late Fred Almgren and the very much with us Jean Taylor did groundbreaking work. If you google "Almgren Taylor dendrites" you will be enlightened. You can read the papers (and papers referring to the papers) -- I think the theory is not so simple.
Janko Gravner at UC Davis and David Griffeath at the University of Wisconsin-Madison have modeled snowflake growth, as reported on this web page: the researchers were able to recreate a wide range of natural snowflake shapes. Rather than trying to model every water molecule, it divides the space into three-dimensional chunks one micrometer across. The program takes about 24 hours to produce one "snowfake" on a modern desktop computer. Paper, code, and movies on this modeling are available here. Here is a 1min17sec YouTube video of growth simulations following this model; and here is a 26sec, more colorful set of simulations. Added. Some added detail from the G-G papers: The building blocks for snowflakes are hexagonally arranged molecules of natural ice (Ih). Just how the elaborate designs emerge as water vapor freezes is still poorly understood.... The solidification process involves complex physical chemistry of diffusion limited aggregation and attachment kinetics....Our basic set-up features solidification Cellular Automata on the triangular lattice $\mathbb{T}$ (to reflect the arrangement of water molecules in ice crystals). To echo Igor, it's "not so simple"! A more physically based, 3D model is explored in the paper "Monte Carlo Simulation of the Formation of Snowflakes," by Maruyman and Fujiyoshi Journal of the Atmospheric Sciences, 2005. Comparisons of the shape to "observed snowflakes" are made: - 6 @Joseph: Thanks! But right now I see lots of snowflakes appearing in real time outside my window.:) – Mark Sapir Dec 23 2011 at 20:49 I was curious about the OP's second question, which I now think is actually rather difficult. Namely Why is it [the shape of a snowflake] not round? Various physics-y sources (e.g. this paper) suggest the following "explanation" for the shape of a snowflake, which I mention in my comment on the original question--namely, that water crystallizes in a hexagonal lattice, so small snowflakes are just hexagons; new water molecules are more likely to attach at corners than edges or faces (for complicated reasons I don't really understand) so vertices grow faster than edges. Thus the hexagon will become a non-convex 6-pointed star; then the edges of this figure will split similarly, and so on. This interpretation is born out by e.g. the picture on pg. 884 of the paper above. This inspired the following simple model, which comes in both deterministic and random flavors. We'll build a snowflake on the standard hexagonal lattice in $\mathbb{R}^2$, spanned by e.g. $(1, 0)$ and $(1/2, \sqrt{3}/2)$. Start with a single regular hexagon of side length $1$ centered at the origin, with vertices the six shortest lattice vectors. In the deterministic version of the model, at each positive integer time $t$ we add a regular lattice hexagon with side length $1$ centered at each lattice point which is the vertex of exactly one hexagon. In the random version, at each positive integer time $t$, we add a hexagon centered at a random lattice point which is the vertex of exactly one hexagon with uniform probability over such lattice points. I had some time this morning, so I coded up both models in the language "Processing." Here is a typical pair of snowflakes from the deterministic model: This model has the following interesting properties, none of which are particularly difficult to prove. 1) By the envelope of a snowflake I mean the smallest simply-connected polygon containing it. Let $S_n$ be the envelope of the snowflake at time $n$. 
Consider the sequence $S_n$ in the space of plane polygons metrized by Hausdorff distance, modulo homothety (two polygons are homothetic if one is congruent to a rescaling of the other). Then $S_n$ is recurrent (that is, any homothety class visited by $S_n$ is approached arbitrarily closely infinitely many times). However, the only homothety class taken infinitely many times by the $S_n$ is that of a regular hexagon. (Thus in this setting the adage that no two snowflakes are alike is pretty far off.) 2) Let $H_n$ be the smallest regular hexagon containing $S_n$. Then $$\frac{\text{area}(S_n)}{\text{area}(H_n)}$$ is bounded above by $1$ and below by, say $1/2$ (though one can do better). By virtue of the recurrence of the $S_n$, however, this ratio does not attain a limit. 3) Certain interior triangles are never filled in, and as is visible in the pictures above these follow a beautiful regular pattern which I haven't bothered to work out. Now let's look at the random model. Here are two typical snowflakes: As you can see, these are quite round, so they might be better called snowballs. I understand this model much less well than the deterministic one above. However, the following conjectures are natural given the pictures. 4) (Conjecture) In the space of homothety classes of plane polygons, metrized by Hausdorff distance, as in (1) above, the envelopes of these shapes tend towards the homothety class of a circle with probability $1$. 5) (Conjecture) The ratio $$\frac{\text{perimiter}(S_n)}{\sqrt{\text{area}(S_n)}}$$ tends to infinity with probability $1$. In other words--the random model I implicitly suggested in my comment on the original question seems to give round snowflakes! So I at least think the physics question as to why snowflakes aren't round is still pretty interesting. In the comments, Rebecca Bellovin suggests another random model--namely, fix a probability $0\leq p\leq 1$ and at each time $t$, and each valid lattice point (namely, each lattice point which is the vertex of exactly one hexagon) add a hexagon centered at that point with probability $p$. At least for small $t$ (e.g. $t<10000$), this seems to interpolate between the two models I give here, and certainly if one scales $p$ in proportion to the number of valid lattice points (so that for example the probability is negligible that more than one hexagon will be added, or that no valid points will be missed), these models will behave exactly like the ones I give. On the other hand, for middling $p$, something interesting happens--namely, the snowflakes look like rounded hexagons. At Rebecca's request, I am posting a picture for $p=0.7$, below: I have no real explanation for this phenomenon; only unconvincing heuristics. - I'd be happy to send the Processing code/java applet to anyone who'd like to play with these models--my email is in my MO profile. – Daniel Litt Dec 24 2011 at 22:47 1 I don't like your random model: it seems much more reasonable to fix a probability p and at time t, attach a new hexagon at every available lattice point with probability p. Assuming a fair amount of water in the air, water crystallizing at one vertex should be independent of water crystallizing at other vertices (to a first approximation). – Rebecca Bellovin Dec 24 2011 at 22:58 2 Nice! Is it similar to percolation clusters, self-avoiding random walks, etc.? – Mark Sapir Dec 24 2011 at 23:50 1 @Daniel: I know some people who know these subjects. I will ask when I see them. 
– Mark Sapir Dec 26 2011 at 1:19 1 The random model looks very close to the Richardson growth process; my web page shows 3D images www-fourier.ujf-grenoble.fr/~bkloeckn/images.html and Olivier Garet's one has many images of related random processes iecn.u-nancy.fr/~garet/images.php – Benoît Kloeckner Mar 14 2012 at 13:52
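For readers who want to experiment, here is a minimal Python sketch of the two growth models described above. The original was written in Processing, which I have not seen, so the lattice representation and the reading of "valid lattice point" below are my own assumptions: hexagon centers live on the triangular lattice in axial coordinates, and a unit hexagon centered at a lattice point has its six vertices at that point's six lattice neighbours.

```python
import random
from collections import defaultdict

# Axial coordinates on the triangular lattice: the point (a, b) sits at
# a*(1, 0) + b*(1/2, sqrt(3)/2) in the plane.  A unit hexagon centered at a
# lattice point has its six vertices at that point's six lattice neighbours.
NEIGHBOURS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def valid_points(centers):
    """Lattice points that are a vertex of exactly one placed hexagon.
    (Assumption: points that already carry a hexagon are excluded.)"""
    counts = defaultdict(int)
    for (a, b) in centers:
        for (da, db) in NEIGHBOURS:
            counts[(a + da, b + db)] += 1
    return [p for p, c in counts.items() if c == 1 and p not in centers]

def grow(steps, deterministic=True, seed=0):
    """Run either growth model for the given number of time steps."""
    rng = random.Random(seed)
    centers = {(0, 0)}                           # the initial hexagon
    for _ in range(steps):
        candidates = valid_points(centers)
        if not candidates:
            break
        if deterministic:
            centers.update(candidates)           # add a hexagon at every valid point
        else:
            centers.add(rng.choice(candidates))  # add one valid point, uniformly at random
    return centers

if __name__ == "__main__":
    print(len(grow(10, deterministic=True)))     # the star-shaped "snowflake" model
    print(len(grow(500, deterministic=False)))   # the round "snowball" model
```

Running the deterministic model for a handful of steps and the random model for many steps reproduces the star-shaped versus round behaviour discussed above (plotting the centers is left to the reader).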
http://mathoverflow.net/questions/45859/reusing-samples-for-product-estimators
## Reusing samples for product estimators

The product of two unbiased estimates, $x$ and $y$, computed from the same data is in general a biased estimator of the product of the quantities they estimate. However, I don't want to split the data into two disjoint sections; I would like to take the average estimate for $x$ and $y$ over different sections of the data. This seems intuitively similar to cross-validation and bootstrapping. I'd like to find some theoretical analysis of this procedure, but I haven't really been able to find any information. It seems like a general enough idea that someone must have studied it before.

- repost on stats.stackexchange.com ? – Suresh Venkat Nov 17 2010 at 5:56
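The question asks for theory, but the bias at issue is easy to see numerically. Below is a toy sketch (my own example, not taken from the question) in which both estimates are the sample mean of the same data, so the same-data product estimates $\mu^2 + \sigma^2/n$ rather than $\mu^2$, while a split-sample product is unbiased.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, reps = 2.0, 3.0, 20, 100_000

naive, split = [], []
for _ in range(reps):
    data = rng.normal(mu, sigma, n)
    # Same-data product: E[xbar^2] = mu^2 + sigma^2/n, i.e. biased upward.
    naive.append(data.mean() ** 2)
    # Split-sample product: the two halves are independent, so the
    # product of their means is unbiased for mu^2.
    a, b = data[: n // 2], data[n // 2:]
    split.append(a.mean() * b.mean())

print("target mu^2        :", mu ** 2)
print("same-data product  :", np.mean(naive))   # ~ mu^2 + sigma^2/n = 4.45
print("split-data product :", np.mean(split))   # ~ 4.0
```

When $x$ and $y$ are different estimators, computing $x$ on one half and $y$ on the other, then swapping the halves and averaging the two products, reuses all of the data while keeping the product unbiased; that cross-fitting-style scheme seems close to what the question is after, though it does not by itself give the theoretical analysis being requested.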
http://physics.stackexchange.com/questions/8123/distinguishability-in-quantum-ensembles
# Distinguishability in Quantum Ensembles

Inspired by this question: Are these two quantum systems distinguishable? and discussion therein.

Given an ensemble of states, the randomness of a measurement outcome can be due to classical reasons (a classical probability distribution over the states in the ensemble) and quantum reasons (an individual state can be a superposition of states). Because a classical system cannot be in a superposition of states, and in principle the state can be directly measured, the probability distribution is directly measurable. So any differing probability distributions are distinguishable. However, in quantum mechanics an infinite number of different ensembles can have the same density matrix.

What assumptions are necessary to show that, if two ensembles initially have the same density matrix, there is no way to apply the same procedure to both ensembles and obtain different density matrices? (That is, that the 'redundant' information regarding which part of Hilbert space is represented in the ensemble is never retrievable, even in principle.)

To relate to the referenced question: for example, suppose we could generate an interaction that evolved 1) an ensemble of states $|0\rangle + e^{i\theta}|1\rangle$ with a uniform distribution in $\theta$ to 2) an ensemble of states $|0\rangle + e^{i\phi}|1\rangle$ with a non-uniform distribution in $\phi$. Such a mapping of vectors in Hilbert space can be 1-to-1, but it doesn't appear it can be done with a linear operator. So it hints that we can probably prove an answer to the question using only the assumption that states are vectors in a Hilbert space and the evolution is a linear operator.

Can someone give a simple proof showing that two ensembles with initially the same density matrix can never evolve to two different density matrices? Please be explicit about what assumptions you make.

Update: I guess to prove they are indistinguishable, we'd also need to show that non-unitary evolution, like the projection from a measurement, can't eventually allow one to distinguish the underlying ensemble either -- for example by using correlations between multiple measurements, or by asking a question with more than two possible answers, so that the distribution of answers needs more than just the expectation value to characterize the results.

- Hah! I addressed your update in my answer before I even saw it. – Keenan Pepper Apr 6 '11 at 1:05

## 2 Answers

You only need to assume

1. the Schrödinger equation (yes, the same old linear Schrödinger equation, so the proof doesn't work for weird nonlinear quantum-mechanics-like theories)
2. the standard assumptions about projective measurements (i.e. the Born rule and the assumption that after you measure a system it gets projected into the eigenspace corresponding to the eigenvalue you measured)

Then it's easy to show that the evolution of a quantum system depends only on its density matrix, so "different" ensembles with the same density matrix are not actually distinguishable.

First, you can derive from the Schrödinger equation a time evolution equation for the density matrix. This shows that if two ensembles have the same density matrix and they're just evolving unitarily, not being measured, then they will continue to have the same density matrix at all future times.
The equation is $$\frac{d\rho}{dt} = \frac{1}{i\hbar} \left[ H, \rho \right]$$

Second, when you perform a measurement on an ensemble, the probability distribution of the measurement results depends only on the density matrix, and the density matrix after the measurement (of the whole ensemble, or of any sub-ensemble for which the measurement result was some specific value) depends only on the density matrix before the measurement.

Specifically, consider a general observable (assumed to have discrete spectrum for simplicity) represented by a Hermitian operator $A$. Let the diagonalization of $A$ be $$A = \sum_i a_i P_i$$ where $P_i$ is the projection operator onto the eigenspace corresponding to eigenvalue (measurement outcome) $a_i$. Then the probability that the measurement outcome is $a_i$ is $$p(a_i) = \operatorname{Tr}(\rho P_i)$$ This gives the complete probability distribution of $A$. The density matrix of the full ensemble after the measurement is $$\rho' = \sum_i P_i \rho P_i$$ and the density matrix of the sub-ensemble for which the measurement value turned out to be $a_i$ is $$\rho'_i = \frac{P_i \rho P_i}{\operatorname{Tr}(\rho P_i)}$$

Since none of these equations depend on any property of the ensemble other than its density matrix (e.g. the pure states and probabilities of which the mixed state is "composed"), the density matrix is a full and complete description of the quantum state of the ensemble.

- Oh, and for the case of an observable $A$ with a continuous spectrum, it works basically the same way. For mathematicians it might get more hairy, but as a physicist I have no problem just saying "replace all the summation signs with integrals". – Keenan Pepper Apr 6 '11 at 0:59 You don't even need to assume the Schrödinger equation, but only the fact that the evolution of a quantum state is unitary. – Frédéric Grosshans Apr 10 '11 at 19:14

Density matrices are an alternative description of quantum mechanics. Consequently, if two ensembles have the same density matrix, they are not distinguishable. For example, consider the unpolarized spin-1/2 density matrix, which can be modeled as a system that is half pure states in the +x direction and half in the -x direction, or alternatively, as half pure states in the +z direction (i.e. spin up) and half in the -z direction (i.e. spin down): $$\begin{pmatrix}0.5&0\\0&0.5\end{pmatrix} = 0.5\rho_{+x}+0.5\rho_{-x} = 0.5\rho_{+z}+0.5\rho_{-z}$$

Now compute the average value of an operator $H$ with respect to these ensembles. Let $$H = \begin{pmatrix}h_{11}&h_{12}\\h_{21}&h_{22}\end{pmatrix}$$ then the averages for the four states involved are: $$\begin{array}{rcl} \langle H\rangle_{+x} &=& 0.5(h_{11}+h_{12}+h_{21}+h_{22})\\ \langle H\rangle_{-x} &=& 0.5(h_{11}-h_{12}-h_{21}+h_{22})\\ \langle H\rangle_{+z} &=& h_{11}\\ \langle H\rangle_{-z} &=& h_{22} \end{array}$$

From the above, it's clear that taking the average over $\pm x$ will give the same result as taking the average over $\pm z$; that is, in both cases the ensemble will give an average of $$\langle H\rangle = 0.5(h_{11}+h_{22})$$

Any preparation of the system amounts to an operator acting on the states, and so $H$ can stand for a general operation. Therefore there is no way of distinguishing an unpolarized mixture of $\pm x$ from an unpolarized mixture of $\pm z$. The argument for general density matrices is similar, but I think this gets the point across.

- Are you saying instead of representing a state as a vector in Hilbert space, it is sufficient to represent a state as a density matrix?
It seems like this view would change the counting of physical states and would have an effect in statistical mechanics or thermodynamics of a system. It almost seems like you would be reducing the entropy by mixing two ensembles. – Ginsberg Apr 6 '11 at 0:32 1 Either way, the whole point of the question was to see a concrete mathematical proof. Instead of just saying it is so, can you please show how it is so, such that I can learn more? – Ginsberg Apr 6 '11 at 0:34 1 @Ginsberg; Yes, a density matrix is equivalent to a collection of pure states (presumably represented by state vectors) along with a probability density for the pure states. I've not found the reference I was looking for so I'll type up an outline of a proof and edit it in. – Carl Brannen Apr 6 '11 at 0:45
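As a quick numerical companion to the spin-1/2 example above, the following sketch (my own illustration, not part of either answer) builds the $\pm x$ and $\pm z$ mixtures explicitly, confirms that they give the same density matrix $I/2$, and checks that the Born-rule probabilities $\operatorname{Tr}(\rho P_i)$ agree for a randomly chosen observable.

```python
import numpy as np

def rho(psi):
    """Pure-state density matrix |psi><psi| for a (normalized) 2-component state."""
    psi = np.asarray(psi, dtype=complex).reshape(2, 1)
    psi = psi / np.linalg.norm(psi)
    return psi @ psi.conj().T

up_z, down_z = [1, 0], [0, 1]
up_x, down_x = [1, 1], [1, -1]

mix_z = 0.5 * rho(up_z) + 0.5 * rho(down_z)
mix_x = 0.5 * rho(up_x) + 0.5 * rho(down_x)
print(np.allclose(mix_z, mix_x))          # True: both equal I/2

# Born-rule statistics depend only on the density matrix: p(a_i) = Tr(rho P_i).
# Check with a random Hermitian observable A.
rng = np.random.default_rng(1)
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
A = (M + M.conj().T) / 2
vals, vecs = np.linalg.eigh(A)
for i, a in enumerate(vals):
    P = np.outer(vecs[:, i], vecs[:, i].conj())   # projector onto the eigenspace
    p_z = np.trace(mix_z @ P).real
    p_x = np.trace(mix_x @ P).real
    print(f"outcome {a:+.3f}: p = {p_z:.3f} (z-mixture) = {p_x:.3f} (x-mixture)")
```

Both mixtures give probability 1/2 for either outcome of any such measurement, exactly as the $0.5(h_{11}+h_{22})$ computation above predicts.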
http://unapologetic.wordpress.com/2012/01/14/faradays-law/?like=1&source=post_flair&_wpnonce=a34a077b61
# The Unapologetic Mathematician ## Faraday’s Law Okay, so let’s say we have a closed circuit composed of a simple loop of wire following a closed path $C$. There’s no battery or anything that might normally induce an electromotive force around the circuit by chemical or other means. And, as we saw when discussing Gauss’ law, Coulomb’s law gives rise to an electric field that looks like $\displaystyle E(r)=\frac{1}{4\pi\epsilon_0}\int\rho(s)\frac{r-s}{\lvert r-s\rvert^3}\,d^3s$ As we saw when discussing Gauss’ law for magnetism, we can rewrite the fraction in the integrand: $\displaystyle\begin{aligned}E(r)&=-\frac{1}{4\pi\epsilon_0}\int\rho(s)\nabla\left(\frac{1}{\lvert r-s\rvert}\right)\,d^3s\\&=-\nabla\left(\frac{1}{4\pi\epsilon_0}\int\rho(s)\frac{1}{\lvert r-s\rvert}\,d^3s\right)\end{aligned}$ So this electric field is conservative, and so its integral around the closed circuit is automatically zero. Thus there is no electromotive force around the circuit, and no current flows. And yet, that’s not actually what we see. Specifically, if we wave a magnet around near such a circuit, a current will indeed flow! Indeed, this is exactly how the simplest electric generators and motors work. To put some quantitative meat on these qualitative observational bones, we have Faraday’s law of induction. This says that the electromotive force around a circuit is equal to the rate of change of the magnetic flux through any surface bounded by that circuit. What? maybe a formula will help: $\displaystyle\mathcal{E}=\frac{\partial}{\partial t}\int\limits_\Sigma B\cdot dS$ where $\Sigma$ is any surface with $\partial\Sigma=C$. Why can we pick any such surface? Because if $\Sigma'$ is another one then: $\displaystyle\int\limits_\Sigma B\cdot dS-\int\limits_{\Sigma'}B\cdot dS=\int\limits_{\Sigma-\Sigma'}B\cdot dS$ We can calculate the boundary of this combined surface: $\displaystyle\partial(\Sigma-\Sigma')=\partial\Sigma-\partial\Sigma'=C-C=0$ Since our space is contractible, this means that our surface is itself the boundary of some region $E$. $\displaystyle\int\limits_{\partial E}B\cdot dS=\int\limits_E\nabla\cdot B\,dV$ But Gauss’ law for magnetism tells us that this is automatically zero. That is, every surface has the same flux, and so it doesn’t matter which one we use in Faraday’s law. Now, we can couple this with our original definition of electromotive force: $\displaystyle\begin{aligned}\int\limits_\Sigma\frac{\partial B}{\partial t}\cdot dS&=-\int\limits_{\partial\Sigma}E\cdot dr\\&=-\int\limits_\Sigma\nabla\times E\cdot dS\end{aligned}$ But this works no matter what surface $\Sigma$ we consider, so we come up with the differential form of Faraday’s law: $\displaystyle\nabla\times E=-\frac{\partial B}{\partial t}$ ## 8 Comments » 1. What’s the theorem that you used at the end that makes two functions be equal if their integrals are equal over any domain? I know how to prove that statement for continuous functions, but I’d like to know its name. I took two mechanics courses in the past year where this proof method was used extensively and only the professor who taught the first course had a name for it – “Lagrange’s lemma”, but I don’t know if that’s a canonical name. Comment by Andrei | January 14, 2012 | Reply 2. I’m not sure what the name is, actually. It’s pretty straightforward for continuous functions, of course, and physics generally assumes everything is continuous — even smooth — unless specifically stated otherwise. Comment by | January 15, 2012 | Reply 3. [...] 
One unexpected source of electromotive force comes from our fourth and final experimentally-justified axiom: Faraday’s law of induction [...] Pingback by | February 1, 2012 | Reply 4. [...] the case of Faraday’s law, we’re already done, since it’s exactly the third of Maxwell’s equations in [...] Pingback by | February 3, 2012 | Reply 5. [...] charge” at a point in an electric field it experiences a force . As we saw when discussing Faraday’s law, for a static electric field we can write for some “electric potential” function . [...] Pingback by | February 14, 2012 | Reply 6. [...] Faraday’s law tells us about the electromotive force induced on the [...] Pingback by | February 14, 2012 | Reply 7. [...] Faraday’s law tells us [...] Pingback by | February 17, 2012 | Reply 8. [...] about the existence of potentials, and the argument usually goes like this: as Faraday’s law tells us, for a static electric field we have ; therefore for some potential function because the [...] Pingback by | February 18, 2012 | Reply
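The post's first step is the claim that the Coulomb-law field is a gradient and therefore conservative, so its line integral around the closed circuit vanishes. As a quick sanity check (my own addition, not part of the post), one can verify symbolically with sympy that the curl of such a gradient field is zero:

```python
import sympy as sp
from sympy.vector import CoordSys3D, gradient, curl

R = CoordSys3D('R')
# Electrostatic potential of a point charge at the origin (constants dropped):
phi = 1 / sp.sqrt(R.x**2 + R.y**2 + R.z**2)
E = -gradient(phi)          # a Coulomb-type field, written as minus a gradient

# Each component of curl(E) simplifies to zero, so the field is conservative
# and its integral around any closed loop C vanishes, as claimed above.
c = curl(E)
print([sp.simplify(c.dot(v)) for v in (R.i, R.j, R.k)])   # [0, 0, 0]
```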
http://mathhelpforum.com/trigonometry/90166-cosine-fixpoint.html
# Thread: 1. ## Cosine fixpoint Is the cosine fixpoint (the solution to the equation $x=\cos x$) rational or irrational, and is it possible to prove it? 2. ## Rational or irrational Hello espen180 Originally Posted by espen180 Is the cosine fixpoint (the solution to the equation $x=\cos x$) rational or irrational, and is it possible to prove it? I can't imagine that it could possibly be rational, but I have no idea how you might set about proving it! Grandad
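The fixpoint itself is easy to pin down numerically: $\cos$ maps the reals into an interval on which it is a contraction, so plain fixed-point iteration converges from any real starting value. A minimal sketch (my own addition to the thread):

```python
from math import cos

# Fixed-point iteration for x = cos(x).  After the first step the iterates
# stay in [cos(1), 1], where |sin(x)| < 1, so the map is a contraction there
# and the iteration converges regardless of the starting value.
x = 1.0
for _ in range(100):
    x = cos(x)
print(x)   # 0.7390851332151607
```

Numerics of course cannot settle rationality, but for what it's worth the fixpoint (often called the Dottie number, approximately 0.7390851) is in fact known to be transcendental, hence irrational: by the Lindemann-Weierstrass theorem the cosine of a nonzero algebraic number is transcendental, so no algebraic $x$ can satisfy $x = \cos x$.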
http://math.stackexchange.com/questions/280530/can-you-provide-me-historical-examples-of-pure-mathematics-becoming-useful/280602
# Can you provide me historical examples of pure mathematics becoming "useful"?

I'm trying to think about something, but I don't know if my basic premise is plausible, so here we go. Sometimes when I'm talking with people about pure mathematics, they dismiss it because it has no practical utility, but I guess that according to the history of mathematics, the math that is useful today was once pure mathematics (I'm not so sure, but I guess that when calculus was invented, it had no practical application). Also, I guess that the development of pure mathematics is important because it allows us to think about non-intuitive objects before encountering some phenomenon that is similar to these mathematical non-intuitive objects. With this in mind, can you provide me with historical examples of pure mathematics becoming "useful"?

- 20 Newton invented his fluxions (i.e. his calculus) in order to compute the orbits of celestial objects that move according to his law of gravitation. The foundations of calculus as pure mathematics were not established until the 18th century. – Ron Gordon Jan 17 at 5:42 19 @JavaMan: I think there might be some debate as to whether string theory is useful ... – Henry B. Jan 17 at 5:49 2 @HenryB or whether an application of pure mathematics to pure mathematics is what the OP had in mind. – Willie Wong♦ Jan 17 at 8:48 9 Two similar posts under the heading "Useless math that became useful" here and on MO. – Martin Jan 17 at 8:52 3 @Brad I think rglordonma was answering the OP's point about calculus being invented without a practical application, which is false as it was invented exactly for a practical application. – user50229 Jan 17 at 20:02

## 31 Answers

Here are a few such examples:

• Public-key cryptosystems based on elliptic curves, factorization, trapdoor functions, lattices and hyperelliptic curves.
• Use of algebraic topology in distributed computing and sensor networks. Topology has uses in various other branches of engineering as well.
• Use of differential geometry for computer graphics, computer vision algorithms and general relativity.
• Error-correcting codes based on algebraic geometry. Algebraic geometry also has applications in robotics.
• Lattice theory is used in program analysis and verification.
• Group theory is used in chemistry as well as physics.
• Tropical geometry, a branch of algebraic geometry, has applications in mathematical biology.
• Digital electronics is impossible without Boolean algebra.
• Topos theory has been applied to music theory.

- 13 – Nathan Long Jan 17 at 14:18 1 To add to the first bullet, many types of cryptography are based on pure number theory that was developed long before the cryptography. I also think vectors started out on the pure side before physics started using them, but I don't have a reference off hand. – TimothyAWiseman Jan 17 at 17:14 1 @N.S. Of course it's NP - I think you mean "We don't know it's NP-complete." In fact, it almost certainly isn't, since we (rather quickly) devised a way to quickly factor using quantum computers, but there is no known algorithm for quickly solving any NP-complete problems with quantum computers. In any case, I don't see how that's at all relevant - the fact is, RSA is the most widely used public-key crypto algorithm in use today, is still considered secure, and relies on the difficulty in factoring, so it fits the question.
– BlueRaja - Danny Pflughoeft Jan 18 at 5:57

Negative numbers and complex numbers were regarded as absurd and useless by many mathematicians prior to the $15^{th}$ century. For instance, Chuquet referred to negative numbers as "absurd numbers," and Michael Stifel has a chapter on negative numbers in his book "Arithmetica integra" titled "numeri absurdi". And so too with complex/imaginary numbers: Gerolamo Cardano in his book "Ars Magna" calls the square root of a negative number a completely useless object. I guess the same attitude towards quaternions and octonions would have been prevalent when they were initially discovered. This is from my answer to a similar question here. Below are some uses of negative and complex numbers.

- 3 By the time of quaternions things had actually changed, and they were sought after for a long time in the hope they would be as good for modeling 3d movements as complex numbers are for 2d. Unfortunately they came a bit too late and linear algebra had already eaten most of the cake – Thomas Ahle Jan 17 at 11:52 1 I hope people who have difficulty accepting complex numbers know why they accept real numbers; those are much harder to describe. – AD. Jan 18 at 11:28 2 This answer doesn't really say what is "useful" about complex numbers. Or negative numbers, for that matter... – AShelly Jan 18 at 13:54

The discussion of conic sections by the ancient Greeks (see the Wikipedia article) gave the basic definitions required by Kepler to formulate his law of planetary orbits. Of course the Greeks did not have the term "pure mathematics". An example from pure mathematics of the 20th century is the application of category theory to computer science. People also forget that the notion of the graph of a function was invented by Descartes and of course is now ubiquitous in our daily papers, to show clearly how bad things are getting!

- 3 +1 for cat theory, this demonstrates that even the most abstract nonsense can have practical applications – jk. Jan 17 at 11:41 6 This might be the example that had the most direct impact on the largest number of "normal" people. Who would have ever thought that Microsoft would add Monad Comprehensions to BASIC? – Jörg W Mittag Jan 17 at 13:44

Here are some examples of pure mathematics that have been shown to have real applications -- however, I am not sure of the origins.

• Radon transform applied to tomography, e.g. ultrasound detection of babies. ;)
• Partial differential equations applied to heat, waves, weather, finance etc.
• Graph theory applied to the logistics of transportation.
• Stochastic analysis applied to finance, e.g. option pricing led by Black, Scholes and Merton.
• Discrete Fourier transform (or rather discrete cosine transform) applied to image analysis, e.g. jpeg.
• Control theory is used in order to strengthen signals in the telecom industry, as well as in calibrating CD drives. Control theory is pretty much based on Fourier analysis and on the theory of $H^\infty(\mathbb{D})$ (i.e. the space of bounded analytic functions on the unit disc).

- 4 Stochastic analysis came from finance. Louis Bachelier was the first one to treat Brownian motion mathematically in his thesis on speculation. I would also be curious where optimal control is supposed to have originated outside applied math. – Michael Greinecker Jan 17 at 8:55 4 I am doubtful that the subjects of PDE or (discrete) Fourier transforms could be considered pure math, historically.
– KCd Jan 17 at 10:50

Euler's Theorem from pure number theory is at the heart of the RSA public-key encryption system.

- 14 – KCd Jan 17 at 10:46

Complex numbers are very useful in electrical engineering. An imaginary number is a hare-brained idea if you think about it: the square of this thing is -1?????!? And yet, it's very valuable when calculating alternating currents. The "trouble" with pure mathematics or ideas is that the empirical world is an open world (not closed like in mathematics), and as we build newer and newer practical things on top of it, you never know what's useful. Say, lambda calculus and functional programming. If you had asked a software engineer 30 years ago what functional programming was good for, you'd most often get the answer "feh! silly academic toy! useless!". Fast forward 20 years to MapReduce applied by Google and it turns out that yes, it's actually quite practical. Wernher von Braun: "research is what I'm doing when I don't know what I'm doing". Combine that with Einstein's "there's nothing as practical as a good theory". The result of this combo is: since we do not know which theory is good, we have to test them; but how do you test something that you have not even formulated as pure theory first? "Bottom up" is such an approach, but not everything can be worked out this way. Although I feel you focus on the wrong problem: the applicability of pure theory is trivial -- just check if it works in practice; try to apply Aristotle's theory of gravity to shooting cannonballs and see it doesn't work (a stone goes up on a curve and at the highest point of the trajectory falls vertically down to the ground -- had Aristotle never thrown stones or something?). A harder problem is when pure theory deceives us into a wrong representation of the real world; for example, classical logic has done huge conceptual damage to knowledge representation in AI and to the way we think about the problems (all those silly logical rules that don't work, akin to the "witch" skit from Monty Python's Holy Grail). P.S. A certain paper on fast resolution of big Horn clauses is the theory behind the pattern matching used for programming in Prolog and Erlang (maybe there are more applications I don't know of), although I can't remember the name of the paper.

- 1 – mrkafk Jan 20 at 18:11

Group theory is commonplace in quantum mechanics to represent families of operators that possess particular symmetries. You can find some info here.

- Algebraic topology has found applications in data mining (and thus in cancer research, I believe), in the field of topological data analysis. See http://www.guardian.co.uk/news/datablog/2013/jan/16/big-data-firm-topological-data-analysis

- Most of our current mathematical knowledge was developed to explain something already observed empirically. Going way back, many early civilizations had no concept of "zero" as being a numerical quantity; however, the concept of "nothing" or "none" existed, and eventually the Babylonians, around 2000 BC, began using symbols for "none" or "zero" alongside numerals, equating the concepts. Newton laid the foundations of what we know today as calculus (also developed independently by Leibniz) in order to mathematically explain and calculate the motion of celestial bodies (and also of projectiles here on earth). Einstein developed tensor calculus in order to establish the mathematical backing for general relativity. It can also, however, happen in reverse.
Usually, this is when "pure" math exhibits some "oddity", such as a divergence or discontinuity of an "ideal" formula that otherwise models real-world behavior very closely, or something originally thought of as a practical impossibility. Then, we find that in fact the real-world behavior actually follows the math even in these "edge cases", and it was our understanding of the way things worked that was wrong. Here's one from physics which touches on some of the most basic grade-school math and yet challenges those very foundations of thought: negative absolute temperature. Temperature, classically, is the measure of thermal energy in a system. By that definition, you can never have less than no energy in the system; hence, the concept of "absolute zero". Most "normal" people hold to this concept and think of zero degrees Kelvin as a true absolute; you can't go lower than that. However, the theoretical, more rigorous, definition of temperature has as its defining character the ratio between the change in energy and the change in entropy. As you add total energy to a system, some remains "useful" as energy, while some is lost to entropy (natural disorder). It's still there (First Law of Thermodynamics), but cannot do work (Second Law of Thermodynamics). The graph of temperature using this definition has computable negative values; if entropy and energy are ever inversely related (entropy reduces as energy increases, or vice-versa), then this fraction, and thus the temperature, is negative. Even more interesting is that the graph of temperature as a function of energy over entropy diverges at absolute zero; the delta of entropy approaches zero for deltas of energy around absolute zero, producing infinitely positive or negative values with an undefined division by zero at the origin. That graph, therefore, predicts that absolute zero is actually a state not of zero energy, but of zero change in entropy, regardless of the amount of energy in the system. Absolute zero, therefore, could in fact be observed in systems with extreme (even infinite) amounts of energy, as long as no additional energy added was ever lost to entropy. This used to be discounted out-of-hand; until recently, every thermal system known to man always exhibited a direct relationship between energy and entropy. You could keep adding all the energy you wanted, to infinity, and entropy would continue to increase as well. You could keep cooling a system all you wanted, until you took out all you could possibly remove, and entropy would decrease as well. Again, this is borne out by our everyday observations of the world; solid, crystalline ice, when heated, becomes more chaotic but generally predictable water, which when further heated becomes less predictable gas, and eventually decomposes into its even less predictable component atoms, which would further decompose into plasma. However, work with lasers, and the theoretical behavior of same, gave us a thermal system that has an "upper bound" to the amount of possible energy we could add that remains contained within the system, and moreover, that limit was pretty easy to reach. 
This allows us to observe a system that actually becomes less chaotic as more energy is added to it, because the more energy that is in the system, the closer it gets to its upper limit of total energy state, and thus the fewer the particles in the system that are at a state less than the highest state (and thus the ability to accurately predict the energy state of any arbitrary particle is increased). On the other side of the spectrum, recent news has reported that scientists have produced the opposite: they can get entropy to increase by removing energy from the system. Work with superfluids at extremely cold temperatures has demonstrated that at a critical point of energy removal from the system, particles within it no longer have sufficient energy to sustain the electromagnetic force that attracts them to and repels them from each other in their lowest energy state (which is also their most ordered state). They lose the ordered structure that defines conventional matter, and begin to "flow" around each other without resistance (zero viscosity). At that critical point, you have increased entropy as the result of removing energy; the particles become less predictable as to position and direction of motion when they're cooled, instead of our classical idea that things which are cooled become more orderly. At this point, we have reached "negative absolute temperature". Thus, temperature seems to exhibit a "wraparound"; as energy increases to infinity, eventually the amount of it that can be in entropy will decrease, seemingly breaking the First Law of Thermodynamics and allowing us to get more energy back from the system than the incremental amount we added (but not more than the total amount of energy ever introduced to the system, so the First Law still holds). Because that threshold is attained (in an unbound system) at infinite energy states, we'll never get there with most of our everyday thermal systems, but we can see it in a bound system, and we can "wrap around" from the low end by removing energy to reach a negative absolute temperature.

- I had a teacher who once told me that Riemann's new idea of measure (that the way we measure has to change depending on the manifold) opened the door to relativity theory. Also, one of the calculus pioneers was Francois Viete, who allowed Leibniz and Newton to develop the machinery of classical mechanics.

- The implementation of the Fast Fourier Transform by Cooley and Tukey, and maybe Shor's quantum algorithm to factor numbers in polynomial time using the Quantum Fourier Transform... at least it might become useful at some point...

- 1 FFTs were always useful for other things though – jk. Jan 17 at 11:05

Just look at the field of quantitative finance, financial mathematics (Brownian motion, Fourier transforms, etc.)

- Turing's development of computability, which led to the theoretical basis of computing. As a personal note, I take pride in dealing with models of ZF without the axiom of choice and all sorts of strange consistency results.
The only way amorphous sets and D-finite combinatorics could be utilized for "practical uses" is if we prove that the universe is actually a good model for an infinite D-finite set, and we can apply all sorts of crazy non-AC theorems to argue about properties of the universe. The only reason this would turn out to be really awesome is that it may invalidate parts of quantum mechanics (see The Axiom of Choice in Quantum Theory).

- Too many to count; much "pure mathematics" of the past has become "applied mathematics" now. The problem with pure mathematics is that it has advanced too far for science and engineering to catch up now. Btw, doing a PhD in any serious science and engineering discipline (even some social science subjects) is like doing some mathematics in the end, and of course much of the mathematics used there was regarded as "pure mathematics" 100-200 years ago.

- Just to add another example: Boolean algebra was developed in 1854. It's abstract and maybe boring, but it set the basis for the development of digital circuits. So all the digital devices that you use right now are heavily based on abstract mathematics from 1854.

- Coding theory is mainly based on algebra; see for example the Goppa code, which uses algebraic geometry tools.

- I'd say that basically all technological achievements are founded in pure mathematics. The relationship is often long and distant, but I'd say without pure mathematics they wouldn't be possible. In fact, I think it'd be rather hard to find a technological achievement that wouldn't be based on results of pure mathematics. To give a few examples:

• Computer science. Computers are based on Turing's and Church's research about what mathematical functions are computable in some sense. At that time, it was pure mathematics, yet now it's the basis of what we use every day. CS uses many concepts from pure mathematics, starting from binary numbers, number theory etc.
• Physics. Physics evolved hand in hand with mathematics. Things that used to be purely mathematical were subsequently used in physics. Without this pure math, we wouldn't have many achievements in physics, simply because physicists wouldn't have the required theoretical tools to work with. And this means we wouldn't have engineering achievements that use them. To give some examples:
• Without calculus and infinitesimals, we wouldn't have statics, which is indispensable for most of today's complex architecture.
• Lie groups, a purely theoretical idea, became very useful in particle physics, which is the basis of many of today's technological advancements.
• Probability and statistics are used everywhere. All empirical research is (or should be) validated using statistical methods.
• And something less serious - without topology, we wouldn't have so many ways of putting on a necktie.

- Error correction codes! When CDs were first being discussed, the engineers from Philips were in discussion in Japan with the company Sony on standards, and those from Sony said they were not happy with the error correction standards set by Philips. So their engineers went back to Eindhoven and called people together to ask who was the best expert in Europe on this new science of error correction. They were told it was a Professor of Number Theory, J. van Lint, in Eindhoven! I did check this story with him.
I have been told that the high quality of the pictures from the Voyager space probes would not be possible without error correction, because of the weak signals and the noisiness of space. Error correction is quite widespread, from hard disks to the simple codes in the ISBN, and the advanced ones (see for example the Wikipedia article) use sophisticated pure mathematics. The first such code, the Hamming code, was invented by a researcher at Bell Labs, when he ran programs over the weekend and came back to find "your program has an error". He swore to himself, and thought: "If it can find an error, why can't it correct it?"

- Topology helps in understanding molecular structures. See this book, When Topology Meets Chemistry: A Topological Look at Molecular Chirality, written by Erica Flapan. I skimmed a few chapters of the book and it was very interesting. Related: Real life applications of Topology

- Group theory in the physics of the Standard Model.

- Wavelet and Fourier transforms are used in a very long list of medical equipment (MRA, blood pressure monitors, diabetes monitors, just to mention a few), in audio-video compression (mp3, jpeg, jpeg2000, h.264, et al.) and in audio-video effects (audio equalization, image enhancing, etc). Linear algebra is the basis of the Google PageRank algorithm and of some face-recognition algorithms. This is not by any means an exhaustive list of applications, just a few that I remember.

- IMO any pure mathematics which is generated by a human brain (and there probably exist, and most certainly will exist, other kinds in the near future) is at least motivated by something which actually exists in the world of human experience. But once the work actually gets underway on a new idea in some area, it takes on a life of its own and will, when polished and refined, look very different from how it did at the outset. Calculus is a great example of a very refined area of mathematics - you can see this in the notation, which has been polished smooth by generations of heavy usage and is very powerful and expressive (and typically takes students a long time to learn well). And the magic is that every time a human brain learns a new piece of pure mathematics, it monitors its own (human) experience for any relevance/connections and the chances increase for the discovery of a new application. So I'm not sure it has ever happened that a piece of pure mathematics was invented for no reason and was absolutely useless until an application was discovered later. And conversely, I'd be willing to bet that almost every aspect of applied mathematics has been the inspiration for pure theoretical work of some sort (whether it led to any significant advances or not). I guess what I'm trying to say is that in mathematics (as in all of science) the dialogue between theory and practice goes in both directions and never stops.

- Coming from a software development background, I can say that functional programming languages were influenced to some degree by lambda calculus, a formal system. Lambda calculus was introduced by the mathematician Alonzo Church in the 1930s.

- How about in calculating orbital patterns (i.e. before the first satellite was ever launched)? Without the work of pure mathematics laying the groundwork for astrophysics, Apollo 13 would have been lost.

- Matrix. Not the movie, the "array of numbers"... It was invented before computers and is now used for all heavy 3D stuff (real-time or not) and more.
It started as pure theory and it's now used in most of your favourite movies and all 3D video games... Pure research is important ;)

- Fractals were invented specifically to explore areas of geometry which were thought to exist only in the world of imagination of pure mathematics. They failed miserably, because it turned out that the world is chock full of fractals. Nowadays, fractals are used heavily in computer graphics and in describing the patterns of nautilus shells, pine cones, coastlines and lightning, among many other natural phenomena. According to Pickover, the mathematics behind fractals began to take shape in the 17th century when the mathematician and philosopher Gottfried Leibniz pondered recursive self-similarity (although he made the mistake of thinking that only the straight line was self-similar in this sense). In his writings, Leibniz used the term "fractional exponents", but lamented that "Geometry" did not yet know of them. Indeed, according to various historical accounts, after that point few mathematicians tackled the issues and the work of those who did remained obscured largely because of resistance to such unfamiliar emerging concepts, which were sometimes referred to as mathematical "monsters". (Wikipedia)

- The classical example of this for me is just binary numbers and their properties (Boolean algebra).

- 1 It would be nice to explicitly say when this became "useful". – robjohn♦ Jan 18 at 10:37

This may be a little late, but I think the most basic and pure proof, and yet the most astonishing, was Euclid's (see the Wikipedia entry on Euclid). There are uses in digital media, cryptography, physics and engineering. A lot of pure math is knowing how to apply it. Most theories have a specific problem set they are known to solve, because they were designed that way or were found to solve that problem set that way; but when you apply theories in ways which are not typical of the solution, you incite innovation and expand your horizons.

- Here are examples of applied mathematics:

• Group theory applied to crystallography: see the PDF notes of a summer course in Mathematical and Theoretical Crystallography.
• Use of mathematical optimization in rigid body dynamics, economics, operations research and control engineering.
• Probability theory is applied in everyday life in risk assessment and in trade on financial markets.

- Matrix operations were used by Pauli to model electron spin. Pauli spin matrix
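Several answers in this thread point to Euler's theorem and RSA as the canonical example of pure number theory becoming useful. As a concrete illustration, here is a toy sketch with deliberately tiny primes (my own example; real RSA uses moduli hundreds of digits long):

```python
# Toy RSA, purely to illustrate the role of Euler's theorem.
p, q = 61, 53
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)        # Euler's totient of n
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # private exponent: d*e ≡ 1 (mod phi), Python 3.8+

message = 1234
cipher = pow(message, e, n)    # encryption: anyone can do this with (n, e)
plain = pow(cipher, d, n)      # decryption needs d, i.e. the factorisation of n
print(cipher, plain)           # plain == 1234
```

For messages coprime to $n$, decryption works because $ed \equiv 1 \pmod{\varphi(n)}$ and, by Euler's theorem, $m^{\varphi(n)} \equiv 1 \pmod n$; recovering $d$ from the public data alone is believed to require factoring $n$.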
http://physics.stackexchange.com/questions/tagged/newtonian-mechanics+simulation
# Tagged Questions

1 answer, 79 views ### Initial position and velocity of rocket to escape earth's gravity I'm trying to numerically simulate a spacecraft trajectory between earth and mars. I already wrote the solar system model where the sun is at the origin of the x,y,z plane and Earth and Mars are orbiting ...

0 answers, 23 views ### Rolling (without slipping) ball on a moving surface 2 [duplicate] Possible Duplicate: Rolling (without slipping) ball on a moving surface Apparently I didn't log in properly when I asked a question this morning: Rolling (without slipping) ball on a moving ...

1 answer, 289 views ### Rolling (without slipping) ball on a moving surface I've been looking at examples of a ball rolling without slipping down an inclined surface. What happens if the incline angle changes as the ball is rolling? More precisely I've been trying to find ...

1 answer, 81 views ### Mutual Interaction of $N$-Particles in a Cartesian Plane I am making a simulation of $N$ particles in a Cartesian plane and need help with understanding the basics. At any time, in my particle system, I will have $N$ particles. I am treating the ...

0 answers, 87 views ### What is the correct way of integrating in astronomy simulations? [closed] I'm creating a simple astronomy simulator that should use Newtonian physics to simulate the movement of planets in a system (or any objects, for that matter). All the bodies are circles in a Euclidean ...

1 answer, 368 views ### Simple 2D Vehicle collision physics I'm trying to create a simplified GTA 2 clone to learn. I'm onto vehicle collision physics. The basic idea, I would say, is to apply a force F, determined by vehicle A's position and velocity, onto a point ...

0 answers, 129 views ### 2D Car Physics including Throttle For a simulation for testing of automatic cruise control, I came across the equation: $$v_{n+1} = (1 - k_1 / m) v_n + (1 - k_b) \begin{pmatrix} T_n \\ \theta_n \\ \end{pmatrix}$$ where: $T$ = ...

1 answer, 89 views ### Simulating a car in an intersection I'm somewhat confused. I want to simulate in real-time an intersection where cars have to turn left, right or go straight. What I have are 2 way points: one at the beginning of the intersection on ...

1 answer, 168 views ### Friction + Bouncing of an Object against an Elastic Wall I am trying to create the formula for applying a bouncing effect to an element which is already slowing down by friction. At the moment I have an element which moves in one dimension at speed "S" and ...

2 answers, 511 views ### Trying to model pinball physics for game AI I'm working on an AI for a pinball-related video game. The ultimate goal for the system is that the AI will be able to fire a flipper at the appropriate time to aim a pinball at a particular point on ...
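Several of the questions listed above come down to integrating Newtonian gravity numerically. As a minimal illustration of why the choice of integrator matters (my own sketch, not taken from any of the questions), here is semi-implicit (symplectic) Euler for a circular orbit; unlike plain explicit Euler, its energy error stays bounded instead of drifting steadily:

```python
import numpy as np

# Semi-implicit (symplectic) Euler for a circular orbit around a point mass.
# Units chosen so that GM = 1 and the orbit radius is 1.
GM = 1.0
r = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])          # circular-orbit speed sqrt(GM/r) = 1
dt = 1e-3

def energy(r, v):
    return 0.5 * v @ v - GM / np.linalg.norm(r)

E0 = energy(r, v)
for _ in range(100_000):          # roughly 16 orbital periods
    a = -GM * r / np.linalg.norm(r) ** 3
    v = v + dt * a                # update velocity first ...
    r = r + dt * v                # ... then position with the *new* velocity
print(abs(energy(r, v) - E0))     # remains small; swapping the update order
                                  # gives explicit Euler, whose energy drifts
```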
http://physics.stackexchange.com/questions/52008/does-light-accelerate-or-slow-down-during-reflection/52010
# Does light accelerate or slow down during reflection?

After all, it does change direction when reflection occurs. So shouldn't it also accelerate? And since the acceleration cannot increase the speed of light, mustn't it slow down?

- 1 Like velocity, acceleration has both magnitude and direction. So you're half right. – Gugg Jan 23 at 22:35 – Qmechanic♦ Jan 23 at 23:41 – Ϛѓăʑɏ βµԂԃϔ Jan 24 at 3:50 Currently, the answers below are all about the interesting properties of light. However, light isn't where this argument goes wrong. I would like to point out that you seem to have switched the meaning of "acceleration" twice (from general parlance to physics and back again). – Gugg Jan 24 at 7:36

## 4 Answers

Light does not slow down during a reflection. Light is a signal disturbance in electric and magnetic fields. These disturbances propagate through space at a fixed speed $c$ in vacuum. The situation is completely analogous, in a mathematical sense, to a wave pulse that is sent along a string. When the pulse encounters a boundary, it flips direction, and may or may not change phase depending on the type of boundary encountered. For good graphical depictions of this phenomenon, visit this page. If you emit a pulse of light at a distance of 1 meter from a plane mirror, and measure the amount of time it takes for the signal to return, you will find that it is 2 meters / $c$, neglecting refractive effects of the air. In this sense, we say that the light has not slowed down, even though it has changed direction in the middle of its journey.

- Light changes energy during reflection or refraction. However, it maintains its constant speed. The change in this energy is detected as a change in the frequency and/or wavelength.

- There are some processes which may be termed acceleration. For example, a photon having velocity $c$ in vacuum and then having velocity $c/n$ after entering a medium with refractive index $n$. Or, as you describe, a change from velocity $c$ to $-c$ when reflected from a mirror. However, the magnitude of its velocity is $c$ at all times, and in fact when changing medium the photon is absorbed and a new one emitted, so whether that is to be called an actual acceleration is debatable. This question is related and may be of interest.

- The actual thing that happens, in a more general way than just reflection, is that when the space properties change (for example, at a mirror), the particles of the mirror absorb the photons and emit new ones: the light doesn't reflect but disappears, giving energy to the atoms, which then lose it again as a new photon. At an interface of materials, when light passes from $c/n_1$ to $c/n_2$, the same thing happens: the light is absorbed and re-emitted by all the particles, and it actually travels at speed $c$ between them. The average behaviour of the wave front is that it travels a little bit slower, while the light itself is not travelling directly in the direction of the wave front; it only does so on average.

-
http://mathhelpforum.com/differential-geometry/195225-finding-tangent-line-normal-parametric-equation.html
Thread: 1. Finding Tangent Line and Normal of a parametric equation For $\delta \mathbb{R}\rightarrow \mathbb{R}^2,\ t\ =\ \delta (t)\ =\ (cosh(2t-2),sin(2\pi t^2))$ find the tangent line and the normal line passing through $\delta(t_0),t_0\ =\ 1$ So i think the tangent line is $T\delta (t_0)\ =\ \delta' (t_0)s\ +\ \delta (t_0)$ However Im clueless as to how to find the normal line. 2. Re: Finding Tangent Line and Normal of a parametric equation it's ok apparently, after pluggin in the to into the tangent equation to get Tdelta(to) = (x,y)s + (a,b) the normal line is simply Ndelta(to) = (-y,x)s + (a,b). ps it was meant to say deltaR rightarrow R to the power of 2 not that weird symbol. 3. Re: Finding Tangent Line and Normal of a parametric equation $\delta'(t_0)$ will be a two dimensional vector- the normal will be perpendicular to that so you just need a vector perpendicular to $\delta'(t_0)$. In two dimensions, that's easy. A normal to the vector <a, b> is <b, -a>.
http://math.stackexchange.com/questions/166713/if-any-triangle-has-area-at-most-1-points-can-be-covered-by-a-rectangle-of-are
If any triangle has area at most 1, the points can be covered by a rectangle of area 2.

I have been working on this problem for some time, and I am not able to finish the argument: There is a finite number of points in the plane, such that every triangle has area at most 1. Prove that the points can be covered by a rectangle of area 2.

My progress so far: Consider the two points furthest apart, A and B. Draw lines through A and B, perpendicular to AB. Then every point has to be between the two lines, since AB has maximal length among all pairs of our points. Now take the points C and D, furthest from the line AB, in each of the two half-planes determined by AB. Every other point in the plane has to be between the two lines through C and D, parallel to AB. The rectangle determined by the four lines gives us an area of at most 4, but I feel like this argument can be improved somehow to give us the required area of 2.

Thank you! (Edit: thank you for the advice on posting problems here)

- 7 Since you are new, I want to give some advice about the site: To get the best possible answers, you should explain what your thoughts on the problem are so far. That way, people won't tell you things you already know, and they can write answers at an appropriate level; also, people are much more willing to help you if you show that you've tried the problem yourself. If this is homework, please add the [homework] tag; people will still help, so don't worry. Also, many would consider your post rude because it is a command ("Prove..."), not a request for help, so please consider rewriting it. – Zev Chonoles♦ Jul 4 '12 at 19:13

3 Answers

An even simpler counterexample would be the corner points of any non-rectangular parallelogram of area 2, e.g. $(0,0)$, $(0,1)$, $(2,2)$ and $(2,1)$. Any triangle formed from those points must have half the area of the parallelogram, i.e. 1, but as the parallelogram is not a rectangle, any rectangle that covers it must have an area greater than 2.

- In the arrangement of four points below, each of the $4$ triangles has area $1$. (figure omitted) As $L\to\infty$, it seems that the smallest rectangle enclosing the $4$ points would have to be about $L\times4/L$, which would give it an area of $4$, not $2$.

- +1 for the picture. – Ilmari Karonen Jul 4 '12 at 21:38

If the theorem is true, a circle that is approximated by a polygon with sufficiently many vertices verifies the theorem too. The radius of the circle is $r$. The rectangle is a square and has area $4 r^2$. The triangle of greatest area in a circle is equilateral, so it has area $3/2 \times r \times \sqrt{3}/2 \times r$. If the theorem is true, $4r^2 \leq 2 \times (3/2 \times r \times \sqrt{3}/2 \times r)$. So $16 \leq 27/4$. Contradiction. The theorem seems to be false.

-
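The parallelogram counterexample above is easy to verify by machine. The sketch below (my own addition) checks that every triangle on the four points has area 1, and computes the minimum-area enclosing rectangle by rotating calipers (for a convex polygon the optimal rectangle has a side flush with one of the edges); it comes out to $12/5 = 2.4 > 2$.

```python
from itertools import combinations
import numpy as np

# The four points, already listed in convex-hull order.
pts = np.array([(0, 0), (0, 1), (2, 2), (2, 1)], dtype=float)

def tri_area(a, b, c):
    (x1, y1), (x2, y2) = b - a, c - a
    return abs(x1 * y2 - y1 * x2) / 2

# Every triangle on these four points has area exactly 1.
print([tri_area(*t) for t in combinations(pts, 3)])

def min_rect_area(points):
    """Minimum-area enclosing rectangle of a convex polygon given in hull order:
    test the orientation determined by each edge (rotating calipers)."""
    best = np.inf
    m = len(points)
    for i in range(m):
        e = points[(i + 1) % m] - points[i]
        u = e / np.linalg.norm(e)          # direction along the edge
        v = np.array([-u[1], u[0]])        # perpendicular direction
        w = points @ u
        h = points @ v
        best = min(best, (w.max() - w.min()) * (h.max() - h.min()))
    return best

print(min_rect_area(pts))   # 2.4 > 2, so no rectangle of area 2 covers the points
```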
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9522548317909241, "perplexity_flag": "head"}
http://arcsecond.wordpress.com/tag/global-warming/
# Arcsecond

## Posts Tagged 'global warming'

### Let's Read the Internet! Week 11: Good and Bad Explanation

May 14, 2010

Do you think rationally about all the opinions you read, carefully considering why you agree or disagree with any given viewpoint, or is your method for discourse more like the way you sift through a hundred crappy photos of yourself to find the kinda-hot-but-not-too-slutty one that will be your Facebook profile picture? Oh yes, I like this one. All the others can go now. It's been a long time since I last read the internet with you, so it's time to do that again. Hopefully you'll be entertained, and also question the way you think about facts and reality. Although this is a links dump, incredibly none of it involves cats or pornography.

Via Swans on Tea, Feynman discusses, in a tangential manner, what magnetism is. When I launch into an explanation, my goal is something along the lines of, "I'm going to say something to you, and when I'm done, you'll understand it the way I do." My guess is that most people implicitly think about explanation the same way. An explainer says some words, possibly along with drawing pictures or doing a demonstration, and the explainee watches, listens, and understands. We expect some confusion and some back-and-forth questions. Also, the scope of what is explained may be very small, so that the explainer perhaps knows a lot more details, but despite these caveats I think this "I will give you my knowledge" approach is the subtext for most of our explanations. The strange thing is that if you ask people directly what explanation is, they do not believe this. They believe that explanations are highly context-dependent, and that they're imperfect, and that their scope is limited ("I don't expect the explainee to get everything. The explanation just gives the general idea, and they'll work out the details in due time…"), but when I watch two people engaged in an explainer/explainee interaction I get the feeling that they will consider the exchange a failure (or at least not wholly successful) if the explainee ultimately does not understand the subject the way the explainer does. Even the drastically different approaches people take when explaining something to an adult or to a child seem based on the principle that in order for the explanation to be effective, it must be worded to suit the audience, but the explainer still hopes to be completely understood. They just need to find the right way to say things.

Feynman points out that this sort of explanation is impossible because knowledge doesn't consist of tidbits. Feynman cannot take his knowledge of magnetism and "dumb it down" in any sort of accurate way, because that knowledge is couched in the context of everything else he knows about nature. Feynman's understanding of magnetic forces was much more thorough than the interviewer's because Feynman understood the fundamental forces involved; he knew all about quantum theory and the interaction of light with matter, and had a feeling for what things were and were not already known and explained by physical models. He also had practical experience with magnets, and had taught students about magnetism and investigated all sorts of magnetic phenomena. But in addition to this knowledge of the theories and models of magnetism, Feynman's understanding is tempered by his abilities.
What separates the scientist from the layperson is not their knowledge of science, but their ability to mathematically manipulate the model, or even create a new one, to derive understanding. If Feynman were still around and he sat down to tutor me in all aspects of electromagnetism, we could probably make a lot of progress. With enough time, he could teach me everything he knew. But I still wouldn't understand it the way he did.

With that, let's look at an explanation I particularly liked: We Recommend a Singular Value Decomposition, by David Austin at the American Mathematical Society. This is an explanation of the singular value decomposition, a basic tool in linear algebra. I remember learning about it while studying linear algebra, but I didn't understand it very clearly. I thought about it only formally, and I kept getting the idea of what it was confused with the proof that it exists. As a result, if I were asked to explain singular value decompositions to someone else, I'd have first gone back to my linear algebra book to review, then pretty much repeated what it said there, trying desperately to do things just differently enough that I wasn't copying.

I got the feeling that Austin did the opposite in writing this article. He did not sit down and say, "Okay, what are all the things I know about SVD and all the good examples of it, and then how can I condense them all and make it appropriate to the audience?" Instead, it seemed like he said, "I happen to know a couple of good pictures that make this clear in the case of a 2×2 matrix. Based on that, what sort of presentation of the SVD makes sense? What level of detail would muddy the presentation? If I change the order I present the ideas, how will that change the reader's perception of the SVD's theoretical and practical importance? What can be left out, and how can I get straight to the heart of the matter and communicate that first?"

Very quickly in the essay, Austin gets to a picture (not reproduced here) which illustrates the singular value decomposition of $\left[ \begin{array}{cc} 1 & 1 \\ 0 & 1 \end{array}\right]$. There are only a few short paragraphs before that, but already we've walked through a story that motivates it. Austin gives three examples showing how we can understand linear transformations visually, and by the time we finish the third, it was apparent to me that a singular value decomposition is a logical extension of the linear algebra I was already familiar with. He had me hooked for the rest of the article.

After giving his example, Austin builds directly to the equation $M = U \Sigma V^T$ which illustrates why it's a "decomposition", and what each part of the decomposition means. Only after giving a fairly complete explanation of what a singular value decomposition is did he start to go into how to find it and how to apply it. Lots of math or physics writing I see doesn't take this approach. Instead, the first time I see a particular equation is at the end of its derivation. That means that all the derivation leading up to it seems unmotivated to me. Austin doesn't even include the derivations. There's enough detail that I could work through the missing parts by myself, ultimately understanding them better than I would if each step were spelled out for me. For example, he writes:

In other words, the function $|M x|$ on the unit circle has a maximum at $v_1$ and a minimum at $v_2$. This reduces the problem to a rather standard calculus problem in which we wish to optimize a function over the unit circle.
It turns out that the critical points of this function occur at the eigenvectors of the matrix $M^TM$.

That's actually more effective for me than going through the details of the calculus problem. It points me in the right direction to go over it when I'm interested, but in the meantime lets me continue on to the rest of the good stuff. By reorganizing the material, omitting details, and (literally) illustrating his concepts, Austin finally got me to pay attention to something I ostensibly learned years ago.

Next, I'd like to illustrate my lack of creativity by returning to Feynman, this time his Caltech commencement address from 1974, Cargo Cult Science. Feynman identifies a problem:

In the South Seas there is a Cargo Cult of people. During the war they saw airplanes land with lots of good materials, and they want the same thing to happen now. So they've arranged to make things like runways, to put fires along the sides of the runways, to make a wooden hut for a man to sit in, with two wooden pieces on his head like headphones and bars of bamboo sticking out like antennas—he's the controller—and they wait for the airplanes to land. They're doing everything right. The form is perfect. It looks exactly the way it looked before. But it doesn't work. No airplanes land. So I call these things Cargo Cult Science, because they follow all the apparent precepts and forms of scientific investigation, but they're missing something essential, because the planes don't land.

and suggests a solution:

Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can—if you know anything at all wrong, or possibly wrong—to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. There is also a more subtle problem. When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition.

For an example of awful science, take a look at a story that made it to Slashdot a little while ago, Scientists Postulate Extinct Hominid with 150 IQ. The Slashdot summary says:

Neuroscientists Gary Lynch and Richard Granger have an interesting article in Discover Magazine about the Boskops, an extinct hominid that had big eyes, child-like faces, and forebrains roughly 50% larger than modern man indicating they may have had an average intelligence of around 150, making them geniuses among Homo sapiens. The combination of a large cranium and immature face would look decidedly unusual to modern eyes, but not entirely unfamiliar. Such faces peer out from the covers of countless science fiction books and are often attached to 'alien abductors' in movies.

Slashdot is known for being strong on computer news, not for their science coverage, but still it's surprising to me that such a ridiculous bit of claptrap got so much attention. A few commenters point out how absurd the conclusion that an entire race of people had an average IQ of 150 is, but there is so much white noise in the comments of any large online community that most people usually don't read them, probably including the people who write the comments in the first place.
And even if Slashdot will publish sensational cargo cult stories like this, what business does such a story have in Discover Magazine, which I don't read, but had assumed was fairly reputable? Discover published this quote about the Boskops:

Where your memory of a walk down a Parisian street may include the mental visual image of the street vendor, the bistro, and the charming little church, the Boskop may also have had the music coming from the bistro, the conversations from other strollers, and the peculiar window over the door of the church. Alas, if only the Boskop had had the chance to stroll a Parisian boulevard!

First, that doesn't sound like high intelligence to me. It sounds like autism. Second, how the fuck would you know that from looking at some skulls? Such conclusions obviously have no place in the science-with-integrity Feynman described.

20 years ago, if I had read that story I would not have gone to the effort to follow up on it. (For one thing I'd have been five years old, and so instead of doing some research I would have drunk a juice box, gone outside to play, and pooped myself.) Now we have the internet, and follow-up is very easy. Fortunately, high up on the Google results is John Hawks' article, The "Amazing" Boskops. Hawks, summarizing his review of the literature on the Boskops, writes:

…in fact, what happened is that a small set of large crania were taken from a much larger sample of varied crania, and given the name, "Boskopoid." This selection was initially done almost without any regard for archaeological or cultural associations — any old, large skull was a "Boskop". Later, when a more systematic inventory of archaeological associations was entered into evidence, it became clear that the "Boskop race" was entirely a figment of anthropologists' imaginations. Instead, the MSA-to-LSA population of South Africa had a varied array of features, within the last 20,000 years trending toward those present in historic southern African peoples.

Hawks then followed up with more detail later. The good news is that the Boskop nonsense will die out because it's wrong, and our system works well enough that things that are wrong do eventually die out. In that little vignette, I looked at a big magazine and a published book that were nonsense and were debunked by a blog. It's not always easy to determine the credibility of a source, and its reputation can be misleading. Blogs have a terrible reputation in general, while some people seem to believe that if it's in a book, it must be true. (Unfortunately people take this to the extreme with one particularly poorly-documented and self-contradictory bestselling book!)

A more difficult, stickier issue is anthropogenic global warming. There is little doubt in my mind that anthropogenic global warming is real, but unlike with evolution, I do not believe that because I have looked at the scientific evidence and thought about the arguments for and against. I haven't examined the methods of collecting raw data or the factors accounted for in climate models. I don't even know how accurate those models' predictions are. I take it all on the word of climate scientists and a cursory review of their reports. I do not see this as a problem or a failure of my rationality.
I do withhold judgment on whether global warming is as important an issue as, say, pollution or direct destruction of natural resources, but I do not feel reservation in stating that I think it is very likely that if humans continue on the way they've been going, the Earth will warm with severe consequences. What does this have to do with cargo cult science? Cargo cult science is the reason I believe the climate scientists rather than the climate skeptics.

My goal here isn't to convince you one way or another about climate science, or to link to the best-reasoned discussions about it, or to give an accurate cross-section of the blogosphere's thinking process. These are various opinions on anthropogenic global warming, and my hope is that reading for the underlying decision-making process is an instructive exercise.

- Lord Monckton, a prominent global warming critic [video].
- Monckton interviewing a Greenpeace supporter about why she believes in anthropogenic global warming [video].
- The UN group Monckton criticizes, the Intergovernmental Panel on Climate Change, in particular their Climate Change 2007 Synthesis Report, a 52-page summary of all things climate science. For more detail, their Publications and Data are available.
- A recent letter published in Science, Climate Change and the Integrity of Science. It discusses the process scientists use to create reports on the climate, the uncertainty in scientific results, the fallibility of scientific findings, and the role of integrity in science.
- Statistician and blogger Andrew Gelman talking about expert opinion and scientific consensus: How do I form my attitudes and opinions about scientific questions?
- Famous skeptic James Randi on the pressure for scientific consensus, the fallibility of scientists, the uncertainty in models of complicated phenomena, and his skepticism of anthropogenic global warming: AGW Revisited.
- The petition Randi describes, the Petition Project.
- A reply to Randi and the Petition Project from PZ Myers, a biologist and well-known angry internet scientist: Say it ain't so, Randi!
- A graphic by David McCandless. Its goal is to present an example of the arguments one would uncover in an attempt to self-educate about climate science using only the internet: Global Warming Skeptics vs. The Scientific Consensus.
- Greg Laden writing about skepticism, rationality, and groupthink in a lengthy post: Are you a real skeptic? I doubt it.
- The Wikipedia article on anthropogenic global warming (a featured article), along with tabs to the discussion page for the article and the article history: Global Warming.

My focus on the process people are using to come to terms with global warming isn't meant to deemphasize the importance of this issue and of other aspects of the relationship between humanity and our biome. Our Earth is a fantastically diverse and endlessly beautiful home. Of course I want to understand it better. Also, here is a physics blog story about a mathematical model of cows.

Tags: AGW, cargo cult science, explanation, Feynman, global warming, learning, magnetism, rationality, science, skepticism, teaching
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 6, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9555193185806274, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/310119/number-of-partitions-of-n-with-k-parts-equals-the-number-of-partitions-of-n/310130
# Number of partitions of $n$ with $k$ parts equals the number of partitions of $n + \binom k {2}$

How do I prove bijectively that the number of partitions of $n$ with $k$ parts equals the number of partitions of $n + \binom k {2}$ with $k$ distinct parts? -

use the fact that $\binom k 2$ is a triangular number – user58512 Feb 21 at 14:22

## 2 Answers

We can establish a bijection like this. Let us say we have a partition of $n$ into $k$ parts. Order it in non-decreasing order. Now start adding $0$ to the first part, $1$ to the second part, $2$ to the third part, $\ldots,$ $k-1$ to the $k$th part. You will get a partition of $n + \binom{k}{2}$ into $k$ distinct parts. I hope it is clear. -

(+1) I added a little $\LaTeX$ and formatting to make it easier to read, and changed partition to part where you’re talking about one part of the partition. – Brian M. Scott Feb 21 at 13:13

HINT (complementary to Novice’s answer): Start with the Ferrers diagram of a partition of $n+\binom{k}2$ into $k$ distinct parts. Note that the bottom ($k$-th) row must have at least one dot, the next one up at least $2$, and so on. Now remove the following array of dots: $$\begin{array}{c|l} \text{Row}&\text{Remove}\\ \hline 1&\underbrace{\bullet\bullet\bullet\ldots\bullet\bullet\bullet}_{k-1}\\ 2&\underbrace{\bullet\bullet\bullet\ldots\bullet\bullet}_{k-2}\\ 3&\underbrace{\bullet\bullet\bullet\ldots\bullet}_{k-3}\\ &\quad\vdots\\ k-3&\bullet\bullet\bullet\\ k-2&\bullet\bullet\\ k-1&\bullet\\ k&\text{(nothing)} \end{array}$$ How many dots have you removed? How many are left? Have you emptied any row? -
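The bijection in the first answer is simple enough to experiment with. Here is a minimal sketch (plain Python; the function names are just for illustration): add $0,1,\dots,k-1$ to the parts taken in non-decreasing order, and subtract them again to invert.

```python
# Sketch of the bijection: partitions of n with k parts <-> partitions of
# n + k(k-1)/2 with k distinct parts.
def to_distinct(partition):
    parts = sorted(partition)                 # non-decreasing order
    return [p + i for i, p in enumerate(parts)]

def from_distinct(distinct_partition):
    parts = sorted(distinct_partition)
    return [p - i for i, p in enumerate(parts)]

p = [3, 1, 1, 2]                              # a partition of 7 with k = 4 parts
q = to_distinct(p)                            # [1, 2, 4, 6], a partition of 13 = 7 + C(4,2)
assert sum(q) == sum(p) + len(p) * (len(p) - 1) // 2
assert len(set(q)) == len(q)                  # the parts are distinct
assert sorted(from_distinct(q)) == sorted(p)  # the map is invertible
```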
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.92218416929245, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/135157/general-definition-of-growth-in-mathematics
# General definition of growth in mathematics

From high school math one knows "linear growth", "exponential growth", "logistic growth", "bounded growth" etc., but is there a commonly accepted general definition of "growth" which covers the special cases above? Perhaps "growth" is just the same as a strictly monotonically increasing function ($\mathbb{R} \to \mathbb{R}$). However, for example, the term "exponential growth" often covers also the case of "exponential decay" (as mentioned in the wikipedia article: http://en.wikipedia.org/wiki/Exponential_growth). So one might say that "growth" in general means just a strictly monotonic function. So is there a commonly accepted and used definition of the general concept "growth" in mathematics? If so, do you have references?

Edit: Just found a similar definition of "growth" in the German Wikipedia (growth = Wachstum in German): http://de.wikipedia.org/wiki/Wachstum_%28Mathematik%29 In the section "Mathematische Beschreibung", growth is described as the behaviour of a measured quantity in time. One first determines the value $W_1$ of this quantity at time $t_1$, then its value $W_2$ at time $t_2$ ($t_2 > t_1$). If $W_2 > W_1$ the growth is called positive growth, if $W_2 < W_1$ negative growth, and if $W_1 = W_2$ zero growth. However there are no references in this article and it is not clear whether $W_2 > W_1$ must hold for all $t_2 > t_1$ or just for one pair $t_2 > t_1$. And it's not clear to me if this definition (if made more rigorous) is commonly used. -

2 I don't think there's an actual definition, it's just an English term used as such. If you want, you can picture the set of all functions with a given amount of regularity as being partitioned according to asymptotic relations like $\sim$ or $=\Theta(\cdot)$, and "growth" can refer to a particular equivalence class named after a prototypical representative element. | Why was this question downvoted? – anon Apr 22 '12 at 8:32

– dtldarek Apr 22 '12 at 8:45

## 1 Answer

I agree: there is no rigorous definition of growth. In modern mathematics, growth is something relative and not absolute. So you speak of exponential growth for the function $u$ if $$\lim_{x \to +\infty} \frac{u(x)}{e^{px}}=\ell\neq 0$$ for some $p>0$, or $$0<\limsup_{x \to +\infty} \left| \frac{u(x)}{e^{px}}\right|<+\infty.$$ Similarly, you speak of polynomial growth. In other words, growth presupposes some comparison class of functions. -
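The answer's comparison-class definition is easy to try on concrete functions. A small sketch (assuming SymPy is available; the particular functions are arbitrary choices):

```python
# Checking the limit definition of exponential growth on two examples.
import sympy as sp

x = sp.symbols('x', positive=True)
u = 5 * sp.exp(3 * x) + x**2

# finite nonzero limit, so u has exponential growth with rate p = 3
print(sp.limit(u / sp.exp(3 * x), x, sp.oo))       # -> 5

# the limit is 0 for p = 3 (and for every p > 0), so x**3 has polynomial,
# not exponential, growth
print(sp.limit(x**3 / sp.exp(3 * x), x, sp.oo))    # -> 0
```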
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9351576566696167, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/tagged/covering-spaces?page=3&sort=newest&pagesize=15
# Tagged Questions The covering-spaces tag has no wiki summary. 2answers 143 views ### Are fundamental groups of Riemann surfaces always finitely generated For any finite subset $B\subset \mathbf{P}^1$, the fundamental group of the Riemann surface $\mathbf{P}^1-B$ is finitely generated. Is this true if we replace $\mathbf P^1$ by a higher genus compact ... 1answer 77 views ### Are these two notions of Galois morphism the same Let $f:X\to Y$ be a finite morphism of integral schemes. Let $G$ be the automorphism group of $X$ over $Y$. Are the following two conditions equivalent? The function field extension \$K(Y)\subset ... 1answer 113 views ### Galois covers of Riemann surfaces Let $G$ be a finite abelian group, and $C$ a compact Riemann surface (algebraic curve) of genus $g$. I am interested in topological Galois $G$-covers $X \to C$, aka \'etale $G$-principal bundles over ... 1answer 94 views ### Basic covering space question Given a path connected metric space $X$ and a cover $\tilde{X}$ which is also a path connected metric space with covering map $E$, then is $E$ a local isometry? 0answers 95 views ### What is the Hurwitz number of an elliptic curve One can associate a Hurwitz number to any rational function $f:X\to \mathbf{P}^1$ on a compact connected Riemann surface $X$ which ramifies over precisely FOUR points. Suppose that $X$ is an elliptic ... 2answers 166 views ### Does a morphism between covering spaces define a covering? My question involves topological spaces $X$, $Y$ and $Z$, two coverings $p : Y \rightarrow X$ and $q: Z \rightarrow X$ of $X$ and a morphism $f: Y \rightarrow Z$ of coverings, i.e. a map which ... 1answer 64 views ### What is the length of the following local ring Let $f:Y\to X$ be a finite etale cover of smooth projective connected varieties. (Or, just a finite degree connected topological cover of connected Riemann surfaces.) Let $y\in Y$ and let $x=f(y)$. ... 5answers 575 views ### Covering spaces - why are they useful? As someone who trained as a physicist, I have known for ages that $\operatorname{SU}(2)$ is a double cover of $\operatorname{SO}(3)$, so, during an idle day at the office I decided to look up what ... 2answers 147 views ### metric on the universal covering of a geometric manifold We know that the universal covering of a closed hyperbolic 3-manifold can be identified with the hyperbolic space $\mathbb{H}^3$. Now, what is not very clear to me is how this identification has to be ... 0answers 107 views ### Hyperbolic Universal Covering Space I have been working with Ricci flow in the euclidean and hyperbolic space but have been having considerable trouble determining how to generate a universal covering space for complex hyperbolic ... 1answer 149 views ### discriminant of an étale cover of an elliptic curve Let $\pi:X\to E$ be a finite étale morphism, where $E$ is an elliptic curve over a number field $K$. Assume $X$ to be connected, and to be of genus 1. Edit: Assume $X$ and $E$ have semi-stable ... 1answer 113 views ### Number of ramification points in a simple cover Let $f:X\to \mathbf{P}^1$ be a simple cover of the Riemann sphere. This means that $f$ is a branched cover, and that each fibre has at least $\deg f-1$ points in it. Is it true that the number of ... 2answers 211 views ### Question about two simple problems on covering spaces Here are two problems that look trivial, but I could not prove. i) If $p:E \to B$ and $j:B \to Z$ are covering maps, and $j$ is such that the preimages of points are finite sets, then the composite ... 
3answers 612 views ### Another Question in Hatcher First of all, I apologize for asking yet another question about the hypotheses of a problem in Hatcher, but the statement of one of his problems has stumped me again. The problem is 1.3.15. It reads ... 2answers 109 views ### An action of a group on a covering space We see $S_3$ as the quotient of the free group on two elements and the normal subgroup $R$ generated by $\langle\sigma^3,\tau^2,\sigma\tau\sigma\tau\rangle$ where $\sigma$ and $\tau$ are the ... 1answer 201 views ### The covering space of a bouquet of 2 circles corresponding to a normal subgroup Consider $S_3$ with this presentation: $S_3=\left\langle\sigma,\tau:\sigma^2=1, \sigma\tau=\tau^{-1}\sigma\right\rangle$. Let F be the free group with two generators $s,t$ and $R$ the minimal normal ... 0answers 877 views ### The simply connected coverings of two homotopy equivalent spaces are homotopy equivalent This is exercise 1.3.8 in Hatcher: Let $\tilde{X}$ and $\tilde{Y}$ be simply-connected covering spaces of path connected, locally path-connected spaces $X$ and $Y$. Show that if $X\simeq Y$ then ... 1answer 332 views ### Composition of coverings of path connected spaces Do there exist covering maps $p:X\rightarrow Y$ and $q:Y\rightarrow Z$ such that $X$ is path connected and the composition $q\circ p$ fails to be a covering map? 1answer 226 views ### Irregular covering space of $\mathbb{R}P^2\vee\mathbb{R}P^2$ This was on my final last semester (to find such a cover), and I missed it. Here are my thoughts on it since then: I know that the universal cover of $X = \mathbb{R}P^2\vee\mathbb{R}P^2$ is (loosely) ... 2answers 413 views ### Calculating monodromy I'm right now learning about Monodromy from self-studying Rick Miranda's fantastic book "Algebraic Curves and Riemann surfaces". Today, I read about monodromy, and the monodromy representation of a ... 1answer 63 views ### G-complexes and regular covering Suppose $X$ a free $G$-complex (i.e. a CW-complex with a free $G$-action that permutes the cells). I would like to show that the projection $$X\overset{p}{\to}X/G$$ is a regular covering spaces with ... 1answer 181 views ### Monodromy correspondence Lately I've been studying monodromy and covering maps (in particular ramified covering mapos of Riemann surfaces), and I came across something I didn't fully understand. Let $V$ be a connected real ... 0answers 41 views ### How to construct certain cover given in Mumford's Abelian Varities book In chapter I, appendix to section 2 of the book "abelian varieties" by Mumford, we consider a discrete group $G$ acting freely and discontinuously on a good topological space $X$ (i.e., \$\forall x \in ... 2answers 276 views ### Why is the Long Line not a covering space for the Circle I know of several reasons why the long line can't be a covering space for the circle, but I'm more curious in what exactly goes wrong with the following covering map. Let $L$ be the long line and ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 72, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9208767414093018, "perplexity_flag": "head"}
http://scicomp.stackexchange.com/questions/5047/finding-the-distribution-histogram-of-eigenvalues-for-large-sparse-matrices
# Finding the distribution (histogram) of eigenvalues for large sparse matrices

Are there any existing programs that are able to compute the (approximate) distribution of eigenvalues for very large (symmetric) sparse matrices? Note that I do not need the eigenvalues themselves, only their distribution (finding all the eigenvalues is a more difficult problem), and I am primarily looking for existing software, not only the description of algorithms that I would need to implement myself. My matrices are adjacency matrices of undirected graphs (or closely related matrices). I found some papers discussing how this can be done in practice, so I was hoping that a working implementation exists somewhere. -

What specifically do you mean by their distribution? – Geoffrey Irving Jan 18 at 23:47

@GeoffreyIrving Like a histogram. How many eigenvalues are there in a given interval $[a,b)$, or more practically, how many are there between `[0,1)`, between `[1,2)`, etc. Does this answer your question? – Szabolcs Jan 19 at 0:13

Does the term "approximate" refer to the binning of the eigenvalues (the larger the bins, the coarser the approximation of the true distribution), or are you also looking for just an estimate/bound of the count of the eigenvalues in a bin? – Stefano M Jan 20 at 16:18

@Stefano The second one. I'm also looking for an estimate if the estimate is considerably easier (= faster) to compute for very large sparse matrices. – Szabolcs Jan 20 at 16:54

## 2 Answers

In structural mechanics the number of eigenvalues of a matrix $K$ in a given range $(\alpha,\beta)$ is computed via the "Sturm sequence check", i.e. computing the $LDL^T$ factorizations of $K-\alpha I$ and $K-\beta I$ and counting the difference in the number of negative pivots. If you have reasonably large bins, this can be applied to your problem, and it should be pretty straightforward to implement. (A search on Lanczos shifted block algorithms should give more info, since this technique is often used in that context to check for missed eigenvalue/eigenvector pairs.) This count is exact, and expensive for large $K$, so your request for an "estimate" or approximate count is still open. Please post any findings.

Edit/update The authors of "A Padé-based factorization-free algorithm for identifying the eigenvalues missed by a generalized symmetric eigensolver" claim

To the best of the authors’ knowledge, no post-processing technique that does not require the factorization of a matrix related to $K$ and/or $M$ is currently available for checking whether an eigensolver applied to the solution of problem (1) has missed some eigenvalues in an arbitrary range of interest $[ \sigma_L , \sigma_R ]$.

This means that for obtaining an exact count of the eigenvalues in a given bin you have to use the classical Sturm sequence method (see also Sylvester's law of inertia). General advice for implementing this approach in your case cannot be given without an analysis of the properties of your adjacency matrices (dimensions, number of non-zero entries, fill-in after reordering, condition number of principal minors...). Nevertheless I would suggest starting with a simple no-brainer approach, and seeing if you experience breakdowns of the implementation (assuming that this computation is not mission critical). I suggest using the wonderful SuiteSparse by Tim Davis.

1. reorder your matrix to reduce fill-in (e.g. calling SYMAMD from COLAMD on $A - \alpha_0 I$).
2. compute $L_i D_i L_i^T = A - \alpha_i I$, where $\alpha_i$, $i=0\dots m$ are the boundaries of your bins.
(Try first a simple implementation like LDL without pivoting, and go for a pivoting approach only if you experience numerical difficulties.) (Note that without pivoting the symbolic factorization step can be recycled for all $i$.) 3. the count of negative diagonal terms in $D$ gives you the number of eigenvalues $\lambda < \alpha_i$. This approach is effective only if $m$ is small or the factorization time is negligible with respect to eigendecomposition time: you have to perform some experiments to find out. Good luck. - Thanks for the pointers! I also found a reference to Sturm sequences (the one I linked in the question). Do you know of any software that already implements these things in a way that's usable for a sparse matrix of size larger than 50000? What I was really hoping for was a ready made software so I do not need to completely understand implement the method myself. If there's no existing public software, of course I'll have to do that, but it would be really good to be able to save some time ... – Szabolcs Jan 20 at 16:55 @Szabolcs I'm no expert in this area, but "Sturm seq. check" is common in large scale eigenvalue solvers for structural mechanics problems, so you can try to extrapolate code from there. You can also search for libraries for computing "modal density", which is what you are looking for in structural mechanics lingo. – Stefano M Jan 20 at 21:10 Thanks for the edit, I just noticed it. There's a lot of useful information here, I'll need some time to go through it. – Szabolcs Jan 23 at 0:08 I would imagine that something of this kind can be inferred from the pseudospectrum function. See the work by Marc Embree, for example: http://www.caam.rice.edu/~embree/ - 3 Computing pseudospectra is more expensive than computing eigenvalues. There are good reasons to compute pseudospectra, but it's different information from a "histogram" and not fast. – Jed Brown Jan 19 at 7:43 1 The term 'pseudospectrum' is new to me, but what I found about it online seems to say that they're used to deal more easily with non-Hermitian matrices. All my matrices are Hermitian here. Can this still be of use? – Szabolcs Jan 19 at 15:59 If it's applicable to a general class of matrices (non-Hermitian) then it's of course also applicable to a subset of those (Hermitian). @JedBrown is probably right that it's expensive to compute -- I was simply pointing out that there may be information in the pseudospectrum that, possibly, could be used to evaluate the number of eigenvalues in a subset of the complex plane (or real axis) without actually having to compute the eigenvalues themselves, in much the same way as a contour integral in the complex plane informs about singularities of a function without having to know their locations. – Wolfgang Bangerth Jan 20 at 17:02 1 2-norm $\epsilon$-pseudospectra of Hermitian matrices tend to be a little boring: disks of radius $\epsilon$ (in the complex plane) centered around the eigenvalues (aligned on the real axis). – Stefano M Jan 20 at 23:33
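As a rough illustration of the inertia count described in the first answer, here is a sketch using a dense $LDL^T$ factorization (SciPy's `scipy.linalg.ldl`) on a small symmetric test matrix; the matrix and bin edges are arbitrary choices. For genuinely large sparse matrices one would swap in a sparse $LDL^T$ factorization, e.g. CHOLMOD from SuiteSparse as suggested above.

```python
# Sturm-sequence / inertia count: the number of eigenvalues of a symmetric A
# below a shift alpha equals the number of negative eigenvalues of the D factor
# of A - alpha*I = L D L^T (Sylvester's law of inertia).
import numpy as np
from scipy.linalg import ldl

rng = np.random.default_rng(0)
n = 500
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                              # a symmetric test matrix

def count_below(A, alpha):
    """Number of eigenvalues of A strictly below alpha, without computing them."""
    _, D, _ = ldl(A - alpha * np.eye(A.shape[0]))
    # D is block diagonal (1x1 and 2x2 blocks), so count its negative eigenvalues
    return int(np.sum(np.linalg.eigvalsh(D) < 0))

edges = np.linspace(-40.0, 40.0, 9)            # histogram bin edges
hist = np.diff([count_below(A, e) for e in edges])

# cross-check against a direct (and, for large matrices, expensive) eigendecomposition
exact, _ = np.histogram(np.linalg.eigvalsh(A), bins=edges)
print(hist)
print(exact)                                   # the two counts should agree
```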
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 20, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9198235273361206, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/23681/comparing-symbolic-and-analog-descriptions
# Comparing symbolic and analog descriptions I've never seen the following comparison before. Let me start with a specific example: Given a finite structure with two symmetric binary relations, i.e. a graph $G$ with one vertex set $V$ and two edge sets $E_1$, $E_2$. Giving an explicit description of $G$ in a formal language can be seen as a special case of defining a function $f$ from a set $T$ with a betweenness relation $B$ to the set $V\ \cup \big( E_1 \times \lbrace E_1 \rbrace \big) \cup \big( E_2 \times \lbrace E_2 \rbrace \big)$ such that $(v,w) \in E_i$ iff $$(\exists x,y,z) f(x) = v \wedge f(y) = ((v,w),E_i) \wedge f(z) = w \wedge B(x,y,z)$$ Such a tuple $(x,y,z)$ represents the "sentence" that $(v,w) \in E_i$. The sentences may overlap and reuse "symbols", but this can be avoided. For example ($T = \mathbb{N}$): ````|u|v|vwE|uwE|w| | | |... ```` represents the sentences $(u,w) \in E$ and $(v,w) \in E$. But also does: ````|u|uwE|w|v|vwE|w| | | |... ```` In fact, it's enough to consider functions $f$ from $T$ to $V\ \cup \lbrace E_1 \rbrace \cup \lbrace E_2 \rbrace$ such that $(v,w) \in E_i$ iff $$(\exists x,y,z) f(x) = v \wedge f(y)=E_i \wedge f(z) = w \wedge B(x,y,z)$$ This comes closer to the normal usage of formal languages. So ````|u|E|w|v|E|w| | | |... ```` represents the sentences $(u,w) \in E$ and $(v,w) \in E$, which are oftenly written as $uEw$ and $vEw$. This is what "symbolic" description essentially is: a "structure preserving" function from a medium to the structure. (Using intermediate symbols from an alphabet isn't essential: the elements of the structure may symbolize themselves.) In contrast, "analog" description essentially is a "structure preserving" function from the structure to a medium: Consider a function $f$ from the set $V\ \cup \big( E_1 \times \lbrace E_1 \rbrace \big) \cup \big( E_2 \times \lbrace E_2 \rbrace \big)$ to a set $T$ with a betweenness relation $B$ such that $(v,w) \in E_i$ iff $f((v,w),E_i))$ is between $f(v)$ and $f(w)$, or: $$(\exists x,y,z) f(v) = x \wedge f(((v,w),E_i)) = y \wedge f(w) = z \wedge B(x,y,z)$$ For example ($T = \mathbb{N}^2$): ````|u|uwE|w| | |vwE| | |v| | | ```` Again, we can ignore the specific edges and write/draw for short: ````|u|E|w| | |E| | |v| | | ```` We even can ignore the specific vertices and write/draw for short (note, how this looks like a unlabelled graph!): ````|V|E|V| | |E| | |V| | | ```` Analog description can be generalized for relations of arbitrary arity, but that's a bit tedious. In principle, it works. Is this comparison in any sense enlightening, or is it just baublery? As I said before, I've never seen this comparison made explicit. So can any references be given? - – Qiaochu Yuan Feb 25 '11 at 12:34 Below is what I understand, considering the first example. Your $T$ encodes bits of sentences and $f$ decodes them. What other data $T$ contains? $B$ distinguish a tuple of bits of some syntactically correct sentence. Here the question “what is a sentence, precisely?” arise. If it is a logical formula, like in first-order logic, then $B$ is too rigid to handle it. $B$ checks only 3 bits. On the contrary, carriers of term algebras are sets of terms. BTW, you can simplify the formula in the second example $B(f(v), f(((v,w),E_i)), f(w))$. – beroal Feb 25 '11 at 17:21 I wanted to show the similarity between the two formulas. – Hans Stricker Feb 25 '11 at 19:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 39, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9212068319320679, "perplexity_flag": "middle"}
http://mathhelpforum.com/math-software/154369-nmaximize-mathematica-print.html
# NMaximize in Mathematica

• August 25th 2010, 01:54 AM xdu NMaximize in Mathematica Hi all, When I use NMaximize in Mathematica, I always get an error like this: NMaximize::nrnum: The function value 10613.354-3.142 I is not a real number at {r,x,\[Alpha]} = {0.1909383,0.7839172,1.676922}. My function is complicated because it contains hypergeometric functions, so does anyone know how to solve this problem or avoid it? Any suggestions are very much appreciated, xdu

• August 30th 2010, 05:15 AM Ackbeet Well, I can't say I'm an expert with hypergeometric functions, though I have seen them from time to time. I do know something about optimization, and I know something about Mathematica. Could you post a little more background? What's the original problem you're trying to solve?

• September 2nd 2010, 02:55 AM xdu Hi Adrian, I made a model and tried to use the maximum log-likelihood method to estimate its parameters. The function of my model is: f(r,x,a)=(r*x)^n *Gamma(a+1)*Gamma(n)*Hypergeometric2F1[n,n,n+a,x-x*r]/(x*n*Gamma(n+a)*(HypergeometricPFQ[1,1,1,2,1+a,x]-(1-r)*HypergeometricPFQ[1,1,1,2,1+a,x-x*r])), where n is a vector, e.g. n=[1,1,2,3,4,4,...,150,532], and then I use the maximum log-likelihood method to estimate the parameters' values, that is r, x, a. The log-likelihood function is L(x,r,a)=sum(log(f(r,x,a))), where the sum runs over all the elements of the vector n. So I use the NMaximize command: NMaximize[{L(x,r,a),0<x<1,0<r<1,a>0},{x,r,a}]. The problem is that I got the error "NMaximize::nrnum: The function value 10613.354-3.142 I is not a real number at", and sometimes I could not get an answer. Also I got different answers when I used different WorkingPrecision and MaxIterations. I think some values are indeterminate or complex numbers; do you know how to exclude these points while NMaximize is running? Or do you have another method to solve it? Cheers, Xiaoguang

• September 2nd 2010, 05:15 AM Ackbeet So, just to be clear, you've got a vector $\vec{n}$. 1. Are the elements of $\vec{n}$ all integers? 2. You've got a model function $\displaystyle{f(r,x,a)=\frac{(rx)^{\vec{n}}\,\Gamma(a+1)\,\Gamma(\vec{n})\, _{2}F_{1}(\vec{n},\vec{n},\vec{n}+a,x-rx)}{x\,\vec{n}\,\Gamma(\vec{n}+a)\,( _{p}F_{q}(1,1,1,2,1+a,x) -(1-r)\, _{p}F_{q}(1,1,1,2,1+a,x-rx))}}.$ Is this correct? 3. I'm assuming that notations like $(rx)^{\vec{n}}$ indicate the vector that is the same length as $\vec{n}$ and whose $j$th components are equal to $(rx)^{n_{j}}.$ Is that correct? 4. Same with the gamma function? 5. How about the $_{2}F_{1}$ function: same there? 6. How are we supposed to interpret $\vec{n}\,\Gamma(\vec{n})?$ Is that a dot product? 7. Are $r, x, a$ all scalars? 8. Your goal is to estimate the parameters $r, x, a$ using the log likelihood method. Must you use the log likelihood method? Or could you use least squares? 9. Of what physical system is this a model?
• September 2nd 2010, 05:41 AM xdu Hi Adrian, Thanks, and here are the answers to your questions: 1. Are the elements of $\vec{n}$ all integers? Yes, they are all positive integers. 2. You've got a model function f(x,r,a)... It is correct, but here the vector n should be replaced by an element of n, that is n(i), so the function f(x,r,a) is a value rather than a vector. Thus questions 3, 4, 5 are clear: none of those quantities are vectors. The parameters x, r, a are also numbers rather than vectors. The only place where the vector n is used is the log-likelihood function, which is a sum over all the elements of the vector n. It's better to use the maximum log-likelihood method because I've done something with it before; of course, if it does not work, we can also use the least squares method. The model I am working with is a discrete Markov chain model. I am not sure if it is clear now? Cheers, Xiaoguang

• September 2nd 2010, 06:18 AM Ackbeet So you've really got $\displaystyle{f_{j}(r,x,a)=\frac{(rx)^{n_{j}}\,\Gamma(a+1)\,\Gamma(n_{j})\, _{2}F_{1}(n_{j},n_{j},n_{j}+a,x-rx)}{x\,n_{j}\,\Gamma(n_{j}+a)\,( _{p}F_{q}(1,1,1,2,1+a,x) -(1-r)\, _{p}F_{q}(1,1,1,2,1+a,x-rx))}}.$ Is that correct? Then your log likelihood function is $\displaystyle{L(r,x,a)=\sum_{j=1}^{\text{length}(\vec{n})}\ln(f_{j}(r,x,a)).}$ Is that correct? Quote: It's better to use the maximum log-likelihood method because I've done something with it before. What is the "something" you've done before this? Are you saying you've done this method before, and know it better?

• September 2nd 2010, 06:22 AM xdu Yes, it's correct. I do not know which method is better; I said I have done something, meaning I did a simpler model using the maximum log-likelihood method.

• September 2nd 2010, 06:40 AM Ackbeet Ok. Thanks for the clarifications. I'm still needing to ask questions, though, so please bear with me. 1. Is the vector $\vec{n}$ known, or are you trying to find it? Or, perhaps, maybe you are given a number of different vectors $\vec{n}$ and you have to solve this problem for each one? 2. Going along with question #1 in this post, are you sure you have the right likelihood function? That's a genuine question to which I don't know the answer. 3. The number of arguments for the generalized hypergeometric function $_{p}F_{q}$ is equal to $p+q+1$. Do you know what $p$ and $q$ are? They would appear to sum to 5. Is that correct? Thanks for your patience in answering these questions!

• September 2nd 2010, 09:55 AM xdu Hi Adrian, I should say thanks to you. That's ok to make it clear. 1. The vector n is known; actually, it is the real data set. The data are species abundances, which are larger than 0, i.e. 1, 2, ..., or more than one thousand. 2. I'm pretty sure the function is correct; I derived it by hand and also got this result with Mathematica. I know it's complicated. 3. The generalized hypergeometric function is pFq[a(1),a(2),...,a(p); b(1),b(2),...,b(q)]; you can have a look at Hypergeometric Function -- from Wolfram MathWorld. You are right, the number of arguments in this function is equal to p+q+1, where p=3, q=2, that is a(1)=1, a(2)=1, a(3)=1 and b(1)=2, b(2)=1+a. Is it clear now? Cheers, Xiaoguang

• September 2nd 2010, 11:13 AM Ackbeet Ah. Thanks.
So the most simplified version of your function is this: $\displaystyle{f_{j}(r,x,a)=\frac{(rx)^{n_{j}}\,\Gamma(a+1)\,\Gamma(n_{j})\, _{2}F_{1}(n_{j},n_{j},n_{j}+a,x-rx)}{x\,n_{j}\,\Gamma(n_{j}+a)\,( _{3}F_{2}(1,1,1,2,1+a,x) -(1-r)\, _{3}F_{2}(1,1,1,2,1+a,x-rx))}}.$ You then defined, as mentioned above, $\displaystyle{L(r,x,a)=\sum_{j=1}^{\text{length}(\vec{n})}\ln(f_{j}(r,x,a)).}$ And your goal is to maximize $L,$ correct?

Looking back at your post # 3, I think I may have seen some of your problems. I think they might be in your syntax. When defining your functions, you have to use correct Mathematica syntax. Here's the generalized hypergeometric function in Mathematica: Code: `HypergeometricPFQ[{1,1,1},{2,1+a},x-r x].` The extra braces, I would guess, are important. In addition, I don't think Mathematica understands elided constraints of the form Code: `...0<x<1...`. Instead, list that as two separate constraints: Code: `...,0<x, x<1,...` Unfortunately, I do not have a version of Mathematica that includes the NMaximize command, and I really don't see how to do this problem on WolframAlpha, because of the need to define several things before actually maximizing. However, I will give you the exact syntax that I believe should work, assuming this problem is doable in this manner: Code:

```
(* model density for the j-th data point *)
f[j_, r_, x_, a_] := ((r x)^(n[[j]]) Gamma[a + 1] Gamma[n[[j]]] *
    Hypergeometric2F1[n[[j]], n[[j]], n[[j]] + a, x - r x]) /
  (x n[[j]] Gamma[n[[j]] + a] (HypergeometricPFQ[{1, 1, 1}, {2, 1 + a}, x] -
      (1 - r) HypergeometricPFQ[{1, 1, 1}, {2, 1 + a}, x - r x]))

(* log-likelihood, summed over the data vector n *)
L[r_, x_, a_] := Sum[Log[f[j, r, x, a]], {j, 1, Length[n]}]

n = {1, 1, 2, 3, 4, 4, 150, 532}

(* note the square brackets in L[r, x, a] and the argument order matching the definition *)
NMaximize[{L[r, x, a], 0 < x, x < 1, 0 < r, r < 1, a > 0}, {r, x, a}]
```

Here the definition of n should be whatever your data actually is. Try that and let me know how it goes.

• September 4th 2010, 06:35 AM xdu Thank you. I tried it, but for some data it did not work; I got something like this: SystemException["MemoryAllocationFailure", ...] followed by a long, truncated box-form dump of the NMaximize expression.

• September 4th 2010, 09:52 AM Ackbeet Did it work for any data sets?

• September 4th 2010, 02:04 PM xdu Yes, for some data sets it works.

• September 6th 2010, 02:47 PM Ackbeet Are the data sets for which it works the bigger sets? Or is there no discernible pattern?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 22, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9193274974822998, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/58227?sort=newest
## How to solve the linearized Navier-Stokes equations in $L^p$?

Let $\Omega\subset \mathbb{R}^3$ be an open set with smooth boundary $\partial \Omega$. Consider the following linearized Navier-Stokes equations in $Q_T=\Omega\times (0,T)$ for an arbitrarily fixed $T\in (0,\infty)$, $$u_t-\Delta u+a(x,t)u+b\cdot \nabla u+\nabla p=f(x,t),\text{div } u=0$$ with the initial and boundary conditions $u(x,0)=0, \left.u(x,t)\right|_{\partial \Omega\times (0,T)}=0$. Here $u(x,t)=(u^1(x,t),u^2(x,t),u^3(x,t))$ and $p(x,t)$ denote the unknown velocity and pressure respectively, and $a(x,t)$ and $b(x,t)$ denote the given coefficients. Question: Suppose that $$a\in L^r(0,T; L^s(\Omega)), b\in L^{r_1}(0,T; L^{s_1}(\Omega)),$$ where $2/r+3/s<2$, $2/r_1+3/s_1<1$, and $f(x,t)\in C_0^\infty(\Omega\times (0,T))$; can we solve the above equations in arbitrary $L^p$? Can we get estimates such as $$\|u_t\|_{L^p(Q_T)}+\|D^2 u\|_{L^p(Q_T)}+\|u\|_{L^p(Q_T)}\leq \|f\|_{L^p(Q_T)}?$$ Solonnikov dealt with this problem in his paper "Estimates for solution of nonstationary Navier-Stokes equations" (http://www.springerlink.com/index/N8374858XNT22P11.pdf). However, I cannot verify his proof (pages 487 to 489). Who can help me? Any comment will be deeply appreciated. -

## 1 Answer

Have a look at the following review article and the relevant references therein: Yoshikazu Giga, Weak and strong solutions of the Navier-Stokes initial value problem, Publ. RIMS. Kyoto Univ., 19:887-910, 1983. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8666303753852844, "perplexity_flag": "head"}
http://terrytao.wordpress.com/tag/log-sobolev-inequality/
What’s new Updates on my research and expository papers, discussion of open problems, and other maths-related topics. By Terence Tao # Tag Archive You are currently browsing the tag archive for the ‘log-Sobolev inequality’ tag. ## 285G, Lecture 8: Ricci flow as a gradient flow, log-Sobolev inequalities, and Perelman entropy 24 April, 2008 in 285G - poincare conjecture, math.AP, math.CA, math.DG | Tags: gradient flow, least eigenvalue, log-Sobolev inequality, Nash entropy, non-collapsing, Perelman entropy, Poincare inequality, semigroup method | by Terence Tao | 9 comments It is well known that the heat equation $\dot f = \Delta f$ (1) on a compact Riemannian manifold (M,g) (with metric g static, i.e. independent of time), where $f: [0,T] \times M \to {\Bbb R}$ is a scalar field, can be interpreted as the gradient flow for the Dirichlet energy functional $\displaystyle E(f) := \frac{1}{2} \int_M |\nabla f|_g^2\ d\mu$ (2) using the inner product $\langle f_1, f_2 \rangle_\mu := \int_M f_1 f_2\ d\mu$ associated to the volume measure $d\mu$. Indeed, if we evolve f in time at some arbitrary rate $\dot f$, a simple application of integration by parts (equation (29) from Lecture 1) gives $\displaystyle \frac{d}{dt} E(f) = - \int_M (\Delta f) \dot f\ d\mu = \langle -\Delta f, \dot f \rangle_\mu$ (3) from which we see that (1) is indeed the gradient flow for (3) with respect to the inner product. In particular, if f solves the heat equation (1), we see that the Dirichlet energy is decreasing in time: $\displaystyle \frac{d}{dt} E(f) = - \int_M |\Delta f|^2\ d\mu$. (4) Thus we see that by representing the PDE (1) as a gradient flow, we automatically gain a controlled quantity of the evolution, namely the energy functional that is generating the gradient flow. This representation also strongly suggests (though does not quite prove) that solutions of (1) should eventually converge to stationary points of the Dirichlet energy (2), which by (3) are just the harmonic functions (i.e. the functions f with $\Delta f = 0$). As one very quick application of the gradient flow interpretation, we can assert that the only periodic (or “breather”) solutions to the heat equation (1) are the harmonic functions (which, in fact, must be constant if M is compact, thanks to the maximum principle). Indeed, if a solution f was periodic, then the monotone functional E must be constant, which by (4) implies that f is harmonic as claimed. It would therefore be desirable to represent Ricci flow as a gradient flow also, in order to gain a new controlled quantity, and also to gain some hints as to what the asymptotic behaviour of Ricci flows should be. It turns out that one cannot quite do this directly (there is an obstruction caused by gradient steady solitons, of which we shall say more later); but Perelman nevertheless observed that one can interpret Ricci flow as gradient flow if one first quotients out the diffeomorphism invariance of the flow. In fact, there are infinitely many such gradient flow interpretations available. This fact already allows one to rule out “breather” solutions to Ricci flow, and also reveals some information about how Poincaré’s inequality deforms under this flow. The energy functionals associated to the above interpretations are subcritical (in fact, they are much like $R_{\min}$) but they are not coercive; Poincaré’s inequality holds both in collapsed and non-collapsed geometries, and so these functionals are not excluding the former. 
However, Perelman discovered a perturbation of these functionals associated to a deeper inequality, the log-Sobolev inequality (first introduced by Gross in Euclidean space). This inequality is sensitive to volume collapsing at a given scale. Furthermore, by optimising over the scale parameter, the controlled quantity (now known as the Perelman entropy) becomes scale-invariant and prevents collapsing at any scale – precisely what is needed to carry out the first phase of the strategy outlined in the previous lecture to establish global existence of Ricci flow with surgery. The material here is loosely based on Perelman’s paper, Kleiner-Lott’s notes, and Müller’s book.

Read the rest of this entry »
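As a quick numerical illustration of the monotonicity formula (4) (not part of the original post): the sketch below discretises the heat equation on a circle with a standard finite-difference Laplacian and checks that the discrete Dirichlet energy decreases along the flow. The grid size, time step and initial data are arbitrary illustrative choices.

```python
import numpy as np

# Discretise the circle into n points; the finite-difference Laplacian stands in for Delta.
n = 200
dx = 2 * np.pi / n
dt = 1e-4                                   # small enough for stability (dt < dx**2 / 2)
x = np.arange(n) * dx
f = np.exp(np.sin(3 * x))                   # arbitrary smooth initial data

def laplacian(f):
    return (np.roll(f, 1) - 2 * f + np.roll(f, -1)) / dx**2

def dirichlet_energy(f):
    grad = (np.roll(f, -1) - f) / dx
    return 0.5 * np.sum(grad**2) * dx       # discrete analogue of the functional (2)

energies = []
for _ in range(2000):
    energies.append(dirichlet_energy(f))
    f = f + dt * laplacian(f)               # explicit Euler step of the heat equation (1)

# The energy is non-increasing along the flow, as predicted by (4).
assert all(e2 <= e1 + 1e-12 for e1, e2 in zip(energies, energies[1:]))
print(f"E went from {energies[0]:.6f} to {energies[-1]:.6f}")
```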
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 10, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9055182337760925, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2009/10/19/higher-differentials-and-composite-functions/?like=1&source=post_flair&_wpnonce=f636fc8316
# The Unapologetic Mathematician

## Higher Differentials and Composite Functions

Last time we saw an example of what can go wrong when we try to translate higher differentials the way we did the first-order differential. Today I want to identify exactly what goes wrong, and I’ll make use of the summation convention to greatly simplify the process.

So, let’s take a function $f$ of $n$ variables $\left\{y^j\right\}_{j=1}^n$ and a collection of $n$ functions $\left\{g^j\right\}_{j=1}^n$, each depending on $m$ variables $\left\{x^i\right\}_{i=1}^m$. We can think of these as the components of a vector-valued function $g:X\rightarrow\mathbb{R}^n$ which has continuous second partial derivatives on some region $X\subseteq\mathbb{R}^m$. If the function $f:Y\rightarrow\mathbb{R}$ has continuous second partial derivatives on some region $Y\subseteq\mathbb{R}^n$ containing the image $g(X)$, then we can compose the two functions to give a single function $h=f\circ g:X\rightarrow\mathbb{R}$, and we’re going to investigate the second differential of $h$ with respect to the variables $x^i$.

To that end, we want to calculate the second partial derivative

$\displaystyle\frac{\partial^2h}{\partial x^b\partial x^a}=\frac{\partial}{\partial x^b}\frac{\partial}{\partial x^a}h$

First, we take the derivative in terms of $x^a$, and we use the chain rule to write

$\displaystyle\frac{\partial h}{\partial x^a}=\frac{\partial g^j}{\partial x^a}\frac{\partial f}{\partial y^j}$

Now we have to take the derivative in terms of $x^b$. Luckily, this operation is linear, so we don’t have to worry about the hidden summations in the notation. We do, however, have to use the product rule to handle the multiplications

$\displaystyle\begin{aligned}\frac{\partial}{\partial x^b}\frac{\partial h}{\partial x^a}&=\frac{\partial}{\partial x^b}\left(\frac{\partial g^j}{\partial x^a}\frac{\partial f}{\partial y^j}\right)\\&=\frac{\partial^2g^j}{\partial x^b\partial x^a}\frac{\partial f}{\partial y^j}+\frac{\partial g^j}{\partial x^a}\frac{\partial^2f}{\partial x^b\partial y^j}\\&=\frac{\partial^2g^j}{\partial x^b\partial x^a}\frac{\partial f}{\partial y^j}+\frac{\partial g^j}{\partial x^a}\frac{\partial g^k}{\partial x^b}\frac{\partial^2f}{\partial y^k\partial y^j}\end{aligned}$

where we’ve used the chain rule again to convert a derivative in terms of $x^b$ into one in terms of $y^k$.

And here we’ve come to the problem itself. For we can write out the second differential in terms of the $x^i$

$\displaystyle\begin{aligned}d^2h&=\frac{\partial^2h}{\partial x^b\partial x^a}dx^adx^b\\&=\frac{\partial^2f}{\partial y^k\partial y^j}\left(\frac{\partial g^j}{\partial x^a}dx^a\right)\left(\frac{\partial g^k}{\partial x^b}dx^b\right)+\frac{\partial f}{\partial y^j}\frac{\partial^2g^j}{\partial x^b\partial x^a}dx^adx^b\\&=\frac{\partial^2f}{\partial y^k\partial y^j}dg^jdg^k+\frac{\partial f}{\partial y^j}d^2g^j\end{aligned}$

The first term here is the second differential in terms of the $y^j$. If there were an analogue of Cauchy’s invariant rule, this would be all there is to the formula. But we’ve got another term — one due to the product rule — based on the second differentials of the functions $g^j$ themselves. This is the term that ruins the nice transformation properties of higher differentials, and which makes them unsuitable for many of our purposes.

Notice, though, that we have not contradicted Clairaut’s theorem here.
Further, the formula we derived for the second partial derivatives of $h$ is manifestly symmetric between the two derivatives, and so the mixed partials commute.
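A small symbolic check of the formula above, for one hypothetical choice of $f$ and $g$ in two variables (any smooth functions would do); it confirms that the extra term built from the second differentials of the $g^j$ really is needed.

```python
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2')

g = [x1**2 + x2, sp.sin(x1) * x2]      # hypothetical components g^1, g^2 of g(x)
f = y1 * sp.exp(y2) + y2**3            # hypothetical f(y^1, y^2)
sub = {y1: g[0], y2: g[1]}

h = f.subs(sub, simultaneous=True)     # h = f o g
xs, ys = [x1, x2], [y1, y2]

for a in range(2):
    for b in range(2):
        lhs = sp.diff(h, xs[a], xs[b])
        # second-differential term: (d^2 g^j / dx^b dx^a) (df / dy^j)
        rhs = sum(sp.diff(g[j], xs[a], xs[b]) * sp.diff(f, ys[j]).subs(sub, simultaneous=True)
                  for j in range(2))
        # chain-rule term: (dg^j/dx^a)(dg^k/dx^b) (d^2 f / dy^k dy^j)
        rhs += sum(sp.diff(g[j], xs[a]) * sp.diff(g[k], xs[b])
                   * sp.diff(f, ys[j], ys[k]).subs(sub, simultaneous=True)
                   for j in range(2) for k in range(2))
        assert sp.simplify(lhs - rhs) == 0

print("Second-derivative formula verified, extra d^2 g term included.")
```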
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 30, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9302834868431091, "perplexity_flag": "head"}
http://mathhelpforum.com/number-theory/104695-twin-prime-conjecture-visual-proof-print.html
# Twin Prime Conjecture - Visual Proof Printable View • September 27th 2009, 04:45 PM vengy Twin Prime Conjecture - Visual Proof http://www.funkidshows.com/grid.png There are infinitely many primes p such that p + 2 is also prime. First off, I'm a noob at maths and always reverse my logic! Now, a little glimpse into my madness. ;) I saw this unsolved twin prime conjecture several days ago and tried to solve it. After 15 mins I constructed the above visual proof. Here's how I thought it should work: Let line P represent the Infinitude of Primes. To be a member of P, the number must be prime. All other numbers are excluded. Next, pick a prime p and draw a 2D lattice L with sides sqrt(2). Notice how P is the on the lattice diagonal. At this stage, I knew at least one lattice square would intersect P at two distinct primes, say p and p+2 (if it exists!?). Now to prove that p+2 exists, I decided to use Polya's 2D random walk which proves that any point is reachable on a 2D lattice. I intentionally allowed the lattice point to represent p+2. So, let the random walk (shown in red) proceed and eventually it'll reach the lattice prime point p+2. Thus, if it's possible to reach p+2 from p, then p+2 must exist and is prime since it lies on P. Q.E.D. Okay, where did I go wrong? Thanks. • September 27th 2009, 08:39 PM Bruno J. Are you serious? I don't see you using a single property of primes. Try modifying the argument to show that there are infinitely many primes $p$ for which $p+1$ is also prime; if you succeed, and I don't doubt you will, then certainly your proof is flawed. • September 28th 2009, 04:47 AM vengy Hi Bruno, p+1 would not be prime. Isn't p+2 the smallest except for 2,3. I do use Euclid's theorem that there are an infinite number of primes. ;) Think of it this way ... "line" P is simply a representation of a prime boundary P={2,3,5,7,11,13,...}, so that superimposing a 2D grid will cross that boundary at two prime points, p and p+2 (if it exists?) The primary goal is to prove that p+2 exists. I tried to prove p+2 exists by finding a path from point p to p+2 by using a random walk by Polya. The key "trick" is to align prime p and a lattice point to create a "prime lattice point" p+2 so that I could say p+2 is prime (since it lies on P) and also simultaneously state that p+2 exists as it's a lattice point. Combining them together, p+2 is prime and exists! ;) Perhaps my proof is flawed by a construction argument? • September 28th 2009, 04:50 AM mr fantastic Quote: Originally Posted by vengy Hi Bruno, p+1 would not be prime. Isn't p+2 the smallest except for 2,3. I do use Euclid's theorem that there are an infinite number of primes. ;) Think of it this way ... "line" P is simply a representation of a prime boundary P={2,3,5,7,11,13,...}, so that superimposing a 2D grid will cross that boundary at two prime points, p and p+2 (if it exists?) The primary goal is to prove that p+2 exists. I tried to prove p+2 exists by finding a path from point p to p+2 by using a random walk by Polya. The key "trick" is to align prime p and a lattice point to create a "prime lattice point" p+2 so that I could say p+2 is prime (since it lies on P) and also simultaneously state that p+2 exists as it's a lattice point. Combining them together, p+2 is prime and exists! ;) Perhaps my proof is flawed by a construction argument?
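For what it's worth, the statement under discussion is easy to probe numerically, although no amount of computation settles infinitude; a quick sketch that lists the twin prime pairs below an arbitrary bound:

```python
from sympy import isprime

bound = 200                     # arbitrary cutoff
twins = [(p, p + 2) for p in range(2, bound) if isprime(p) and isprime(p + 2)]
print(twins)   # (3, 5), (5, 7), (11, 13), (17, 19), (29, 31), ...
```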
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9562283158302307, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-algebra/80754-automorphism-field.html
# Thread: 1. ## automorphism of field I have been given this question: Find Aut (F_2[x]/(x^3+x+1)) ? I can do this question easily if the question asked: Find Aut (F_2[x]/(x^3+x+1))* where (F_2[x]/(x^3+x+1))* denotes the group This is my work: F_2 = {0,1} Set x^3 + x + 1 = 0 then x^3 = -x -1 = x + 1 There are 8 elements in F_2[x]/(x^3+x+1): 0, 1, x, x+1, x^2, x^2 +x, x^2 +1, x^2 +x +1 Since x^3 + x + 1 is irreducible over F_2 , so (F_2[x]/(x^3+x+1)) is a field If we exclude 0 then (F_2[x]/(x^3+x+1)) becomes a group, and denote this by (F_2[x]/(x^3+x+1))* = G Since org (G) = |(F_2[x]/(x^3+x+1))*| = 7 then we conclude that (F_2[x]/(x^3+x+1))* =~ C_7 where x is the generator of G So: Aut (F_2[x]/(x^3+x+1))* =~ Aut(C_7) = C_6 (since 7 is prime) BUT in this case the question is different. So I am wondering if Aut(R) and Aut(G) is the same? ( R = ring and G =group) If I am wrong then, how do you solve this question. Thank you for reading my thread. 2. Originally Posted by knguyen2005 I have been given this question: Find Aut (F_2[x]/(x^3+x+1)) ? I can do this question easily if the question asked: Find Aut (F_2[x]/(x^3+x+1))* where (F_2[x]/(x^3+x+1))* denotes the group This is my work: F_2 = {0,1} Set x^3 + x + 1 = 0 then x^3 = -x -1 = x + 1 There are 8 elements in F_2[x]/(x^3+x+1): 0, 1, x, x+1, x^2, x^2 +x, x^2 +1, x^2 +x +1 Since x^3 + x + 1 is irreducible over F_2 , so (F_2[x]/(x^3+x+1)) is a field If we exclude 0 then (F_2[x]/(x^3+x+1)) becomes a group, and denote this by (F_2[x]/(x^3+x+1))* = G Since org (G) = |(F_2[x]/(x^3+x+1))*| = 7 then we conclude that (F_2[x]/(x^3+x+1))* =~ C_7 where x is the generator of G So: Aut (F_2[x]/(x^3+x+1))* =~ Aut(C_7) = C_6 (since 7 is prime) BUT in this case the question is different. So I am wondering if Aut(R) and Aut(G) is the same? ( R = ring and G =group) If I am wrong then, how do you solve this question. Thank you for reading my thread. $x^3 + x + 1$ is irreducible over $\mathbb{F}_2.$ thus $\frac{\mathbb{F}_2[x]}{<x^3 + x + 1>} \cong \mathbb{F}_8.$ so the automorphism group is $C_3,$ the cyclic group of order 3. the generator of this group is the Frobenius map. 3. Thanks alot 4 quick reply , but i dont understand why so the automorphism group is How do you get that? Sorry, I haven't learnt about Frobenius map yet. Can you explain to me please? Thanks so much, I am really appreciated 4. Originally Posted by knguyen2005 Thanks alot 4 quick reply , but i dont understand why so the automorphism group is How do you get that? Sorry, I haven't learnt about Frobenius map yet. Can you explain to me please? Thanks so much, I am really appreciated The polynomial $x^3+x+1$ is irreducible over $\mathbb{F}_2$ therefore, $\mathbb{F}_2[x]/(x^3+x+1)$ is a field with $2^3 = 8$ elements, that is why it is $\mathbb{F}_8$. Now, $\text{Aut}(\mathbb{F}_8) = \text{Gal}(\mathbb{F}_8/\mathbb{F}_2)$ where $\mathbb{F}_2$ is its prime subfield. However, this Galois group is cyclic of degree $3$ and generated by $\sigma$ where $\sigma: \mathbb{F}_8 \to \mathbb{F}_8$ is defined by $\sigma(x) = x^2$.
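A concrete check of the answer, representing elements of F_2[x]/(x^3+x+1) as 3-bit integers (bit i is the coefficient of x^i); it verifies that the Frobenius map a -> a^2 respects addition and multiplication and has order 3, so the automorphism group of the field is cyclic of order 3 as stated.

```python
def add(a, b):
    return a ^ b                    # addition of polynomials over F_2 is XOR

def mul(a, b):
    r = 0
    for i in range(3):              # schoolbook polynomial multiplication
        if (b >> i) & 1:
            r ^= a << i
    for i in (4, 3):                # reduce using x^3 = x + 1
        if (r >> i) & 1:
            r ^= (1 << i) ^ (0b011 << (i - 3))
    return r

def frob(a):
    return mul(a, a)                # the Frobenius map a -> a^2

elems = range(8)
assert all(frob(add(a, b)) == add(frob(a), frob(b)) for a in elems for b in elems)
assert all(frob(mul(a, b)) == mul(frob(a), frob(b)) for a in elems for b in elems)
assert any(frob(a) != a for a in elems)                     # not the identity
assert all(frob(frob(frob(a))) == a for a in elems)         # but its cube is
print("Frobenius is an automorphism of order 3")
```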
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 15, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8760902285575867, "perplexity_flag": "middle"}
http://nrich.maths.org/1816/solution?nomenu=1
## 'Ladybirds in the Garden' printed from http://nrich.maths.org/

From Jacob, Luc and William at The Hall School in London we had the following, very thorough solution sent in:

We looked at the number of spots that could be produced when you have a four spotted ladybird or a seven spotted ladybird or a combination of both. We were asked if it was possible to make $16$ and we found it is, by using four of the four spotted ladybirds. We discovered that the smallest number of spots we could produce was $4$. We were then asked what number of spots between $4$ and $35$ could be produced. We started off by writing a list of those numbers that did and didn't work. Then we made a table to show how we made the numbers of spots that were possible. We found seventeen different numbers could be made and three that could be made in two different ways. These were $28, 32$ and $35$. We have drawn a table to show the only numbers of spots that can be made between $4$ and $35$. For each number of spots that it is possible to make we have shown the number of four spotted and seven spotted ladybirds that make up the number of spots.

We also had some ideas sent in from Christian at Heronsgate School and Olivia from Risley Lower Primary School, both in England. From Australia we had a solution sent in from Maths Group $2$ at Brunswick South Primary School. Well done everyone!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9840472936630249, "perplexity_flag": "middle"}
http://mathhelpforum.com/trigonometry/138839-finding-exact-solutions.html
# Thread: 1. ## Finding exact solutions For a) this is my work: cos 2θ + cos θ = 0 2 cos^2θ - 1 + cos θ = 0 2cos^2θ + 1 cos θ = 1 1) cos θ = 1 2) 2cos θ + 1 = 1 cos θ = 0 I got those two equations above from factoring out the problem. However The answer in the back of the book says, 60, 180, 300 degrees. I know that what I got for an answer will not come out with those degrees. So can you help find what I did wrong? Thank you so much. 2. Originally Posted by florx For a) this is my work: cos 2θ + cos θ = 0 2 cos^2θ - 1 + cos θ = 0 2cos^2θ + 1 cos θ = 1 1) cos θ = 1 2) 2cos θ + 1 = 1 cos θ = 0 I got those two equations above from factoring out the problem. However The answer in the back of the book says, 60, 180, 300 degrees. I know that what I got for an answer will not come out with those degrees. So can you help find what I did wrong? Thank you so much. Only this part of your work is good: $cos^{2}\theta + cos\theta = 0$ $2 cos^{2}\theta - 1 + cos\theta = 0$ $2cos^2\theta +cos\theta - 1 = 0$............(1) you need to find the value of $cos\theta$. For that, suppose $x=cos\theta$ in equation (1) to get: $2x^2+ x - 1 =0$ Solve this quadratic equation for x, which is $cos\theta$ 3. Thank you for another yet excellent and helpful post. I had figured I made a mistake in typing up my initial problem solving. It was actually: cos 2θ + cos θ = 0 2 cos^2θ - 1 + cos θ = 0 2cos^2θ + cos θ = 1 (move the 1 over to the right side) cosθ(2cosθ + 1) = 1 (factor out the left side) And thus we would have 1) cos θ = 1 2) 2cos θ + 1 = 1 cos θ = 0 But it would have given a wrong answer none the less. I am wondering how come we leave the 1 on the left side and then factor out the left side and set the right side 0? I thought we would have to move over the 1 to the right side first and then factor out the left side and set it equal to 1... 4. Originally Posted by florx Thank you for another yet excellent and helpful post. I had figured I made a mistake in typing up my initial problem solving. It was actually: cos 2θ + cos θ = 0 2 cos^2θ - 1 + cos θ = 0 2cos^2θ + cos θ = 1 (move the 1 over to the right side) cosθ(2cosθ + 1) = 1 (factor out the left side) And thus we would have 1) cos θ = 1 2) 2cos θ + 1 = 1 cos θ = 0 THIS IS WRONG But it would have given a wrong answer none the less. I am wondering how come we leave the 1 on the left side and then factor out the left side and set the right side 0? I thought we would have to move over the 1 to the right side first and then factor out the left side and set it equal to 1... Without moving the 1 to the right side, you have: $2cos^{2}\theta +cos\theta - 1 = 0$ Let $u = cos\theta$ , which gives you: $2x^2+x-1=0$ This is a quadratic equation fo the form $ax^2+bx+c=0$ where $a = 2; b=1; c=-1$ x is given by $\frac{-b \pm \sqrt{b^{2}-4ac}}{2a}$ Now solve for x. Solving for x will give you the value of $cos\theta$. From there, you can find out what the value(s) of $\theta$ is(are). First of all try finding x from the quadratic equation, which will be equal to $cos\theta$
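A quick check of the intended solution path with SymPy, following the substitution suggested above (set c = cos θ, solve the quadratic, then convert back to degrees); it reproduces the book's answers of 60, 180 and 300 degrees.

```python
import sympy as sp

c = sp.symbols('c')
# cos(2*theta) + cos(theta) = 0 becomes 2c^2 + c - 1 = 0 with c = cos(theta)
roots = sp.solve(2 * c**2 + c - 1, c)      # [-1, 1/2]

angles = set()
for r in roots:
    t = sp.acos(r) * 180 / sp.pi           # principal solution, in degrees
    angles.add(t)
    angles.add(360 - t)                    # the reflected solution in [0, 360)
print(sorted(angles))                      # [60, 180, 300]
```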
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 16, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9258986115455627, "perplexity_flag": "middle"}
http://cogsci.stackexchange.com/questions/1809/what-kinds-of-maths-to-learn-for-understanding-dynamical-systems-in-cognitive-sc
# What kinds of maths to learn for understanding dynamical systems in cognitive science? [closed] A current trend in cognitive science is to view the mind as a dynamical system (e.g., Continuity of Mind by Spivey, in which cognition is understood as a "continuous and often recurrent trajectory through a state space"). Although I'd like to critically evaluate this trend, I'm embarresed to admit that I've never taken even a basic calculus course. Yet since I don't intend to build dynamical systems models myself, what is the bare minimum of maths learning that I need to accomplish in order to understand dynamical systems in the context of psychology? Remember, I'm a total novice! - 2 Let me preface this by saying I think this question might be a bit too open ended for the site, but to give you a head start... Well, don't be embarrassed you haven't taken calculus, everyone has to start somewhere. However, I would say that a solid course in differential equations is a prereq for anything "dynamical", and that normally requires a couple of semesters of single variable, and usually a semester of multivariable calculus to stomach it all. Of course, you could just start studying the dynamical systems and fill in whatever math you run across, which will give you a – Chuck Sherrington Nov 2 '12 at 1:49 more intuitive grasp of things, but might be more painful in the short-term. – Chuck Sherrington Nov 2 '12 at 1:50 I've taken advanced calculus and have a few classes on state space and control systems, and still, can barely understand even the simplest state space equations. If I remember correctly, a lot of them involve matrices and linear algebra to be able to understand what's happening with the states. – Alex Stone Nov 2 '12 at 6:38 8 Tyler, you have posted this question on seven different Stack Exchange sites: here, Linguistics, Computational Science, Mathematics, Computer Science, CS Theory, and Philosophy. Cross posting once is frowned upon, let alone six times! Which site do you want this question on? – Josh Gitlin♦ Nov 3 '12 at 13:16 Sorry Tyler, I never heard back from you and so I cast the final close vote on this question. Please comment back if you'd like to discuss this with me. I'd be happy to help explain what went wrong here and help you out! – Josh Gitlin♦ Nov 9 '12 at 15:15 ## closed as not constructive by Chuck Sherrington, Artem Kaznatcheev, Ben Brocka, Jeff, Josh Gitlin♦Nov 9 '12 at 15:09 As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or specific expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, see the FAQ for guidance. ## 1 Answer Unfortunately, in psychology and cognitive sciences (and some parts of neuroscience) absolutely no mathematical training is given beyond the highschool level (intro stats, basics of linear algebra in $\mathbb{R}^2$ and $\mathbb{R}^3$, and intro calc). To make this relatable, I will compare understanding dynamics sytems to literature, where you have 3 levels: (1) being able to read, (2) being able to assess, (3) being able to write. 1. Reading level: a basic course in dynamic systems should be enough. If you understand math at the level of Strogatz's "Nonlinear dynamics and chaos" (usually used in a first undergrad course on dynamic systems), then you know how to read a paper on dynamic systems in cognitive science. 2. 
Assessment level: you need to achieve the basics of math that everybody in the 'hard' sciences or (non-software) engineering has: linear algebra, ordinary differential equations, introductory PDEs (at the level of calc 3 or 4), logic, and introductory discrete math. More importantly, you would need the extremely vague notion of mathematical maturity. Unfortunately, it is hard to explain how to achieve this. I don't know of any equivalent concept in the cognitive sciences. There is no shortcut to achieving mathematical maturity, and it is not domain specific. Mathematical maturity is something you reach from doing lots of different kinds of basic math and proofs. 3. Writing level: the step from (2) to (3) is not as big as from (1) to (2), all you need is creativity and breadth of reading in the relevant domain: i.e. cognitive science. The lack of a large gap from (2) to (3) is why you often see mathematicians and theoretical physicists cross domains and start contributing to the theory branches of various fields (biology, neuroscience, psychology, etc). -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9421902298927307, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/28496/techniques-for-forming-square-factorizations?answertab=votes
# Techniques for forming square factorizations Say you have the polynomial $$x^4 + 2 + x^{-4}$$ Looking at it, you see you can do $$\begin{align*} x^4 + 1 + 1 + x^{-4} & =x^2( x^2 + x^{-2} ) + x^{-2}( x^2 + x^{-2} )\\ &= \left( x^2 + x^{-2} \right)^2. \end{align*}$$ Another one is $$\begin{align*} x^2 + \frac{1}{2} + \frac{1}{16x^2} &= x^2 + \frac{1}{4} + \frac{1}{4} + \frac{1}{16x^2} \\ &=x^2\left( 1 + \frac{1}{4x^2} \right) + \frac{1}{4}\left( 1 + \frac{1}{4x^2} \right)\\ &=\left( x^2 + \frac{1}{4}\right) \left( 1 + \frac{1}{4x^2} \right)\\ (x^2) \left( 1 + \frac{1}{4x^2} \right)^2. \end{align*}$$ So the question is, I've been doing this by "inspection" - are there any techniques for recognizing when this type of factorization is possible or how to do more easily? - 1 You have $A(x) + B + C(x)$, set $a = \sqrt{A(x)}, b = \sqrt{C(x)}$ and check if $2ab = B$. (So that $(a+b)^2 = a^2 + 2ab + b^2$.) In your fist example $a=x^2, b=x^{-2}$, in the second $a=x, b=\frac{1}{4x}$. – Eelvex Mar 22 '11 at 14:37 – anonymous Mar 22 '11 at 15:02 ## 1 Answer For trinomials: • If you have $(p(x))^2 + K + (q(x))^2$, then check to see if $K = \pm 2p(x)q(x)$. if so, then $(p(x))^2 + K + (q(x))^2 = (p(x)\pm q(x))^2$. • In your first example, you have $$x^4 + 2 + x^{-4} = (x^2)^2 + 2 + (x^{-2})^2.$$ Taking $p(x) = x^2$, $q(x) = x^{-2}$, we have $2p(x)q(x) = 2$, which is precisely the middle term, so $x^2+2+x^{-4} = (x^2 + x^{-2})^2$. • In your second example, you have $$x^2 + \frac{1}{2} + \frac{1}{16x^2} = (x)^2 + \frac{1}{2} + \left(\frac{1}{4x}\right)^2.$$ Taking $p(x) = x$ and $q(x) = \frac{1}{4x}$, you have $2p(x)q(x) = \frac{2x}{4x} = \frac{1}{2}$, again exactly the middle term, so you get $$x^2 + \frac{1}{2} + \frac{1}{16x^2} = \left(x + \frac{1}{4x}\right)^2.$$ • More generally, if you have $(p(x))^2 + Kp(x) + L$, then see if you can find two expressions, $s(x)$ and $t(x)$, which when multiplied give $L$ and when added give $K$, $s(x)t(x) = L$, $s(x)+t(x) = K$. Then $(p(x))^2 + Kp(x) +L = (p(x)+s(t))(p(x)+t(x))$. • For example, $$2x^4 + (\sqrt{2}-1)x^2 + x.$$ You may notice that the leading term is $(\sqrt{2}x^2)^2$, and that you have a "middle term" of $\sqrt{2}x^2$. So rewriting a bit you get $$(\sqrt{2}x^2)^2 + (\sqrt{2}x^2)1 + (x-x^2).$$ This suggests looking for $s(x)$ and $t(x)$ with $s(x)+t(x) = 1$ and $s(x)t(x)=x-x^2 = x(1-x)$. This in turn quickly suggests $s(x) = x$ and $t(x)=1-x$, which gives $$2x^4 + (\sqrt{2}-1)x^2 + x = \left(\sqrt{2}x^2 + x\right)\left(\sqrt{2}x^2 + (1-x)\right).$$ But generally, there is some substantial amount of "inspect and notice" going on. The more of these you do, the more you will notice patterns and be able to "notice" the things that one needs to notice. -
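Here is one way to mechanise the "check whether $K = 2p(x)q(x)$" test from the answer, using SymPy; the two trinomials from the question are used as examples, plus one that fails the test.

```python
import sympy as sp

x = sp.symbols('x', positive=True)

def as_square(A, K, C):
    """If A + K + C equals (sqrt(A) + sqrt(C))**2, return that square, else None."""
    p, q = sp.sqrt(A), sp.sqrt(C)
    if sp.simplify(2 * p * q - K) == 0:
        return (p + q)**2
    return None

print(as_square(x**4, 2, x**(-4)))                          # (x**2 + x**(-2))**2
print(as_square(x**2, sp.Rational(1, 2), 1 / (16 * x**2)))  # (x + 1/(4*x))**2
print(as_square(x**2, 3, x**(-2)))                          # None: 3 != 2*x*(1/x)
```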
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 32, "mathjax_display_tex": 9, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9343445301055908, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/68032/economics-formula/68047
# Economics formula This is an economics question, but if I can get the correct answer to this formula I can answer the question. Any help is appreciated, thanks! If the demand $Q_x^d$ for a product given the price $P_x$ is $$\ln Q_x^d = 10 – 5 \ln P_x$$ then product $x$ is: 1. Elastic 2. Inelastic. 3. Unitary elastic. - 2 A few things. First, welcome to math.SE! I tagged this as homework as that's how the question reads. HW questions are fine, but if it's not, feel free to retag. Second, the community here is more than willing to help you learn, but not to just give you the answers. If you provide some more context and offer an explication of your work this far, people we be much more willing to help. Thirdly, I left you '1n' in paraens in your equation because I'm not sure what you're referring to. Any additional information w.r.t to that part of equation will be helpful. – Drew Christianson Sep 27 '11 at 21:27 1 Oh, and this is math.SE not econ.SE, so you may want to define elastic/inelastic in the question. – Drew Christianson Sep 27 '11 at 21:28 1 Is "$1n$" supposed to be $\ln$ (natural log)? – Zev Chonoles♦ Sep 27 '11 at 21:33 1 Chris: your equations are unreadable and you appear to be merely requesting the answer in lieu of actual understanding, hence the downvotes. I suggest you clarify the question and add what your thoughts on it are so far. @Drew: Even if it's blatantly obvious, I don't think you're supposed to tag other people's questions as [homework] for them without permission (though other tags are fine), maybe I'm misremembering some point of etiquette though. Zev: Yeah, probably. – anon Sep 27 '11 at 21:40 2 – Charles Sep 27 '11 at 21:47 show 2 more comments ## 1 Answer The price elasticity $E$ of a product which has price $P$ given a demand $Q$ is defined as the percentage change in demand for each percentage change in price, i.e. $$E = \frac{d \ln Q}{d \ln P}\qquad (1)$$ or alternatively $$E = \frac{P}{Q} \frac{dQ}{dP}\qquad (2)$$ Since in your case you have $$\ln Q = 10 - 5\ln P$$ You should be able to differentiate $\ln Q$ with respect to $\ln P$ (equation (1)) to get the result you require. Alternatively, you could express $Q$ in terms of $P$ by exponentiating both sides of the formula, giving $$Q = e^{10 - 5\ln P} = \frac{e^{10}}{P^5}$$ and apply equation (2) to find $E$, which may be easier if you're not that experienced with differentiation. Once you have calculated $E$, you can decide whether the good in question is elastic or inelastic by noting that elastic goods have $|E|>1$ and inelastic goods have $|E|<1$. - 1 ... and presumably unitary elastic goods have $|E|=1$. – Henry Sep 27 '11 at 23:43 (+1)... even though this is quite generous (considering how terrible the OP was, and that the other Chris has done zero to contribute to the solution of his own problem in the past two hours!) – The Chaz 2.0 Sep 28 '11 at 0:05 I feel like it's better to show first-timers the benefit of the doubt (fix their questions, give them good answers etc) since they don't know the ropes yet, and there are many reasons that someone might not look at a question they've posted for hours or even days after the original posting. – Chris Taylor Sep 28 '11 at 7:02
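A short SymPy check of the answer's recipe, using equation (2) from the answer and the demand curve from the question:

```python
import sympy as sp

P = sp.symbols('P', positive=True)
Q = sp.exp(10) / P**5                      # from ln Q = 10 - 5 ln P

E = sp.simplify((P / Q) * sp.diff(Q, P))   # elasticity, equation (2)
print(E)                                   # -5, so |E| > 1 and the good is elastic
```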
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9500935673713684, "perplexity_flag": "middle"}
http://crypto.stackexchange.com/questions/5572/in-rsa-encryption-does-the-value-of-e-need-to-be-random/5573
# In RSA encryption, does the value of e need to be random? I am a novice programmer and am just finishing up an RSA encryption program that I am writing for practice. Currently I have the program generate a relatively small random value for the public key e. When adding the finishing touches, I realized that there was no point for e to be random. Is this thinking correct? - ## 1 Answer Yes, this thinking is correct; there is no requirement that the public exponent $e$ to be random. After all, it doesn't matter whether $e$ can be guessed by an attacker; we'll be including that value in the public key anyways. Common practice is currently to use the fixed value $65537 =2^{16} +1$ for $e$. Any odd value of $e > 1$ will work; however, smaller values of $e$ will tend to make the system brittle against errors in performing the RSA padding. - some combinations of $e$ and $n$ don't work. $e$ and φ(n) must be coprime. – CodesInChaos Dec 4 '12 at 22:23 You could also add that the reason we use values of the form $2^n + 1$ for $e$ is that they have low Hamming weight, which makes modular exponentiation (and thus, encryption) very efficient. – Thomas Dec 5 '12 at 2:52
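A toy sketch (deliberately tiny, insecure numbers) showing that a fixed public exponent works as long as it is coprime to φ(n); the primes and the message here are arbitrary illustrative choices.

```python
from math import gcd

p, q = 1009, 101                     # toy primes, far too small for real use
n, phi = p * q, (p - 1) * (q - 1)
e = 65537                            # the conventional fixed exponent 2^16 + 1
assert gcd(e, phi) == 1              # the only requirement on e

d = pow(e, -1, phi)                  # private exponent (Python 3.8+ modular inverse)
m = 42424                            # a "message" encoded as an integer < n
c = pow(m, e, n)                     # encrypt
assert pow(c, d, n) == m             # decrypt recovers the message
print("round trip OK with the fixed, non-random e =", e)
```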
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9341284036636353, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Dilation_(metric_space)
# Dilation (metric space)

In mathematics, a dilation is a function $f$ from a metric space into itself that satisfies the identity $d(f(x),f(y))=rd(x,y)$ for all points $x, y$, where $d(x, y)$ is the distance from $x$ to $y$ and $r$ is some positive real number. In Euclidean space, such a dilation is a similarity of the space. Dilations change the size but not the shape of an object or figure. Every dilation of a Euclidean space that is not a congruence has a unique fixed point that is called the center of dilation. Some congruences have fixed points and others do not.
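A small numerical illustration of the definition and of the fixed-point remark, for the affine scaling map f(x) = rx + t on the plane (the values of r and t are arbitrary choices):

```python
import numpy as np

r, t = 2.5, np.array([1.0, -3.0])
f = lambda x: r * x + t                  # a dilation of the plane with ratio r

rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.normal(size=2), rng.normal(size=2)
    assert np.isclose(np.linalg.norm(f(x) - f(y)), r * np.linalg.norm(x - y))

center = t / (1 - r)                     # the unique fixed point: the center of dilation
assert np.allclose(f(center), center)
print("dilation ratio", r, "center of dilation", center)
```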
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 7, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9202514886856079, "perplexity_flag": "head"}
http://mathhelpforum.com/geometry/157544-interior-angle-sum.html
Thread:

1. Interior Angle Sum

If l1 and l2 are lines on the plane such that they are parallel transports along a transversal l, then they are parallel transports along any transversal. Use the above theorem to prove that the interior angle sum of a triangle is 180.

2. The angles marked $\alpha^{\circ}$ are equal because they are alternate (Z) angles. The angles marked $\gamma^{\circ}$ are equal because they are alternate (Z) angles. These two angles, together with the angle of the triangle at the top vertex, lie along the line through that vertex parallel to the base, so they make up a straight angle and the three interior angles of the triangle add up to $180^{\circ}$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9302050471305847, "perplexity_flag": "middle"}
http://mathhelpforum.com/geometry/204343-plane-geometry-problem.html
# Thread: 1. ## Plane geometry problem Hey everyone, Recently, I've found an interesting task from plane geometry and I have no idea how to solve it Here it is (I translate it from my mother tongue, so sorry for any mistakes): There is a rhomboid ABCD with acute angle at A vertex.We suppose that circumcircle on triangle ABD intersects side CB in point K and side CD in point L (K and L are different from vertexes). Segment AN is a diameter of this circle. Prove that point N is a centre of circumcircle on triangle CKL. I add an image of this figure so that it's easier for you to understand the task: Thanks for help in advance! If you have any questions (e.g. my translation is unclear), feel free to post them. Regards Lukasz 2. ## Re: Plane geometry problem Start by removing that line from B to D, it's distracting, but add lines from K and L to the centre of the big circle O. Call the angle at D $\theta$, then the angle AOL (the B side) will be $2\theta$, (for angles on the same arc, the angle at the centre will be twice the angle at the circumference). That means the angle LON will be $2\theta - 180.$ Repeat the procedure for the other side of the figure and hence show that the angle KON is also $2\theta-180.$ It follows that the triangles LON and KON are congruent in which case LN will be the same length as KN. Since N is equidistant from L and K, it will lie on the perpendicular bisector of LK, which will be a diameter of the small circle. The base angles of the triangles LON and KON will be $(180-(2\theta-180))/2=180-\theta,$, in which case the angle LNK will be $360-2\theta.$ Finally, the angle at C is $180-\theta$ which is double the angle LNK. Putting this with the fact that N lies on a diameter, it follows that N is the centre of the small circle. 3. ## Re: Plane geometry problem Exquisite proof! Thank you very much BTW- why don't you take a look at my second topic: Prove that n is square or doubled square Since you are so good, you may help me in solving it :P
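The proof above can also be sanity-checked numerically. The sketch below uses one hypothetical set of coordinates for the parallelogram (with an acute angle at A), finds K, L and N exactly as in the problem statement, and confirms that N is equidistant from C, K and L, i.e. that it is the circumcentre of triangle CKL.

```python
import numpy as np

A, B, D = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 3.0])
C = B + D - A                          # ABCD is a parallelogram, angle at A is acute

# Circumcentre O of triangle ABD: solve |O-A|^2 = |O-B|^2 and |O-A|^2 = |O-D|^2.
M = 2 * np.array([B - A, D - A])
rhs = np.array([B @ B - A @ A, D @ D - A @ A])
O = np.linalg.solve(M, rhs)

def second_intersection(P, Q):
    """Second point where the circumcircle meets the line PQ (P is already on the circle)."""
    d = Q - P
    s = -2 * d @ (P - O) / (d @ d)     # nonzero root of |P + s*d - O|^2 = |P - O|^2
    return P + s * d

K = second_intersection(B, C)          # circle meets side CB again at K
L = second_intersection(D, C)          # circle meets side CD again at L
N = 2 * O - A                          # antipode of A, so AN is a diameter

dists = [np.linalg.norm(N - P) for P in (C, K, L)]
print(dists)                           # three equal distances
assert np.allclose(dists, dists[0])    # so N is the circumcentre of triangle CKL
```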
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9241615533828735, "perplexity_flag": "middle"}
http://mathhelpforum.com/differential-geometry/192234-generalizing-r-n-bound-lebesgue-measure.html
# Thread:

1. ## Generalizing to R^n a bound for a Lebesgue measure.

A standard proposition in measure theory is that if $E$ is any Lebesgue measurable set in $\mathbb{R}$ with $\lambda(E)>0$, then for any $\epsilon>0$ there is a finite, nontrivial interval $J=[a,b]$ such that $\lambda(E\cap J)>(1-\epsilon)\lambda(J)$. To generalize this, suppose $E\subseteq\mathbb{R}^n$ with $\lambda(E)>0$. Then why for any $\epsilon>0$ is there some box $J=(a_1,a_1+\delta]\times\cdots\times(a_n,a_n+\delta]$ such that $\lambda(E\cap J)>(1-\epsilon)\lambda(J)$? Thank you.

2. ## Re: Generalizing to R^n a bound for a Lebesgue measure.

Do you understand the proof of the R^1 case? Have you tried to generalize the argument to R^n? If not, that's a good place to start. If so, where did you get stuck?

3. ## Re: Generalizing to R^n a bound for a Lebesgue measure.

I tried adapting the argument like so: Assume $\lambda(E)<\infty$ and $\epsilon<1$. Then take finite boxes $J_m=(a_{1,m},a_{1,m}+\delta]\times\cdots\times(a_{n,m},a_{n,m}+\delta]$ such that $E\subset\bigcup_m J_m$ and $\sum_{m\geq 1}\lambda(J_m)\leq\lambda(E)/(1-\epsilon)$. Then $\lambda(E)\leq\sum_{m\geq 1}\lambda(J_m\cap E)$ and for some $m$ $(1-\epsilon)\lambda(J_m)\leq\lambda(J_m\cap E).$ Is this the correct idea? I'm essentially following the approach in the $\mathbb{R}^1$ case that I've read, but I'm unsure of some steps. For instance, why is it possible to find such $J_m$ such that $\sum_{m\geq 1}\lambda(J_m)\leq\lambda(E)/(1-\epsilon)$? It seems to be asking a lot from such sets, and I don't see why they should exist.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 22, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9456093907356262, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/143359/three-dimensional-art-galleries
# Three-dimensional art galleries The well-known art gallery problem starts with an "art gallery" (a simple polygon in the plane, not necessarily convex) and asks for the minimum number of "guards" (points on the polygon) required to "observe the whole gallery" (to have the property that for any point in the interior of the polygon, there is a line segment from that point to a "guard" that lies entirely within the polygon). Chvatal showed that if the polygon has $n$ vertices, then $\lfloor n/3\rfloor$ guards are sufficient, and sometimes necessary, to observe the whole gallery. If you forget about trying to minimize the number of guards, and simply want to place guards so that they see the whole gallery, it is reasonably clear that if you place a guard at each vertex of a simple polygon, they will be able to observe the whole gallery. • One way to see this is to note that the assertion is clear for triangles, and then to recall (or to convince oneself) that any simple polygon can be triangulated without adding vertices. • If visualizing an entire triangulation of the polygon is too "global", one can think "locally" as follows. Fix any point $p$ in the gallery interior. Choose a point $q$ on the polygon such that the distance from $p$ to $q$ is minimized. The line segment from $p$ to $q$ lies within the gallery. If $q$ is a vertex, we are done. Otherwise, $q$ is on the interior of an edge. Pick a direction on that edge and move $q$ along the edge in that direction. Eventually, one of two things will happen: (1) the point $q$ becomes a vertex, or (2) there is a first time at which the line segment from $p$ to $q$ intersects the polygon somewhere besides $q$. In case (1) we are done, and in case (2), we can convince ourselves that at the time that this happens, the closest point to $p$ on the intersection of the polygon and the line segment must be a vertex, and we are again done. Now switch from two dimensions to three so an "art gallery" is now a polyhedron. If you place a guard at each vertex, can they observe the whole gallery? The answer, in general, is no. It may not be clear why it is no, but it is relatively clear that the arguments just given do not generalize in any simple way. • There are polyhedra that cannot be "triangulated" into tetrahedra without adding vertices. A famous example of this is the Schoenhardt polyhedron. (Yet: experimenting with this applet convinced me that the vertices of this polyhedron do see all of its interior.) • The "given $p$, pick a closest point $q$ on the polyhedron, and then move $q$ in some direction" idea clearly cannot work (at least without judicious choice of direction), because in the case (2) there is no reason for the closest point on the intersection of the line segment from $p$ to $q$ with the polyhedron to be a vertex in the three-dimensional case. It can pretty obviously be on the interior of some edge. So it's not counterintuitive, to me, that there are polyhedra whose vertices cannot observe their interiors. But I'd like a better mental image of what such a polyhedron can actually "look like." (A better image, for example, than what I get from the picture on the Wikipedia entry for the art gallery problem.) Can somebody describe a polyhedron, in such a way that it is in some sense "obvious" that its vertices cannot see all of its interior? So that it is possible to form a clear mental picture of what it would look like to be inside such a polyhedron, at a point where you can't see the vertices? (What do you see?) 
- You've looked into Joseph O'Rourke's book on this, by any chance? – J. M. May 10 '12 at 5:31 For added convenience I added a direct link to the picture. – Brian M. Scott May 10 '12 at 5:33 @J.M. page 255 of O'Rourke's book gives the example of a "Seidel polyhedron", but not very many pictures. I was hoping for a more pictorial "view" of this polyhedron, or perhaps a "simpler" polyhedron that is easier to visualize (the $n$-vertex member of the Seidel family of polyhedra has the property that $\Omega(n^{3/2})$ guards are necessary to view its interior; my hope was that if one did not ask for this, it might be possible to give a "simpler" family of examples). Of course "simpler" in visual matters is extremely relative. – leslie townes May 10 '12 at 5:39 1 ## 3 Answers I've actually printed the example J.M. mentions from my Art Gallery book on a 3D printer. :-) The interior consists of many nearly cubical cells, each surrounded by beams above, below, left, right, front, back. Each "beam" derives from an indentation. The cells are not closed--there are cracks because the beams just miss one another. If you imagine standing in one of those cubical cells, you cannot see far, and certainly you cannot see a vertex. Incidentally, it was discovered by William Thurston independently and at about the same time as Raimund Seidel. I agree that T.S. Michael's book is a great source here. - Beautiful, thanks! I can see it now. – leslie townes May 22 '12 at 5:10 I didn't find the Wikipedia image you linked to difficult to understand, so perhaps it would help you if I just explained it a little. Here's how I see it. Start with a rhombicuboctahedron, which has the same topology as the given figure. We will be manipulating the six of its faces which are axis-aligned squares. Consider the top square face. Clearly, the center of the polyhedron can see all of its vertices. Now take the face and elongate it in the left-right direction. If you stretch it enough, eventually its vertices will get hidden behind the squares on the left and right. A slice through the middle would look like this: Now you do the same for the bottom face. The rest of the faces get the same treatment, except in the two orthogonal directions. In the end, you've hidden the vertices of all the axis-aligned faces behind each other. And since every vertex of the polyhedron is a vertex of some axis-aligned face, you're done. From the inside, each triple of adjacent axis-aligned faces form three mutually orthogonal rectangles that are blocking each other's vertices. Kind of like this, where one vertex of each rectangle is hidden. In the real thing, the other vertices will be hidden by other rectangles, but it's hard to depict them all simultaneously without a full 360° panoramic display. - Thanks--- I understand that example now, although it took me a lot of effort (I think I was a subset of $\mathbb{R}^2$ in a previous life--- thinking in 3D is just too hard for me). Thanks also for the image of the Octoplex (of all the examples I have seen, it is the one that I could most easily imagine actually being built and used as an art gallery...) – leslie townes May 22 '12 at 5:14 There's a book by T S Michael, How to Guard an Art Gallery. Only one chapter of the book is actually about guarding art galleries, but that chapter has a section on the three-dimensional case and diagrams of the Octoplex and the Megaplex that you might find helpful. 
EDIT: If you have access to The College Mathematics Journal, Michael has a paper, Guards, Galleries, Fortresses, and the Octoplex, Vol. 42, No. 3 (May 2011) (pp. 191-200). EDIT2: Here's the description of the Octoplex from the book (but the picture in the book is worth 1,000 words). Start with a $20\times20\times20$ cube. Remove a rectangular channel 12 units wide and 6 units deep from the center of the front face (the channel runs from the top of the cube to the bottom). Remove an identical channel from the back face. Also make channels in the left and right faces, going from the front of the cube to the back, 6 units wide and 3 units deep. Finally make channels in the top and bottom faces, running left to right, 6 units by 6 units. What's left is the Octoplex: eight $4\times7\times7$ theaters connected to each other and to a central lobby by passageways one unit wide. And the claim is that even if you post a guard at each of the 56 corners there is a small region at the center of the Octoplex that no one is guarding. EDIT3 by Rahul Narain: Here is a picture of the Octoplex. - 1 Inspired by your statement that the picture is worth 1,000 words, I took the liberty of editing in a picture of the Octoplex that I modeled just now. I hope that's okay! – Rahul Narain May 11 '12 at 4:55 Many thanks.${}$ – Gerry Myerson May 11 '12 at 6:15 Thanks for all of the references! They are very helpful. – leslie townes May 22 '12 at 5:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 27, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9547759294509888, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/51235/are-isolated-many-particle-quantum-systems-always-in-a-pure-state/51289
# Are isolated many-particle quantum systems always in a pure state? I am trying to understand pure and mixed states better. If I have N quantum particles in an isolated system. The many-particle state is a superposition of the product of single-particle states by the appropriate statistics (bosons, fermions, or distinguishable). Would this state be still considered pure since there is no interaction with the environment? Does that mean isolated systems or microcanonical ensembles are always pure? - ## 1 Answer In the case of an isolated system (more precisely, a system that is not entangled with another system), whether a pure or a mixed state is assigned depends only on the state of knowledge of the observer. In principle, the system will be in a pure state. However, if the observer is in any way uncertain about which pure state the system is in, the state is mixed. If you are doing statistical mechanics, that means you have incomplete knowledge about the state of the system and the state will probably be mixed. Let's take your example of the microcanonical ensemble. This describes a situation where the observer has knowledge of some macroscopic constraints, namely that the system is isolated and that it has total energy $E$, but knows nothing else. Then the correct procedure is to assign equal probability to all microscopic states $|\psi_i\rangle$ that have energy $E$ (equal a priori probabilities). The state of the system relative to the observer is therefore $$\rho = \sum_{i=1}^N \frac{1}{N} |\psi_i\rangle\langle\psi_i|,$$ where I assumed that there are $N$ states which all have the same energy expectation value $E$. This state is clearly a mixed one. Nevertheless, the system is actually in just one of these (pure) states. The appearance of a mixed state in the description merely reflects the classical uncertainty that the observer has about the system. With entanglement the situation becomes more complicated. If a system $A$ interacts strongly with another system $B$, the total state will be entangled in general. Obviously this cannot occur in an isolated system, but I will describe what happens here to give some contrast. An entangled state cannot be written as a simple tensor product: $$|\psi_{AB}\rangle \neq |\phi_A\rangle\otimes|\chi_B\rangle.$$ Instead, an entangled state takes the form $$|\psi_{AB}\rangle = \sum\limits_i \lambda_i |\phi_{A,i}\rangle\otimes|\chi_{B,i}\rangle.$$ The state of system $A$ alone is obtained by tracing out system $B$: $$\rho_A = \mathrm{Tr}_B|\psi_{AB}\rangle\langle\psi_{AB}| = \sum\limits_i |\lambda_i|^2 |\phi_{A,i}\rangle\langle\phi_{A,i}|,$$ using the orthonormality of the $B$ states: $\langle\chi_{B,i}|\chi_{B,j}\rangle = \delta_{i,j}.$ So here we necessarily obtain a mixed state due to the entanglement between systems $A$ and $B$. It is important to note that the uncertainty represented by the mixed state is not dependent on the observer's state of knowledge. Rather, it corresponds to the unavoidable quantum uncertainty embodied by the commutation relations. EDIT: However, as Peter Shor points out, unless you have access to system $B$, the outcomes of your measurements on the mixed state $\rho_A$ are identical to those you would obtain from the pure state $$|\psi_A\rangle = \sum\limits_i\lambda_i |\phi_{A,i}\rangle.$$ You can only tell if $A$ is entangled by comparing measurement outcomes on $A$ with the measurement outcomes on $B$, in which case you will see Bell inequality-violating correlations that would not be present otherwise. 
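A short numerical illustration of the partial-trace statement above: take the two-qubit Bell state, which is pure overall, and trace out system $B$; the reduced state of $A$ comes out maximally mixed (purity 1/2), even though no classical ignorance about the preparation is involved.

```python
import numpy as np

# |psi_AB> = (|00> + |11>) / sqrt(2), written in the basis |00>, |01>, |10>, |11>
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)
rho_AB = np.outer(psi, psi.conj())

# Partial trace over B: reshape to indices (a, b, a', b') and trace over b = b'.
rho_A = np.trace(rho_AB.reshape(2, 2, 2, 2), axis1=1, axis2=3)

print(rho_A)                                   # identity / 2: the maximally mixed state
print(np.trace(rho_AB @ rho_AB).real)          # 1.0 -> the joint state is pure
print(np.trace(rho_A @ rho_A).real)            # 0.5 -> the reduced state is mixed
```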
So in short, the state of an isolated system is not necessarily mixed, unlike the state of a system that interacts with and is thereby entangled with another system. However, when the observer's knowledge of the system is incomplete, which is the case when statistical mechanics is applicable, the state of an isolated system will indeed be mixed. The only exception to this last sentence that I can think of right now is at zero temperature with no ground state degeneracy, in which case a statistical mechanics system may still be in a unique pure state: the ground state. - Thanks Mark! A further clarification. I want to understand the difference between classical uncertainty and quantum uncertainty in the following isolated case. Let's say I have N bosons in a specific energy state. The multi-particle state is a permutation of single-particle product states. This implies there is degeneracy. I think this is inherent uncertainty, so any of these states, although a linear superposition of basis product states, is still a pure state. Am I right? So classical uncertainty comes into play only when I don't know the expected energy of a state but only the total energy? – Sankaran Jan 15 at 16:46 1 One point that should be made is that if you don't have access to the system $B$ that system $A$ is entangled with, there is no way to tell a pure state on $A$ from a mixed state on $A$. – Peter Shor Jan 15 at 16:51 @Sankaran You do indeed describe a system in a pure state. Any linear superposition of basis kets is again a pure state. However, there is no degeneracy in the usual sense, because all of the states that appear in the sum are really the same state. You have to put in the sum 'by hand', as it were, in order to enforce the permutation symmetry that is broken when you choose an arbitrary ordering of your indistinguishable particles (which you must do in order to write down the state). – Mark Mitchison Jan 15 at 17:15 (contd.) This is why the Fock space 'occupation number' representation is generally preferable in a many-body system; it makes the permutation (anti)symmetry manifest. I am not quite sure what you mean by the last sentence of your comment. I would phrase it thus: The classical uncertainty comes into play when you are ignorant about the state (or equivalently, ignorant about the microscopic details of how the system was prepared), but you know the total energy. – Mark Mitchison Jan 15 at 17:16 @PeterShor Good point, thanks. Will add a comment on that. – Mark Mitchison Jan 15 at 17:17
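A quick numerical illustration of the partial trace discussed in the answer above. This is a sketch of my own, not part of the original thread: it builds the entangled two-qubit state $(|00\rangle + |11\rangle)/\sqrt{2}$ (an arbitrary choice), traces out subsystem $B$ with NumPy, and checks that the reduced state of $A$ is mixed via the purity $\mathrm{Tr}\,\rho_A^2$, which equals 1 only for pure states.

```python
import numpy as np

# Entangled two-qubit pure state (|00> + |11>)/sqrt(2), written in the
# product basis |00>, |01>, |10>, |11>.
psi_AB = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

# Full density matrix of the pure state on AB: rho_AB = |psi><psi|.
rho_AB = np.outer(psi_AB, psi_AB.conj())
print("purity of AB:", np.trace(rho_AB @ rho_AB).real)   # 1.0 -> pure

# Partial trace over B: reshape to (dim_A, dim_B, dim_A, dim_B) and
# contract the two B indices.
rho = rho_AB.reshape(2, 2, 2, 2)
rho_A = np.einsum('ibjb->ij', rho)
print("rho_A =\n", rho_A)                                 # maximally mixed: I/2
print("purity of A:", np.trace(rho_A @ rho_A).real)       # 0.5 < 1 -> mixed
```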
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9414533972740173, "perplexity_flag": "head"}
http://en.wikipedia.org/wiki/Clique_(graph_theory)
# Clique (graph theory) A graph with 23 1-vertex cliques (its vertices), 42 2-vertex cliques (its edges), 19 3-vertex cliques (the light and dark blue triangles), and 2 4-vertex cliques (dark blue areas). The six edges not associated with any triangle and the 11 light blue triangles form maximal cliques. The two dark blue 4-cliques are both maximum and maximal, and the clique number of the graph is 4. In the mathematical area of graph theory, a clique in an undirected graph is a subset of its vertices such that every two vertices in the subset are connected by an edge. Cliques are one of the basic concepts of graph theory and are used in many other mathematical problems and constructions on graphs. Cliques have also been studied in computer science: finding whether there is a clique of a given size in a graph (the clique problem) is NP-complete, but despite this hardness result many algorithms for finding cliques have been studied. Although the study of complete subgraphs goes back at least to the graph-theoretic reformulation of Ramsey theory by Erdős & Szekeres (1935),[1] the term "clique" comes from Luce & Perry (1949), who used complete subgraphs in social networks to model cliques of people; that is, groups of people all of whom know each other. Cliques have many other applications in the sciences and particularly in bioinformatics. ## Definitions A clique in an undirected graph G = (V, E) is a subset of the vertex set C ⊆ V, such that for every two vertices in C, there exists an edge connecting the two. This is equivalent to saying that the subgraph induced by C is complete (in some cases, the term clique may also refer to the subgraph). A maximal clique is a clique that cannot be extended by including one more adjacent vertex, that is, a clique which does not exist exclusively within the vertex set of a larger clique. A maximum clique is a clique of the largest possible size in a given graph. The clique number ω(G) of a graph G is the number of vertices in a maximum clique in G. The intersection number of G is the smallest number of cliques that together cover all edges of G. The opposite of a clique is an independent set, in the sense that every clique corresponds to an independent set in the complement graph. The clique cover problem concerns finding as few cliques as possible that include every vertex in the graph. A related concept is a biclique, a complete bipartite subgraph. The bipartite dimension of a graph is the minimum number of bicliques needed to cover all the edges of the graph. ## Mathematics Mathematical results concerning cliques include the following. • Turán's theorem (Turán 1941) gives a lower bound on the size of a clique in dense graphs. If a graph has sufficiently many edges, it must contain a large clique. For instance, every graph with $n$ vertices and more than $\scriptstyle\lfloor\frac{n}{2}\rfloor\cdot\lceil\frac{n}{2}\rceil$ edges must contain a three-vertex clique. • Ramsey's theorem (Graham, Rothschild & Spencer 1990) states that every graph or its complement graph contains a clique with at least a logarithmic number of vertices. • According to a result of Moon & Moser (1965), a graph with $3n$ vertices can have at most $3^n$ maximal cliques. The graphs meeting this bound are the Moon–Moser graphs $K_{3,3,\dots,3}$, a special case of the Turán graphs arising as the extremal cases in Turán's theorem. • Hadwiger's conjecture, still unproven, relates the size of the largest clique minor in a graph (its Hadwiger number) to its chromatic number. 
• The Erdős–Faber–Lovász conjecture is another unproven statement relating graph coloring to cliques. Several important classes of graphs may be defined by their cliques: • A chordal graph is a graph whose vertices can be ordered into a perfect elimination ordering, an ordering such that the neighbors of each vertex v that come later than v in the ordering form a clique. • A cograph is a graph all of whose induced subgraphs have the property that any maximal clique intersects any maximal independent set in a single vertex. • An interval graph is a graph whose maximal cliques can be ordered in such a way that, for each vertex v, the cliques containing v are consecutive in the ordering. • A line graph is a graph whose edges can be covered by edge-disjoint cliques in such a way that each vertex belongs to exactly two of the cliques in the cover. • A perfect graph is a graph in which the clique number equals the chromatic number in every induced subgraph. • A split graph is a graph in which some clique contains at least one endpoint of every edge. • A triangle-free graph is a graph that has no cliques other than its vertices and edges. Additionally, many other mathematical constructions involve cliques in graphs. Among them, • The clique complex of a graph G is an abstract simplicial complex X(G) with a simplex for every clique in G • A simplex graph is an undirected graph κ(G) with a vertex for every clique in a graph G and an edge connecting two cliques that differ by a single vertex. It is an example of median graph, and is associated with a median algebra on the cliques of a graph: the median m(A,B,C) of three cliques A, B, and C is the clique whose vertices belong to at least two of the cliques A, B, and C.[2] • The clique-sum is a method for combining two graphs by merging them along a shared clique. • Clique-width is a notion of the complexity of a graph in terms of the minimum number of distinct vertex labels needed to build up the graph from disjoint unions, relabeling operations, and operations that connect all pairs of vertices with given labels. The graphs with clique-width one are exactly the disjoint unions of cliques. • The intersection number of a graph is the minimum number of cliques needed to cover all the graph's edges. Closely related concepts to complete subgraphs are subdivisions of complete graphs and complete graph minors. In particular, Kuratowski's theorem and Wagner's theorem characterize planar graphs by forbidden complete and complete bipartite subdivisions and minors, respectively. ## Computer science Main article: Clique problem In computer science, the clique problem is the computational problem of finding a maximum clique, or all cliques, in a given graph. It is NP-complete, one of Karp's 21 NP-complete problems (Karp 1972). It is also fixed-parameter intractable, and hard to approximate. Nevertheless, many algorithms for computing cliques have been developed, either running in exponential time (such as the Bron–Kerbosch algorithm) or specialized to graph families such as planar graphs or perfect graphs for which the problem can be solved in polynomial time. 
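Since the article turns to software next, here is a small illustrative sketch (mine, not part of the article) using the NetworkX library listed below. It assumes `networkx` is installed; `nx.find_cliques` enumerates the maximal cliques of a graph, from which the clique number can be read off. The example graph is an arbitrary choice.

```python
import networkx as nx

# A small graph: a 4-clique {0,1,2,3} plus a triangle {3,4,5} hanging off vertex 3.
G = nx.Graph()
G.add_edges_from([(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3),
                  (3, 4), (3, 5), (4, 5)])

maximal_cliques = list(nx.find_cliques(G))   # enumerates the maximal cliques
print("maximal cliques:", maximal_cliques)   # e.g. [[0, 1, 2, 3], [3, 4, 5]]

# The clique number is the size of a maximum clique.
omega = max(len(c) for c in maximal_cliques)
print("clique number:", omega)               # 4
```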
## Free software for searching maximum clique

| Name (alphabetically) | License | Language | Notes |
|---|---|---|---|
| NetworkX | BSD | Python | approximate solution, see the routine max_clique |
| maxClique | CRAPL | Java | exact algorithms and DIMACS instances |
| OpenOpt | BSD | Python | exact and approximate solutions, possibility to specify nodes that have to be included / excluded; see MCP class for more details and examples |

## Applications

The word "clique", in its graph-theoretic usage, arose from the work of Luce & Perry (1949), who used complete subgraphs to model cliques (groups of people who all know each other) in social networks. For continued efforts to model social cliques graph-theoretically, see e.g. Alba (1973), Peay (1974), and Doreian & Woodard (1994). Many different problems from bioinformatics have been modeled using cliques. For instance, Ben-Dor, Shamir & Yakhini (1999) model the problem of clustering gene expression data as one of finding the minimum number of changes needed to transform a graph describing the data into a graph formed as the disjoint union of cliques; Tanay, Sharan & Shamir (2002) discuss a similar biclustering problem for expression data in which the clusters are required to be cliques. Sugihara (1984) uses cliques to model ecological niches in food webs. Day & Sankoff (1986) describe the problem of inferring evolutionary trees as one of finding maximum cliques in a graph that has as its vertices characteristics of the species, where two vertices share an edge if there exists a perfect phylogeny combining those two characters. Samudrala & Moult (1998) model protein structure prediction as a problem of finding cliques in a graph whose vertices represent positions of subunits of the protein. And by searching for cliques in a protein-protein interaction network, Spirin & Mirny (2003) found clusters of proteins that interact closely with each other and have few interactions with proteins outside the cluster. Power graph analysis is a method for simplifying complex biological networks by finding cliques and related structures in these networks. In electrical engineering, Prihar (1956) uses cliques to analyze communications networks, and Paull & Unger (1959) use them to design efficient circuits for computing partially specified Boolean functions. Cliques have also been used in automatic test pattern generation: a large clique in an incompatibility graph of possible faults provides a lower bound on the size of a test set.[3] Cong & Smith (1993) describe an application of cliques in finding a hierarchical partition of an electronic circuit into smaller subunits. In chemistry, Rhodes et al. (2003) use cliques to describe chemicals in a chemical database that have a high degree of similarity with a target structure. Kuhl, Crippen & Friesen (1983) use cliques to model the positions in which two chemicals will bind to each other.

## Notes

1. The earlier work by Kuratowski (1930) characterizing planar graphs by forbidden complete and complete bipartite subgraphs was originally phrased in topological rather than graph-theoretic terms.

## References

• Alba, Richard D. (1973), "A graph-theoretic definition of a sociometric clique", Journal of Mathematical Sociology 3 (1): 113–126, doi:10.1080/0022250X.1973.9989826 . • Barthélemy, J.-P.; Leclerc, B.; Monjardet, B. (1986), "On the use of ordered sets in problems of comparison and consensus of classifications", Journal of Classification 3 (2): 187–224, doi:10.1007/BF01894188 . 
• Ben-Dor, Amir; Shamir, Ron; Yakhini, Zohar (1999), "Clustering gene expression patterns.", Journal of Computational Biology 6 (3–4): 281–297, doi:10.1089/106652799318274, PMID 10582567 . • J., Cong; M., Smith (1993), "A parallel bottom-up clustering algorithm with applications to circuit partitioning in VLSI design", Proc. 30th International Design Automation Conference, pp. 755–760, doi:10.1145/157485.165119 . • Day, William H. E.; Sankoff, David (1986), "Computational complexity of inferring phylogenies by compatibility", Systematic Zoology 35 (2): 224–229, doi:10.2307/2413432, JSTOR 2413432 . • Doreian, Patrick; Woodard, Katherine L. (1994), "Defining and locating cores and boundaries of social networks", Social Networks 16 (4): 267–293, doi:10.1016/0378-8733(94)90013-2 . • Erdős, Paul; Szekeres, George (1935), "A combinatorial problem in geometry", Compositio Math. 2: 463–470 . • Graham, R.; Rothschild, B.; Spencer, J. H. (1990), Ramsey Theory, New York: John Wiley and Sons, ISBN 0-471-50046-1 . • Hamzaoglu, I.; Patel, J. H. (1998), "Test set compaction algorithms for combinational circuits", Proc. 1998 IEEE/ACM International Conference on Computer-Aided Design, pp. 283–289, doi:10.1145/288548.288615 . • Karp, Richard M. (1972), "Reducibility among combinatorial problems", in Miller, R. E.; Thatcher, J. W., Complexity of Computer Computations, New York: Plenum, pp. 85–103 . • Kuhl, F. S.; Crippen, G. M.; Friesen, D. K. (1983), "A combinatorial algorithm for calculating ligand binding", Journal of Computational Chemistry 5 (1): 24–34, doi:10.1002/jcc.540050105 . • Kuratowski, Kazimierz (1930), "Sur le probléme des courbes gauches en Topologie", Fundamenta Mathematicae (in French) 15: 271–283 . • Luce, R. Duncan; Perry, Albert D. (1949), "A method of matrix analysis of group structure", Psychometrika 14 (2): 95–116, doi:10.1007/BF02289146, PMID 18152948 . • Moon, J. W.; Moser, L. (1965), "On cliques in graphs", Israel J. Math. 3: 23–28, doi:10.1007/BF02760024, MR 0182577 . • Paull, M. C.; Unger, S. H. (1959), "Minimizing the number of states in incompletely specified sequential switching functions", IRE Trans. on Electronic Computers EC–8 (3): 356–367, doi:10.1109/TEC.1959.5222697 . • Peay, Edmund R. (1974), "Hierarchical clique structures", Sociometry 37 (1): 54–65, doi:10.2307/2786466, JSTOR 2786466 . • Prihar, Z. (1956), "Topological properties of telecommunications networks", 44 (7): 927–933, doi:10.1109/JRPROC.1956.275149 . • Rhodes, Nicholas; Willett, Peter; Calvet, Alain; Dunbar, James B.; Humblet, Christine (2003), "CLIP: similarity searching of 3D databases using clique detection", Journal of Chemical Information and Computer Sciences 43 (2): 443–448, doi:10.1021/ci025605o, PMID 12653507 . • Samudrala, Ram; Moult, John (1998), "A graph-theoretic algorithm for comparative modeling of protein structure", Journal of Molecular Biology 279 (1): 287–302, doi:10.1006/jmbi.1998.1689, PMID 9636717 . • Spirin, Victor; Mirny, Leonid A. (2003), "Protein complexes and functional modules in molecular networks", 100 (21): 12123–12128, doi:10.1073/pnas.2032324100, PMC 218723, PMID 14517352 . • Sugihara, George (1984), "Graph theory, homology and food webs", in Levin, Simon A., Population Biology, Proc. Symp. Appl. Math. 30, pp. 83–101 . • Tanay, Amos; Sharan, Roded; Shamir, Ron (2002), "Discovering statistically significant biclusters in gene expression data", Bioinformatics 18 (Suppl. 1): S136–S144, doi:10.1093/bioinformatics/18.suppl_1.S136, PMID 12169541 . 
• Turán, Paul (1941), "On an extremal problem in graph theory", Matematikai és Fizikai Lapok (in Hungarian) 48: 436–452
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.836553156375885, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Weighted_average
# Weighted arithmetic mean "Weighted mean" redirects here. For the geometric mean, see weighted geometric mean. For the harmonic mean, see weighted harmonic mean. The weighted mean is similar to an arithmetic mean (the most common type of average), where instead of each of the data points contributing equally to the final average, some data points contribute more than others. The notion of weighted mean plays a role in descriptive statistics and also occurs in a more general form in several other areas of mathematics. If all the weights are equal, then the weighted mean is the same as the arithmetic mean. While weighted means generally behave in a similar fashion to arithmetic means, they do have a few counterintuitive properties, as captured for instance in Simpson's paradox. ## Examples ### Basic example Given two school classes, one with 20 students, and one with 30 students, the grades in each class on a test were: Morning class = 62, 67, 71, 74, 76, 77, 78, 79, 79, 80, 80, 81, 81, 82, 83, 84, 86, 89, 93, 98 Afternoon class = 81, 82, 83, 84, 85, 86, 87, 87, 88, 88, 89, 89, 89, 90, 90, 90, 90, 91, 91, 91, 92, 92, 93, 93, 94, 95, 96, 97, 98, 99 The straight average for the morning class is 80 and the straight average of the afternoon class is 90. The straight average of 80 and 90 is 85, the mean of the two class means. However, this does not account for the difference in number of students in each class (20 versus 30); hence the value of 85 does not reflect the average student grade (independent of class). The average student grade can be obtained by averaging all the grades, without regard to classes (add all the grades up and divide by the total number of students): $\bar{x} = \frac{4300}{50} = 86.$ Or, this can be accomplished by weighting the class means by the number of students in each class (using a weighted mean of the class means): $\bar{x} = \frac{(20\times80) + (30\times90)}{20 + 30} = 86.$ Thus, the weighted mean makes it possible to find the average student grade in the case where only the class means and the number of students in each class are available. ### Convex combination example Since only the relative weights are relevant, any weighted mean can be expressed using coefficients that sum to one. Such a linear combination is called a convex combination. Using the previous example, we would get the following: $\frac{20}{20 + 30} = 0.4\,$ $\frac{30}{20 + 30} = 0.6\,$ $\bar{x} = \frac{(0.4\times80) + (0.6\times90)}{0.4 + 0.6} = 86.$ This simplifies to: $\bar{x} = (0.4\times80) + (0.6\times90) = 86.$ ## Mathematical definition Formally, the weighted mean of a non-empty set of data $\{x_1, x_2, \dots , x_n\},$ with non-negative weights $\{w_1, w_2, \dots, w_n\},$ is the quantity $\bar{x} = \frac{ \sum_{i=1}^n w_i x_i}{\sum_{i=1}^n w_i},$ which means: $\bar{x} = \frac{w_1 x_1 + w_2 x_2 + \cdots + w_n x_n}{w_1 + w_2 + \cdots + w_n}.$ Therefore data elements with a high weight contribute more to the weighted mean than do elements with a low weight. The weights cannot be negative. Some may be zero, but not all of them (since division by zero is not allowed). The formulas are simplified when the weights are normalized such that they sum up to $1$, i.e. $\sum_{i=1}^n {w_i} = 1$. For such normalized weights the weighted mean is simply $\bar {x} = \sum_{i=1}^n {w_i x_i}$. 
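A quick numerical check of the classroom example and of the normalization remark above. This sketch is not part of the article; it uses NumPy's `average`, which accepts a `weights` argument, and the numbers are the ones from the basic example.

```python
import numpy as np

class_means = np.array([80.0, 90.0])
class_sizes = np.array([20.0, 30.0])

# Weighted mean of the class means, weighted by class size.
print(np.average(class_means, weights=class_sizes))   # 86.0

# Normalizing the weights to sum to one gives the same result.
w = class_sizes / class_sizes.sum()                    # [0.4, 0.6]
print(np.dot(w, class_means))                          # 86.0
```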
Note that one can always normalize the weights by making the following transformation on the weights $w_i' = \frac{w_i}{\sum_{j=1}^n{w_j}}$. Using the normalized weight yields the same results as when using the original weights. Indeed, $\bar{x} = \sum_{i=1}^n w'_i x_i= \sum_{i=1}^n \frac{w_i}{\sum_{j=1}^n w_j} x_i = \frac{ \sum_{i=1}^n w_i x_i}{\sum_{j=1}^n w_j} = \frac{ \sum_{i=1}^n w_i x_i}{\sum_{i=1}^n w_i}.$ The common mean $\frac {1}{n}\sum_{i=1}^n {x_i}$ is a special case of the weighted mean where all data have equal weights, $w_i=w$. When the weights are normalized then $w_i'=\frac{1}{n}.$ ## Statistical properties The weighted sample mean, $\bar{x}$, with normalized weights (weights summing to one) is itself a random variable. Its expected value and standard deviation are related to the expected values and standard deviations of the observations as follows, If the observations have expected values $E(x_i )=\bar {x_i},$ then the weighted sample mean has expectation $E(\bar{x}) = \sum_{i=1}^n {w_i \bar{x_i}}.$ Particularly, if the expectations of all observations are equal, $\bar {x_i}=c$, then the expectation of the weighted sample mean will be the same, $E(\bar{x})= c. \,$ For uncorrelated observations with standard deviations $\sigma_i$, the weighted sample mean has standard deviation $\sigma(\bar x)= \sqrt {\sum_{i=1}^n {w_i^2 \sigma^2_i}}.$ Consequently, when the standard deviations of all observations are equal, $\sigma_i=d$, the weighted sample mean will have standard deviation $\sigma(\bar x)= d \sqrt {V_2}$. Here $V_2$ is the quantity $V_2=\sum_{i=1}^n {w_i^2},$ such that $1/n \le V_2\le 1$. It attains its minimum value for equal weights, and its maximum when all weights except one are zero. In the former case we have $\sigma(\bar x)=d/ \sqrt {n}$, which is related to the central limit theorem. Note that due to the fact that one can always transform non-normalized weights to normalized weights all formula in this section can be adapted to non-normalized weights by replacing all $w_i$ by $w_i' = \frac{w_i}{\sum_{i=1}^n{w_i}}$. ## Dealing with variance See also: Least squares#Weighted least squares For the weighted mean of a list of data for which each element $x_i\,\!$ comes from a different probability distribution with known variance ${\sigma_i}^2\,$, one possible choice for the weights is given by: $w_i = \frac{1}{\sigma_i^2}.$ The weighted mean in this case is: $\bar{x} = \frac{ \sum_{i=1}^n (x_iw_i)}{\sum_{i=1}^n w_i},$ and the variance of the weighted mean is: $\sigma_{\bar{x}}^2 = \frac{ 1 }{\sum_{i=1}^n w_i},$ which reduces to $\sigma_{\bar{x}}^2 = \frac{ {\sigma_0}^2 }{n}$, when all $\sigma_i = \sigma_0.\,$ The significance of this choice is that this weighted mean is the maximum likelihood estimator of the mean of the probability distributions under the assumption that they are independent and normally distributed with the same mean. ### Correcting for over- or under-dispersion Weighted means are typically used to find the weighted mean of experimental data, rather than theoretically generated data. In this case, there will be some error in the variance of each data point. Typically experimental errors may be underestimated due to the experimenter not taking into account all sources of error in calculating the variance of each data point. In this event, the variance in the weighted mean must be corrected to account for the fact that $\chi^2$ is too large. 
The correction that must be made is $\sigma_{\bar{x}}^2 \rightarrow \sigma_{\bar{x}}^2 \chi^2_\nu \,$ where $\chi^2_\nu$ is $\chi^2$ divided by the number of degrees of freedom, in this case n − 1. This gives the variance in the weighted mean as: $\sigma_{\bar{x}}^2 = \frac{ 1 }{\sum_{i=1}^n 1/{\sigma_i}^2} \times \frac{1}{(n-1)} \sum_{i=1}^n \frac{ (x_i - \bar{x} )^2}{ \sigma_i^2 };$ when all data variances are equal, $\sigma_i = \sigma_0$, they cancel out in the weighted mean variance, $\sigma_{\bar{x}}^2$, which then reduces to the standard error of the mean (squared), $\sigma_{\bar{x}}^2 = \sigma^2/n$, in terms of the sample standard deviation (squared), $\sigma^2 = \sum_{i=1}^n (x_i - \bar{x} )^2 / (n-1)$. ## Weighted sample variance Typically when a mean is calculated it is important to know the variance and standard deviation about that mean. When a weighted mean $\mu^*$ is used, the variance of the weighted sample is different from the variance of the unweighted sample. The biased weighted sample variance is defined similarly to the normal biased sample variance: $\sigma^2\ = \frac{ \sum_{i=1}^N{\left(x_i - \mu\right)^2} }{ N }$ $\sigma^2_\mathrm{weighted} = \frac{\sum_{i=1}^N w_i \left(x_i - \mu^*\right)^2 }{V_1}$ where $V_1 = \sum_{i=1}^n w_i$, which is 1 for normalized weights. For small samples, it is customary to use an unbiased estimator for the population variance. In normal unweighted samples, the N in the denominator (corresponding to the sample size) is changed to N − 1. While this is simple in unweighted samples, it is not straightforward when the sample is weighted. If each $x_i$ is drawn from a Gaussian distribution with variance $1/w_i$, the unbiased estimator of a weighted population variance is given by:[1] $s^2\ = \frac {V_1} {V_1^2-V_2} \sum_{i=1}^N w_i \left(x_i - \mu^*\right)^2,$ where $V_2 = \sum_{i=1}^n {w_i^2}$ as introduced previously. The degrees of freedom of the weighted, unbiased sample variance vary accordingly from N − 1 down to 0. The standard deviation is simply the square root of the variance above. If all of the $x_i$ are drawn from the same distribution and the integer weights $w_i$ indicate frequency of occurrence in the sample, then the unbiased estimator of the weighted population variance is given by $s^2\ = \frac {1} {V_1 - 1} \sum_{i=1}^N w_i \left(x_i - \mu^*\right)^2,$ If all $x_i$ are unique, then $N$ counts the number of unique values, and $V_1$ counts the number of samples. For example, if values $\{2, 2, 4, 5, 5, 5\}$ are drawn from the same distribution, then we can treat this set as an unweighted sample, or we can treat it as the weighted sample $\{2, 4, 5\}$ with corresponding weights $\{2, 1, 3\}$, and we should get the same results. ## Vector-valued estimates The above generalizes easily to the case of taking the mean of vector-valued estimates. For example, estimates of position on a plane may have less certainty in one direction than another. As in the scalar case, the weighted mean of multiple estimates can provide a maximum likelihood estimate. 
We simply replace $\sigma^2$ by the covariance matrix:[2] $W_i = \Sigma_i^{-1}.$ The weighted mean in this case is: $\bar{\mathbf{x}} = \left(\sum_{i=1}^n \Sigma_i^{-1}\right)^{-1}\left(\sum_{i=1}^n \Sigma_i^{-1} \mathbf{x}_i\right),$ and the covariance of the weighted mean is: $\Sigma_{\bar{\mathbf{x}}} = \left(\sum_{i=1}^n \Sigma_i^{-1}\right)^{-1},$ For example, consider the weighted mean of the point [1 0] with high variance in the second component and [0 1] with high variance in the first component. Then $\mathbf{x}_1 := [1 0]^\top, \qquad \Sigma_1 := \begin{bmatrix}1 & 0\\ 0 & 100\end{bmatrix}$ $\mathbf{x}_2 := [0 1]^\top, \qquad \Sigma_2 := \begin{bmatrix}100 & 0\\ 0 & 1\end{bmatrix}$ then the weighted mean is: $\bar{\mathbf{x}} = \left(\Sigma_1^{-1} + \Sigma_2^{-1}\right)^{-1} \left(\Sigma_1^{-1} \mathbf{x}_1 + \Sigma_2^{-1} \mathbf{x}_2\right)$ $=\begin{bmatrix} 0.9901 &0\\ 0& 0.9901\end{bmatrix}\begin{bmatrix}1\\1\end{bmatrix} = \begin{bmatrix}0.9901 \\ 0.9901\end{bmatrix}$ which makes sense: the [1 0] estimate is "compliant" in the second component and the [0 1] estimate is compliant in the first component, so the weighted mean is nearly [1 1].

## Accounting for correlations

In the general case, suppose that $\mathbf{X}=[x_1,\dots,x_n]$, $\mathbf{C}$ is the covariance matrix relating the quantities $x_i$, $\bar{x}$ is the common mean to be estimated, and $\mathbf{W}$ is the design matrix [1, ..., 1] (of length $n$). The Gauss–Markov theorem states that the estimate of the mean having minimum variance is given by: $\sigma^2_\bar{x}=(\mathbf{W}^T \mathbf{C}^{-1} \mathbf{W})^{-1},$ and $\bar{x} = \sigma^2_\bar{x} (\mathbf{W}^T \mathbf{C}^{-1} \mathbf{X}).$

## Decreasing strength of interactions

Consider the time series of an independent variable $x$ and a dependent variable $y$, with $n$ observations sampled at discrete times $t_i$. In many common situations, the value of $y$ at time $t_i$ depends not only on $x_i$ but also on its past values. Commonly, the strength of this dependence decreases as the separation of observations in time increases. To model this situation, one may replace the independent variable by its sliding mean $z$ for a window size $m$. $z_k=\sum_{i=1}^m w_i x_{k+1-i}.$

Range weighted mean interpretation

| Range (1–5) | Weighted mean equivalence |
|---|---|
| 3.34–5.00 | Strong |
| 1.67–3.33 | Satisfactory |
| 0.00–1.66 | Weak |

## Exponentially decreasing weights

In the scenario described in the previous section, most frequently the decrease in interaction strength obeys a negative exponential law. If the observations are sampled at equidistant times, then exponential decrease is equivalent to decrease by a constant fraction $0<\Delta<1$ at each time step. Setting $w=1-\Delta$ we can define $m$ normalized weights by $w_i=\frac {w^{i-1}}{V_1}$, where $V_1$ is the sum of the unnormalized weights. In this case $V_1$ is simply $V_1=\sum_{i=1}^m{w^{i-1}} = \frac {1-w^{m}}{1-w}$, approaching $V_1=1/(1-w)$ for large values of $m$. The damping constant $w$ must correspond to the actual decrease of interaction strength. If this cannot be determined from theoretical considerations, then the following properties of exponentially decreasing weights are useful in making a suitable choice: at step $(1-w)^{-1}$, the weight approximately equals ${e^{-1}}(1-w)=0.39(1-w)$, the tail area approximately equals $e^{-1}$, and the head area approximately equals ${1-e^{-1}}=0.61$. The tail area at step $n$ is $\le {e^{-n(1-w)}}$. 
Where primarily the closest $n$ observations matter and the effect of the remaining observations can be ignored safely, then choose $w$ such that the tail area is sufficiently small. ## Weighted averages of functions The concept of weighted average can be extended to functions.[3] Weighted averages of functions play an important role in the systems of weighted differential and integral calculus.[4] ## Notes 1. James, Frederick (2006). Statistical Methods in Experimental Physics (2nd ed.). Singapore: World Scientific. p. 324. ISBN 981-270-527-9. 2. G. H. Hardy, J. E. Littlewood, and G. Pólya. Inequalities (2nd ed.), Cambridge University Press, ISBN 978-0-521-35880-4, 1988. 3. Jane Grossman, Michael Grossman, Robert Katz. The First Systems of Weighted Differential and Integral Calculus, ISBN 0-9771170-1-4, 1980. ### Further reading • Bevington, Philip R (1969). Data Reduction and Error Analysis for the Physical Sciences. New York, N.Y.: McGraw-Hill. OCLC 300283069.
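As a closing numerical check (not part of the article) of the "Dealing with variance" and "Vector-valued estimates" sections above: inverse-variance weights in the scalar case, and inverse-covariance weights in the vector case, reproducing the [0.9901, 0.9901] result. The scalar measurements and standard deviations are made-up values for illustration.

```python
import numpy as np

# --- Scalar case: weights w_i = 1 / sigma_i^2 (inverse-variance weighting) ---
x     = np.array([10.2, 9.8, 10.5])      # hypothetical measurements
sigma = np.array([0.5, 0.2, 1.0])        # their known standard deviations
w = 1.0 / sigma**2
xbar = np.sum(w * x) / np.sum(w)
var_xbar = 1.0 / np.sum(w)               # variance of the weighted mean
print(xbar, var_xbar)

# --- Vector case: replace 1/sigma^2 by the inverse covariance matrices ---
x1, S1 = np.array([1.0, 0.0]), np.diag([1.0, 100.0])
x2, S2 = np.array([0.0, 1.0]), np.diag([100.0, 1.0])
P1, P2 = np.linalg.inv(S1), np.linalg.inv(S2)
Sbar = np.linalg.inv(P1 + P2)            # covariance of the weighted mean
xbar_vec = Sbar @ (P1 @ x1 + P2 @ x2)
print(xbar_vec)                          # approximately [0.9901, 0.9901]
```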
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 110, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9016491174697876, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/21999-solved-equation-tangent-line.html
# Thread: 1. ## [SOLVED] equation of a tangent line Having a little bit of a problem.... Let c be a nonzero real number. What is the equation of the tangent line to the graph of the function y = sin(x/(x+c)) at the point x = 0? I get the derivative cos(1/(1+c)), but I really don't think that's right!!! Am I differentiating properly!!!??? 2. $\frac{d}{{dx}}\left( {\sin \left( {\frac{x}{{x + c}}} \right)} \right) = \cos \left( {\frac{x}{{x + c}}} \right)\left( {\frac{c}{{\left( {x + c} \right)^2 }}} \right)$ 3. simply07! Let $c$ be a nonzero real number. What is the equation of the tangent line to the graph of the function $y \:=\:\sin\left(\frac{x}{x+c}\right)$ at the point $x=0.$ I get the derivative $\cos\left(\frac{1}{1+c}\right)$ but I really don't think that's right! Am I differentiating properly? . . . . no You'd better review the Chain Rule ... and the Quotient Rule. 4. ok, I see the mistake... so now to find the value at x = 0, I would simply replace x with 0.... then my answer would be 1/c ??? this makes sense but is it right? thanks 5. Originally Posted by simply07 ok, I see the mistake... so now to find the value at x = 0, I would simply replace x with 0.... then my answer would be 1/c ??? this makes sense but is it right? thanks That's the correct slope of the tangent line. So your tangent line has the slope-intercept form of: $y = \frac{1}{c} \cdot x + b$ It must pass through the point (0, y(0)), so plug this point in to find b. -Dan
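For what it's worth, the slope worked out in the thread can be double-checked with a computer algebra system. This is just a verification sketch, not part of the original posts; it assumes SymPy is available.

```python
import sympy as sp

x, c = sp.symbols('x c', real=True)
y = sp.sin(x / (x + c))

slope = sp.simplify(sp.diff(y, x).subs(x, 0))   # derivative at x = 0
print(slope)                                    # 1/c

# Tangent line at x = 0: it passes through (0, y(0)) = (0, 0), so b = 0.
tangent = slope * x + y.subs(x, 0)
print(tangent)                                  # x/c
```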
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8896467685699463, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/rotational-dynamics+rotation
# Tagged Questions 2answers 232 views ### What is the proof that a force applied on a rigid body will cause it to rotate around its center of mass? Say I have a rigid body in space. I've read that if I during some short time interval apply a force on the body at some point which is not in line with the center of mass, it would start rotating ... 1answer 97 views ### Transform torque from Euler angles to infinitesimal Cartesian rotations For a certain pair of rigid bodies, I have the gradient of energy in terms of Euler angles. I want to transform this gradient to the gradient of energy in terms of rotations about the $x, y, z$ axes ... 1answer 175 views ### Non-commutative property of rotation Addition of angles are non-commutative in three dimensions. Hence some other angular vector quantities like angular velocity, momentum become non-commutative. What is the physical significance of this ... 3answers 257 views ### Aircraft Level Flight Trajectory An aircraft climbs to 15000 feet and enters 'level flight' phase. My basic knowledge of physics says that forces on the aircraft at this time are balanced - as seen in this diagram. ... 1answer 85 views ### What happens at the end of Coriolis Deflection Consider we launch a cannonball due south from a point at 45 degrees latitude in the Northern Hemisphere (e.g the point defined with the co-ordinate system on this diagram). The cannonball travels for ... 3answers 100 views ### Why is $F = mg - T$ in this case? The situation is as follows: I am told that $F_{net} = mg - T$ in this case, but doesn't that not take into account that $T$ isn't applied to the center of mass? Newton's second law is defined for ... 2answers 1k views ### Rotational kinetic energy during vertical circular motion of a particle Why is it not necessary to take into account rotational kinetic energy when using the Law of Conservation of Mechanical Energy to solve vertical circular motion problems? After all, the particle is ... 1answer 84 views ### Synchronising the Earth's rotation via mass redistribution How much material would have to be moved per year from mountain-tops to valleys in order to keep the Earth's rotation synchronised with UTC, thus removing the need for leap seconds to be periodically ... 2answers 685 views ### Will a boiled egg or a raw egg stop rolling first? If we roll a normal egg and a boiled egg at the same time on a floor 1) with friction 2) without friction which one will come to stop first (if they will stop at all) and why? Can anyone tell ... 1answer 180 views ### What “I” should use in Rotational Energy formula $(I \omega^2)/2$ $\text{Rotational Energy} = \frac{1}{2} I \omega^2$. What $I$ should be used? $I$ as a inertia tensor matrix = stepRotation * inverse moment of inertia * inverse stepRotation; Or I as moment of ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9178746342658997, "perplexity_flag": "head"}
http://stats.stackexchange.com/questions/33027/what-are-u-type-statistics
# What are U-type statistics? In an article, I recently came across the mention of first and second order U-type statistics without further detail. Does anyone know what U-type statistics are? References will be highly appreciated. - 5 – whuber♦ Jul 25 '12 at 19:42 Huber has found the Wikipedia description of U-statistics that come up in nonparametric statistics and were originally found by Hoeffding. That is what I assumed you meant also when I saw the question. I don't think the term U-type is common though. – Michael Chernick Jul 25 '12 at 20:26 It looks like it. Any hint at what first and second order could be? – gui11aume Jul 25 '12 at 20:30 2 Probably averages over functions taking one argument vs. averages over all pairs for functions taking two arguments. A. van der Vaart's Asymptotic Statistics has a chapter that provides a lovely introduction to this topic. – cardinal Jul 25 '12 at 20:43 @cardinal yes that would make sense in the context. I will look it up. Thanks! – gui11aume Jul 25 '12 at 20:46 ## 2 Answers From the comments and the answer I got that "U-type statistics" is jargon for "U-statistics". Here are a couple of elements taken from the reference provided by @cardinal, and in the previous answer. A U-statistic of degree or order $r$ is based on a permutation symmetric kernel function $h$ of arity $r$ $$h(x_1, ..., x_r): \mathbb{X}^r \rightarrow \mathbb{R},$$ and is the average of that function taken over all possible subsets of observations from the sample. More formally $$U = \frac{1}{\binom{n}{r}} \sum_{\Pi_r(n)}h(x_{\pi_1}, ..., x_{\pi_r}),$$ where the sum is taken over $\Pi_r$, the set of all unordered subsets of size $r$ chosen from $\{1, ..., n\}$. The interest of U-statistics is that they are asymptotically Gaussian (after suitable centering and scaling) provided $E \{ h^2(X_1, ..., X_r) \} < \infty$. Example 1: The sample mean is a first order U-statistic with $h(x) = x$. Example 2: The signed rank statistic is a second order U-statistic with $h(x_1, x_2) = 1_{\mathbb{R}^+}(x_1+x_2)$ (the function that is equal to $1$ if $x_1 + x_2 > 0$, and $0$ otherwise). $$U = \frac{1}{\binom{n}{2}} \sum_{i=1}^{n-1} \sum_{j=i+1}^n 1_{\mathbb{R}^+}(x_i+x_j)$$ is the fraction of pairs $(x_i, x_j)$ from the sample with positive sum $x_i+x_j > 0$ and can be used as a test statistic for investigating whether the distribution of the observations is located at 0. Example 3: The domain $\mathbb{X}$ of $h$ need not be real. Kendall's $\tau$ statistic is a second order U-statistic with $h((x_1, y_1), (x_2, y_2)) = 2 \cdot 1_{\mathbb{R}^+}((y_2-y_1)(x_2-x_1)) - 1$. $$\tau = \frac{2}{\binom{n}{2}} \sum_{i=1}^{n-1} \sum_{j=i+1}^n 1_{\mathbb{R}^+}((y_j-y_i)(x_j-x_i)) - 1$$ is a measure of dependence between $X$ and $Y$ and is determined by the number of concordant pairs $(x_i, y_i)$ and $(x_j, y_j)$ in the observations. - We have established that U-statistics are what the OP is looking for. I will address his second question about the order of U-statistics. The theory of U-statistics can be found in many books on nonparametrics and I am sure also in the various statistical encyclopedias. Here is a nice article by Tom Ferguson that summarizes the theory. I think it is actually a class tutorial on it. Here is what he says about order. The rest you can find in the paper 5. Degeneracy. When using U-statistics for testing hypotheses, it occasionally happens that at the null hypothesis, the asymptotic distribution has variance zero. 
This is a degenerate case, and we cannot use Theorem 2 to find approximate cutoff points. The general definition of degeneracy for a U-statistic of order $m$ and variances $\sigma_1^2 \leq \sigma_2^2 \leq \dots \leq \sigma_m^2$ given by (19) is as follows. Definition 3. We say that a U-statistic has a degeneracy of order $k$ if $\sigma_1^2 = \dots = \sigma_k^2 = 0$ and $\sigma^2_{k+1} > 0$. http://www.math.ucla.edu/~tom/Stat200C/Ustat.pdf - @gui11aume Thanks for the nice editing job. I just fixed one thing (an extra 1 at the end after k+1). – Michael Chernick Jul 25 '12 at 21:19 Oops, sorry about that. Thanks for spotting. – gui11aume Jul 25 '12 at 21:22 Thank you for the reference, it was very useful. – gui11aume Jul 26 '12 at 17:33 1 @gui11aume. Do you know Tom Ferguson? He was a UCLA professor way back in the late 1970s when I was a graduate student. At Stanford we used his book in our graduate math stat course. It was a really good text and I think he writes very well. – Michael Chernick Jul 26 '12 at 18:14
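To make Example 3 of the first answer concrete, here is a small sketch (mine, not from the thread) that computes Kendall's $\tau$ directly as a second order U-statistic, i.e. the average of the kernel $h = 2\cdot 1[(y_j-y_i)(x_j-x_i)>0]-1$ over all unordered pairs, and compares it with `scipy.stats.kendalltau`. With continuous data (no ties) the two values coincide; the simulated data are an arbitrary choice.

```python
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 0.5 * x + rng.normal(size=50)        # correlated, continuous data (no ties)

# Second-order U-statistic: average the pairwise kernel over all unordered pairs.
def kernel(i, j):
    return 2.0 * ((x[j] - x[i]) * (y[j] - y[i]) > 0) - 1.0

pairs = list(combinations(range(len(x)), 2))
tau_u = sum(kernel(i, j) for i, j in pairs) / len(pairs)

tau_scipy, _ = stats.kendalltau(x, y)
print(tau_u, tau_scipy)                   # the two values agree (up to rounding)
```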
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 26, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.947344958782196, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-algebra/27807-find-linear-transformations.html
# Thread: 1. ## Find linear transformations Find linear transformations $U,T: F^2 \rightarrow F^2$ such that $UT = T_{0}$, but $TU \neq T_{0}$. Find matrices A and B such that AB = 0 but BA is not equal to 0. Solution. I'm a bit confused in this problem, I'm not quite sure what I can write for U and T.
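The thread stops at the statement of the problem. A standard example, not taken from the thread, is sketched below with a quick NumPy verification: a nilpotent matrix $A$ and a projection $B$ satisfy $AB = 0$ but $BA \neq 0$. Taking $U$ and $T$ to be left multiplication by $A$ and $B$ respectively then gives $UT = T_0$ while $TU \neq T_0$.

```python
import numpy as np

# Candidate matrices: A is nilpotent, B is the projection onto the first coordinate.
A = np.array([[0, 1],
              [0, 0]])
B = np.array([[1, 0],
              [0, 0]])

print(A @ B)   # [[0, 0], [0, 0]]  -> AB = 0
print(B @ A)   # [[0, 1], [0, 0]]  -> BA != 0
```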
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9076859951019287, "perplexity_flag": "head"}
http://mathoverflow.net/questions/10934/class-number-measuring-the-failure-of-unique-factorization/10953
## Class number measuring the failure of unique factorization The statement that the class number measures the failure of the ring of integers to be a ufd is very common in books. ufd iff class number is 1. This inspires the following question: Is there a quantitative statement relating the class number of a number field to the failure of unique factorization in the maximal order - other than $h = 1$ iff $R$ is a ufd? In what sense does a maximal order of class number 3 "fail more" to be a ufd than a maximal order of class number 2? Is it true that an integer in a field of greater class number will have more distinct representations as the product of irreducible elements than an integer in a field with smaller class number? - 1 I'd say that the group tells you in what different ways does UFD fail in your ring, rather than that its size measures how bad the failure is. – Mariano Suárez-Alvarez Jan 6 2010 at 17:09 @Mariano: it sounds like you are hinting at the group structure of the class group as a measure of failure. This sounds as interesting as the original question. Can you elaborate on such a relationship? – Dror Speiser Jan 6 2010 at 17:24 ## 5 Answers Theorem (Carlitz, 1960): The ring of integers $\mathbb{Z}_F$ of an algebraic number field $F$ has class number at most $2$ iff for all nonzero nonunits $x \in \mathbb{Z}_F$, any two factorizations of $x$ into irreducibles have the same number of factors. A proof of this (and a 1990 generalization of Valenza) can be found in $\S 22.3$ of my commutative algebra notes. This paper has spawned a lot of research by ring theorists on half-factorial domains: these are rings in which every nonzero nonunit factors into irreducibles and such that the number of irreducible factors is independent of the factorization. To be honest though, I think there are plenty of number theorists who think of the class number as measuring the failure of unique factorization who don't know Carlitz's theorem (or who know it but are not thinking of it when they make that kind of statement). Here is another try [edit: this is essentially the same as Olivier's response, but said differently; I think it is worthwhile to have both]: when trying to solve certain Diophantine problems (over the integers), one often gets nice results if the class number of a certain number field is prime to a certain quantity. The most famous example of this is Fermat's Last Theorem, which is easy to prove for an odd prime $p$ for which the class number of $\mathbb{Q}(\zeta_p)$ is prime to $p$: a so-called "regular" prime. For an application to Mordell equations $y^2 + k = x^3$, see http://math.uga.edu/~pete/4400MordellEquation.pdf Especially see Section 4, where the class of rings "of class number prime to 3" is defined axiomatically and applied to the Mordell equation. (N.B.: These notes are written for an advanced undergraduate / first year grad student audience.) The Mordell equation is probably a better example than the Fermat equation because: (i) the argument in the "regular" case is more elementary than FLT in the regular case (the latter is too involved to be done in a first course), and (ii) when the "regularity" hypothesis is dropped, it is not just harder to prove that there are no nontrivial solutions, it is actually very often false! -
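As a concrete illustration of the Carlitz theorem quoted above (a standard example, not taken from the answer): in $\mathbb{Z}[\sqrt{-5}]$, which has class number 2, the element 6 factors in two genuinely different ways, yet both factorizations have the same length, as the theorem predicts. A small numerical sanity check of the two factorizations and their norms:

```python
# Elements a + b*sqrt(-5) represented as pairs (a, b); norm N(a, b) = a^2 + 5*b^2.
def mul(u, v):
    (a, b), (c, d) = u, v
    return (a * c - 5 * b * d, a * d + b * c)

def norm(u):
    a, b = u
    return a * a + 5 * b * b

two, three = (2, 0), (3, 0)
p, q = (1, 1), (1, -1)           # 1 + sqrt(-5) and 1 - sqrt(-5)

print(mul(two, three))            # (6, 0): 2 * 3 = 6
print(mul(p, q))                  # (6, 0): (1 + sqrt(-5))(1 - sqrt(-5)) = 6
print(norm(two), norm(three), norm(p), norm(q))   # 4 9 6 6
# No element of Z[sqrt(-5)] has norm 2 or 3 (a^2 + 5b^2 never equals 2 or 3),
# so all four factors are irreducible: two distinct factorizations of 6,
# but both of length 2, consistent with class number 2 (Carlitz).
```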
Here is a partial answer. In a UFD, the following statement is true: "either an element is prime or you can write it as a nontrivial product". In a Dedekind ring with finite class number h, it is not true, but you have the following "quantitative" statement: "either an element is prime or you can write its h-th power as a nontrivial product". - do you mean "unique up to order and units" nontrivial product? – Dror Speiser Jan 6 2010 at 17:18 Of course you do. And you are right! Unique factorization of ideals + h-th power is principal. Spot on. – Dror Speiser Jan 6 2010 at 17:20 By nontrivial, I meant without using units, of course! – Olivier Benoist Jan 6 2010 at 17:21 For Dedekind domains, like the integers of a number field, PID iff UFD. There's definitely a quantitative statement relating the class number to failure of PIDness: the higher the class number, the smaller the density of principal prime ideals amongst the prime ideals; this is just Cebotarev plus standard facts about the Hilbert class field. - Class number $h(K)$ is exactly the quantitative measure of the failure of unique factorization: by its definition it measures "how many more ideals are there compared to numbers". To clarify: decomposition is always unique for ideals, so if the only ideals you have are numbers (that is, $h = 1$), then you don't have any problem decomposing numbers (so, you have a PID). Furthermore, the more "leftover" ideals you have (ideal class group), the more possibilities of writing different decompositions of numbers exist. This vague statement can be turned into some precise ones. If you have different factorizations of a number $x$, this means the prime ideals in the decomposition $x = \mathfrak p_1\mathfrak p_2\dots\mathfrak p_n$ are grouped in a different way. You can establish from here the bound on the number of possible different factorizations; maybe (not sure here) it can be shown to be no more than $C(h)$. Another theorem that follows (mentioned by Olivier): $x^h$ must always have a decomposition into true numbers rather than ideals. Indeed, $x^h = \mathfrak p_1^h\mathfrak p_2^h\dots\mathfrak p_n^h$ and you need to use the fact that any element $p$ in an abelian group of size $h$ has the property $p^h = 1$. - there is a theorem: we know that for every number field $K$ (with class number not necessarily 1) there exists a unique factorization domain $\mathcal R_K$, $\mathcal O_K\subset \mathcal R_K\subset K$ such that its group of invertible elements is finitely generated. What is the reference for this result? - This follows immediately from proposition 11.6 in Algebraic Number Theory, Neukirch, English first edition, by taking any $X$ such that the map $\bigoplus_{\mathfrak{p}\not\in X}K^*/O_\mathfrak{p}^*\rightarrow Cl(O_K)$ is surjective, which is possible since $Cl(O_K)$ is finite. – Dror Speiser Jul 21 at 13:04 3 You can take for $R_K$ any ring of $S$-integers ${\mathcal O}_{K,S}$ where $S$ is a finite set of primes from ${\mathcal O}_K$ (plus the archimedean places) whose ideal classes generate the class group of $K$. – KConrad Jul 22 at 3:48 thank you...... – quaestion Jul 23 at 19:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 34, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9271197319030762, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-math-topics/204432-angle-between-two-vectors.html
# Thread: 1. ## Angle Between Two Vectors Greetings, I would like to ask whether the cosine of the angle between two vectors can be derived from the knowledge of directional cosines. If that is the case, how can I prove the statement? I am attaching a scanned page from the book Vector and Tensor Analysis by Hay, which illustrates the concept beginning with the phrase, "By a formula of analytic geometry..." and writes the formula, around which I have put a red box, below this statement. I have put a red rectangular box to emphasize my point. Thanks 2. ## Re: Angle Between Two Vectors The theorem in the red box is pretty much a restatement of (7.1). Not particularly helpful. To find $\cos \theta$, draw vectors $\vec{a}$ and $\vec{b}$, along with their difference, $\vec{a} - \vec{b}$, forming a triangle. Note that the dot product obeys the distributive property, that is $(\vec{a} - \vec{b})^2 = (\vec{a})^2 + (\vec{b})^2 - 2(\vec{a} \cdot \vec{b}) = ||\vec{a}||^2 + ||\vec{b}||^2 - 2(\vec{a} \cdot \vec{b})$. However, by the law of cosines, $(\vec{a} - \vec{b})^2 = ||\vec{a}||^2 + ||\vec{b}||^2 - 2||\vec{a}|| \hspace{1 mm}||\vec{b}|| \cos \theta$ Set these two expressions equal, cancel stuff, and you will obtain $\vec{a} \cdot \vec{b} = ||\vec{a}||\hspace{1 mm}||\vec{b}|| \cos \theta$, or $\cos \theta = \frac{\vec{a} \cdot \vec{b}}{||\vec{a}||\hspace{1 mm}||\vec{b}||}$ 3. ## Re: Angle Between Two Vectors Thanks, you are right, but I think you are emphasizing that no such formula exists in analytic geometry, if I am not mistaken. However, I still wonder why the author wrote, "By a formula of analytical geometry..." Do you have an explanation?
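Regarding the original question, whether $\cos\theta$ follows from the direction cosines: since each direction cosine is just a component of the corresponding unit vector, the dot-product formula derived in post 2 reduces to the sum of products of direction cosines, $\cos\theta = l_1 l_2 + m_1 m_2 + n_1 n_2$. A small numerical check (mine, with arbitrary vectors):

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])
b = np.array([2.0, -1.0, 2.0])

# Direction cosines are the components of the unit vectors.
l1, m1, n1 = a / np.linalg.norm(a)
l2, m2, n2 = b / np.linalg.norm(b)

cos_from_dot     = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
cos_from_cosines = l1 * l2 + m1 * m2 + n1 * n2
print(cos_from_dot, cos_from_cosines)   # identical values
```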
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9356836676597595, "perplexity_flag": "head"}
http://mathoverflow.net/questions/14888?sort=oldest
## Compact Hausdorff and C^*-algebra “objects” in a category. This is yet more on "algebraic objects in functional analysis". Since Compact Hausdorff spaces are algebraic over Set, it seems to follow that one can find "Compact Hausdorff objects" in any suitable category representing functors from that category to CompHaus. An obvious such functor is the spectrum of a unital $C^*$-algebra. This seems to imply that $\mathbb{C}$ is a compact Hausdorff object in the category of unital $C^*$-algebras. So: Question 1: Is this right? Followed by the obvious: Question 2: Are there any other interesting "Compact Hausdorff" objects in other categories? Similarly, the category of $C^\ast$-algebras is algebraic, and whilst the category of Banach spaces isn't algebraic, it embeds in an algebraic theory (of totally convex spaces). Again, to any compact Hausdorff space one can assign its $C^\ast$-algebra of continuous functions to $\mathbb{C}$. This suggests that $\mathbb{C}$ is a "$C^\ast$-algebra" object in CompHaus - except that $\mathbb{C}$ is not a compact Hausdorff space. However, we have a way out due to the way that $C^\ast$-algebras are algebraic: it's the unit ball that we should be thinking of and this is continuous functions to the closed unit disc in $\mathbb{C}$, which is compact Hausdorff. Thus $\{z \in \mathbb{C} : |z| \le 1\}$ seems to be a $C^\ast$-algebra object in Compact Hausdorff spaces. Again: Question 3: Is this right? and Question 4: Are there any other interesting "$C^\ast$-algebra" objects in other categories? and Question 5: Are there any "Banach space" objects (or "totally convex space" objects) floating around anywhere? - Andrew, I'm having trouble parsing the sentence that makes up your second paragraph. So, I don't understand what you mean by a "compact Hausdorff object" (though maybe I can guess). Could you clarify? – Tom Leinster Feb 10 2010 at 11:24 I was extrapolating from "group object" to compact Hausdorff so a "compact Hausdorff object" would represent a functor to CompHaus just as a "group object" represents a functor to Grp. As this is all reasonably new to me, perhaps there's something that is obvious but that I haven't yet "got" which says that whilst "group objects" are fine, "compact Hausdorff objects" aren't. Is there? – Andrew Stacey Feb 10 2010 at 11:42 Are you allowed to ask five (5!) questions in a single post? ;) – Chris Schommer-Pries Feb 10 2010 at 15:53 Of course I am! Well, they're all reflections of the same question so I figured it was better to ask them all in one go than clutter up the board with separate ones. – Andrew Stacey Feb 10 2010 at 18:54 +1 for linking to a nonexistent nLab page! – Reid Barton Feb 10 2010 at 20:37 ## 3 Answers Question 1: If I understand you correctly, you're proposing that $\mathbb{C}$ should be a compact Hausdorff object in some category because it represents a functor from that category to the category CH of compact Hausdorff spaces (in something like the sense that the functor $Hom(-, \mathbb{C})$ into Set factors through the forgetful functor from CH to Set). But I don't see why this should be sufficient to make $\mathbb{C}$ a compact Hausdorff object. 
That is, presumably, from the approach of functorial semantics, a compact Hausdorff object in a category C should be a product-preserving functor from L to C, where L is the dual of the Kleisli category for the ultrafilter monad on Set (that is, L is the Lawvere theory whose category of (Set-)models is the category of compact Hausdorff spaces). I can see how, more generally, for any Lawvere theory L and category C, every C-model of L (i.e., a product-preserving functor F from L to C) induces a representable functor Hom(-, F(1)) from C to Set which factors through the forgetful functor from Set-models of L to Set. But it's not obvious to me that the converse of this holds as well (that every representable functor from C to Set with this factorization property arises from some C-model of L). Perhaps I'm missing something and your reasoning for $\mathbb{C}$ being a compact Hausdorff object is something more than this. Perhaps I'm hopelessly confused. But, tentatively, I think the answer to question 1 is "No" or at least "Not necessarily". (Edit: As seen below, the correspondence does go both ways, so the last line is retracted, leaving the second-to-last line...)

- But it all works if I replace "compact Hausdorff" by "group", doesn't it? Maybe I'm missing something there as well. If so, perhaps I should ask the more basic question about the difference between finitary and infinitary theories first. – Andrew Stacey Feb 10 2010 at 11:43
Egads, no, you're right. The correspondence does go both ways. I failed to see it before, being so used to viewing things one way, but if M is a monad on Set, then product-preserving functors from the dual of M's Kleisli category to C (what I was thinking of as an M object) are in correspondence with tuples of the form <contravariant functor F from C to the category of Set-algebras of M, object c in C such that the product of any set of copies of c exists, and natural isomorphism between Hom_C(-, c) and UnderlyingSet(F(-))> (what you were thinking of as an M object). So, I retract my "No". – Sridhar Ramesh Feb 10 2010 at 19:31
(Why? The latter amounts to just putting the structure of a Set-algebra for M on Hom(x, c) for each x, such that precomposition is a homomorphism of such algebras. For every element in M(k), thought of as a k-ary operation, we obtain a morphism from c^k to c by applying that operation to the k many projections in Hom(c^k, c), from which the result of that operation on arbitrary Hom(x, c) is determined... [continued in next comment] – Sridhar Ramesh Feb 10 2010 at 19:50
Such morphisms will automatically satisfy the appropriate commutative diagrams (by virtue of the appropriate equations holding in each algebra Hom(c^k, c)). Thus, they can be combined into what I thought of as an M object. I haven't sat down and checked all the details, but I am quite confident now that it works and I was wrong.) – Sridhar Ramesh Feb 10 2010 at 19:51
Thanks for the clarifications. I wondered if there was some issue with finite/infinite stuff that I was completely unaware of. I feel reassured that at least I didn't miss something completely obvious - should I be feeling that? Or is there still something I've missed in my setup? – Andrew Stacey Feb 10 2010 at 20:48

This "answer" doesn't even get as far as answering question 1, but I'll go ahead anyway.
All I want to say is how I think "compact Hausdorff space object" should be defined. This should be equivalent to what Sridhar said, though I haven't stopped to think about it. Let $\mathcal{E}$ be a category with small products. A compact Hausdorff object in $\mathcal{E}$ should be an object $X$ of $\mathcal{E}$ together with, for each set $I$ and ultrafilter $U$ on $I$, a function $\xi_U: X^I \to X$ satisfying some axioms that I'm too lazy to write down, but will explain a bit in a moment. When $\mathcal{E} =$ Set, you can think of $\xi_U$ as specifying the $U$-limit of each $I$-indexed family of points of $X$. (That there's exactly one limit point is the compact Hausdorff property.) One axiom tells you what happens when $U$ is the principal ultrafilter on some $i \in I$: then $\xi_U$ sends a family x to $x_i$. A second says something about limits of limits. A third (and I think there are only three) says something about what happens when you have a map $I \to J$. This formulation doesn't come out of thin air, you won't be surprised to hear---there's a systematic process for taking a (suitable kind of) monad on Set and producing a definition of its "algebras" in any category with products. But I won't go into that now. - That's very much what I thought should happen. It sounds from this as though there is no problem with "compact Hausdorff objects" in any category with small products - do I have that right? In which case, there should be an equivalence between such and representable functors to CompHaus (my interpretation of Sridhar's answer is that he says this is only guaranteed in one direction, I expected both). So then (part of) my question is as to the existence of naturally occurring such objects - for some definition of "naturally", naturally. – Andrew Stacey Feb 10 2010 at 12:26 I agree that what you (Tom) said is presumably equivalent to what I said. However, I have a question about the part where you say "(suitable kind of) monad on Set". What do you mean by "suitable kind of"? It seems to me this general idea should work for every monad on Set. [That is, pulling it through the correspondence between monads on Set and categories with set-sized products generated by a single object [the (Set-)algebras of the former also corresponding to the Set-models of the latter], and then using the latter to give an account of such algebras in other categories with products] – Sridhar Ramesh Feb 10 2010 at 18:38 Sridhar, to be honest I was just hedging. I wasn't certain that it would work for all monads on Set, and I didn't want to overstate my case. But maybe it does work for all of them, in the way you describe. – Tom Leinster Feb 11 2010 at 0:59 Actually, it occurs to me that there's a much easier way to describe this than the detour through infinitary Lawvere theories I've been using. Given a monad $M$ on $Set$, an algebra for this in the category $C$ should be an algebra (in the standard sense) for the monad $M^{C^{op}}$ whose carrier is in the range of the Yoneda embedding, where $M^{C^{op}}$ is the monad on presheaves on $C$ induced by postcomposition with $M$. This should be equivalent to what we've both been saying, but seems to me now much clearer (though perhaps others will differ). – Sridhar Ramesh Feb 11 2010 at 1:38 The "Bohrification" paper arXiv:0905.2275 may be relevant to Question 4. As I understand, they discuss the notion of $C^\ast$-algebra objects in a given topos. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 44, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9558432698249817, "perplexity_flag": "middle"}
http://medlibrary.org/medwiki/Airy_disc
# Airy disc

Computer-generated image of an Airy disk. The gray scale intensities have been adjusted to enhance the brightness of the outer rings of the Airy pattern.

Surface plot of intensity in an Airy disk.

Real Airy disk created by passing a laser beam through a pinhole aperture.

In optics, the Airy disk (or Airy disc) and Airy pattern are descriptions of the best focused spot of light that a perfect lens with a circular aperture can make, limited by the diffraction of light. The diffraction pattern resulting from a uniformly-illuminated circular aperture has a bright region in the center, known as the Airy disk, which together with the series of concentric bright rings around it is called the Airy pattern. Both are named after George Biddell Airy. The disk and rings phenomenon had been known prior to Airy; John Herschel described the appearance of a bright star seen through a telescope under high magnification for an 1828 article on light for the Encyclopedia Metropolitana:

...the star is then seen (in favourable circumstances of tranquil atmosphere, uniform temperature, &c.) as a perfectly round, well-defined planetary disc, surrounded by two, three, or more alternately dark and bright rings, which, if examined attentively, are seen to be slightly coloured at their borders. They succeed each other nearly at equal intervals round the central disc....[1]

However, Airy wrote the first full theoretical treatment explaining the phenomenon (his 1835 "On the Diffraction of an Object-glass with Circular Aperture").[2]

Mathematically, the diffraction pattern is characterized by the wavelength of light illuminating the circular aperture, and the aperture's size. The appearance of the diffraction pattern is additionally characterized by the sensitivity of the eye or other detector used to observe the pattern.

The most important application of this concept is in cameras and telescopes. Owing to diffraction, the smallest point to which a lens or mirror can focus a beam of light is the size of the Airy disk. Even if one were able to make a perfect lens, there is still a limit to the resolution of an image created by this lens. An optical system in which the resolution is no longer limited by imperfections in the lenses but only by diffraction is said to be diffraction limited. The Airy disk is of importance in physics, optics, and astronomy.

## Size

Far away from the aperture, the angle at which the first minimum occurs, measured from the direction of incoming light, is given by the approximate formula:

$\sin \theta \approx 1.22 \frac{\lambda}{d}$

or, for small angles, simply

$\theta \approx 1.22 \frac{\lambda}{d}$

where θ is in radians, λ is the wavelength of the light and d is the diameter of the aperture.
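As a quick numerical illustration of the small-angle formula above, the following sketch (plain Python; the 2.4 m aperture and 550 nm wavelength are example values chosen here, not figures taken from the article) computes the angular radius of the first dark ring:

```python
import math

wavelength = 550e-9   # metres; mid-visible light (example value)
diameter = 2.4        # metres; aperture diameter (example value)

# Angle of the first minimum, using the small-angle form theta ~ 1.22 * lambda / d
theta = 1.22 * wavelength / diameter
theta_arcsec = math.degrees(theta) * 3600.0

print(f"first minimum at {theta:.3e} rad, i.e. about {theta_arcsec:.3f} arcsec")
# For these example numbers the result is roughly 2.8e-7 rad (~0.06 arcsec).
```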
Airy wrote this as $s = \frac{2.76}{a}$ where s was the angle of first minimum in seconds of arc, a was the radius of the aperture in inches, and the wavelength of light was assumed to be 0.000022 inches (the mean of visible wavelengths).[3] The Rayleigh criterion for barely resolving two objects that are point sources of light, such as stars seen through a telescope, is that the center of the Airy disk for the first object occurs at the first minimum of the Airy disk of the second. This means that the angular resolution of a diffraction limited system is given by the same formulae. However, while the angle at which the first minimum occurs (which is sometimes described as the radius of the Airy disk) depends only on wavelength and aperture size, the appearance of the diffraction pattern will vary with the intensity (brightness) of the light source. Because any detector (eye, film, digital) used to observe the diffraction pattern can have an intensity threshold for detection, the full diffraction pattern may not be apparent. In astronomy, the outer rings are frequently not apparent even in a highly magnified image of a star. It may be that none of the rings are apparent, in which case the star image appears as a disk (central maximum only) rather than as a full diffraction pattern. Furthermore, fainter stars will appear as smaller disks than brighter stars, because less of their central maximum reaches the threshold of detection.[4] While in theory all stars or other "point sources" of a given wavelength and seen through a given aperture have the same Airy disk radius characterized by the above equation (and the same size diffraction pattern), differing only in intensity (the "height" of the surface plot at upper right), the appearance is that fainter sources appear as smaller disks, and brighter sources appear as larger disks.[5] This was described by Airy in his original work: The rapid decrease of light in the successive rings will sufficiently explain the visibility of two or three rings with a very bright star and the non-visibility of rings with a faint star. The difference of the diameters of the central spots (or spurious disks) of different stars ... is also fully explained. Thus the radius of the spurious disk of a faint star, where light of less than half the intensity of the central light makes no impression on the eye, is determined by [s = 1.17/a], whereas the radius of the spurious disk of a bright star, where light of 1/10 the intensity of the central light is sensible, is determined by [s=1.97/a].[6] Despite this feature of Airy's work, the radius of the Airy disk is often given as being simply the angle of first minimum, even in standard textbooks.[7] In reality, the angle of first minimum is a limiting value for the size of the Airy disk, and not a definite radius. ## Examples Log-log plot of aperture diameter vs angular resolution at the diffraction limit for various light wavelengths compared with various astronomical instruments. For example, the blue star shows that the Hubble Space Telescope is almost diffraction-limited in the visible spectrum at 0.1 arcsecs, whereas the red circle shows that the human eye should have a resolving power of 20 arcsecs in theory, though normally only 60 arcsecs. ### Cameras If two objects imaged by a camera are separated by an angle small enough that their Airy disks on the camera detector start overlapping, the objects can not be clearly separated any more in the image, and they start blurring together. 
Two objects are said to be just resolved when the maximum of the first Airy pattern falls on top of the first minimum of the second Airy pattern (the Rayleigh criterion). Therefore the smallest angular separation two objects can have before they significantly blur together is given as stated above by $\sin \theta = 1.22\ \frac{\lambda}{d}$ Thus, the ability of the system to resolve detail is limited by the ratio of λ/d. The larger the aperture for a given wavelength, the finer the detail which can be distinguished in the image. Since θ is small we can approximate this by $\frac{x}{f} = 1.22\ \frac{\lambda}{d}$ where $x$ is the separation of the images of the two objects on the film and $f$ is the distance from the lens to the film. If we take the distance from the lens to the film to be approximately equal to the focal length of the lens, we find $x = 1.22\ \frac{\lambda f}{d}$ but $\frac{f}{d}$ is the f-number of a lens. A typical setting for use on an overcast day would be f/8.[8] For blue visible light, the wavelength λ is about 420 nanometers.[9] This gives a value for $x$ of about 4 µm. In a digital camera, making the pixels of the image sensor smaller than this would not actually increase image resolution. ### The human eye Longitudinal sections through a focused beam with (top) negative, (center) zero, and (bottom) positive spherical aberration. The lens is to the left. The fastest f-number for the human eye is about 2.1,[10] corresponding to a diffraction-limited point spread function with approximately 1 μm diameter. However, at this f-number, spherical aberration limits visual acuity, while a 3 mm pupil diameter (f/5.7) approximates the resolution achieved by the human eye.[11] The maximum density of cones in the human fovea is approximately 170,000 per square millimeter,[12] which implies that the cone spacing in the human eye is about 2.5 μm, approximately the diameter of the point spread function at f/5. ### Focused laser beam A circular laser beam with uniform intensity across the circle (a flat-top beam) focused by a lens will form an Airy disk pattern at the focus. The size of the Airy disk determines the laser intensity at the focus. ### Aiming sight Some weapon aiming sights (e.g. FN FNC) require the user to align a peep sight (rear, nearby sight, i.e. which will be out of focus) with a tip (which should be focussed and overlaid on the target) at the end of the barrel. When looking through the peep sight, the user will notice an Airy disk that will help center the sight over the pin.[13] ## Conditions for observation Light from a uniformly illuminated circular aperture (or from a uniform, flattop beam) will exhibit an Airy diffraction pattern far away from the aperture due to Fraunhofer diffraction (far-field diffraction). The conditions for being in the far field and exhibiting an Airy pattern are: the incoming light illuminating the aperture is a plane wave (no phase variation across the aperture), the intensity is constant over the area of the aperture, and the distance R from the aperture where the diffracted light is observed (the screen distance) is large compared to the aperture size, and the radius $a$ of the aperture is not too much larger than the wavelength $\lambda$ of the light. The last two conditions can be formally written as $R > a^2 / \lambda$ . In practice, the conditions for uniform illumination can be met by placing the source of the illumination far from the aperture. 
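To make the far-field criterion above concrete, here is a minimal sketch (Python; the pinhole radius, wavelength and screen distances are illustrative assumptions) that checks whether a given observation distance satisfies $R > a^2 / \lambda$:

```python
def is_far_field(aperture_radius_m, wavelength_m, screen_distance_m):
    """True if the screen distance satisfies the Fraunhofer condition R > a^2 / lambda."""
    return screen_distance_m > aperture_radius_m ** 2 / wavelength_m

a = 0.5e-3    # aperture radius in metres (a 1 mm diameter pinhole; example value)
lam = 500e-9  # wavelength in metres (example value)

print(a ** 2 / lam)                  # minimum far-field distance, here 0.5 m
print(is_far_field(a, lam, 2.0))     # True: a 2 m screen distance is comfortably far-field
print(is_far_field(a, lam, 0.1))     # False: 10 cm is too close for this pinhole
```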
If the conditions for far field are not met (for example if the aperture is large), the far-field Airy diffraction pattern can also be obtained on a screen much closer to the aperture by using a lens right after the aperture (or the lens itself can form the aperture). The Airy pattern will then be formed at the focus of the lens rather than at infinity. Hence, the focal spot of a uniform circular laser beam (a flattop beam) focused by a lens will also be an Airy pattern. In a camera or imaging system an object far away gets imaged onto the film or detector plane by the objective lens, and the far field diffraction pattern is observed at the detector. The resulting image is a convolution of the ideal image with the Airy diffraction pattern due to diffraction from the iris aperture or due to the finite size of the lens. This leads to the finite resolution of a lens system described above. ## Mathematical details Diffraction from a circular aperture. The Airy pattern is observable when $R > a^2 / \lambda$ (i.e. in the far field) Diffraction from an aperture with a lens. The far field image will (only) be formed at the screen one focal length away, where R=f (f=focal length). The observation angle $\theta$ stays the same as in the lensless case. The intensity of the Fraunhofer diffraction pattern of a circular aperture (the Airy pattern) is given by the squared modulus of the Fourier transform of the circular aperture: $I(\theta) = I_0 \left ( \frac{2 J_1(ka \sin \theta)}{ka \sin \theta} \right )^2 = I_0 \left ( \frac{2 J_1(x)}{x} \right )^2$ where $I_0$ is the maximum intensity of the pattern at the Airy disc center, $J_1$ is the Bessel function of the first kind of order one, $k = {2 \pi}/{\lambda}$ is the wavenumber, $a$ is the radius of the aperture, and $\theta$ is the angle of observation, i.e. the angle between the axis of the circular aperture and the line between aperture center and observation point. $x = ka \sin \theta = \frac{2 \pi a}{\lambda} \frac{q}{R} = \frac{\pi q}{\lambda N}$, where q is the radial distance from the optics axis in the observation (or focal) plane and $N =R/d$ (d=2a is the aperture diameter, R is the observation distance) is the f-number of the system. If a lens after the aperture is used, the Airy pattern forms at the focal plane of the lens, where R = f (f is the focal length of the lens). Note that the limit for $\theta \rightarrow 0$ (or for $x \rightarrow 0$) is $I(0) = I_0$. The zeros of $J_1(x)$ are at $x = ka \sin \theta \approx 0, 3.8317, 7.0156, 10.1735, 13.3237, 16.4706...$. From this follows that the first dark ring in the diffraction pattern occurs where $ka \sin{\theta} = 3.8317...$, or $\sin \theta \approx \frac{3.83}{ka} = \frac{3.83 \lambda}{2 \pi a} = 1.22 \frac{\lambda}{2a} = 1.22 \frac{\lambda}{d}$. The radius $q_1$ of the first dark ring on a screen is related to $\theta$ and to the f-number by $q_1 = R \sin \theta \approx 1.22 {R} \frac{\lambda}{d} = 1.22 \lambda N$ where R is the distance from the aperture, and the f-number N = R/d is the ratio of observation distance to aperture size. The half maximum of the central Airy disk (where $J_1(x)= {x} /{2 \sqrt{2}}$) occurs at $x = 1.61633...$; the 1/e2 point (where $J_1(x)= {x} /{2 e}$) occurs at $x = 2.58383...$, and the maximum of the first ring occurs at $x = 5.13562...$. 
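The intensity formula and the quoted zeros of $J_1$ can be reproduced numerically; the sketch below assumes SciPy is available for the Bessel functions (a tooling choice made here, not something the article relies on):

```python
import numpy as np
from scipy.special import j1, jn_zeros

def airy_intensity(x, i0=1.0):
    """I(x) = I0 * (2 * J1(x) / x)**2, with the x -> 0 limit handled explicitly."""
    x = np.asarray(x, dtype=float)
    out = np.full_like(x, i0)
    nonzero = x != 0
    out[nonzero] = i0 * (2.0 * j1(x[nonzero]) / x[nonzero]) ** 2
    return out

print(jn_zeros(1, 3))                                 # ~[3.8317, 7.0156, 10.1735]
print(airy_intensity(np.array([0.0, 1.0, 3.8317])))   # centre value, an intermediate point, ~0 at the first dark ring

# First dark ring radius on a screen, q1 = 1.22 * lambda * N (example values: 550 nm, f/8)
lam, N = 550e-9, 8.0
print(1.22 * lam * N)                                 # roughly 5.4 micrometres
```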
The intensity $I_0$ at the center of the diffraction pattern is related to the total power $P_0$ incident on the aperture by[14] $I_0 = \frac{\Epsilon_A^2 A^2}{2 R^2} = \frac{P_0 A}{\lambda^2 R^2}$ where $\Epsilon$ is the source strength per unit area at the aperture, A is the area of the aperture ($A=\pi a^2$) and R is the distance from the aperture. At the focal plane of a lens, $I_0 = (P_0 A)/(\lambda^2 f^2)$. The intensity at the maximum of the first ring is about 1.75% of the intensity at the center of the Airy disk. The expression for $I(\theta)$ above can be integrated to give the total power contained in the diffraction pattern within a circle of given size: $P(\theta) = P_0 [ 1 - J_0^2(ka \sin \theta) - J_1^2(ka \sin \theta) ]$ where $J_0$ and $J_1$ are Bessel functions. Hence the fractions of the total power contained within the first, second, and third dark rings (where $J_1(ka \sin \theta)=0$) are 83.8%, 91.0%, and 93.8% respectively.[15] The Airy Pattern on the interval kasinθ = [−10, 10] The encircled power graphed next to the intensity. ## Approximation using a Gaussian profile A radial cross-section through the Airy pattern (solid curve) and its Gaussian profile approximation (dashed curve). The abscissa is given in units of the wavelength $\lambda$ times the f-number of the optical system. The Airy pattern falls rather slowly to zero with increasing distance from the center, with the outer rings containing a significant portion of the integrated intensity of the pattern. As a result, the root mean square (RMS) spotsize is undefined (i.e. infinite). An alternative measure of the spot size is to ignore the relatively small outer rings of the Airy pattern and to approximate the central lobe with a Gaussian profile, such that $I(q) \approx I'_0 \exp \left( \frac{- q^2}{2\sigma^2} \right) \ ,$ where $I'_0$ is the irradiance at the center of the pattern, $q$ represents the radial distance from the center of the pattern, and $\sigma$ is the Gaussian RMS width (in one dimension). If we equate the peak amplitude of the Airy pattern and Gaussian profile, that is, $I'_0 = I_0$, and find the value of $\sigma$ giving the optimal approximation to the pattern, we obtain $\sigma \approx 0.42 \lambda N \ ,$ where N is the f-number. If, on the other hand, we wish to enforce that the Gaussian profile has the same volume as does the Airy pattern, then this becomes $\sigma \approx 0.45 \lambda N \ .$ In optical aberration theory, it is common to describe an imaging system as diffraction-limited if the Airy disk radius is larger than the RMS spotsize determined from geometric ray tracing (see Optical lens design). The Gaussian profile approximation provides an alternative means of comparison: using the approximation above shows that the RMS width $\sigma$ of the Gaussian approximation to the Airy disk is about one-third the Airy disk radius, i.e. $0.42 \lambda N$ as opposed to $1.22 \lambda N$. ## Obscured Airy pattern Similar equations can also be derived for the obscured Airy diffraction pattern[16][17] which is the diffraction pattern from an annular aperture or beam, i.e. a uniform circular aperture (beam) obscured by a circular block at the center. This situation is relevant to many common reflector telescope designs that incorporate a secondary mirror, including Newtonian telescopes and Schmidt–Cassegrain telescopes. 
$I(\theta) = \frac{I_0}{ (1 - \epsilon ^2)^2} \left ( \frac{2 J_1(x)}{x} - \frac{2 \epsilon J_1(\epsilon x)}{x}\right )^2$

where $\epsilon$ is the annular aperture obscuration ratio, i.e. the ratio of the diameter of the obscuring disk to the diameter of the aperture (beam), with $0 \le \epsilon < 1$, and x is defined as above:

$x=ka\sin(\theta) \approx \frac {\pi R}{\lambda N}$

where $R$ is the radial distance in the focal plane from the optical axis, $\lambda$ is the wavelength and $N$ is the f-number of the system. The fractional encircled energy (the fraction of the total energy contained within a circle of radius $R$ centered at the optical axis in the focal plane) is then given by:

$E(R) = \frac{1}{ (1 - \epsilon ^2) } \left( 1 - J_0^2(x) - J_1^2(x) + \epsilon ^2 \left[ 1 - J_0^2 (\epsilon x) - J_1^2(\epsilon x) \right] - 4 \epsilon \int_0^x \frac {J_1(t) J_1(\epsilon t)}{t}\,dt \right)$

For $\epsilon \rightarrow 0$ the formulas reduce to the unobscured versions above.

## Comparison to Gaussian beam focus

A circular laser beam with uniform intensity profile, focused by a lens, will form an Airy pattern at the focal plane of the lens. The intensity at the center of the focus will be $I_{0,Airy} = (P_0 A)/(\lambda^2 f^2)$ where $P_0$ is the total power of the beam, $A= \pi D^2 / 4$ is the area of the beam ($D$ is the beam diameter), $\lambda$ is the wavelength, and $f$ is the focal length of the lens. A Gaussian beam with $1 / e^2$ diameter of D focused through an aperture of diameter D will have a focal profile that is nearly Gaussian, and the intensity at the center of the focus will be 0.924 times $I_{0,\mathrm{Airy}}$.[17]

## See also

• Amateur astronomy
• Fraunhofer diffraction
• Optical unit
• Point spread function
• Strehl ratio
• Light bloom, the effect of the Airy disk in photography.

## Notes and references

1. Hecht, Eugene (1987). Optics (2nd ed.). Addison Wesley. ISBN 0-201-11609-X. Sect. 5.7.1.
2. Steve Chapman (editor) (2000). Optical System Design. McGraw-Hill Professional. ISBN 0-07-134916-2.
3. "Eye Receptor Density". Archived from the original on 2008-04-30. Retrieved 2008-09-20.
4. E. Hecht, Optics, Addison Wesley (2001).
5. M. Born and E. Wolf, Principles of Optics (Pergamon Press, New York, 1965).
6. Rivolta, Applied Optics, 25, 2404 (1986).
7. Mahajan, J. Opt. Soc. Am. A, 3, 470 (1986).

Content in this section is authored by an open community of volunteers and is not produced by, reviewed by, or in any way affiliated with MedLibrary.org. Licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License, using material from the Wikipedia article on "Airy disc", available in its original form here: http://en.wikipedia.org/w/index.php?title=Airy_disc
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 79, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9144175052642822, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/dirac-equation?sort=active
# Tagged Questions The dirac-equation tag has no wiki summary. learn more… | top users | synonyms 3answers 205 views ### Why is the Dirac equation not used for calculations? From what I understand the Dirac equation is supposed to be an improvement on the Schrödinger equation in that it is consistent with relativity theory. Yet all methods I have encountered for doing ... 1answer 86 views ### Sign Conventions Is it possible to have the Dirac sign convention, (-,+,+,+) and at the same time use the metric $dt^2-dx^2-dy^2-dz^2$i.e have opposing Dirac and metric tensor conventions? 3answers 258 views ### How does one interpret the Dirac equation with a self-field potential? EVERY QFT text I've ever examined states that if there is an external vector potential, $A_\mu$, then one writes the Dirac eq.(or Klein-Gordon eq.) using a covariant derivative to include this U(1) ... 2answers 352 views ### Lorentz transformations in Dirac equation Let's denote a spinor $\xi$. If $(\theta ,\phi)$ are the parameters of a rotation and pure Lorentz transformation, then how $\xi$ could be written as \xi ~\rightarrow~ \exp\left(\ i ... 3answers 99 views ### Problem involving Dirac's equation I'm stuck in an equation derivation of Ryder's QFT book. Starting with Dirac's equation: $$(i\gamma^\mu\partial_\mu-m)\psi=0$$ If I multiply by $i\gamma^\nu\partial_\nu$, I get: ... 3answers 549 views ### What is the difference between a spinor and a vector or a tensor? Why do we call a 1/2 spin particle satisfying the Dirac equation a spinor, and not a vector or a tensor? 1answer 195 views ### Energy spectrum of a Dirac electron How do you explain easily "The spectrum of an electron in a repulsive potential " and hence "bound state of charge conjugation" in Dirac hole theory ? 0answers 60 views ### Relativistic genarization of Quantum Harmonic Oscillator I am trying to find out relativistic description of a quantum harmonic oscillator. For a classical relativistic oscillator mass is a function of co-ordinates(http://arxiv.org/abs/1209.2876). ... 1answer 493 views ### Exact energies of spherical harmonic oscillator in Dirac equation The potential is given by: $$V(r) = {1\over 2} \omega^2 r^2$$ and we are solving the radial Dirac equation (in atomic units): c{d P(r)\over d r} + c {\kappa\over r} P(r) + Q(r) (V(r)-2mc^2) = E ... 1answer 204 views ### How is the Dirac adjoint generalized? I am wondering how one can generalize the Dirac adjoint to flat "spacetimes" of arbitrary dimension and signature. To be more specific, a standard situation would be to consider 4 dimensional ... 1answer 410 views ### Charge conjugation in Dirac equation I need to know the mathematical argument that how the relation is true $(C^{-1})^T\gamma ^ \mu C^T = - \gamma ^{\mu T}$ . Where $C$ is defined by $U=C \gamma^0$ ; $U$= non singular matrix , $T$= ... 0answers 40 views ### Lagrangians for non-local equations of motion Say I have a multicomponent field $X_a(x,t)$ such that I know it Fourier modes satisfy the following equation of motion, \$(\delta_{ab} \partial_t + \Omega_{ab}(t))X_b(k,t) = e^t \int \frac{d^3p ... 2answers 262 views ### Dirac equation in curved space-time I have seen the Dirac equation in curved space-time written as $$[i\bar{\gamma}^{\mu}\frac{\partial}{\partial x^{\mu}}-i\bar{\gamma}^{\mu}\Gamma_{\mu}-m]\psi=0$$ This ... 
2answers 85 views ### Sign convention for basic Dirac equation The dirac equation;$$(i\gamma^\mu\partial_{\mu} - m)\psi=0$$ is just; $$(i\gamma^{0}\partial_{0} - i\gamma^{i}\partial_{i} - m)\psi=0$$ in a (+,---) metric right? 0answers 52 views ### WKB expression for Dirac equation? given the one dimensional Schroedinger equation $$- \frac{\hbar ^{2}}{2m} \frac{d^{2}}{dx^{2}}\Psi(x)+ V(x) \Psi(x) =E_{n}\Psi (x)$$ the WKB method for the energies is (n+1)2\pi \hbar ... 1answer 164 views ### Does Dirac's idea of filled negative energy states make sense? Please bear with me a bit on this. I know my title is controversial, but it's serious and detailed question about the explanation Dirac attached to his amazing equations, not the equations themselves. ... 1answer 452 views ### What is the relativistic particle in a box? I know people try to solve Dirac equation in a box. Some claim it cannot be done. Some claim that they had found the solution, I have seen three and they are all different and bizarre. But my main ... 1answer 293 views ### Dimension of Dirac $\gamma$ matrices While studying the Dirac equation, I came across this enigmatic passage on p. 551 in From Classical to Quantum Mechanics by G. Esposito, G. Marmo, G. Sudarshan regarding the $\gamma$ matrices: ... 1answer 143 views ### Dirac trace theorem I am unable to prove exactly one trace identity that appears in the appendix of Peskin and Schroeder's QFT book. Can someone help me? The theorem [Appendix A.4 eqn (A.28)] says that the order of ... 0answers 158 views ### Matrix manipulation for Dirac matrices From the Dirac equation in gamma matrices, we know that $$\gamma^i=\begin{pmatrix} 0 & \sigma^i \\ -\sigma^i & 0 \end{pmatrix}$$ and \gamma^0=\begin{pmatrix} I & 0 \\ 0 & -I ... 2answers 369 views ### Charge conjugation in Dirac equation According to Dirac equation we can write, \begin{equation} \left(i\gamma^\mu( \partial_\mu +ie A_\mu)- m \right)\psi(x,t) = 0 \end{equation} We seek an equation where $e\rightarrow -e$ and which ... 2answers 192 views ### Matrix operation in dirac matrices If we define $\alpha_i$ and $\beta$ as Dirac matrices which satisfy all of the conditions of spin 1/2 particles , p defines the momentum of the particle, then how can we get the matrix form ? ... 2answers 202 views ### Geometrical interpretation of the Dirac equation Is there a geometrical intuitive picture behind the Dirac equation, and the gamma matrices that it uses? I know the geometric algebra is a Clifford algebra. Can the properties of geometric algebra, be ... 2answers 98 views ### Spinors Under Spatial Reflection How eq(4.4) is a solution of eq(4.3) 1answer 143 views ### Higher dimension operator in free Dirac Lagrangian When discussing higher dimensional operators in a theory with fermions, why do I never see anyone ever talk about the dimension five operator $\partial_\mu\bar\psi\partial^\mu\psi$? How does the ... 2answers 187 views ### momentum four vector and dirac matrices $$c\left(\alpha _i\right.{\cdot P + \beta mc) \psi = E \psi }$$ From the above dirac equation it can be shown for zero momenta that spin and antimatter are associated with $\beta$. On the other ... 2answers 105 views ### A step in the derivation of the magnetic momentum of the electron in Zee's QFT book In chapter III.6 of his Quantum Field Theory in a Nutshell, A. Zee sets out to derive the magnetic moment of an electron in quantum electrodynamics. He starts by replacing in the Dirac equation the ... 
0answers 62 views ### Translate a two dimensional classical Dirac theory to a (1+1)-dim quantum theory Suppose I have a two dimensional classical Dirac Hamiltonian with $\Psi=(\psi_1,\psi_2)^T$: $$H=\int \mathrm{d}x \mathrm{d}y \Psi^\dagger(\sigma^x i\partial_x+\sigma^y i\partial_y+m\sigma^z)\Psi.$$ ... 3answers 315 views ### Dirac equation as Hamiltonian system Let us consider Dirac equation $$(i\gamma^\mu\partial_\mu -m)\psi =0$$ as a classical field equation. Is it possible to introduce Poisson bracket on the space of spinors $\psi$ in such a way that ... 0answers 101 views ### Gordon decomposition of Dirac current in spherical coordinates Is there any meaningful analog of the Gordon decomposition of the Dirac current \$j^\mu = ... 0answers 101 views ### Dirac action and conventions I have a (possibly) fundamental question, which is driving me crazy. Notation When considering the Dirac action (say reading Peskin's book), one have \$\int ... 1answer 157 views ### What happens to the Lagrangian of the Dirac theory under charge conjugation? Consider a charge conjugation operator which acts on the Dirac field($\psi$) as $$\psi_{C} \equiv \mathcal{C}\psi\mathcal{C}^{-1} = C\gamma_{0}^{T}\psi^{*}$$ Just as we can operate the parity operator ... 4answers 669 views ### Where is spin in the Schroedinger equation of an electron in the hydrogen atom? In my current quantum mechanics, course, we have derived in full (I believe?) the wave equations for the time-independent stationary states of the hydrogen atom. We are told that the Pauli Exclusion ... 2answers 132 views ### numerical formulation of Dirac equation plus electromagnetic field I have the following equations describing the electron field in a (classic) electromagnetic field: $$c\left(\alpha _i\right.{\cdot (P - q(A + A_b) + \beta mc) \psi = E \psi }$$ where $A_b$ is ... 4answers 472 views ### Why would Klein-Gordon describe spin-0 scalar field while Dirac describe spin-1/2? The derivation of both Klein-Gordon equation and Dirac equation is due the need of quantum mechanics (or to say more correctly, quantum field theory) to adhere to special relativity. However, excpet ... 1answer 343 views ### Solution to Klein-Gordon equation always valid? We know that there is a relativistic version of Schrodinger equation called Klein-Gordon equation. However, it has some problems and due to these problems, there is Dirac equation that handles these ... 1answer 160 views ### Complete set and Klein-Gordon equation In http://www.physics.ucdavis.edu/~cheng/teaching/230A-s07/rqm2_rev.pdf, it says that when there is some external potential, the Klein-Gordon equation is altered, and it says the following: The ... 1answer 217 views ### Explanation of equation that shows a failed approach to relativize Schrodinger equation I'm reading the Wikipedia page for the Dirac equation: $\rho=\phi^*\phi\,$ ...... $J = -\frac{i\hbar}{2m}(\phi^*\nabla\phi - \phi\nabla\phi^*)$ with the conservation of probability ... 1answer 226 views ### How to obtain Dirac equation from Schrodinger equation and special relativity? I'm reading the Wikipedia page for the Dirac equation: The Dirac equation is superficially similar to the Schrödinger equation for a free massive particle: A) ... 1answer 211 views ### How did one get the defining equation of probability current and conservation of probability current and density? I'm reading the Wikipedia page for the Dirac equation: $$\rho=\phi^*\phi$$ and this density is convected according to the probability current vector J = ... 
1answer 244 views ### How to construct the charge conjugation matrix for any given dimension? Generally, Gamma matrices could be constructed based on the Clifford algebra. \begin{equation} \gamma^{i}\gamma^{j}+\gamma^{j}\gamma^{i}=2h^{ij}, \end{equation} My question is how to generally ... 2answers 119 views ### Showing that electron and positrons have the same absolute charge In Zee's quantum field theory in a nutshell, 2nd edition, pg 551 he has the charge of a Dirac field written as \$Q=\int {d^3p \over (2\pi)^3(E_p/m)} \sum_s ... 1answer 130 views ### Charge and the Dirac field In Zee's quantum field theory in a nutshell, 2nd edition, pg 550 he has $Q=\int {d^3p \over (2\pi)^3(E_p/m)} \sum_s \{b^\dagger(p,s)b(p,s)-d^\dagger(p,s)d(p,s)\}$ showing clearly that $b$ ... 2answers 299 views ### Dirac equation as canonical quantization? First of all, I'm not a physicist, I'm mathematics phd student, but I have one elementary physical question and was not able to find answer in standard textbooks. Motivation is quite simple: let me ... 2answers 220 views ### Relation for Dirac-spinors of different helicities Assume that we have massless spin-1/2 particles. The Dirac-spinor is the solution of the Dirac equation: $$p^\mu \gamma_\mu u_\pm(p) = 0, \quad p^2 = 0$$ The subscripts $\pm$ denote two different ... 1answer 212 views ### Magnetic moment derivation from Dirac equation I am reading a text book where they show the electron has spin 1/2 using Dirac's equation. At one point in the derivation they define $\pi=P-qA/c$ where $P$ is the momentum operator and A is the ... 1answer 382 views ### Is Zitterbewegung an artefact of single-particle theory? I have seen a number of articles on Zitterbewegung claiming searches for it such as this one: http://arxiv.org/abs/0810.2186. Others such as the so-called ZBW interpretation by Hestenes seemingly ... 2answers 550 views ### What is negative about negative energy states in the Dirac equation? This question is a follow up to What was missing in Dirac's argument to come up with the modern interpretation of the positron? There still is some confusion in my mind about the so-called ... 2answers 150 views ### Finding wave-fuctions of a Dirac particle for given 4-momentum and spin 4-vector I've been reading through various materials on relativistic quantum mechanics, but I find the lack of simple examples disturbing. I'm acquainted with the general form the solutions to the Dirac ... 0answers 58 views ### Is it necessary to use all solutions when calculating an expectation value in a spin state? I'm given an spinor $\Psi$ which is solution of the Free Dirac equation, such that is an eigenfunction of $\hat{\vec{p}}$ and has positive energy. Then I'm asked to calculate the expectation value of ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 37, "mathjax_display_tex": 14, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8921552300453186, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/62581/convert-coordinates-from-cartesian-system-to-non-orthogonal-axes
# Convert coordinates from Cartesian system to non-orthogonal axes

I have a 2D coordinate system defined by two non-perpendicular axes. I wish to convert from a standard Cartesian (rectangular) coordinate system into mine. Any tips on how to go about it?

- 1. What's the angle between your axes? 2. Does your system share an origin with the Cartesian one? – J. M. Sep 7 '11 at 16:00
The angle between the axes is arbitrary. If it helps, they could share an origin with the Cartesian one. – andygeers Sep 7 '11 at 16:07
In the standard Cartesian frame, the basis is $e_1,e_2$. Choose $I=(e_1,e_2)$. Now your basis is $a_1,a_2$, choose $A=(a_1,a_2)$. Then the relationship between the new coordinate $x'$ and old one $x$ is $Ix=Ax'$. So $x'=A^{-1}x$. – Shiyu Sep 8 '11 at 1:47

## 3 Answers

Sure. Let's approach this in a very elementary way: with matrix algebra. Suppose that our two new 'basis vectors' are given by $(\alpha _1, \alpha _2)$ and $(\beta _1, \beta_2)$, e.g. $(1,1)$ and $(1,0)$. Then our goal is to find a linear combination of them such that we can express some given vector, which we will imaginatively denote as $(x, y)$. In particular, this will allow us to find where a standard point is in our new, nonstandard coordinate system. But this is nothing more than solving the system:

$$\left( \begin{array}{cc} \alpha_1 & \beta_1 \\ \alpha_2 & \beta_2 \end{array} \right) \left( \begin{array}{c} a \\ b \end{array}\right)= \left( \begin{array}{c} x \\ y\end{array}\right)$$

for the unknown a and b values. In particular, this means that if the matrix is invertible (meaning that your axes are not parallel), then you can immediately find where any particular points are in your system. Is that what you were looking for?

- I'll make the assumption that the oblique coordinate system $(u,v)$ with angle $\varphi$ and the Cartesian system $(x,y)$ share an origin. Consider the following diagram:

We have at once the relationship $x=u+h$. The acute angle formed by the $v$-axis and $h$ is $\varphi$, since alternate interior angles in a pair of parallel lines are congruent. We use trigonometry to deduce the relation $y=h\tan\,\varphi$. Eliminating $h$ gives the equation for $u$: $u=x-y\cot\,\varphi$. To find an expression for $v$, we see that this length is equal to the length of the hypotenuse of the right triangle with legs $y$ and $h$. This leads to the equation $y=v\sin\,\varphi$. Solving for $v$ gives the equation $v=y\csc\,\varphi$. Thus, the conversion formulae from Cartesian to oblique coordinates with angle $\varphi$ are

$$\begin{align*}u&=x-y\cot\,\varphi\\v&=y\csc\,\varphi\end{align*}$$

For completeness, the formulae for converting from oblique to Cartesian coordinates are

$$\begin{align*}x&=u+v\cos\,\varphi\\y&=v\sin\,\varphi\end{align*}$$

- Let us denote by "old" the usual Cartesian system with orthogonal axes and by "new" the system with the skew axes $(\alpha_1, \alpha_2)^T, (\beta_1, \beta_2)^T$ (expressed in the old system).
An old vector $(x,y)^T$ can be expressed as a linear combination of the skew vectors:

$$\left( \begin{array}{c} x \\ y \end{array}\right) = a \left( \begin{array}{c} \alpha_1 \\ \alpha_2\end{array}\right) + b \left( \begin{array}{c} \beta_1 \\ \beta_2\end{array}\right) = \left( \begin{array}{cc} \alpha_1 & \beta_1 \\ \alpha_2 & \beta_2 \end{array} \right) \left( \begin{array}{c} a \\ b \end{array}\right) = A \left( \begin{array}{c} a \\ b \end{array}\right)$$

Using this equation we can transform vectors from the new skew system to the old orthogonal system. Note that the columns of the matrix A are built from the coordinates of the skew basis vectors. In order to transform from the old orthogonal system to the new skew system we need to invert the above equation, and we get:

$$\left( \begin{array}{c} a \\ b \end{array}\right) = A^{-1} \left( \begin{array}{c} x \\ y \end{array}\right) = \frac{1}{\alpha_1 \beta_2 - \alpha_2 \beta_1} \left( \begin{array}{cc} \beta_2 & -\beta_1 \\ -\alpha_2 & \alpha_1 \end{array}\right) \left( \begin{array}{c} x \\ y \end{array}\right)$$

The above inverse matrix $A^{-1}$ should be computed once and then used for the transformation of all needed points. The inversion is possible when the respective denominator is not zero, i.e. when the new skew axes are not parallel.

-
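A quick numerical check of the matrix approach above (using NumPy; the particular skew basis and test point are examples chosen here, not part of the answers):

```python
import numpy as np

# Skew basis vectors expressed in the Cartesian system (example choice)
alpha = np.array([1.0, 1.0])
beta = np.array([1.0, 0.0])

A = np.column_stack([alpha, beta])   # columns are the skew basis vectors
A_inv = np.linalg.inv(A)             # compute once, reuse for every point to convert

p_cartesian = np.array([3.0, 2.0])   # a point given in Cartesian coordinates
p_skew = A_inv @ p_cartesian         # its coordinates (a, b) on the skew axes
print(p_skew)                        # [2. 1.]  ->  2*alpha + 1*beta

print(A @ p_skew)                    # round trip back to Cartesian: [3. 2.]
```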
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 34, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9133733510971069, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?p=3643577
Physics Forums

## Finding the Relationship between the Limit Method and Direct Integral Method for Area

1. The problem statement, all variables and given/known data

I am in the process of studying integration and finding the areas under curves. So far, I know of two methods of finding the area under a curve: the limit method and the direct integral method. Could someone explain the relationship between these two methods?

2. Relevant equations

$$\int_a^b f(x)\, dx = F(x)\big|^{b}_{a} = F(b) - F(a) = \text{Area}$$

$$\lim_{n\to\infty} \sum^{n}_{i = 1} f(x_{i})\,\Delta x = \text{Area}$$

3. The attempt at a solution

I noticed in the direct integration method for finding the area under a curve that the area under the curve is equal to the change in y of a more complicated function: the integral. I graphed it out on my calculator and I don't see exactly how this works.

$$\lim_{n\to\infty} \sum^{n}_{i = 1} f(x_{i})\,\Delta x = \Delta y \text{ of } F(x) = \text{Area}$$

I'm trying to seek an explanation as to why the limit method yields the same result as the direct integral method.

Recognitions: Gold Member Homework Help
That is the fundamental theorem of calculus. You might start by reading here: http://en.wikipedia.org/wiki/Fundame...em_of_calculus

Quote by LCKurtz: That is the fundamental theorem of calculus. You might start by reading here: http://en.wikipedia.org/wiki/Fundame...em_of_calculus

Ah yes, I read through a bit of it. I'm rather confused on the proof for the First Fundamental Theorem of Calculus where it is F(x) = $\int^{x}_{a}f(t) dt$. I've just never seen an antiderivative represented in this way before. Could you interpret this for me? Why does an antiderivative have an upper and lower bound?

Recognitions: Gold Member Homework Help

Quote by vanmaiden: Ah yes, I read through a bit of it. I'm rather confused on the proof for the First Fundamental Theorem of Calculus where it is F(x) = $\int^{x}_{a}f(t) dt$. I've just never seen an antiderivative represented in this way before. Could you interpret this for me? Why does an antiderivative have an upper and lower bound?

Let's say you have a function f(x) and its antiderivative F(x), so you might have written $$F(x)=\int f(x)\, dx + C$$ where F'(x) = f(x). If you were doing a definite integral you would write $$\int_a^b f(x)\,dx = (F(x)+C)|_a^b = F(b) - F(a)$$ and the C is usually omitted since it cancels out anyway. Now the x in that definite integral is a dummy variable, not affecting the answer, so that line could as well have been written $$\int_a^b f(t)\,dt = F(t)|_a^b = F(b) - F(a)$$ Since this is true for any a and b, let's choose to let b be a variable x: $$\int_a^x f(t)\,dt = F(t)|_a^x = F(x) - F(a)$$ Since these are equal you still have F'(x) = f(x), so the left side is an antiderivative of f(x). Since a can be anything, the F(a) is like the constant of integration in our first equation. Does that help answer your question?

Quote by LCKurtz: Since a can be anything, the F(a) is like the constant of integration in our first equation. Does that help answer your question?

Yes!!! So, to make sure I have this correct, F(a) = C in this case, correct?

Recognitions: Gold Member Homework Help

Quote by vanmaiden: Yes!!! So, to make sure I have this correct, F(a) = C in this case, correct?

I would leave it as F(a).
Here's an example. Suppose you are trying to find the function whose derivative is $x^2$ and whose value at x = 0 is 4. You might do it this way: $$f(x) = \int x^2\, dx =\frac {x^3}{3}+C$$ Then you plug in x = 0 to require that f(0) = 4, and that tells you that C = 4, so your answer is $$f(x) = \frac{x^3} 3+4$$ Alternatively you could have solved the problem this way: $$f(x)-f(0)=\int_0^x t^2\, dt = \frac{t^3} 3 |_0^x =\frac {x^3} 3$$ where f(0) = 4, which you could have put in in the first place. Same answer, slightly different methods.

If f(0) = 4, then shouldn't f(x) - f(0) = $\frac{x^{3}}{3}$ - 4?

Recognitions: Gold Member Homework Help

Quote by vanmaiden: If f(0) = 4, then shouldn't f(x) - f(0) = $\frac{x^{3}}{3}$ - 4?

f(x) - 4 = $\frac{x^3} 3$
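To tie the two methods in this thread together numerically, here is a minimal sketch (Python/NumPy; f(x) = x^2 on [0, 2] is just an example choice) showing the limit-method sums approaching the value F(b) - F(a) given by the fundamental theorem:

```python
import numpy as np

def riemann_sum(f, a, b, n):
    """Right-endpoint Riemann sum of f over [a, b] with n equal subintervals."""
    dx = (b - a) / n
    x = a + dx * np.arange(1, n + 1)
    return float(np.sum(f(x)) * dx)

f = lambda x: x ** 2
exact = 2.0 ** 3 / 3 - 0.0 ** 3 / 3      # F(b) - F(a) with antiderivative F(x) = x^3 / 3

for n in (10, 100, 1000, 100000):
    print(n, riemann_sum(f, 0.0, 2.0, n))
print("fundamental theorem value:", exact)   # the sums close in on 8/3 as n grows
```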
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 7, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9412289261817932, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/234858/to-what-extent-do-functors-map-elements-of-an-object-to-elements-of-another?answertab=oldest
# To what extent do functors map “elements” of an object to “elements” of another object in concrete categories? To what extent do functors map "elements" of an object to "elements" of another object? They are usually described as just mapping one object to another. Thanks - What do you mean by 'element'? Are these concrete categories? – Berci Nov 11 '12 at 13:45 1 There are categories in practice where objects aren't just "sets with enriched structure." For those categories, there aren't really notions of "elements of an object"... Well, maybe one approach you can take is the functor of points approach: Choose an object $A$ in the category, and for each other object $X$, associate to $X$ the set of maps from $A$ to $X$. – only Nov 11 '12 at 13:57 3 Yes, the question is good, but it is not so easy because of the interpretation of 'elements'. Also, there is a concept of 'generalized element' of an object $X$ in a category, and that is none other than an arrow $A\to X$. Of course, in this sense, generalized elements are nicely preserved. – Berci Nov 11 '12 at 13:59 1 @ZhenLin: I certainly don't mean to say that I can define a (set-valued) functor on the essential image of a concretization functor. Of course this would depend on remembering how your sets came from objects in each category! All I mean is that if $U_1:\mathcal{C}_1 \rightarrow \mbox{Set}$ and $U_2:\mathcal{C}_2 \rightarrow \mbox{Set}$ are two "naturally-appearing" concrete categories and we have a "naturally-appearing" functor $F:\mathcal{C}_1 \rightarrow \mathcal{C}_2$, then chances are good that we will have functorial-in-$\mathcal{C}_1$ functions $f_X:U_1(X) \rightarrow U_2(FX)$. – Aaron Mazel-Gee Nov 11 '12 at 16:37 1 I don't agree. The reason why that claim sounds reasonable is because many functors of interest are part of an adjunction, in which case there is indeed such a natural transformation $U_1 \Rightarrow U_2 F$. But there are also important functors where there is no such natural transformation: consider, for example, the homology functors on the category of chain complexes, or more generally, derived functors... – Zhen Lin Nov 11 '12 at 17:13 show 5 more comments ## 1 Answer In a concrete category, the definition of concrete category tells you the sense. Even though a faithful functor need not be injective on objects, you can view any morphism as a function between sets by passing to the image of the faithful functor to $\mathrm{Set}$. Even though your comment mentioned concrete categories, it's important to note that if your category is not concrete, then even when you have a decent way of viewing objects as sets, the morphisms may not be describable in terms of mapping elements. The homotopy category of topological spaces has topological spaces as objects (which have underlying sets), but the morphisms are merely homotopy equivalence classes of continuous functions between underlying sets. These morphisms can't be viewed as mapping particular elements of the underlying sets because different members of the equivalence class would do different things to elements. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9358493685722351, "perplexity_flag": "head"}
http://pediaview.com/openpedia/Frame_of_reference
# Frame of reference See also: Inertial frame and basis (linear algebra) In physics, a frame of reference (or reference frame) may refer to a coordinate system used to represent and measure properties of objects such as their position and orientation. It may also refer to a set of axes used for such representation. Alternatively, in relativity, the phrase can be used to refer to the relationship between a moving observer and the phenomenon or phenomena under observation. In this context, the phrase often becomes "observational frame of reference" (or "observational reference frame"). The context may itself include a coordinate system used to represent the observer and phenomenon or phenomena. ## Different aspects of "frame of reference" The need to distinguish between the various meanings of "frame of reference" has led to a variety of terms. For example, sometimes the type of coordinate system is attached as a modifier, as in Cartesian frame of reference. Sometimes the state of motion is emphasized, as in rotating frame of reference. Sometimes the way it transforms to frames considered as related is emphasized as in Galilean frame of reference. Sometimes frames are distinguished by the scale of their observations, as in macroscopic and microscopic frames of reference.[1] In this article, the term observational frame of reference is used when emphasis is upon the state of motion rather than upon the coordinate choice or the character of the observations or observational apparatus. In this sense, an observational frame of reference allows study of the effect of motion upon an entire family of coordinate systems that could be attached to this frame. On the other hand, a coordinate system may be employed for many purposes where the state of motion is not the primary concern. For example, a coordinate system may be adopted to take advantage of the symmetry of a system. In a still broader perspective, of course, the formulation of many problems in physics employs generalized coordinates, normal modes or eigenvectors, which are only indirectly related to space and time. It seems useful to divorce the various aspects of a reference frame for the discussion below. We therefore take observational frames of reference, coordinate systems, and observational equipment as independent concepts, separated as below: • An observational frame (such as an inertial frame or non-inertial frame of reference) is a physical concept related to state of motion. • A coordinate system is a mathematical concept, amounting to a choice of language used to describe observations.[2] Consequently, an observer in an observational frame of reference can choose to employ any coordinate system (Cartesian, polar, curvilinear, generalized, …) to describe observations made from that frame of reference. A change in the choice of this coordinate system does not change an observer's state of motion, and so does not entail a change in the observer's observational frame of reference. This viewpoint can be found elsewhere as well.[3] Which is not to dispute that some coordinate systems may be a better choice for some observations than are others. • Choice of what to measure and with what observational apparatus is a matter separate from the observer's state of motion and choice of coordinate system.
Here is a quotation applicable to moving observational frames $\mathfrak{R}$ and various associated Euclidean three-space coordinate systems [R, R' , etc.]: [4] “ We first introduce the notion of reference frame, itself related to the idea of observer: the reference frame is, in some sense, the "Euclidean space carried by the observer". Let us give a more mathematical definition:… the reference frame is... the set of all points in the Euclidean space with the rigid body motion of the observer. The frame, denoted $\mathfrak{R}$, is said to move with the observer.… The spatial positions of particles are labelled relative to a frame $\mathfrak{R}$ by establishing a coordinate system R with origin O. The corresponding set of axes, sharing the rigid body motion of the frame $\mathfrak{R}$, can be considered to give a physical realization of $\mathfrak{R}$. In a frame $\mathfrak{R}$, coordinates are changed from R to R' by carrying out, at each instant of time, the same coordinate transformation on the components of intrinsic objects (vectors and tensors) introduced to represent physical quantities in this frame. ” and this on the utility of separating the notions of $\mathfrak{R}$ and [R, R' , etc.]:[5] “ As noted by Brillouin, a distinction between mathematical sets of coordinates and physical frames of reference must be made. The ignorance of such distinction is the source of much confusion… the dependent functions such as velocity for example, are measured with respect to a physical reference frame, but one is free to choose any mathematical coordinate system in which the equations are specified. ” and this, also on the distinction between $\mathfrak{R}$ and [R, R' , etc.]:[6] “ The idea of a reference frame is really quite different from that of a coordinate system. Frames differ just when they define different spaces (sets of rest points) or times (sets of simultaneous events). So the ideas of a space, a time, of rest and simultaneity, go inextricably together with that of frame. However, a mere shift of origin, or a purely spatial rotation of space coordinates results in a new coordinate system. So frames correspond at best to classes of coordinate systems. ” and from J. D. Norton:[7] “ In traditional developments of special and general relativity it has been customary not to distinguish between two quite distinct ideas. The first is the notion of a coordinate system, understood simply as the smooth, invertible assignment of four numbers to events in spacetime neighborhoods. The second, the frame of reference, refers to an idealized system used to assign such numbers … To avoid unnecessary restrictions, we can divorce this arrangement from metrical notions. … Of special importance for our purposes is that each frame of reference has a definite state of motion at each event of spacetime.…Within the context of special relativity and as long as we restrict ourselves to frames of reference in inertial motion, then little of importance depends on the difference between an inertial frame of reference and the inertial coordinate system it induces. This comfortable circumstance ceases immediately once we begin to consider frames of reference in nonuniform motion even within special relativity.…More recently, to negotiate the obvious ambiguities of Einstein’s treatment, the notion of frame of reference has reappeared as a structure distinct from a coordinate system. 
” The discussion is taken beyond simple space-time coordinate systems by Brading and Castellani.[8] Extension to coordinate systems using generalized coordinates underlies the Hamiltonian and Lagrangian formulations[9] of quantum field theory, classical relativistic mechanics, and quantum gravity.[10][11][12][13][14] ### Coordinate systems[] Main article: Coordinate systems See also: Generalized coordinates and Axes conventions An observer O, situated at the origin of a local set of coordinates - a frame of reference F. The observer in this frame uses the coordinates (x, y, z, t) to describe a spacetime event, shown as a star. Although the term "coordinate system" is often used (particularly by physicists) in a nontechnical sense, the term "coordinate system" does have a precise meaning in mathematics, and sometimes that is what the physicist means as well. A coordinate system in mathematics is a facet of geometry or of algebra,[15][16] in particular, a property of manifolds (for example, in physics, configuration spaces or phase spaces).[17][18] The coordinates of a point r in an n-dimensional space are simply an ordered set of n numbers:[19][20] $\mathbf{r} =[x^1,\ x^2,\ \dots\ , x^n] \ .$ In a general Banach space, these numbers could be (for example) coefficients in a functional expansion like a Fourier series. In a physical problem, they could be spacetime coordinates or normal mode amplitudes. In a robot design, they could be angles of relative rotations, linear displacements, or deformations of joints.[21] Here we will suppose these coordinates can be related to a Cartesian coordinate system by a set of functions: $x^j = x^j (x,\ y,\ z,\ \dots)\ ,$    $j = 1, \ \dots \ , \ n\$ where x, y, z, etc. are the n Cartesian coordinates of the point. Given these functions, coordinate surfaces are defined by the relations: $x^j (x, y, z, \dots) = \mathrm{constant}\ ,$    $j = 1, \ \dots \ , \ n\ .$ The intersection of these surfaces define coordinate lines. At any selected point, tangents to the intersecting coordinate lines at that point define a set of basis vectors {e1, e2, …, en} at that point. That is:[22] $\mathbf{e}_i(\mathbf{r}) =\lim_{\epsilon \rightarrow 0} \frac{\mathbf{r}\left(x^1,\ \dots,\ x^i+\epsilon,\ \dots ,\ x^n \right) - \mathbf{r}\left(x^1,\ \dots,\ x^i,\ \dots ,\ x^n \right)}{\epsilon }\ ,$ which can be normalized to be of unit length. For more detail see curvilinear coordinates. Coordinate surfaces, coordinate lines, and basis vectors are components of a coordinate system.[23] If the basis vectors are orthogonal at every point, the coordinate system is an orthogonal coordinate system. An important aspect of a coordinate system is its metric gik, which determines the arc length ds in the coordinate system in terms of its coordinates:[24] $(ds)^2 = g_{ik}\ dx^i\ dx^k \ ,$ where repeated indices are summed over. As is apparent from these remarks, a coordinate system is a mathematical construct, part of an axiomatic system. There is no necessary connection between coordinate systems and physical motion (or any other aspect of reality). However, coordinate systems can include time as a coordinate, and can be used to describe motion. Thus, Lorentz transformations and Galilean transformations may be viewed as coordinate transformations. General and specific topics of coordinate systems can be pursued following the See also links below. ### Observational frames of reference[] Main article: Inertial frame of reference Three frames of reference in special relativity. 
Black frame is at rest. Primed frame moves at 40% of light speed, double primed at 80%. Note scissors-like change as speed increases. An observational frame of reference, often referred to as a physical frame of reference, a frame of reference, or simply a frame, is a physical concept related to an observer and the observer's state of motion. Here we adopt the view expressed by Kumar and Barve: an observational frame of reference is characterized only by its state of motion.[25] However, there is lack of unanimity on this point. In special relativity, the distinction is sometimes made between an observer and a frame. According to this view, a frame is an observer plus a coordinate lattice constructed to be an orthonormal right-handed set of spacelike vectors perpendicular to a timelike vector. See Doran.[26] This restricted view is not used here, and is not universally adopted even in discussions of relativity.[27][28] In general relativity the use of general coordinate systems is common (see, for example, the Schwarzschild solution for the gravitational field outside an isolated sphere[29]). There are two types of observational reference frame: inertial and non-inertial. An inertial frame of reference is defined as one in which all laws of physics take on their simplest form. In special relativity these frames are related by Lorentz transformations, which are parametrized by rapidity. In Newtonian mechanics, a more restricted definition requires only that Newton's first law holds true; that is, a Newtonian inertial frame is one in which a free particle travels in a straight line at constant speed, or is at rest. These frames are related by Galilean transformations. These relativistic and Newtonian transformations are expressed in spaces of general dimension in terms of representations of the Poincaré group and of the Galilean group. In contrast to the inertial frame, a non-inertial frame of reference is one in which fictitious forces must be invoked to explain observations. An example is an observational frame of reference centered at a point on the Earth's surface. This frame of reference orbits around the center of the Earth, which introduces the fictitious forces known as the Coriolis force, centrifugal force, and gravitational force. (All of these forces including gravity disappear in a truly inertial reference frame, which is one of free-fall.) ### Measurement apparatus[] A further aspect of a frame of reference is the role of the measurement apparatus (for example, clocks and rods) attached to the frame (see Norton quote above). This question is not addressed in this article, and is of particular interest in quantum mechanics, where the relation between observer and measurement is still under discussion (see measurement problem). In physics experiments, the frame of reference in which the laboratory measurement devices are at rest is usually referred to as the laboratory frame or simply "lab frame." An example would be the frame in which the detectors for a particle accelerator are at rest. The lab frame in some experiments is an inertial frame, but it is not required to be (for example the laboratory on the surface of the Earth in many physics experiments is not inertial). 
In particle physics experiments, it is often useful to transform energies and momenta of particles from the lab frame where they are measured, to the center of momentum frame "COM frame" in which calculations are sometimes simplified, since potentially all kinetic energy still present in the COM frame may be used for making new particles. In this connection it may be noted that the clocks and rods often used to describe observers' measurement equipment in thought, in practice are replaced by a much more complicated and indirect metrology that is connected to the nature of the vacuum, and uses atomic clocks that operate according to the standard model and that must be corrected for gravitational time dilation.[30] (See second, meter and kilogram). In fact, Einstein felt that clocks and rods were merely expedient measuring devices and they should be replaced by more fundamental entities based upon, for example, atoms and molecules.[31] ## Examples of inertial frames of reference[] ### Simple example[] Figure 1: Two cars moving at different but constant velocities observed from stationary inertial frame S attached to the road and moving inertial frame S' attached to the first car. Consider a situation common in everyday life. Two cars travel along a road, both moving at constant velocities. See Figure 1. At some particular moment, they are separated by 200 metres. The car in front is travelling at 22 metres per second and the car behind is travelling at 30 metres per second. If we want to find out how long it will take the second car to catch up with the first, there are three obvious "frames of reference" that we could choose. First, we could observe the two cars from the side of the road. We define our "frame of reference" S as follows. We stand on the side of the road and start a stop-clock at the exact moment that the second car passes us, which happens to be when they are a distance d = 200 m apart. Since neither of the cars is accelerating, we can determine their positions by the following formulas, where $x_1(t)$ is the position in meters of car one after time t seconds and $x_2(t)$ is the position of car two after time t. $x_1(t)= d + v_1 t = 200\ + \ 22t\ ; \quad x_2(t)= v_2 t = 30t$ Notice that these formulas predict at t = 0 s the first car is 200 m down the road and the second car is right beside us, as expected. We want to find the time at which $x_1=x_2$. Therefore we set $x_1=x_2$ and solve for $t$, that is: $200 + 22 t = 30t \quad$ $8t = 200 \quad$ $t = 25 \quad \mathrm{seconds}$ Alternatively, we could choose a frame of reference S' situated in the first car. In this case, the first car is stationary and the second car is approaching from behind at a speed of v2 − v1 = 8 m / s. In order to catch up to the first car, it will take a time of d /( v2 − v1) = 200 / 8 s, that is, 25 seconds, as before. Note how much easier the problem becomes by choosing a suitable frame of reference. The third possible frame of reference would be attached to the second car. That example resembles the case just discussed, except the second car is stationary and the first car moves backward towards it at 8 m / s. It would have been possible to choose a rotating, accelerating frame of reference, moving in a complicated manner, but this would have served to complicate the problem unnecessarily. It is also necessary to note that one is able to convert measurements made in one coordinate system to another. 
For example, suppose that your watch is running five minutes fast compared to the local standard time. If you know that this is the case, when somebody asks you what time it is, you are able to deduct five minutes from the time displayed on your watch in order to obtain the correct time. The measurements that an observer makes about a system depend therefore on the observer's frame of reference (you might say that the bus arrived at 5 past three, when in fact it arrived at three). ### Additional example[] Figure 2: Simple-minded frame-of-reference example For a simple example involving only the orientation of two observers, consider two people standing, facing each other on either side of a north-south street. See Figure 2. A car drives past them heading south. For the person facing east, the car was moving toward the right. However, for the person facing west, the car was moving toward the left. This discrepancy is because the two people used two different frames of reference from which to investigate this system. For a more complex example involving observers in relative motion, consider Alfred, who is standing on the side of a road watching a car drive past him from left to right. In his frame of reference, Alfred defines the spot where he is standing as the origin, the road as the x-axis and the direction in front of him as the positive y-axis. To him, the car moves along the x axis with some velocity v in the positive x-direction. Alfred's frame of reference is considered an inertial frame of reference because he is not accelerating (ignoring effects such as Earth's rotation and gravity). Now consider Betsy, the person driving the car. Betsy, in choosing her frame of reference, defines her location as the origin, the direction to her right as the positive x-axis, and the direction in front of her as the positive y-axis. In this frame of reference, it is Betsy who is stationary and the world around her that is moving - for instance, as she drives past Alfred, she observes him moving with velocity v in the negative y-direction. If she is driving north, then north is the positive y-direction; if she turns east, east becomes the positive y-direction. Finally, as an example of non-inertial observers, assume Candace is accelerating her car. As she passes by him, Alfred measures her acceleration and finds it to be a in the negative x-direction. Assuming Candace's acceleration is constant, what acceleration does Betsy measure? If Betsy's velocity v is constant, she is in an inertial frame of reference, and she will find the acceleration to be the same as Alfred - in her frame of reference, a in the negative y-direction. However, if she is accelerating at rate A in the negative y-direction (in other words, slowing down), she will find Candace's acceleration to be a' = a - A in the negative y-direction - a smaller value than Alfred has measured. Similarly, if she is accelerating at rate A in the positive y-direction (speeding up), she will observe Candace's acceleration as a' = a + A in the negative y-direction - a larger value than Alfred's measurement. Frames of reference are especially important in special relativity, because when a frame of reference is moving at some significant fraction of the speed of light, then the flow of time in that frame does not necessarily apply in another frame. The speed of light is considered to be the only true constant between moving frames of reference. 
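As a quick check of the two-car example above (a purely illustrative snippet using exactly the numbers quoted in the text), the catch-up time comes out the same whether it is computed in the roadside frame S or in the frame S' attached to the front car:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
d, v1, v2 = 200, 22, 30                     # separation (m) and speeds (m/s) from Figure 1

# Frame S (roadside observer): positions x1 = d + v1*t and x2 = v2*t coincide
t_roadside = sp.solve(sp.Eq(d + v1 * t, v2 * t), t)[0]

# Frame S' (riding in the front car): the rear car closes the gap at speed v2 - v1
t_front_car = sp.Rational(d, v2 - v1)

print(t_roadside, t_front_car)              # both print 25 (seconds)
```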
### Remarks[] It is important to note some assumptions made above about the various inertial frames of reference. Newton, for instance, employed universal time, as explained by the following example. Suppose that you own two clocks, which both tick at exactly the same rate. You synchronize them so that they both display exactly the same time. The two clocks are now separated and one clock is on a fast moving train, traveling at constant velocity towards the other. According to Newton, these two clocks will still tick at the same rate and will both show the same time. Newton says that the rate of time as measured in one frame of reference should be the same as the rate of time in another. That is, there exists a "universal" time and all other times in all other frames of reference will run at the same rate as this universal time irrespective of their position and velocity. This concept of time and simultaneity was later generalized by Einstein in his special theory of relativity (1905) where he developed transformations between inertial frames of reference based upon the universal nature of physical laws and their economy of expression (Lorentz transformations). It is also important to note that the definition of inertial reference frame can be extended beyond three dimensional Euclidean space. Newton's assumed a Euclidean space, but general relativity uses a more general geometry. As an example of why this is important, let us consider the geometry of an ellipsoid. In this geometry, a "free" particle is defined as one at rest or traveling at constant speed on a geodesic path. Two free particles may begin at the same point on the surface, traveling with the same constant speed in different directions. After a length of time, the two particles collide at the opposite side of the ellipsoid. Both "free" particles traveled with a constant speed, satisfying the definition that no forces were acting. No acceleration occurred and so Newton's first law held true. This means that the particles were in inertial frames of reference. Since no forces were acting, it was the geometry of the situation which caused the two particles to meet each other again. In a similar way, it is now believed that we exist in a four dimensional geometry known as spacetime. It is believed that the curvature of this 4D space is responsible for the way in which two bodies with mass are drawn together even if no forces are acting. This curvature of spacetime replaces the force known as gravity in Newtonian mechanics and special relativity. ## Non-inertial frames[] Main articles: Fictitious force, Non-inertial frame, and Rotating frame of reference Here the relation between inertial and non-inertial observational frames of reference is considered. The basic difference between these frames is the need in non-inertial frames for fictitious forces, as described below. An accelerated frame of reference is often delineated as being the "primed" frame, and all variables that are dependent on that frame are notated with primes, e.g. x' , y' , a' . The vector from the origin of an inertial reference frame to the origin of an accelerated reference frame is commonly notated as R. Given a point of interest that exists in both frames, the vector from the inertial origin to the point is called r, and the vector from the accelerated origin to the point is called r'. 
From the geometry of the situation, we get $\mathbf r = \mathbf R + \mathbf r'$ Taking the first and second derivatives of this, we obtain $\mathbf v = \mathbf V + \mathbf v'$ $\mathbf a = \mathbf A + \mathbf a'$ where V and A are the velocity and acceleration of the accelerated system with respect to the inertial system and v and a are the velocity and acceleration of the point of interest with respect to the inertial frame. These equations allow transformations between the two coordinate systems; for example, we can now write Newton's second law as $\mathbf F = m\mathbf a = m\mathbf A + m\mathbf a'$ When there is accelerated motion due to a force being exerted there is manifestation of inertia. If an electric car designed to recharge its battery system when decelerating is switched to braking, the batteries are recharged, illustrating the physical strength of manifestation of inertia. However, the manifestation of inertia does not prevent acceleration (or deceleration), for manifestation of inertia occurs in response to change in velocity due to a force. Seen from the perspective of a rotating frame of reference the manifestation of inertia appears to exert a force (either in centrifugal direction, or in a direction orthogonal to an object's motion, the Coriolis effect). A common sort of accelerated reference frame is a frame that is both rotating and translating (an example is a frame of reference attached to a CD which is playing while the player is carried). This arrangement leads to the equation (see Fictitious force for a derivation): $\mathbf a = \mathbf a' + \dot{\boldsymbol\omega} \times \mathbf r' + 2\boldsymbol\omega \times \mathbf v' + \boldsymbol\omega \times (\boldsymbol\omega \times \mathbf r') + \mathbf A_0$ or, to solve for the acceleration in the accelerated frame, $\mathbf a' = \mathbf a - \dot{\boldsymbol\omega} \times \mathbf r' - 2\boldsymbol\omega \times \mathbf v' - \boldsymbol\omega \times (\boldsymbol\omega \times \mathbf r') - \mathbf A_0$ Multiplying through by the mass m gives $\mathbf F' = \mathbf F_\mathrm{physical} + \mathbf F'_\mathrm{Euler} + \mathbf F'_\mathrm{Coriolis} + \mathbf F'_\mathrm{centrifugal} - m\mathbf A_0$ where $\mathbf F'_\mathrm{Euler} = -m\dot{\boldsymbol\omega} \times \mathbf r'$ (Euler force) $\mathbf F'_\mathrm{Coriolis} = -2m\boldsymbol\omega \times \mathbf v'$ (Coriolis force) $\mathbf F'_\mathrm{centrifugal} = -m\boldsymbol\omega \times (\boldsymbol\omega \times \mathbf r')=m(\omega^2 \mathbf r'- (\boldsymbol\omega \cdot \mathbf r')\boldsymbol\omega)$ (centrifugal force) (A short numerical check of these relations is given after the Notes below.) ## Notes 1. The distinction between macroscopic and microscopic frames shows up, for example, in electromagnetism where constitutive relations of various time and length scales are used to determine the current and charge densities entering Maxwell's equations. See, for example, Kurt Edmund Oughstun (2006). Electromagnetic and Optical Pulse Propagation 1: Spectral Representations in Temporally Dispersive Media. Springer. p. 165. ISBN 0-387-34599-X. These distinctions also appear in thermodynamics. See Paul McEvoy (2002). Classical Theory. MicroAnalytix. p. 205. ISBN 1-930832-02-8. 2. In very general terms, a coordinate system is a set of arcs $x^i = x^i(t)$ in a complex Lie group; see Lev Semenovich Pontri͡agin. L.S. Pontryagin: Selected Works Vol. 2: Topological Groups (3rd Edition ed.). Gordon and Breach. p. 429. ISBN 2-88124-133-6.
Less abstractly, a coordinate system in a space of n-dimensions is defined in terms of a basis set of vectors {e1, e2,… en}; see Edoardo Sernesi, J. Montaldi (1993). Linear Algebra: A Geometric Approach. CRC Press. p. 95. ISBN 0-412-40680-2. As such, the coordinate system is a mathematical construct, a language, that may be related to motion, but has no necessary connection to motion. 3. J X Zheng-Johansson and Per-Ivar Johansson (2006). Unification of Classical, Quantum and Relativistic Mechanics and of the Four Forces. Nova Publishers. p. 13. ISBN 1-59454-260-0. 4. Jean Salençon, Stephen Lyle (2001). Handbook of Continuum Mechanics: General Concepts, Thermoelasticity. Springer. p. 9. ISBN 3-540-41443-6. 5. Patrick Cornille (Akhlesh Lakhtakia, editor) (1993). Essays on the Formal Aspects of Electromagnetic Theory. World Scientific. p. 149. ISBN 981-02-0854-5. 6. Graham Nerlich (1994). What Spacetime Explains: Metaphysical essays on space and time. Cambridge University Press. p. 64. ISBN 0-521-45261-9. 7. John D. Norton (1993). General covariance and the foundations of general relativity: eight decades of dispute, Rep. Prog. Phys., 56, pp. 835-7. 8. Katherine Brading & Elena Castellani (2003). Symmetries in Physics: Philosophical Reflections. Cambridge University Press. p. 417. ISBN 0-521-82137-1. 9. Oliver Davis Johns (2005). Analytical Mechanics for Relativity and Quantum Mechanics. Oxford University Press. Chapter 16. ISBN 0-19-856726-X. 10. Donald T Greenwood (1997). Classical dynamics (Reprint of 1977 edition by Prentice-Hall ed.). Courier Dover Publications. p. 313. ISBN 0-486-69690-1. 11. Matthew A. Trump & W. C. Schieve (1999). Classical Relativistic Many-Body Dynamics. Springer. p. 99. ISBN 0-7923-5737-X. 12. A S Kompaneyets (2003). Theoretical Physics (Reprint of the 1962 2nd Edition ed.). Courier Dover Publications. p. 118. ISBN 0-486-49532-9. 13. M Srednicki (2007). Quantum Field Theory. Cambridge University Press. Chapter 4. ISBN 978-0-521-86449-7. 14. Carlo Rovelli (2004). Quantum Gravity. Cambridge University Press. p. 98 ff. ISBN 0-521-83733-2. 15. William Barker & Roger Howe (2008). Continuous symmetry: from Euclid to Klein. American Mathematical Society. p. 18 ff. ISBN 0-8218-3900-4. 16. Arlan Ramsay & Robert D. Richtmyer (1995). Introduction to Hyperbolic Geometry. Springer. p. 11. ISBN 0-387-94339-0. 17. According to Hawking and Ellis: "A manifold is a space locally similar to Euclidean space in that it can be covered by coordinate patches. This structure allows differentiation to be defined, but does not distinguish between different coordinate systems. Thus, the only concepts defined by the manifold structure are those that are independent of the choice of a coordinate system." Stephen W. Hawking & George Francis Rayner Ellis (1973). The Large Scale Structure of Space-Time. Cambridge University Press. p. 11. ISBN 0-521-09906-4. A mathematical definition is: M is called an n-dimensional manifold if each point of M is contained in an open set that is homeomorphic to an open set in Euclidean n-dimensional space. 18. Shigeyuki Morita, Teruko Nagase, Katsumi Nomizu (2001). Geometry of Differential Forms.
American Mathematical Society Bookstore. p. 12. ISBN 0-8218-1045-6. 19. Granino Arthur Korn, Theresa M. Korn (2000). Mathematical handbook for scientists and engineers : definitions, theorems, and formulas for reference and review. Courier Dover Publications. p. 169. ISBN 0-486-41147-8. 20. See Encarta definition. Archived 2009-10-31. 21. Katsu Yamane (2004). Simulating and Generating Motions of Human Figures. Springer. pp. 12–13. ISBN 3-540-20317-6. 22. Achilleus Papapetrou (1974). Lectures on General Relativity. Springer. p. 5. ISBN 90-277-0540-2. 23. Wilford Zdunkowski & Andreas Bott (2003). Dynamics of the Atmosphere. Cambridge University Press. p. 84. ISBN 0-521-00666-X. 24. A. I. Borisenko, I. E. Tarapov, Richard A. Silverman (1979). Vector and Tensor Analysis with Applications. Courier Dover Publications. p. 86. ISBN 0-486-63833-2. 25. See Arvind Kumar & Shrish Barve (2003). How and Why in Basic Mechanics. Orient Longman. p. 115. ISBN 81-7371-420-7. 26. Chris Doran & Anthony Lasenby (2003). Geometric Algebra for Physicists. Cambridge University Press. §5.2.2, p. 133. ISBN 978-0-521-71595-9. 27. For example, Møller states: "Instead of Cartesian coordinates we can obviously just as well employ general curvilinear coordinates for the fixation of points in physical space.…we shall now introduce general "curvilinear" coordinates $x^i$ in four-space…." C. Møller (1952). The Theory of Relativity. Oxford University Press. p. 222 and p. 233. 28. A. P. Lightman, W. H. Press, R. H. Price & S. A. Teukolsky (1975). Problem Book in Relativity and Gravitation. Princeton University Press. p. 15. ISBN 0-691-08162-X. 29. Richard L Faber (1983). Differential Geometry and Relativity Theory: an introduction. CRC Press. p. 211. ISBN 0-8247-1749-X. 30. Richard Wolfson (2003). Simply Einstein. W W Norton & Co. p. 216. ISBN 0-393-05154-4. 31. See Guido Rizzi, Matteo Luca Ruggiero (2003). Relativity in rotating frames. Springer. p. 33. ISBN 1-4020-1805-3.
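As a numerical check of the rotating-frame relations in the Non-inertial frames section above (the values below are arbitrary illustrative numbers), consider a particle at rest in a frame rotating at constant angular velocity about the z-axis; the transformation formula should then return zero acceleration in the rotating frame:

```python
import numpy as np

# Particle at rest in a frame rotating at constant angular velocity about z.
omega = np.array([0.0, 0.0, 2.0])        # rad/s (arbitrary example value)
omega_dot = np.zeros(3)                  # constant rotation rate
r_prime = np.array([1.5, 0.0, 0.0])      # position r' in the rotating frame (m)
v_prime = np.zeros(3)                    # v' = 0: at rest in the rotating frame
A0 = np.zeros(3)                         # no translational acceleration of the frame

# Inertial-frame acceleration of a co-rotating point: centripetal, omega x (omega x r').
a = np.cross(omega, np.cross(omega, r_prime))

# Acceleration seen in the rotating frame, from
# a' = a - omega_dot x r' - 2 omega x v' - omega x (omega x r') - A0
a_prime = a - np.cross(omega_dot, r_prime) - 2 * np.cross(omega, v_prime) \
          - np.cross(omega, np.cross(omega, r_prime)) - A0

print(a)        # [-6.  0.  0.]  m/s^2, magnitude omega^2 * |r'|
print(a_prime)  # [ 0.  0.  0.]  the centrifugal term cancels the centripetal pull
```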
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 34, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8881146311759949, "perplexity_flag": "middle"}
http://www.openwetware.org/wiki/Physics307L:People/Phillips/Formal_Lab_Report
# Physics307L:People/Phillips/Formal Lab Report ### From OpenWetWare SJK 17:12, 7 December 2008 (EST) 17:12, 7 December 2008 (EST) This is a good first draft for the formal report, Michael. Many of the sections are looking very good and your abstract is very good. One major deficiency is the lack of citations to peer-reviewed primary research papers, as you will see with my comments below. Another major thing is the style overall is usually much too informal for this kind of publication. Please discuss with me if my comments below about that do not make sense. An important note about the further data this week: You may have heard that the apparatus seems to be dying. So, if you take further data, you may not be able to do any better and in fact it may be worse. This would still be interesting to compare to your original data. I would also be very interested in learning more about the cyan versus violet (versus invisible) beam path...if you're able to devise some kind of investigation into that, I think it would be a good part of this report (even though it doesn't result in any kind of e/m ratio measurement). ## Visually Measuring the Charge-to-Mass Ratio for Electrons SJK 23:01, 6 December 2008 (EST) 23:01, 6 December 2008 (EST) I like your title and very good contact information. Author: Michael R. Phillips Experimentalists: Michael R. Phillips & Stephen K. Martinez University of New Mexico: Physics and Astronomy Department, Albuquerque, NM e-Mail: crooked@unm.edu ### Abstract SJK 23:10, 6 December 2008 (EST) 23:10, 6 December 2008 (EST) This is a very good draft of the abstract. It really does sound very professional and it gives the reader a good idea of what they will find in the paper. Some comments on what you have: (1) phrase, "we were able to form linear data..." is a bit unclear, (2) italics on "not" not usually used in formal abstract, (3) I'd ditch the last digit on uncertainty...use 0.04 only, (4) The last sentence is too informal ("later")...you could change it to something like "We discuss reasons for the large discrepancy and possible improvements for future experiments." Also, I think it would be a better abstract if you added a motivation sentence somewhere: why is charge/mass ratio important or why were these types of experiments revolutionary. I think it's good to include this in abstracts, although as is, your abstract flows very nicely. Using a Helmholtz Coil setup with variable current situated around a Helium-filled glass tube with an electron gun at the bottom with a variable accelerating voltage, we measured the diameter of electron beam paths formed into circles by the induced magnetic field from the Helmholtz Coil setup. Using theoretical predictions of the diameter as a function of the charge-to-mass ratio (e/m) for electrons, we were able to form linear data using a least squares fit and find the slope, which relates to this ratio and some constants. Our final measurement of (4.78 ± 0.041)·10^10 C/kg was not in very good agreement with the accepted value of 1.759·10^11 C/kg. The reasons for this large discrepancy will be discussed later.
(adding citations to some references) (2) a brief introduction to what you'll show in this report, and (3) a short concluding statement of impact or future work or similar. Also, you will need citations to primary peer-reviewed publications. The value calculated during this lab, e/m, seems a strange goal without some inspection of its applications. It is immediately clear, without going through additional examples, that this value is useful because we could actually predict what our diameters would have been in this experiment, given different accelerating voltages and Helmholtz Coil currents, using the same theory we used to find e/m if we only had an accurate value for e/m beforehand. Something else that makes this lab particularly useful has to do with the fact that the mass of a single electron is very difficult to measure directly. However, we can deduce this mass if we have accurate values for both this e/m ratio and the charge, e, of an electron, which is done in the Millikan Oil Drop experiment. This idea was probably what J.J. Thomson (1856-1940) had in mind when he first did this experiment in a very similar way to how we did, and is described in detail in his 1897 paper. ### Methods and Materials #### Equipment: • Soar DC Power Supply Model Number PS-3630 - Power Supply to Helmholtz Coil setup • Soar DC Power Supply Model 7403 UNM 158374 - Power Supply to Electron Gun Heater Element • Gelman Instrument Company Delux Regulated Power Supply - Power Supply to Accelerating Electrodes • 2 x Amprobe 37xR-A - Multimeters • Fluke 111 True RMS Multimeter • e/m Apparatus (includes electron gun, Helmholtz coil setup, Helium-filled tube, power supply inputs, and a small reflective ruler mounted behind the tube) • Black Cloth Hood (to block out excess light while taking data) • Very Many "Banana"-tipped cords for the Power Supply connections #### Setup: SJK 23:40, 6 December 2008 (EST) 23:40, 6 December 2008 (EST) These are really great details as far as a "how to" for someone doing this lab. However, unfortunately the custom is to not include this level of detail in a formal publication (probably in large part because of the amount of paper that would have been used). In your future research career, I encourage you to keep taking detailed notes like this--both in your primary lab notebook and also "publishing" good "how to" articles on the internet in addition to your formal publications. OK, all that said, I think the correct thing to do with this whole "setup" section is to make a really good diagram (or a photograph with text superimposed to label components) (both a photo + a diagram would be great) and then have much less text and using the past tense "we set up the components as in figure ___ and according to reference X. We connected the apparatus (company name, city) to blah...". This probably is confusing, so we should discuss it this week. Following the setup described in Dr. Gold's Junior Lab Manual: (Steve Koch: You can make a link to Dr. Gold's manual one of the references and then refer to it with a number like you will with your other references.) We started off by organizing all of the power supplies and multimeters so that we would be able to connect everything in the way that is necessary. Before connecting any of the power supplies or multimeters to the apparatus, we needed to make sure we got the supplies set to the correct voltage or current for the corresponding element that each supply would power.
After we got all of the dials tuned correctly, checked with corresponding multimeters, we could turn them all off and connect everything to the apparatus, then turn it all back on again. Since we did this lab on two separate days, we have values of initial voltages/currents for these two occasions, but they both fall into the range suggested by Gold's manual. After this initial power supply-type calibration, we connected everything as suggested by the manual. In connecting the power supply designated for the Helmholtz coils, we connected a multimeter (one of the Amprobe multimeters) in series with the coil power inputs on the apparatus so that we could measure the current that flows into the coils. This current is not only regulated by the dials on the power supply, but also by a dial on the apparatus itself. We will be using only the latter dial to vary our currents during the lab to ensure that we never exceed the 2.0 Amp ceiling suggested by Gold's manual. Next, we connected the power supply that goes with the electron gun heater. This particular element of the apparatus does not ever need to be varied, it just needs to have a high enough voltage supplied to it that many electrons will be fed into the electron gun accelerating potential so that we can see the path of the electrons through the Helium tube clearly. If the voltage is too low on the heater element, too few electrons will be emitted and the resulting beam would be far too dim for us to discern and measure visually, which is how we collected data (as described in detail later). Even though we did not vary the voltage supplied to the heater, we still connected a multimeter, the Fluke 111, in parallel with it so that we could be sure nothing else changed the value and burnt out the heater element. Finally, we connected a power supply to the apparatus inputs for the accelerating voltage across electrodes that are very near the electron gun's heater element. This voltage was varied very much during the experiment, so we thought ahead and connected a multimeter, the other Amprobe, in parallel to measure this voltage. Unlike the other multimeters, which were incorporated directly into the power supply-element circuit, this multimeter was connected to a kind of output on the apparatus that was designed specifically to measure accelerating voltage easily. As with the heater, this element of the apparatus does not have any requirements for current, but has a minimum and a maximum voltage that essentially decides the speed of the electrons departing the electron gun. Therefore, we did not need to put in another multimeter to monitor current through this circuit. #### Procedure: SJK 23:42, 6 December 2008 (EST) 23:42, 6 December 2008 (EST) It seems to me that a lot of your procedure would fit in the "results." For example, the very interesting violet colored beam and hypotheses about what could be happening. I think you should probably move that to a new section of the results related to these things, which I think could be a very interesting part of your report. SJK 16:31, 7 December 2008 (EST) 16:31, 7 December 2008 (EST) Also, like the above section, this section is too informal, for example, the phrase "which took quite some time" would not be used in a formal publication. SJK 17:04, 7 December 2008 (EST) 17:04, 7 December 2008 (EST) You have good figures with good, descriptive captions.
Figure 1: The Breakdown of the Electron Path through Helium with a Low Accelerating Potential and Strong Magnetic Field due to a High Current through Helmholtz Coils. Notice the violet color at the top of the loop and the way the electron path bends back into the accelerating area (bottom of the glass tube). After we got everything connected correctly, which took quite some time, we turned on all of the power supplies and multimeters and began to take data. To take decent data we turned off the lights in the room and covered the whole apparatus with a black cloth hood to minimize the amount of ambient light that was entering our eyes. To actually measure the diameter of the loop that was created, we had a small reflective ruler mounted behind the tube. The reason for it being reflective is that there would be very significant (a couple of centimeters, at least) systematic error in all of our diameter measurements if we did not line up the direct image we saw with a reflection from the ruler. Something that is interesting to note is that we did not get a circular cyan loop as we were expecting at first. After simply turning everything on and looking at our viewing area, we saw a very small loop of cyan and violet that was certainly not circular (see Figure 1). From what we could deduce right away, the electrons seemed to be leaving either without enough energy (speed) or we had a much too large current running through our Helmholtz coils that was making a very strong magnetic field that ended up bending the path of the electrons so much that they were pulled back into the accelerating voltage and shot out again. After adjusting the current to a lower value and raising the accelerating voltage by just a little, we were able to observe the circular path that we were looking for, and the color emitted by the Helium in the tube was only cyan in the area of the path taken by the electrons. However, upon closer inspection, the path taken by the electrons was not circular but helical. The cause of this was simple: the electron gun was not shooting electrons with an initial velocity parallel to the planes created by the Helmholtz coils. Therefore, to correct this, we simply rotated the Helium tube, which has the electron gun and accelerating plates mounted inside it, until the orientation of the electron gun was parallel with the Helmholtz planes. Now, finally, we could see exactly what we were looking for. Now that we obtained an ideal form to work off of, we started actually taking data. To do this, we varied the Helmholtz current and the accelerating voltage one at a time and measured the diameter of our circular loop for each pair of values. We could then use this quite large amount of data in Excel to create a best fit line using the least squares method and show the voltage as a function of the radius squared. The reason we used the voltage as a function instead of the current, since we could choose either or both for the cases when the other is constant, is because we found that small changes in the current make very small changes in the diameter of the loop, changes so small that our systematic error in looking at the loop would make the whole data set for varying current and constant voltage useless. However, the case for varying voltage and constant current led to decent results that changed the diameter at least enough to overcome our systematic error. 
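The analysis in the next section fits the accelerating voltage against the square of the loop radius using $V=k\cdot\frac{e}{m}\cdot r^{2}$. A brief sketch of where this relation comes from, using the standard electron-gun and Helmholtz-coil formulas (the exact constant actually used in the analysis is encoded in the Excel file linked below, so the form of $k$ given here is for orientation only): an electron accelerated through a potential $V$ leaves the gun with $eV=\frac{1}{2}mv^{2}$, and in a uniform field $B$ perpendicular to its velocity it moves on a circle satisfying $evB=\frac{mv^{2}}{r}$, so $v=\frac{eBr}{m}$. Eliminating $v$ between these two equations gives $V=\frac{e}{m}\cdot\frac{B^{2}}{2}\cdot r^{2}$, that is, $k=\frac{B^{2}}{2}$, where for a Helmholtz pair with $N$ turns of radius $R$ carrying current $I$ the central field is approximately $B=\left(\frac{4}{5}\right)^{3/2}\frac{\mu_{0}NI}{R}$; this is constant in the runs where only the accelerating voltage was varied.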
SJK 16:28, 7 December 2008 (EST) 16:28, 7 December 2008 (EST) Also part of your methods section should be analysis methods, including listing software applications used, and any special algorithms (such as LINEST). You have a lot of this information below in the results section, but description of the analysis methods can go in the methods section. ### Results & Discussion Figure 2: The Plot from Excel's least-square fitting that shows all of our data for Accelerating Voltage vs. Radius Squared fitted to a line with slope k·e/m. Notice that there are some points that seem to be outliers, but they still follow the basic trend that theory predicts. We did all of our analysis using Excel. After entering all of our raw data, we made functions that described everything from the resulting magnetic field from the Helmholtz coils to the final and average value for e/m. In this file, found at the end of this section, there is also a plot (see Figure 2) which shows our final correlation between accelerating voltage and the radius squared. You will notice some stray points on the plot, which are due mainly to systematic error. You may also notice that we called this the "Best" plot of our data. The reason for this is that there were some stray data points that did not fit the rising trend in the radius from higher voltage. We concluded these data points were recorded after our eyes had become tired from staring at a rather bright light source in a very dark room. In other words, these points had very, very bad systematic errors associated with them so were not worth consideration. To deduce the value of e/m from this plot, we obtained a slope from the best-fit line using features within Excel. This slope, though not useful in itself, includes e/m in it, with a few constants as well. Since we know the values of all these constants, we can easily extract a correct value for e/m with an associated error from the error in creating the line in Excel (using a part of the LINEST function). Here is the equation that we use to calculate e/m from various voltages and radii $V=k\cdot\frac{e}{m}\cdot r^{2}$ where k is a constant that is a compilation of many fundamental constants, which are already known to a great deal of accuracy. Shown also in the file is a percent error, which was found to be 72.9%, which shows how close our measurement of e/m was to the accepted value. The formula for this is $\%\,error=\frac{|Accepted-Measured|}{Accepted}\times100$ Here is the Excel file that we used: EM.xls SJK 16:59, 7 December 2008 (EST) 16:59, 7 December 2008 (EST) "Here is the Excel file..." is a bit informal. In a published paper, you would likely provide the excel file as "supplemental data." So, it's not clear what the best way to link to it is. You could put it in the references list and then cite it as "Original excel spreadsheets are provided (ref #)." ### Conclusions SJK 17:02, 7 December 2008 (EST) 17:02, 7 December 2008 (EST) You have some good thoughts in this section, but it is also a bit too informal. Your comparison with the accepted value should use statistical info (compare to error bars) as opposed to "not at all close." Also, you can mention better ways to measure e/m ... surely they exist, because we have really good information about this ratio! We found that our overall measurement of the charge-to-mass ratio for electrons was (4.78 ± 0.041)·10^10 C/kg, which is not at all in good agreement with the accepted value of 1.759·10^11 C/kg.
Not even our somewhat considerable error bars make up for the discrepancy in these values. There are very many reasons for this discrepancy, however, and they were mentioned briefly in the above sections. By far, our largest source of error was simply systematic. There is a huge problem with trying to measure anything visually in a very dark room with nothing but a glowing ring of electrons and a reflective ruler. It is quite hard to see anything at all, especially in the realm of low accelerating potentials when electrons don't excite the Helium very much due to their own low energy. Also, the fact that the electrons are giving energy to the Helium inside the tube is a problem. This means that the electrons are certainly not traveling in a circle at all but rather something that looks like an egg, with the narrow portion just after leaving the electron gun and the wider portion near the end of their journey back to the electron gun's heater element. This causes significant fluctuations in diameter for different potentials and gives us results that suggest that the constant value e/m somehow depends on accelerating potential, which is obviously not true. Another occurrence of systematic error showed itself right when we first turned on all of our power supplies (see Figure 1). In this case, we saw that the electrons were not escaping the pull of the accelerating plates until about three centimeters away. This is a very big problem, especially at low voltages or high Helmholtz currents, because the radii of the electron paths were actually smaller than these three centimeters. That means that the radii we record for these values of voltage and current are deeply flawed. Upon examination of our data, it is easy to draw connections between fundamentally poor data points and radii that are smaller than or very close to three centimeters (or 0.03 meters when looking at our Excel sheet). In fact, as shown in Figure 1, when both the voltage and Helmholtz current are in the unacceptable range, the electron loop collapses in on itself and exhibits some very strange behavior (the violet at the top of the 'ring' is still puzzling to us). Because of all this systematic error, we saw why we did not get a value of e/m close to the accepted value. However, we could not think of a way to measure this quantity without using small electron paths or strong accelerating potentials inside of a gas that steals electron energy. ### Acknowledgements I would like to thank Stephen K. Martinez, my Junior Lab Partner, for his efforts in understanding this lab, and Dr. Steven Koch, my Junior Lab Instructor, along with Aram Gragossian, the Teaching Assistant for this course, for helping both Stephen and me understand what was happening during the breakdown of the electron path at low voltage/high magnetic field. SJK 16:23, 7 December 2008 (EST) 16:23, 7 December 2008 (EST) Of course, you will need a "references" section with references to original peer-reviewed research papers.
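The LINEST-style straight-line fit described in the Results section can also be reproduced outside Excel in a few lines of Python. The voltage and radius arrays below are placeholders standing in for the columns of EM.xls rather than the actual measurements, and the coil parameters are assumed nominal values for this kind of apparatus, not calibrated numbers:

```python
import numpy as np

# Placeholder data: substitute the (accelerating voltage, loop radius) columns
# from EM.xls taken at fixed Helmholtz current. These numbers are illustrative only.
V = np.array([150.0, 200.0, 250.0, 300.0, 350.0])   # volts
r = np.array([0.030, 0.034, 0.038, 0.042, 0.045])   # meters

# Straight-line fit of V against r^2, as with Excel's LINEST: slope = k * (e/m).
slope, intercept = np.polyfit(r**2, V, 1)

# Helmholtz-coil field for N turns of radius R carrying current I (assumed values).
mu0 = 4e-7 * np.pi
N, R, I = 130, 0.15, 1.5
B = (4.0 / 5.0)**1.5 * mu0 * N * I / R
k = B**2 / 2.0

print("e/m estimate from placeholder data:", slope / k, "C/kg")
```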
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9619022011756897, "perplexity_flag": "middle"}
http://mathhelpforum.com/trigonometry/213216-simplifying-trig-expression-i-think.html
# Thread:

1. ## simplifying trig expression (I think)

Not sure if this is a Calc or a trig question. But I wanted to take the derivative of this function:

$y = -\csc x - \sin x$

Through simple steps I got:

$-\frac{d}{dx}(\csc x) - \frac{d}{dx}(\sin x)$

$\csc x \cot x - \cos x$

However the answer in the book is...

$\cos x \cot^2 x$

So presumably my calculus is wrong, or I didn't simplify enough. I have tried various transformations but never quite end up with that. Can anybody show me how to simplify to that?

2. ## Re: simplifying trig expression (I think)

Hello, infraRed!

$\text{Find the derivative of: }\:y \:=\: -\csc x - \sin x$

$\text{I got: }\:y' \:=\:\csc x\cot x - \cos x$

$\text{However, the answer in the book is: }\,\cos x \cot^2\!x$

$\text{So presumably my calculus is wrong, or I didn't simplify enough.}$

$\csc x\cot x - \cos x \;=\;\csc x\cdot\tfrac{\cos x}{\sin x} - \cos x$

. . . . . . . . . . . . . . . $=\;\cos x\left(\tfrac{\csc x}{\sin x} - 1\right)$

. . . . . . . . . . . . . . . $=\;\cos x\left(\frac{\csc x}{\frac{1}{\csc x}} - 1\right)$

. . . . . . . . . . . . . . . $=\;\cos x\underbrace{\left(\csc^2\!x - 1\right)}_{\text{This is }\cot^2x}$

. . . . . . . . . . . . . . . $=\;\cos x\cot^2\!x$
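For anyone who wants to double-check the algebra mechanically, here is a small SymPy snippet (a quick sanity check, not part of the original thread) confirming that the derivative and the book's simplified form agree:

```python
# Symbolic check that d/dx(-csc x - sin x) equals cos x * cot^2 x.
import sympy as sp

x = sp.symbols('x')
y = -sp.csc(x) - sp.sin(x)

derivative = sp.diff(y, x)             # csc(x)*cot(x) - cos(x)
book_answer = sp.cos(x) * sp.cot(x)**2

# The difference should simplify to zero.
print(sp.simplify(derivative - book_answer))   # expected output: 0
```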
http://mathhelpforum.com/calculus/70170-derivative-arcsine.html
# Thread:

1. ## derivative of arcsine

I don't understand why, when the derivative of arcsine is taken, it is $\frac{1}{\sqrt{1-x^2}}$. Can someone help me understand please! Thank you!

2. Originally Posted by gabet16941
I don't understand why, when the derivative of arcsine is taken, it is $\frac{1}{\sqrt{1-x^2}}$. Can someone help me understand please! Thank you!

Note that $\sin^{-1}x$ is differentiable on the interval $-1<x<1$ and $x=\sin y$ is differentiable on the interval $-\frac{\pi}{2}<y<\frac{\pi}{2}$

Let $y=\sin^{-1}x\implies x=\sin y$. Now, implicitly differentiating both sides, we have $1=\cos y\frac{\,dy}{\,dx}\implies\frac{\,dy}{\,dx}=\frac{1}{\cos y}$

We can divide through by $\cos y$ because $\cos y\neq0$ on the interval $-\frac{\pi}{2}<y<\frac{\pi}{2}$. Also $\cos y >0$ on the interval $-\frac{\pi}{2}<y<\frac{\pi}{2}$.

Now, we can use the identity $\cos^2y+\sin^2 y =1$ to find $\cos y$:

$\cos^2y+\sin^2y=1\implies \cos^2y=1-\sin^2y\implies \cos y=\sqrt{1-\sin^2y}$

Thus, we see that $\frac{\,dy}{\,dx}=\frac{1}{\sqrt{1-\sin^2y}}$

Since we let $x=\sin y$, it now implies that $\color{red}\boxed{\frac{\,d}{\,dx}\left(\sin^{-1}x\right)=\frac{1}{\sqrt{1-x^2}}}$

Does this make sense?

3. Yes thank you very much!
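An equivalent way to see the same result (a side remark, not from the original thread) is the general rule for differentiating an inverse function:

$$\frac{d}{dx} f^{-1}(x) \;=\; \frac{1}{f'\!\left(f^{-1}(x)\right)}, \qquad\text{so}\qquad \frac{d}{dx}\sin^{-1}x \;=\; \frac{1}{\cos\left(\sin^{-1}x\right)} \;=\; \frac{1}{\sqrt{1-x^2}},$$

using $\cos\theta=\sqrt{1-\sin^{2}\theta}$ for $\theta\in\left(-\tfrac{\pi}{2},\tfrac{\pi}{2}\right)$, exactly as in the implicit-differentiation argument above.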
http://physics.stackexchange.com/questions/31068/can-a-scientific-theory-ever-be-absolutely-proven/31181
# Can a scientific theory ever be absolutely proven?

I personally cringe when people talk about scientific theories in the same way we talk about everyday theories. I was under the impression a scientific theory is similar to a mathematical proof; however a friend of mine disagreed. He said that you can never be absolutely certain and a scientific theory is still a theory. Just a very well substantiated one. After disagreeing and then looking into it, I think he's right. Even the Wikipedia definition says it's just very accurate but that there is no certainty. Just a closeness to potential certainty.

I then got thinking. Does this mean no matter how advanced we become, we will never become certain of the natural universe and the physics that drives it? Because there will always be something we don't know for certain? -

>We will never become certain of the natural universe and the physics that drives it.
Mass of the Universe $\sim3.5\cdot10^{54}$ kg. Mass of your brain $\sim 1.5$ kg. What do you think, is it possible to squeeze the information contained in the former into the latter? To me it is really remarkable that we are able to know at least something. – Kostya Jul 3 '12 at 9:15

– AdamRedwine Jul 3 '12 at 11:38

@AdamRedwine: I'm not sure how related this is, given that it applies only in certain frameworks and conditions. – Nick Kidman Jul 3 '12 at 17:31

Let me add this very brief comment on terminology: "Theory" in everyday language often is meant as "guess", "hunch", "could be that way". Scientifically speaking, those should be called guesses, educated guesses or hypotheses. A theory in science is a rather exhaustive framework for explaining all currently available data pertaining to a certain subject, as in "theory of electrodynamics", "theory of fluid dynamics" etc. Currently, this confusion about what the word "theory" means is most annoying in discussions of the "theory of evolution"... – Lagerbaer Jul 3 '12 at 19:48

## 5 Answers

Simple answer: Nothing is guaranteed 100% (in life or physics). Now to the physics part of the question.

Soft answer: Physics uses positivism and observational proof through the scientific process. No observation is 100% accurate; there is uncertainty in all measurement, but repetition gives less chance for arbitrary results. Every theory, and for that matter every law in physics, is an observational representation that best allows prediction of future experiments. Positivism can overcome theological and philosophical discrepancies such as what the human perception of reality is, "is real actually real" type questions. The scientific process is an ever evolving representation of acquired knowledge based on rigorous experimental data. No theory is set in stone, so to speak, as new results allow for modification and fine tuning of scientific theory. -

Cheers pal. Good writing there. :) Do you reckon a super advanced civilization could ever become 100% certain of everything, or is there a fundamental issue with that? – Joseph Jul 1 '12 at 6:02

That is a tricky question, as we are 100 percent certain until new data proves us wrong. Fundamentally there is always an arbitrary uncertainty in any "complex" measuring device, so I would have to say technically knowing everything all at once would be extremely difficult if not implausible. To be fair, ask me again in 100 thousand years; I am sure I will have a better answer. – Argus Jul 1 '12 at 7:17

I basically agree with Argus, though I take a slightly different perspective.
Physicists try to explain the world by constructing mathematical models to approximate it. The phrase "mathematical model" can sound mysterious, but it just means an equation or equations that predict what's going to happen given some initial conditions. For example, Newton's laws of motion are a mathematical model, as is general relativity, quantum mechanics, string theory and so on.

Every mathematical model has a domain in which it is a good description of the world, and within that domain we regard the model as effectively exact. Outside that domain we know the model fails. For example, Newton's laws describe the motion of ideal particles at speeds well below the speed of light. We know that for higher speeds we need a different model, i.e. special relativity, but this fails for high mass/energy densities. To handle high mass/energy densities we need general relativity, and so on.

So we describe the world using a range of theories, i.e. mathematical models, and we pick the one that we know works for the situation we are considering. In this sense our theories are always approximate. However, within the domain of our model we are completely certain the model works. If you're sitting at a desk in NASA working out how to send a spaceship to Pluto, you can be absolutely confident that the trajectory you calculate will work. You would not be worrying about whether some new and unexplained physics might send your spaceship spiralling into the Sun. -

+1 Very true: each mathematical model describes its particular area of application to a high enough degree of accuracy to effectively predict "set" situations. – Argus Jul 1 '12 at 7:13

Cheers guys :) interesting read. – Joseph Jul 1 '12 at 17:41

"However within the domain of our model we are completely certain the model works" - Can you explain this statement? Is it meant in an absolute sense (justification), or do you interpret "we can" as "it's possible to imagine a world where everyone agrees on this"? Or do you mean it as a suggestion, as in "to do it is a good idea, because otherwise you'd worry too much and that's unhealthy"? And who is "we" in this sentence? – Nick Kidman Jul 3 '12 at 17:19

Within its domain Newtonian mechanics has been working perfectly for about 400 years so far. Some may say that this doesn't prove anything, to which I'd reply that they really need to get out more. – John Rennie Jul 3 '12 at 17:23

It doesn't prove anything. (This might, however, lead to a discussion about the term "prove".) – Nick Kidman Jul 3 '12 at 17:27

You can never be certain of anything, except possibly mathematical theorems. This is the conclusion after long debates on epistemology. The ancient Greek skeptics were of the opinion that knowing the uncertainty of everything will give you peace of mind. -

The philosopher David Hume pointed out that induction can never be proven. Even if we have some proposed "law" describing everything we know so far, there is no guarantee that the next observation won't completely violate it. The world might not be what we think it is. There could be some malicious demon messing with our minds. -

I'll try to answer this with three points about the scientific method and how "certain" we are of the truth in our theories. Keep in mind that scientists are overly dogmatic about pet theories, but we should aspire to transparency about how wrong we might be and distrust everything until the evidence, be it scant or ample, is verified.
First, you can gather quite a lot of insight by listening to Richard Feynman's analogy between discovering the laws of nature and learning the rules of chess through observation of a fraction of the board. In particular, there's the part where he talks about a bishop changing its colour despite ample observations of this never happening. His overall point is that we're never truly sure, but we are always inadvertently gathering evidence that the theory is right.

Secondly, you should read Isaac Asimov's essay The Relativity of Wrong. His point is that while a theory might be "wrong", sometimes it is very wrong ("the Earth is flat") but sometimes less wrong ("the Earth is a sphere"). In some cases, you can quantify this. For a contemporary example, cosmologists have settled on $\Lambda$CDM as the right model of the Universe. The point isn't that $\Lambda$CDM is necessarily the whole story but that, if it isn't, then the evidence we've gathered already implies that the whole story can't be much different.

Finally, let's think back to the superluminal neutrino fanfare. It made big news, with the media painting a picture that made it look like the scientific community needed to revolutionize special relativity (SR). But a lot of scientists responded skeptically, even by offering to eat their shorts. So why the skepticism? Surely that flies against the scientific mantra of doubting authority? Not quite. There were good reasons to doubt the result, and anyone who dismissed those results should've defended their position. It was quickly pointed out that, if neutrinos travelled faster than light, we'd detect supernovae early. Also, I think Glashow and others pointed out that we'd see something like Cerenkov radiation from the neutrinos. But more importantly, SR is, to me, a theory that is close to being "certain". It was and still is tried and tested extensively, and it forms the basis of other theories that are themselves successful. So the odds of SR being "wrong" are outrageously small. We have inadvertently tested it bazillions of times and it's worked perfectly. And the amount by which it can be wrong is very small. At the time, it could've been like the first time a pawn was queened into a bishop, but, to roll out the cliche, extraordinary claims require extraordinary evidence. -
http://math.stackexchange.com/questions/162124/pemdashow-to-solve-this-exercise
# PEMDAS: How to solve this exercise?

I have the following problem: $$100*\left\{24+100*[1001-4*(25*6+25*4)]\right\}$$

I'm very frustrated that I can't solve exercises like this. I have read about PEMDAS and followed the steps, but somehow I am making a mistake because the result I get is not correct (I looked at the answers in the book). Can someone provide the steps to solve this problem correctly? -

Arturo + Zev: I think Morphism should be asked for his steps first, rather than us simply doing it for him in a matter of minutes. – GEdgar Jun 23 '12 at 19:57

@GEdgar: I would not say that I "did it for him"; I explained what steps he needs to take, but did not take them. – Arturo Magidin Jun 23 '12 at 20:08

## 2 Answers

First, do the innermost products, $25*6$ and $25*4$. Then add them. Then multiply the result by $4$. Then subtract this from $1001$. Then multiply the answer by $100$. Then add $24$. Then multiply the result of that by $100$.

Or:

1. You will need to multiply $100$ by the result of what is in the curly brackets.
2. What is in the curly brackets requires you to add $24$ to the result of multiplying $100$ by the answer you get from inside the square brackets.
3. What is inside the square brackets requires you to start with $1001$, and subtract the result of multiplying $4$ by the result of what you have inside the round parentheses.
4. To compute what is inside the round parentheses, you first do the products ($25$ times $6$ and $25$ times $4$), and add them.

Now "unfold" that to get the answer. -

I totally got it now! Thank you so much! – Morphism Cekl Jun 23 '12 at 19:53

$\newcommand{\myemph}[1]{{\color{red}{\bf #1}}}$
$$100*\left\{24+100*[1001-4*(\myemph{25*6}+\myemph{25*4})]\right\}$$
$$100*\left\{24+100*[1001-4*(\myemph{150+100})]\right\}$$
$$100*\left\{24+100*[1001-\myemph{4*(250)}]\right\}$$
$$100*\left\{24+100*[\myemph{1001-1000}]\right\}$$
$$100*\left\{24+\myemph{100*[1]}\right\}$$
$$100*\left\{\myemph{24+100}\right\}$$
$$\myemph{100*\left\{124\right\}}$$
$$\fbox{12400}$$ -
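If you want to check the arithmetic independently, the steps can be typed almost verbatim into Python (a quick verification, not part of the original thread):

```python
# Evaluate the expression from the inside out, one grouping symbol at a time.
inner = 25 * 6 + 25 * 4          # round parentheses: 150 + 100 = 250
brackets = 1001 - 4 * inner      # square brackets:   1001 - 1000 = 1
braces = 24 + 100 * brackets     # curly braces:      24 + 100 = 124
result = 100 * braces            # outermost product: 12400

print(inner, brackets, braces, result)   # 250 1 124 12400
```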
http://mathhelpforum.com/algebra/174448-transformation-matrix.html
# Thread:

1. ## Transformation matrix

Hi, I've got this math problem from a past IGCSE exam. Even though it's question (c), it doesn't rely on any other information above it, so that's the whole question. What I want to know is, how can I tell what type of transformation it will be just by looking at the matrix? Thanks

2. Hello, yorkey!

I don't think you can "eyeball" the matrix. You might have to do a teensy bit of math . . .

$\text{(c) The matrix }\,\begin{pmatrix}0 & 1 \\ \text{-}1 & 0 \end{pmatrix}\,\text{ represents a single transformation.}$

$\text{(i) Describe fully this transformation.}$

Let $(a,b)$ be any vector. Then: . $(a,b)\begin{pmatrix}0 & 1 \\ \text{-}1 & 0\end{pmatrix} \;=\;(\text{-}b,a)$

It transforms point $P(a,b)$ to the point $Q(\text{-}b,a).$

Code:
``` Q | * | :* | : * | P : * | * a: * | * : : * | * :b : *| * : ----+------*-----------+---- -b | a |```

Point $\,P$ is rotated $90^\circ$ counterclockwise about the Origin.

$\text{(ii) Find the coordinates of the image of the point }(5,3).$

. . $(5,3)\begin{pmatrix}0 & 1 \\ \text{-}1 & 0\end{pmatrix} \;=\;(\text{-}3,5)$

3. Originally Posted by Soroban
Let $(a,b)$ be any vector. Then: . $(a,b)\begin{pmatrix}0 & 1 \\ \text{-}1 & 0\end{pmatrix} \;=\;(\text{-}b,a)$
It transforms point $P(a,b)$ to the point $Q(\text{-}b,a).$

Thank you! I understand all the steps except this one: $(a,b)\begin{pmatrix}0 & 1 \\ \text{-}1 & 0\end{pmatrix} \;=\;(\text{-}b,a)$

Did you replace the variables a and b with any numbers? Sorry if I'm confusing you. I know how to multiply matrices, but how did you do it with variables?
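To address that last question with a concrete illustration (an aside, not part of the original thread): a computer algebra system such as SymPy will do the same row-vector-times-matrix product with symbols instead of numbers, which is all that was done by hand above:

```python
import sympy as sp

a, b = sp.symbols('a b')
M = sp.Matrix([[0, 1], [-1, 0]])

# Row vector times matrix, following the convention used in the reply above.
print(sp.Matrix([[a, b]]) * M)     # Matrix([[-b, a]])

# The image of the specific point (5, 3).
print(sp.Matrix([[5, 3]]) * M)     # Matrix([[-3, 5]])
```

Multiplication with variables works exactly like the numeric case: each entry of the result is a dot product, e.g. the first entry is $a\cdot 0 + b\cdot(-1) = -b$.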
http://polylogblog.wordpress.com/2010/07/
# Monthly Archive

You are currently browsing the monthly archive for July 2010.

## Open Problems: Longest Increasing Subsequence

July 30, 2010 in open problems | by Andrew | 1 comment

A rather long time ago, I mentioned that Krzysztof Onak and I were compiling a list of open research problems for data stream algorithms and related topics. We're starting with some of the problems that were explicitly raised at the IITK Workshop on Algorithms for Processing Massive Data Sets but we'd also like to add additional questions from the rest of the community. Please email (mcgregor at cs.umass.edu) if you have a question that you'd like us to include (see the previous list for some examples). I'll also be posting some of the problems here while we work on compiling the final document. Here's one now…

Given a stream $\langle a_1, \ldots, a_m \rangle$, how much space is required to approximate the length of the longest increasing subsequence up to a factor $1+\epsilon$?

Background. [Gopalan, Jayram, Krauthgamer, Kumar] presented a single-pass algorithm that uses $\tilde{O}((m/\epsilon)^{0.5})$ space and a matching (in terms of $m$) lower bound for deterministic algorithms was proven by [Gal, Gopalan] and [Ergun, Jowhari]. Is there a randomized algorithm that uses less space or can the lower bound be extended to randomized algorithms? Very recently, [Chakrabarti] presented an "anti-lowerbound", i.e., he showed that the randomized communication complexity of the communication problems used to establish the lower bounds is very small. Hence, if it is possible to extend the lower bound to randomized algorithms, this will require new ideas. Note that solving the problem exactly is known to require $\Omega(m)$ space [Gopalan, Jayram, Krauthgamer, Kumar] and [Sun, Woodruff]. The related problem of finding an increasing subsequence of length $k$ has been resolved: $\tilde{\Theta}(k^{1+1/(2^p-1)})$ space is known to be necessary [Guha, McGregor] and sufficient [Liben-Nowell, Vee, Zhu] where $p$ is the number of passes permitted. However, there are no results on finding "most" of the elements.

## Bite-sized streams: Random Walks

July 20, 2010 in educational | by Andrew | 1 comment

Here's another bite-sized stream algorithm for your delectation. This time we want to simulate a random walk from a given node $u$ in a graph whose edges arrive as an arbitrarily-ordered stream. I'll allow you multiple passes and semi-streaming space, i.e., $\tilde{O}(n)$ space where $n$ is the number of nodes in the graph. You need to return the final vertex of a length $t=o(n)$ random walk. This is trivial if you take $t$ passes: in each pass pick a random neighbor of the node picked in the last pass. Can you do it in fewer passes?

Well, here's an algorithm from [Das Sarma, Gollapudi, Panigrahy] that simulates a random walk of length $t$ in $\tilde{O}(n)$ space while only taking $O(\sqrt{t})$ passes. As in the trivial algorithm, we build up the random walk sequentially. But rather than making a single hop of progress in each pass, we'll construct the random walk by stitching together shorter random walks.

1. We first compute short random walks from each node. Using the trivial algorithm, do a length $\sqrt{t}$ walk from each node $v$ and let $T[v]$ be the end point.
2. We can't reuse short random walks (otherwise the steps in the random walk won't be independent) so let $S$ be the set of nodes from which we've already taken a short random walk.
To start, let $S\leftarrow \{u\}$ and $v\leftarrow T[u], \ell\leftarrow\sqrt{t}$ where $v$ is the vertex that is reached by the random walk constructed so far and $\ell$ is the length of this random walk.
3. While $\ell < t-\sqrt{t}$:
   1. If $v\not \in S$ then set $v\leftarrow T[v], \ell \leftarrow \sqrt{t}+\ell, S\leftarrow S\cup \{v\}$.
   2. Otherwise, sample $\sqrt{t}$ edges (with replacement) incident on each node in $S$. Find the maximal path from $v$ such that on the $i$-th visit to node $x$, we take the $i$-th edge that was sampled for node $x$. The path terminates either when a node in $S$ is visited more than $\sqrt{t}$ times or we reach a node that isn't in $S$. Reset $v$ to be the final node of this path and increase $\ell$ by the length of the path. (If we complete the length $t$ random walk during this process we may terminate at this point and return the current node.)
4. Perform the remaining $O(\sqrt{t})$ steps of the walk using the trivial algorithm.

So why does it work? First note that the maximum size of $S$ is $O(\sqrt{t})$ because $|S|$ is only incremented when $\ell$ increases by at least $\sqrt{t}$ and we know that $\ell \leq t$. The total space required to store the vertices $T$ is $\tilde{O}(n)$. When we sample $\sqrt{t}$ edges incident on each node in $S$, this requires $\tilde{O}(|S|\sqrt{t})=\tilde{O}(t)$ space. Hence the total space is $\tilde{O}(n)$. For the number of passes, note that when we need to take a pass to sample edges incident on $S$, we make $O(\sqrt{t})$ hops of progress because either we reach a node with an unused short walk or the walk uses $\Omega(\sqrt{t})$ sampled edges. Hence, including the $O(\sqrt{t})$ passes used at the start and end of the algorithm, the total number of passes is $O(\sqrt{t})$.

Das Sarma et al. also present a trade-off result that reduces the space to $\tilde{O}(n\alpha+\sqrt{t/\alpha})$ for any $\alpha\in (0,1]$ at the expense of increasing the number of passes to $\tilde{O}(\sqrt{t/\alpha})$. They then use this for estimating the PageRank vector of the graph. (A rough code sketch of this stitching procedure is given at the end of the page.)

## FOCS accepts…

Luca has just announced the accepted papers for FOCS. Papers that have direct or indirect connections to streaming and communication complexity include:

• Lower Bounds for Near Neighbor Search via Metric Expansion [Panigrahy, Talwar, Wieder]
• The Limits of Two-Party Differential Privacy [McGregor, Mironov, Pitassi, Reingold, Talwar, Vadhan]
• Information Cost Tradeoffs for Augmented Index and Streaming Language Recognition [Chakrabarti, Cormode, Kondapally, McGregor]
• Bounded Independence Fools Degree-2 Threshold Functions [Diakonikolas, Kane, Nelson]
• Polylogarithmic Approximation for Edit Distance and the Asymmetric Query Complexity [Andoni, Krauthgamer, Onak]
• Sublinear Optimization for Machine Learning [Clarkson, Hazan, Woodruff]
• The Coin Problem, and Pseudorandomness for Branching Programs [Brody, Verbin]

Abstracts can be found here. More pdfs can be found at My Brain is Open.
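As mentioned above, here is a rough Python sketch of the walk-stitching procedure from the random-walks post. It is my own illustration, not code from the Das Sarma et al. paper, and it is deliberately not a streaming implementation: the graph is held in memory as an adjacency list, and each "pass" is simulated by ordinary lookups. The function and variable names are invented for the sketch.

```python
import math
import random

def simulate_walk(graph, u, t):
    """Endpoint of a length-t random walk from u.
    graph: dict mapping node -> non-empty list of neighbours."""
    s = max(1, math.isqrt(t))            # short-walk length, roughly sqrt(t)

    def trivial_walk(v, length):
        # In the streaming model this would cost one pass per hop.
        for _ in range(length):
            v = random.choice(graph[v])
        return v

    # First group of passes: a length-s short walk from every node.
    T = {v: trivial_walk(v, s) for v in graph}

    used = {u}                           # nodes whose short walk is consumed
    v, ell = T[u], s                     # current endpoint, walk length so far

    while ell < t - s:
        if v not in used:
            used.add(v)
            v, ell = T[v], ell + s       # stitch on an unused short walk
        else:
            # Extra pass: sample s edges per used node, then follow them until
            # we leave 'used', exhaust a node's samples, or reach length t.
            samples = {w: [random.choice(graph[w]) for _ in range(s)] for w in used}
            visits = {w: 0 for w in used}
            while v in used and visits[v] < s and ell < t:
                nxt = samples[v][visits[v]]
                visits[v] += 1
                v, ell = nxt, ell + 1

    return trivial_walk(v, t - ell)      # finish the last O(sqrt(t)) steps
```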
http://physics.stackexchange.com/questions/55983/dealing-with-experimental-data?answertab=active
# Dealing with experimental data

I have some experimental data about a value $n$, and I am supposed to give, in the end, a single value with an error: $n=a\pm b$. I originally have 6 values of $n$, each of which comes as an indirect measurement derived from direct measurements, each with its systematic errors, so in the end I have those 6 values, each one with an error. So what I guess I have to do is to mix the systematic error with the random error the way I've been taught, $(E_{sys}^2+E_{rand}^2)^{1/2}$. The systematic error is already calculated; what do I use for the other error? The mean of the systematic errors?

Another question is that those values, which are by the way moles of a quantity of a gas, have been obtained in different ways (basically from calculating different isothermal curves and getting the $n$ value that best fits each of them), so they're actually not from the same kind of measurement, but from different ones. This makes me doubt how this would affect the calculation of the final error, if it does, or if I can just do it the way I said above. -

I think that you may have mis-transcribed the formula for combining errors above: it has the wrong units. The usual prescription is to combine them "in quadrature", meaning $\left( \sum E_i^2 \right)^{1/2}$. – dmckee♦ Mar 5 at 22:04

@dmckee Yes it's wrong, I'm fixing it, thanks, that was a typo. – MyUserIsThis Mar 5 at 22:05

## 1 Answer

The answers depend on a number of details, and without knowing more about the actual situation you face I can give only a very general prescription. Assuming that you have uncorrelated errors, you would form the error-weighted mean $$\bar{n} = \sigma^2 \sum_i \frac{n_i}{E_i^2} \, ,$$ where the variance $\sigma^2$ of the mean is given by $$\frac{1}{\sigma^2} = \sum_i \frac{1}{E_i^2} \, .$$ With more information it might be possible to do better, but this is the way to punt in the absence of a better scheme.

That said, you describe the errors as "systematic", which introduces a very real possibility that they are not uncorrelated and this analysis will under-estimate the real uncertainty that you face. There is quite a bit of detail in the linked wikipedia article, though it is somewhat terse. -

Thanks for your answer. Systematic is probably a bad translation; I mean the error you make because of the instrument of measurement. – MyUserIsThis Mar 5 at 22:04

I understood about the errors; the problem is that if your instrument or method gets a, say, low value then it might get a low value on all measurements. Those errors are correlated and invalidate the above analysis. – dmckee♦ Mar 5 at 22:07

Ok, I see, what else would you need to know? The experiment is measuring values of pressure-volume to calculate isothermal curves. We then fit those curves to some equation of state, typically van der Waals, as we're dealing with a phase transition, and we get the van der Waals coefficients, a, b, and the number of moles, n; I have to get this done for all three numbers. So what I have is 6 values (from fits to 6 isothermal curves) for a, b, n. And now, thinking about it, the error for each of them is the error of the fitting, not a systematic one. How would one approach this? – MyUserIsThis Mar 5 at 22:11

The biggest thing here is to examine the tools and methods to try to identify what correlation might exist in your data (in my field we would also assign a "model dependent" uncertainty by examining how the numbers vary under different assumptions about the equation of state (from among two or three of the best available models)).
Repeating a single measurement six or more times would allow you to work out what part of your error is random (uncorrelated), but evaluating true systematics is hard work. At some point you have to ask yourself if it is worth it. – dmckee♦ Mar 5 at 22:19

Ok, that's good enough, I will accept this answer and follow your advice (it's totally not worth it). I will simplify things. – MyUserIsThis Mar 5 at 22:37
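To make the prescription in the answer concrete, here is a minimal numerical illustration (the six values and errors below are invented for the example, not taken from the question):

```python
import numpy as np

n = np.array([0.102, 0.098, 0.105, 0.101, 0.097, 0.103])   # hypothetical moles
E = np.array([0.004, 0.003, 0.006, 0.004, 0.005, 0.003])   # hypothetical 1-sigma errors

w = 1.0 / E**2                        # inverse-variance weights
n_bar = np.sum(w * n) / np.sum(w)     # error-weighted mean
sigma = np.sqrt(1.0 / np.sum(w))      # its standard error: 1/sigma^2 = sum(1/E_i^2)

print(f"n = {n_bar:.4f} +/- {sigma:.4f}")
```

As the comments stress, this assumes the six errors are uncorrelated; correlated (instrument-level) errors would make the quoted uncertainty too small.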
http://physics.stackexchange.com/questions/20069/what-does-it-mean-for-objects-to-follow-the-curvature-of-space?answertab=oldest
# What does it mean for objects to follow the curvature of space?

In science documentaries that touch on general relativity, it is often said that gravitational pull isn't actually a pull (as described by classical physics), but rather one body travelling in a straight line on the curved space around another massive body. I don't know if I misunderstood that or if it's just an oversimplification meant to help the uninitiated understand the concept better, but something doesn't seem to follow. If the Earth travelling around the Sun is just moving along a straight line in the curved space, shouldn't light also be trapped in orbit around the Sun? I expect the actual equations in the theory also take into account a body's velocity, not just the curvature. -

## 2 Answers

Just to get that out of the way, "straight line" here means geodesic on a curved surface/manifold, but I guess you understand that.

"If the Earth travelling around the Sun is just moving along a straight line in the curved space, shouldn't light also be trapped in orbit around the Sun?"

As you've guessed, they don't follow the same geodesic because of their velocity. And the velocity of a body is automatically taken into account, because you don't just compute these geodesics in curved space, but in curved spacetime, where this makes a difference: Imagine a $t$-$x$-diagram and how the Earth or a ray of light go away from the origin. The Earth's path will be close to the time axis, while the light path will be leaning towards the space axis (depending on your units). Both paths are similarly affected by the curvature of spacetime, but they clearly start out in different directions (in spacetime, not in space) and so the geodesics will be quite different. There is some specific angle for which the object will be orbiting (notice that this now requires at least two spatial dimensions). Smaller angles will fall towards the earth while bigger angles will boldly go where no man has gone before. This is a very geometric notion of escape velocity.

"I expect the actual equations in the theory also take into account a body's velocity, not just the curvature."

Since you have 1000 rep on SE Mathematics I can formulate it this way: The geodesic equations are, like Newton's second law, second order and so they require two initial values. One is a position vector and the other one is a velocity vector, i.e. the direction in spacetime. The curvature (and that's another business) is taken into account by the coefficients $\Gamma$ in the differential equations. These coefficients are basically derivatives of the spacetime metric, which itself must be a solution of the Einstein equations. Also, if you take a look at the geodesic equation, you see that for $\Gamma = 0$ (flat spacetime), you get the easy case $x''(t)=0$, or $x(t)=x_0+v_0t$, which represents actual straight lines. Here $x_0$ and $v_0$ are initial data.

Furthermore, due to the equivalence principle in general relativity, the geodesics don't depend on the mass of the objects. All things fall equally, once they have the same starting position and velocity. But if two different masses are initially at rest (or, since this is a relative statement, let's better say: not moving relative to each other), then because of the relation between acceleration and mass, it is more difficult to get the heavier mass to have a certain starting velocity, i.e. direction in spacetime.
And therefore you personally will never be able to get a chair on the same trajectory as a pen which you forcefully throw in a certain direction in space. The chair is too heavy for you to make it follow the same path you could make a small pen follow. If two objects with different masses are initially at rest (not moving relative to each other) and then get pushed equally hard by some force, they will not both end up orbiting the same thing. So even if the geodesics of spacetime don't depend on the masses of the objects which would follow them, you'll never see a ray of light follow the same trajectory as a flashlight, because, by the laws of relativity, the flashlight can't move at the speed of light. This is the extreme example: Massive objects never follow light-like geodesics, and the other way around. -

Let's build some physical intuition here. What happens to a rocket launched from Earth? If it is too slow, it will fall back to Earth. If it is fast enough, it will stay trapped in Earth orbit; if it is faster (at least escape velocity) it will escape Earth orbit, but stay trapped in Sun orbit (it will become a planet). If it is even faster, it will escape the solar system and stay trapped in a galaxy orbit. We have just one space probe that has managed to reach escape velocity out of the solar system. And so on. The faster you go, the farther you manage to escape from gravity wells.

Suppose now that your "rocket" is actually a light beam (a laser beam, if you like). Things do not change at all (the geodesic equations are the same, since no mass is present in them; just $t$ becomes an affine parameter). Its trajectory will be bent (very, very slightly), but since it is so fast it will manage to escape Earth and Sun and Galaxy orbit. In principle it could be seen by an alien race in Andromeda, if you point it that way. In practice your light beam is too weak and will not be seen even on Alpha Centauri. But the Sun emits powerful light rays all the time and they manage to escape Sun orbit and reach the stars. These light rays are powerful in intensity, but the speed is always the same of course (light speed).

Question: is it possible for a light ray to be in closed orbit around a massive object? Yes it is! If you have a Black Hole and you are inside the horizon, nothing - not even light - can escape outside of the horizon. Light rays emitted tangentially on the horizon surface stay on the surface, that is, they orbit the black hole. Light rays emitted from within the horizon always stay inside. That's why from the outside we see it black.

So your question: "shouldn't light also be trapped in orbit around the Sun?" The answer is no, since light (like a very fast rocket) manages to escape the Sun's gravitational field. The trajectory is nevertheless slightly bent. Einstein successfully predicted bending of starlight rays grazing the Sun by the right amount. -
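For reference, the geodesic equation that both answers allude to (written out here for completeness, not quoted from either answer) is

$$\frac{d^{2}x^{\mu}}{d\lambda^{2}} + \Gamma^{\mu}_{\ \alpha\beta}\,\frac{dx^{\alpha}}{d\lambda}\frac{dx^{\beta}}{d\lambda} = 0,$$

where $\lambda$ is proper time for a massive body and an affine parameter for light. Setting $\Gamma = 0$ recovers $d^{2}x^{\mu}/d\lambda^{2}=0$, the straight lines of flat spacetime mentioned in the first answer, and the absence of any mass in the equation is exactly the point made in the second answer.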
http://mathoverflow.net/questions/119686/on-an-inequality-of-brezis-lieb
## on an inequality of Brezis-Lieb

In their 1983 JFA paper, Brezis and Lieb have shown, among many other things, a Poincaré-type inequality: in the case of a harmonic function $f$ on a bounded domain $\Omega$, their inequality ((3.14) in the paper) states that the $L^2(\Omega)$-norm of $f$ can be estimated by the $L^2(\partial \Omega)$-norm of its trace on $\partial \Omega$ (times a constant only depending on $\Omega$).

My question: Is it possible to reverse this inequality, viz. estimating the $L^2(\partial \Omega)$-norm of the trace of $f$ by the $L^2(\Omega)$-norm? This is indeed possible if $\Omega\subset \mathbb{R}$ ($\Omega$ an interval), but this clearly follows simply from the fact that on an interval both the space of harmonic functions and the space of their boundary values are $2$-dimensional (then using equivalence of any two norms on a finite dimensional space). I have no clue whether this may extend to higher dimensions - in fact, I am pessimistic. Thanks in advance. -

In general, the norm of the function on the boundary can be estimated by the norm of the function in $H^1$, so this may be what you are looking for. The answer below shows how just $L^2$ norms may fail. – Daniel Spector Jan 24 at 10:02

Well, estimating the $L^2$-norm of the trace of $u$ by the $H^1$-norm of $u$ simply means that the trace operator is bounded from $H^1(\Omega)$ to $L^2(\partial \Omega)$ (which, by the way, only holds under certain smoothness assumptions on $\partial \Omega$, as far as I know). My question was motivated by my hope that, perhaps, in this regard (weakly) harmonic functions may perform better than mere $H^1$-functions. – Delio Mugnolo Jan 27 at 8:48

## 1 Answer

No, choose $\Omega=\{z \in \mathbb{C} : |z| \leq 1 \},\ f(z)=\operatorname{Re}\, z^{n}$ and let $n\rightarrow \infty$. -

too bad. thanks a lot. – Delio Mugnolo Jan 27 at 19:10
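Spelling out the computation behind the counterexample (a side note, not part of the original answer): with $f_n(re^{i\theta}) = r^{n}\cos(n\theta)$ harmonic on the unit disc and $n\ge 1$,

$$\|f_n\|_{L^2(\partial\Omega)}^{2} = \int_0^{2\pi}\cos^{2}(n\theta)\,d\theta = \pi, \qquad \|f_n\|_{L^2(\Omega)}^{2} = \int_0^{1}\!\!\int_0^{2\pi} r^{2n}\cos^{2}(n\theta)\,r\,d\theta\,dr = \frac{\pi}{2n+2},$$

so $\|f_n\|_{L^2(\partial\Omega)}/\|f_n\|_{L^2(\Omega)} = \sqrt{2n+2}\to\infty$, which rules out any constant $C$ with $\|f\|_{L^2(\partial\Omega)}\le C\|f\|_{L^2(\Omega)}$ for all harmonic $f$.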
http://math.stackexchange.com/questions/226700/describing-the-intersection-of-two-planes
# Describing the intersection of two planes

Consider the plane with general equation 3x+y+z=1 and the plane with vector equation (x, y, z) = s(1, -1, -2) + t(1, -2, -1), where s and t are real numbers. Describe the intersection of these two planes.

I started by substituting the parametric equations into the general equation and got 0=9. Does that imply the planes are parallel and hence do not intersect? -

## 2 Answers

Yes, the planes are parallel (and distinct, since the second passes through the origin while the first does not), and therefore they do not intersect. We can see this because the normal vector of the first plane is $\bigl(\begin{smallmatrix}3 \\ 1 \\ 1\end{smallmatrix}\bigr)$, and the normal vector of the second can be found by computing: $$\begin{pmatrix}1 \\ -1 \\ -2\end{pmatrix}\times\begin{pmatrix}1 \\ -2 \\ -1\end{pmatrix}=\begin{pmatrix}-3 \\ -1 \\ -1\end{pmatrix}=-1\begin{pmatrix}3 \\ 1 \\ 1\end{pmatrix}$$ Therefore the two planes are parallel, as their normal vectors are anti-parallel. -

As a test to see if the planes are parallel, you can calculate the normal vectors for the planes, $n_1$ and $n_2$. If $$\left|\frac{n_1\cdot n_2}{\left | n_1 \right |\,\left | n_2 \right |}\right| = 1$$ the planes are parallel. -
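A quick numerical check of the normal vectors (an aside, not part of the original thread):

```python
import numpy as np

n1 = np.array([3, 1, 1])                       # normal of 3x + y + z = 1
n2 = np.cross([1, -1, -2], [1, -2, -1])        # normal of the parametric plane
print(n2)                                      # [-3 -1 -1], i.e. -1 * n1

# The planes are parallel exactly when the cross product of the normals vanishes.
print(np.cross(n1, n2))                        # [0 0 0]
```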
http://www.physicsforums.com/showthread.php?t=209625
Physics Forums

## Twin paradox

As we all probably know, the twin paradox states that a twin goes off in a spaceship at c and returns, aging less than the twin on Earth because he traveled at c relative to Earth. However, relative to the spaceship, Earth is travelling away from it at c also, so technically, if I were on that spaceship, I'd think I'm stationary and Earth is moving, causing me to think the twin on Earth would age less since he's travelling at c relative to the spaceship. Why does the twin in the spaceship age less nevertheless?

Blog Entries: 3 Recognitions: Gold Member

Hi Fred2028, No spaceship will ever be able to go at c, whatever frame of reference you're in, so let's keep its velocity (relative to the earth) at v, which will always be less than c. The difference between the twins is that the space-ship passenger has to be accelerated by rocket motors on the outward trip and then decelerated before beginning the return trip. So the situation is not symmetric, and the accelerated twin ages less. There are many threads on this subject, just scroll down the list of previous threads.

Mentor

Mentz explained why they don't have to age the same, but not why it's the astronaut twin that ages less. If you want to understand the twin "paradox" fully, you first have to understand space-time diagrams and how simultaneity works in SR. This diagram explains everything that's relevant about the twin paradox: (Why doesn't the "img" tag work here?)

I'm calling the twin on Earth "A" and the twin in the rocket "B".

Blue lines: Events that are simultaneous in the rocket's frame when it's moving away from Earth.
Red lines: Events that are simultaneous in the rocket's frame when it's moving back towards Earth.
Cyan (light blue) lines: Events that are simultaneous in Earth's frame.
Dotted lines: World lines of light rays.
Vertical line in the upper half: The world line of the position (in Earth's frame) where the rocket turns around.
Green curves in the lower half: Curves of constant -t^2+x^2. Points on the two world lines that touch the same green curve have experienced the same time since the rocket left Earth.
Green curves in the upper half: Curves of constant -(t-20)^2+(x-16)^2. Points on the two world lines that touch the same green curve have experienced the same time since the rocket turned around.

From A's point of view B is aging at 60% of A's aging rate. From B's point of view A is aging at 60% of B's aging rate. The reason this isn't a paradox is that the moment before B turns around, he's in a frame where A has aged 7.2 years, and the moment after he's turned around, he's in a frame where A has aged 32.8 years.

## Twin paradox

Whether Special Relativity really explains the age difference is a subject of much debate - what is not in question is the fact that two clocks in relative motion will not accumulate the same amount of time during the same spacetime interval. Read the books by Sciama, Lederman, Born, Atkins etc etc for one view - read the books by Wheeler, Rindler, Resnick etc for another view. Then read the papers of Selleri, Hatch and others for a third view. If there is no debate, why so many different assertions.
Einstein himself waited 13 years before attempting to explain the time difference between relatively moving inertial frames in terms of a general relativity argument.

Recognitions: Science Advisor

Quote by yogi: Read the books by Sciama, Lederman, Born, Atkins etc etc for one view - read the books by Wheeler, Rindler, Resnick etc for another view.

Can you provide quotes from two of these that you think show differing views? In the past when I've been able to read the sources you refer to, it's seemed to me that you interpret them in overly narrow ways to create the appearance of contradiction, when there are perfectly reasonable ways of interpreting the authors' quotes such that they aren't contradicting one another...

Jesse - we have already been over this in several threads - recall "Space, Time and Mass" and "Confusion in basic SR". Here is a quote from Atkins (Physics, University of Pennsylvania) at p. 509: "The problem cannot therefore be satisfactorily discussed in terms of the special theory. It is necessary to apply the general theory." I have already in past discussions given you citations to Born's view and direct quotes from his book. The fact that some people do not see a conflict does not mean that others do not. I don't have most of my library at this location, so I cannot oblige you with what you asked. If I recall, it was you that changed your argument as to the reality of the time difference that I cited in Part 4 of the 1905 paper - I think you began by saying that the example involving the relative motion of one of two synced clocks brought together can't mean "a real object time difference" ... that led to a lot of posts between us, and each being convinced the other was misunderstanding what Einstein was saying. For the purpose of my post - I really don't care what camp you are in - but there is nonetheless a disagreement as to whether SR explains the cause or simply rationalizes the result.

"What counts in the end is experimental predictions. If you set up an experiment where one twin travels at a high velocity, accelerates and turns around, and then returns to a twin that has stayed in a single inertial frame, everyone agrees that the twin who has remained inertial has the longer elapsed time. In a similar vein, one can say with definiteness that when one compares two clocks, one on a mountain top, and another in a valley, the clock on the mountaintop will appear to tick faster when compared by light signals that have a constant propagation time. One can even have two clocks start out in a valley, carry one up to the mountain (via slow clock transport), let it sit for a while, then carry it back, and one will find that the clock that remained in the valley has less elapsed time. Now there are a number of ways to explain this all philosophically; as long as one can get to the same conclusions in the end, one can take several different philosophical positions about what is "real" and what is "not real". So my advice is not to worry too much when the philosophical parts of the answers differ. Focus on some experimental results (even thought experiments) - those are what must agree."
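A side note on the numbers in the space-time diagram post near the top of the thread: reading the turnaround event off the diagram description as $(t,x)=(20,16)$ in Earth's frame gives $v=0.8c$ and $\sqrt{1-v^{2}/c^{2}}=0.6$ (the quoted 60% aging rate). The event on A's worldline that is simultaneous with the turnaround is then

$$t_A = 20 - \frac{v\,x}{c^{2}} = 20 - 0.8\times16 = 7.2 \text{ years (outbound frame)}, \qquad t_A = 20 + \frac{v\,x}{c^{2}} = 32.8 \text{ years (inbound frame)},$$

while B's own clock reads $0.6\times20=12$ years at the turnaround and $24$ years at the reunion, against A's $40$ years. This is a reconstruction of where the 7.2 and 32.8 presumably come from, assuming those diagram values.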
Recognitions: Science Advisor Quote by yogi Jesse - we have already been over this in several threads - recall "Space, Time and Mass" and "Confusion in basic SR" Yes, and I recall in Confused in basic SR that you simply ignored my request to provide quotes or even page numbers for most of the various authors you listed (see my comment in post #70 and your non-response), and in the case of a few like Rindler and Einstein you interpreted them in strange ways to make it sound like they were agreeing with you, even though I pointed out perfectly reasonable ways of interpreting their quotes in ways that would be consistent with the standard textbook perspective. Quote by yogi Here is quote from Atkins (Physics, Universie of Pennsylvannia ) at p 509: "The problem cannot therefore be satisfactorily discussed in terms of the special theory. It is necessary to apply the general theory." Context, please. What is "the problem" he refers to? I'd guess it's something more like the problem of what laws of physics to use in the non-inertial coordinate system of the accelerating twin, not the problem of why one twin ages more in itself. Quote by yogi I have already in past discussions given you citations to Born's view and direct quotes from his book. Are you sure? I remember Rindler and Einstein quotes, but not any from Born. Doing a search for posts by you which mention Born, the only one that provides direct quotes or page numbers appears to be post #39 from this thread, but it doesn't seem relevant to your claim that SR does not resolve the twin paradox. Quote by yogi The fact that some people do not see a conflict does not mean that others do not. I don't have at this location, most of my library, so I cannot oblige you with what you asked. I'd be happy to wait. But if you don't have the quotes handy, how can you be so sure that they cannot be interpreted in a way that's consistent with the standard textbook perspective? Quote by yogi If I recall it was you that changed your argument as to the reality of the time difference that i cited in Part 4 of the 1905 paper - I think you began by saying that the example involving the relative motion of one of two synced clocks brought together cant mean "a real object time difference" ...that led to a lot of posts between us, and each being convinced the other was misunderstanding what Einstein was saying. And where did I "change" that argument? I have always maintained that unless two clocks start at a common location in space, you cannot say which has objectively "elapsed more time" in any frame-independent sense when they come together. You never pointed to anything in Einstein's statements that obviously conflicted with this. Quote by yogi For the purpose of my post - I really don't care what camp you are in - but there is nonetheless a disagreement as to whether SR explains the cause or simply rationalizes the result Physics doesn't even talk about the "cause" of anything, just provides mathematical rules for how things behave, so your statement appears meaningless to me. Quote by yogi "What counts in the end is experimental predictions. If you set up an experiment where one twin travels at a high velocity, accelerates and turns around, and then returns to a twin that has stayed in a single inertial frame, everyone agrees that the twin who has remained inertial has the longer elapsed time. 
In a similar vein, one can say with definiteness that when one compares two clocks, one on a mountain top, and another in a valley, that the clock on the mountaintop will tick appear to tick faster when compared by light signals that have a constant propagation time. One can even have two clocks start out in a valley, carry one up to the mountain (via slow clock transport), let it sit for a while, then carry it back, and one will find that the clock that remained in the valley has less elapsed time. Now there are a number of ways to explain this all philosphically, as long as one can get to the same conclusions in the end, one can take several different philosphical positions about what is "real" and what is "not real". I'm not sure what it even means to "explain this all philosophically". Your philosophy seems to consist of the idea that for some frame-dependent quantities (like the rate a clock is ticking at a particular moment), there is an objective truth about their "real" value, but since all frames are physically on equal footing this would have to be some sort of metaphysical position, and I don't think you'll find any physicists who say there is any good reason to believe this. Mentor Isn't there a forum rule against repeatedly posting crackpot claims? The claim that some of the "paradoxes" of SR can't be resolved within SR itself, but need GR, is a common misconception (even among people who are intelligent and have studied SR), but that doesn't mean it isn't absurd. Really absurd. It's right up there with "I believe there's life on all planets but perhaps not in our dimension" and "I believe that when the Maya calender ends in 2012, humans will stop living in 3 dimensions and start living in 5". (Those two claims were made in a Swedish forum about paranormal stuff). If it had been true, just about all of mathematics would fall with it. Even the integers would have to be thrown out the window. SR is just the set $\mathbb{R}^4$ and some functions. Those functions can't introduce a contradiction into the theory, so any inconsistency would have to be present in the real numbers, but the reals are constructed from the rationals, the rationals from the integers, and the integers from the ZF axioms of set theory, so we'd have to dismiss the integers and everything else constructed from the ZF axioms. hehe I remember in the old physics forum, I made a post asking about the twin paradox, and I wondered why it was even a paradox, and the thread ended up being one of the top 10 most replied or something . I didn't even post after the initial post. Fredrick - what gives you the right to decide who is crackpot - I suppose now I will have to hunt down Born's book and find Lederman's quote - who are you compared to these Nobel winners. Jesse: Your last scathing criticism is not of my ideas, that was a direct quote copied from one of pervects post - gotcha Also, jesse, to clarify, the quote from Atkins was in Chapter 25-6 entitled the Twin Paradox I just happened to have looked up something and came across that chapter and was surprised to realize Atkins was aligned with the Lederman, Born, Sciama crowd. Here is the rest of it: "The problem cannot therefore be satisfactorily discussed in terms of the special theory. It is necessary to apply the general theory. 
It can then be proved that the combined effects of B's velocities and accelerations are that, when he lands back on earth, his clocks have indeed registered a shorter period of time than A's clocks" Finally, I am not saying that it is necessary to use GR - I am not expressing a personal opinion. Einstein seemed to think something was needed by way of explanation - otherwise why would he have taken the time to write the 1918 article. If Einstein considered the problem totally resolved by SR, the 1918 article is redundant. My opinion is not in issue - all that was said is that it's debated as to whether SR explains the Twin Paradox - that statement still stands - I don't care whether it's resolved in the mind of any particular poster or not - it is not resolved in the minds of some real bright people

falc39 - that is almost always the case with the Clock paradox - everyone jumps in to condemn the paradox - but in different ways - so many solutions, so many authorities, very little humility

Recognitions: Science Advisor

Quote by yogi Jesse: Your last scathing criticism is not of my ideas, that was a direct quote copied from one of pervect's posts - gotcha

Um, except that I did not actually criticize that quote in my response, except to say I wasn't sure exactly what was meant by explaining the twin paradox philosophically: I'm not sure what it even means to "explain this all philosophically". Your philosophy seems to consist of the idea that for some frame-dependent quantities (like the rate a clock is ticking at a particular moment), there is an objective truth about their "real" value, but since all frames are physically on equal footing this would have to be some sort of metaphysical position, and I don't think you'll find any physicists who say there is any good reason to believe this. When I said "your philosophy seems to consist of...", I was referring to statements you have made elsewhere (like your claim that when two separate clocks are brought together, one of them must have elapsed less time), not to anything in that particular quote, which said nothing about frame-dependent quantities having a "true" value.

Quote by yogi Also, jesse, to clarify, the quote from Atkins was in Chapter 25-6 entitled the Twin Paradox

I don't have that book, I was asking you to provide the context of what was meant by "the problem", perhaps by quoting the entire paragraph as well as one or two preceding.

Quote by yogi I just happened to have looked up something and came across that chapter and was surprised to realize Atkins was aligned with the Lederman, Born, Sciama crowd. Here is the rest of it: "The problem cannot therefore be satisfactorily discussed in terms of the special theory. It is necessary to apply the general theory. It can then be proved that the combined effects of B's velocities and accelerations are that, when he lands back on earth, his clocks have indeed registered a shorter period of time than A's clocks"

Still doesn't explain what he meant by "the problem"--you need to provide more of the text preceding "the problem cannot therefore be satisfactorily discussed..." to show the actual context. Like I suggested, he may just have been talking about the problem of what the laws of physics look like in a non-inertial frame, which doesn't conflict with the idea that the twin paradox can be satisfactorily resolved using inertial frames in SR.

Quote by yogi Finally, I am not saying that it is necessary to use GR - I am not expressing a personal opinion.
Einstein seemed to think something was needed by way of explanation - otherwise why would he have taken the time to write the 1918 article. If Einstein considered the problem totally resolved by SR, the 1918 article is redundant.

Because the 1918 paper was not specifically about the twin paradox--the problem he wanted to resolve was probably something more like finding laws of physics such that we could say the laws were the same in all frames, not just inertial ones. It would help if you would point out what specifically in the 1918 paper you're referring to when you talk about "the problem", though.

Quote by yogi My opinion is not in issue - all that was said is that it's debated as to whether SR explains the Twin Paradox - that statement still stands - I don't care whether it's resolved in the mind of any particular poster or not - it is not resolved in the minds of some real bright people

You have not provided any real support for the claim that any of these "bright people" think it can't be resolved in SR, just your own distorted interpretations of a few quotes taken out of context, as well as the claim that you have lots of other quotes by people like Born and Lederman and such which you refuse to actually provide.

Follow-up Jesse: From Einstein's Theory of Relativity by Max Born At page 356: "Thus the clock paradox is due to a false application of the special theory of relativity, namely, to a case in which the methods of the general theory should be applied." I am not going to go through the whole development - since you don't believe anything I say - buy the book and see for yourself

Recognitions: Science Advisor

Quote by yogi Follow-up Jesse: From Einstein's Theory of Relativity by Max Born At page 356: "Thus the clock paradox is due to a false application of the special theory of relativity, namely, to a case in which the methods of the general theory should be applied."

When he refers to "a false application of the special theory of relativity", he presumably means the paradox arises from falsely assuming the time dilation formula of SR will still work in the non-inertial frame of the accelerating observer. I would of course agree that this is a false application of SR, but it doesn't mean he's saying we can't explain why one twin ages less using SR alone, he's probably just saying that if we want to understand how things look from the frame of the accelerating twin, we need to use GR. As it happens, much of the book is available online, and looking at p. 261 confirms my assumption that Born sees no problem in resolving the twin paradox in SR by pointing out that the time dilation formula is only supposed to work in inertial frames, although he points out that you can analyze things in a non-inertial frame if you invoke GR:

But it is superficial reasoning and the error is obvious; the principle of relativity concerns only such systems as are moving uniformly and rectilinearly with respect to each other. In the form in which it has been so far developed it is not applicable to accelerated systems. But the system B is accelerated and it is not, therefore, equivalent to A. The system A is an inertial system, B is not. Later, it is true, we shall see that the general theory of relativity of Einstein also regards systems as equivalent which are accelerated with respect to each other, but in a sense which requires more detailed discussion. When dealing with this more general standpoint we shall return to the clock paradox and show that on close examination there are no difficulties in it.
For in the considerations above we made the assumption that for sufficiently long journeys the short periods of acceleration exert no influence on the beating of the clocks. But this holds only when we are judging things from the inertial system A and not for the measurement of time in the accelerated system B.

Do you see anything here which denies that the time elapsed on each clock can be adequately computed from the perspective of "the inertial system A"? Again, all he's saying is that if you want to look at things from the perspective of the non-inertial system B, it's a false application of SR to use the standard time dilation equation in this system, instead you must use GR.

That is the very issue - see the last two lines of the page (261) and p 355 .."when the system of reference is altered, definite gravitational fields must be introduced during the times of acceleration" I think he is saying "we can't explain why one twin ages less using SR alone" contrary to your interpretation. I call that a debatable point - My assertion still stands. We seem to always be interpreting words differently - for me the meaning is clear - the development of Born is based upon the need to explain the aging in terms of a pseudo gravitational field - of course, as you pointed out earlier, we don't really explain things in physics I will get you a citation of the 1918 paper

Recognitions: Science Advisor

Quote by yogi That is the very issue - see the last two lines of the page (261) and p 355 .."when the system of reference is altered, definite gravitational fields must be introduced during the times of acceleration" I think he is saying "we can't explain why one twin ages less using SR alone" contrary to your interpretation. I call that a debatable point - My assertion still stands.

But that's the thing, he never actually says that, you're "reading between the lines" here (all he actually says is that we can't understand how things look in the non-inertial frame of the accelerating twin using SR alone, which is entirely different). I think the problem is basically this--if someone states the twin paradox by referring to twins A and B, with A moving inertially and B accelerating to turn around, and then says "each twin sees the other in motion, so each should predict that the other ages less", then there are two basic ways a physicist can respond to this:

1. "The apparent paradox comes from falsely trying to apply SR's time dilation formula to the non-inertial frame of twin B; that formula is only supposed to work in inertial frames. As long as we stick to inertial frames, we will correctly predict that B ages less than A."

2. "The apparent paradox comes from falsely trying to apply SR's time dilation formula to the non-inertial frame of twin B; that formula is only supposed to work in inertial frames. If we want to analyze things from the perspective of B's non-inertial frame, we must use GR, not SR."

You see physicists making statements of type #2, and you "read between the lines" to infer that they're saying that the twin paradox itself cannot be adequately resolved by SR. But in fact there is absolutely no inconsistency between #2 and #1; someone who agrees with #2 (like me) can also agree with #1, and both would certainly be correct under the standard understanding of relativity.
I'm sure everyone on this forum who agrees that SR can resolve the twin paradox would basically agree with #2 (with the caveat that some would say that a 'uniform gravitational field' on flat spacetime is not really an application of GR since there's no spacetime curvature...I'm sure everyone would at least agree that in order to analyze things from the perspective of a non-inertial frame, one must go beyond the standard algebraic equations of introductory SR like the time dilation equation $$\Delta t' = \Delta t / \sqrt{1 - v^2/c^2}$$). So it seems fairly perverse for you to interpret physicists who say things along the lines of #2 as somehow denying #1; unless you can find a mainstream physicist specifically denying that SR alone is sufficient to tell us how much each twin ages, I think this is just a case of you misinterpreting their words.
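To put one concrete number on response #1, here is a minimal sketch (the values are my own illustrative choices, not from the thread) that computes both twins' elapsed proper times entirely within the single inertial frame A, which is all that SR needs for the prediction:

```python
# Elapsed proper time for each twin, computed entirely in the inertial frame A.
# Assumed numbers for illustration: the traveling twin moves at v = 0.8c,
# 5 years out plus 5 years back of coordinate time in A, with a turnaround
# brief enough to ignore.
from math import sqrt

v = 0.8                  # speed as a fraction of c
coordinate_time = 10.0   # years elapsed in frame A between departure and reunion

stay_at_home_age = coordinate_time                      # twin A is at rest in frame A
traveling_twin_age = coordinate_time * sqrt(1 - v**2)   # proper time along B's worldline

print(stay_at_home_age, traveling_twin_age)   # 10.0 6.0: the inertial twin ages more
```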
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9705842733383179, "perplexity_flag": "middle"}
http://castingoutnines.wordpress.com/2008/09/01/what-part-of-fx-fax-a-dont-you-understand/
1 September 2008 · 8:28 pm

What part of (f(x)-f(a))/(x-a) don't you understand?

Why is the concept of the difference quotient so hard for beginning calculus students to handle? The idea is not as hard as some other concepts at this level that students have fewer problems with. You start with a function f and a point a. You are asked to write, and then simplify completely, the fraction $\frac{f(x) - f(a)}{x-a}$ or $\frac{f(a+h)-f(a)}{h}$. This involves four clearly-defined steps. (1) Compute all the function values in the numerator. (2) Perform the subtraction between the two objects in the numerator and simplify. (3) Factor the result out completely, and (4) see if you can find a common factor to cancel. And there's a step (5): Since you know that every time you've done or seen a problem like this, there's a factor/cancel step at the end, you know you screwed up if there isn't one.

But somehow, the fact that this is a totally algorithmic, almost automatic process that is the same procedure every single time — and even the slight variations among instances only consist in algebra tricks — doesn't stop students from suffering a complete brain-freeze at the sight of them. They convince themselves they don't know how or where to start (despite worked-out examples or even difference quotient exercises that they themselves have worked out before). They plug $x-f(a)/(x-a)$ into f. They use $f(a)+h$ instead of $f(a+h)$. And so on. There's a massive wall of intimidation that these exercises lay down, and even those who make it over that wall end up talking themselves into doing all kinds of stuff that is wrong bordering on bizarre. These exercises get inside their heads somehow. And these are young men and women smart and capable enough of getting into college, mind you — not dummies.

The cure for math intimidation is a disciplined heuristic for solving problems and a faith in your algorithms for more mechanical exercises. But with difference quotients somehow the heuristics and algorithms run fleeing like Tokyo residents before Godzilla.

It's not just difference quotients, either — there are lots of algebra components that throw calculus students for an absolute loop, and I cannot figure out why. Any ideas?

8 Comments Filed under Calculus, Math, Problem Solving, Teaching Tagged as Calculus, difference quotient, godzilla, mathematics, Teaching

8 Responses to What part of (f(x)-f(a))/(x-a) don't you understand?

1. I think maybe you hit on part of the answer in your explanation. Is differentiation really "a totally algorithmic, almost automatic process that is the same procedure every single time", or is it…slope/rate-finding? I understand that it's taught initially as the former, but it has a deeper meaning that is closer to the latter (and beyond). One reason a student might write f(a) +h in the numerator, I imagine, is that they've been taught this – like you say – as an algorithm of symbol-manipulation without fully understanding the meaning of all the symbols or manipulations. In that context it's quite easy to miss the important distinction between f(a+h) and f(a)+h…after all, same symbols, just different order. How could it make that much difference? I believe that for me there was an initial awkwardness-stage with these concepts until I really got it regarding what differentiation was really all about. And of course once I did, the quotient & method seemed laughably trivial.
So I think a lot of students are in that position, of having to do differentiation before "getting it", and that explains the difficulty. By the way I'm not saying I know how to solve this thorny issue. I don't necessarily know a superior way to teach differentiation than how it already tends to be taught. I can think of many reasons why it might be necessary to teach the 'algorithm' before providing sufficient time for the 'getting it'. But as long as we do the difficulty will be there, I suspect. best

2. Keep in mind that I'm not talking about taking limits here — just forming and simplifying the difference quotient. When you throw on limits that adds several new layers of complication. I think there's some conceptual reasoning that has to take place, like the meaning of f(whatever), but it seems to me that there's far more of mechanical calculation than there is deeper conceptual understanding required for this sort of thing.

3. Right. My point is just that a memorized mechanical calculation is far easier to forget/flub if you haven't internalized the meaning behind what you're doing. And (rightly or wrongly) most students have to do the former before the latter. It's actually far easier to remember mechanical calculations if you know their meaning because the meaning becomes a shorthand for the calculations, encodes them. I'm reminded of a friend who had to study for a technical exam and I marveled to observe him writing down and memorizing, separately, on his flash cards, as if they were different things, something along the lines of these two formulas:

1. Y = A * exp(-Dx)
2. I = I0 * exp(-rt)

Of course, one formula was about radiation shielding (how much radiation gets through a wall) and another was about radiation decay (how much activity remains after a given time), or something like that. I couldn't believe he was devoting brainspace to memorizing these things since they each just boiled down to "exponential", but memorize he did, right down to the letters for the parameters that were to be used in the formulas. This is a lot of work. It is also (relatedly) easy to screw up, I imagine, because if you didn't memorize "exponential" how easy it might be to write Y=A * exp(-D)x. That's why doing things this way makes me shudder. Yet when kids are introduced to differentiation, this is sort of what they have to do: (f(x+h) – f(x)) / h That's a bunch of symbols, f's, x's, h's to get all in the right place. How much easier and more robust it would be just to memorize "take a slope" and then work out the required divided-difference from there. Anyway, sorry to ramble but you raised an interesting topic…best,

4. What Sonic Charmer said. They never loved fractions, even if they could handle them, and could deal with f(x) as long as that meant the same thing as y… And now they are together and strange. It's the symbols, mostly. Jonathan

5. Mark Last year, my 11th-grade daughter who was taking precalculus (and who is quite mathematically inclined) said to me (a mathematics professor) "Dad — you won't believe this! We've been doing difference quotients for TWO WEEKS and there are still some kids who can't simplify them!" I found her shock amusing and she found my lack of shock surprising. Of course, my experience tells me that with some students, you could spend months and still not get satisfactory results….

6. I think the problem is not understanding function notation. Too often students do not see this early in their algebraic careers and never fully understand it.
To many, f(x) + a is the same as f(x + a). There are things we can do in Algebra I to help prepare the way… I'm trying!

7. Just another liberal professor In my experience, it's usually because too many calculus students know almost no algebra.

8. Pingback: Average velocity « Casting Out Nines
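For what it is worth, the four-step procedure described in the post above is short enough to check symbolically. Here is a minimal sketch with sympy, using an arbitrarily chosen example function f(x) = x^2 (the function and variable names are my own):

```python
# Simplifying the difference quotient (f(x) - f(a))/(x - a) symbolically.
# The example function f(x) = x**2 is an arbitrary choice for illustration.
import sympy as sp

x, a, h = sp.symbols('x a h')
f = lambda t: t**2

dq_xa = sp.cancel((f(x) - f(a)) / (x - a))   # steps 1-4: compute, subtract, factor, cancel
dq_ah = sp.cancel((f(a + h) - f(a)) / h)     # the same procedure in the (a, h) form

print(dq_xa)   # a + x
print(dq_ah)   # 2*a + h
```

If nothing cancels at the end, that is exactly the step (5) warning from the post: something upstream went wrong.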
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 5, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9521595239639282, "perplexity_flag": "middle"}
http://mathhelpforum.com/differential-equations/136514-separable-differential-equations-print.html
# Separable Differential Equations

• March 30th 2010, 08:40 AM Spudwad

Separable Differential Equations

So I have been getting caught up on two separable differential equations the first being: $\frac {3du} {dt} = u^2$ with initial conditions: $u(0) = 4$

I divided by $u^2$ and multiplied by $dt$ giving: $\frac {3du} {u^2} = dt$ and after integrating I got: $\frac {3} {u} +C = t + C$ and inversing it, so that the u was in the numerator and multiplying by 3 I ended up with: $u = \frac {3} {t+c}$ But I am not sure if this is how the problem is really supposed to be done.

The other equation I am having trouble with is finding the general term for: $\frac {dR} {dx} = a(R^2+25)$ where a is some nonzero constant. Would this just be $5tan(ax)+C$? Thanks ahead of time for the help, I really appreciate it.

• March 30th 2010, 09:27 AM mathemagister

Quote: Originally Posted by Spudwad So I have been getting caught up on two separable differential equations the first being: $\frac {3du} {dt} = u^2$ with initial conditions: $u(0) = 4$ I divided by $u^2$ and multiplied by $dt$ giving: $\frac {3du} {u^2} = dt$ and after integrating I got: $\frac {3} {u} +C = t + C$ and inversing it, so that the u was in the numerator and multiplying by 3 I ended up with: $u = \frac {3} {t+c}$ But I am not sure if this is how the problem is really supposed to be done. The other equation I am having trouble with is finding the general term for: $\frac {dR} {dx} = a(R^2+25)$ where a is some nonzero constant. Would this just be $5tan(ax)+C$? Thanks ahead of time for the help, I really appreciate it.

$3\frac{du}{dt} = u^2$

$3u^{-2} du = dt$

Integrate:

$-3u^{-1} + K = t + K$

$\frac{-3}{u} = t + C$

$u = -\frac{3}{t+C}$

$u(0) = 4 \implies -\frac{3}{C}=4 \implies C = -\frac{3}{4}$

$u(t) = -\frac{3}{t-\frac{3}{4}}$
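The second equation in the original post is never addressed in the thread. For completeness, a sketch of the same separation-of-variables method (note that the constant of integration ends up inside the tangent, and there is an extra factor of 5 in its argument, so the guess $5\tan(ax)+C$ is not quite right):

$\frac{dR}{R^2+25} = a\,dx \implies \frac{1}{5}\arctan\frac{R}{5} = ax + C \implies R(x) = 5\tan(5ax + C_1)$, where $C_1 = 5C$ absorbs the constant. One can check it by differentiating: $\frac{d}{dx}\left[5\tan(5ax+C_1)\right] = 25a\sec^2(5ax+C_1) = a\left(R^2 + 25\right)$.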
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 25, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.960968554019928, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/205505/why-do-we-need-taylor-polynomials/205610
# Why do we need Taylor polynomials?

This question doubles as "Is my understanding of what a Taylor polynomial is for, correct?" but

In order to write out a Taylor polynomial for a function, which we will use to approximate said function at a given value, we must first have the function itself. BUT, if we already have the function, why not just use it and get an exact value instead of using its Taylor polynomial and only getting an approximation? Also, how does this benefit computer programmers?

-

## 2 Answers

Oftentimes we know the function at very specific values and we want to approximate around that value. An easy example is to consider the exponential around zero. We know that the exponential function at zero is one. We also know that every derivative at zero is one, but it'd be much more difficult to calculate say $e^{0.1}$. However, using just an order three Taylor polynomial $$T_3(x) = 1 + x + \frac{x^2}{2} + \frac{x^3}{6}$$ I can calculate $e^{0.1}$ to be approximately $1.10517$ which is accurate to $6$ significant digits. Higher order approximations give greater accuracy. The Taylor series gives us methods to approximate around regions of interest.

And how exactly does this benefit computer science? The ability to estimate well is already essential to programming. If only a certain amount of precision is needed, it is often faster to take an approximation than to make some sort of full calculation, and Taylor polynomials give a standard way of making such approximations. Taylor polynomials also give an order of magnitude estimate which is quite useful for asymptotics.

-

How does your calculator/computer calculate $\pi$, $e$ and $\sin$ and $\cos$ values? I would bet: mostly using Taylor polynomial approximation. For example, integrating $\arctan'(x)=\displaystyle\frac1{1+x^2} = 1-x^2+x^4-x^6\pm\dots$, we get that $\arctan(x) = x-\displaystyle\frac{x^3}3+\frac{x^5}5-\frac{x^7}7\pm\dots$. One still needs to check that this step is legitimate, but if we put $x=1$ in it, we get $$\frac\pi4=1-\frac 13+\frac 15-\frac 17\pm\dots$$

-
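To make the computational point concrete, here is a small sketch (not from either answer) that evaluates the degree-3 Taylor polynomial of $e^x$ at $x=0.1$ and sums the arctan/Leibniz series for $\pi$. Both come down to nothing but additions and multiplications, which is exactly what a machine is good at:

```python
# Taylor-polynomial approximation of e^x at 0, and the Leibniz series for pi.
from math import e, pi, factorial

def exp_taylor(x, n):
    """Degree-n Taylor polynomial of e^x about 0."""
    return sum(x**k / factorial(k) for k in range(n + 1))

print(exp_taylor(0.1, 3), e**0.1)   # 1.1051666...  vs  1.1051709...

# pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...  (converges, though very slowly)
terms = 100_000
leibniz_pi = 4 * sum((-1)**k / (2 * k + 1) for k in range(terms))
print(leibniz_pi, pi)               # 3.14158...  vs  3.14159...
```

Real numerical libraries typically add refinements such as argument reduction and carefully chosen polynomial coefficients, but the underlying idea of replacing a function by a polynomial it agrees with near a known point is the same.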
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9502652287483215, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/226987/what-is-name-of-conjecture
# What is name of conjecture? [duplicate]

Possible Duplicate: Every even integer can be expressed as the difference of two primes?

There is one conjecture whose name I do not know. It is: Every even number can always be written as the difference between two prime numbers. Could you please help me to know what it is called? Regards,

- 1 Do you mean difference of two prime numbers or sum of two prime numbers. – Graphth Nov 1 '12 at 19:44
1 – Dejan Govc Nov 1 '12 at 20:13

## marked as duplicate by Graphth, Chris Eagle, Arkamis, ncmathsadist, tomasz Nov 1 '12 at 22:08

This question has been asked before and already has an answer.

## 1 Answer

Polignac's conjecture is what you want. It goes further, stating that there are infinitely many prime gaps of size $n$ for every even integer $n$.

-
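As a quick empirical sanity check of the statement in the question (evidence only; the conjecture is open), one can verify that every small even number does occur as a difference of two primes. The search bound below is an arbitrary choice:

```python
# Check that every even number up to 200 is a difference of two primes found
# below an arbitrary search bound. Empirical evidence only, not a proof.
def primes_below(n):
    sieve = [True] * n
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

primes = primes_below(10_000)
prime_set = set(primes)

missing = [d for d in range(2, 201, 2)
           if not any(p + d in prime_set for p in primes)]
print(missing)   # [] : every even d up to 200 has a witness pair (p, p + d)
```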
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9633240699768066, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/23607?sort=oldest
## Are the arithmetic genera of Cohen-Macaulay curves in a fixed homology class bounded?

Let X be a smooth projective variety over the complex numbers. Recall that a Cohen-Macaulay curve is a one-dimensional closed subscheme without embedded or isolated points (fat components are allowed). I want to show that if you fix a curve class β in H2(X), then there is a bounded interval which contains the genus of every CM curve whose homology class is β. Do you know of a reference for this result?

Motivation: I am trying to prove a certain class of sheaves forms a bounded family (namely, the collection of sheaves underlying stable pairs). The above result will allow me to reduce to the case where the support is a fixed CM curve, and from there, I know how to finish the proof. I am aware that Le Potier (who built the moduli space of stable pairs, among other things, in "System Coherents et Structures de Niveau" --- pardon my lack of accents) has proven a much stronger result. However, for my purposes, it would be very helpful to have a proof (hopefully much less technically demanding than Le Potier's) that is no more general than what I need.

EDIT: in hindsight, and in light of the negative answer below, I realize that Le Potier does not claim to prove quite what I claimed he claimed.

-

## 3 Answers

I don't think this is true. Take $X = \mathbb P^1 \times \mathbb P^2$. Let $C$ be $\mathbb P^1 \times 0$, and let $C_1$ be the first infinitesimal neighborhood of $C$. The curve $C_1$ is the relative spectrum of $\mathcal O_{\mathbb P^1} \oplus O_{\mathbb P^1}^{\oplus 2}$, where $\mathcal O_{\mathbb P^1}^{\oplus 2}$ is a square-zero ideal. Any sheaf $\mathcal O_{\mathbb P^1}(d)$, with $d \ge 0$, is a quotient of $\mathcal O_{\mathbb P^1}^{\oplus 2}$. If $C(d)$ denotes the relative spectrum of $\mathcal O_{\mathbb P^1} \oplus O_{\mathbb P^1}(d)$, then $C(d)$ is contained in $C_1$ for all $d \ge 0$; hence it is embedded in $X$ with fundamental class $2[C]$. But the arithmetic genus of $C(d)$ is $-d-1$.

[Added later] For a surface, a Cohen-Macaulay curve is a divisor, and the adjunction formula shows that the arithmetic genus is determined by the cohomology class, so the answer is positive. I believe that the answer is negative for all $X$ of dimension at least three.

-

EDIT: As Angelo mentions, the argument below has a problem in the case of non-reduced curves. I am not sure that an upper bound for the arithmetic genus is impossible to show, but certainly the lower bound for Cohen-Macaulay curves is false, as Angelo's examples show. The argument below shows that there is an upper bound (and in fact there is also a lower bound) for the arithmetic genus of a reduced subscheme of pure dimension one in projective space, whether Cohen-Macaulay or not. I tried to play a little with the Cohen-Macaulay condition to prove that there is an upper bound, but with little success.

Choose an embedding of X in projective space. Since your curves are all in the same homology class, they all have the same degree: this is simply the intersection number of the ample class with the homology class of the curves.
Generic projection to a plane tells you (since the curves are reduced) that the curves you are interested in are partial normalizations of plane curves with bounded degree. Since the arithmetic genus decreases under (partial) normalizations, and since the arithmetic genus of a plane curve of bounded degree is bounded above, you conclude that the arithmetic genera of your curves are bounded above.

- 1 Does this work for non-reduced curves? – Angelo May 5 2010 at 19:10

I strongly believe that there should be an upper bound for Cohen-Macaulay (notice that reduced curves are automatically Cohen-Macaulay). Of course for proving boundedness this is not good enough, but if the OP cares I might try to write up a proof. – Angelo May 6 2010 at 15:07

Why isn't this enough to prove boundedness? The class beta (I think) is enough to fix the dimension of H^0, and an upper bound on genus fixes H^1, so the curves live in a finite number of Hilbert schemes. – David Steinberg May 7 2010 at 17:50

The component of the Hilbert scheme is determined by degree and genus; infinitely many genera means infinitely many components. – Angelo May 7 2010 at 18:37

Hi David, there is a two-line proof for the bound from above, if I am allowed to use the boundedness of the Hilbert scheme: If there is a curve with degree $\beta$ and $\chi = m-d$, then the Hilbert scheme of curves of degree $\beta$ and Euler characteristic $\chi = m$ has dimension at least $d \cdot \dim X$, as I can always add $d$ floating points to my given curve. But the Hilbert scheme for $\beta, \chi = m$ has finite dimension. So $\chi$ is bounded from below (and the genus from above). I also think this bound for $\chi$ from below should be enough to prove the boundedness of sheaves appearing in stable pairs that you need.

- Great! Thank you, Arend. – David Steinberg Aug 4 2010 at 17:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 30, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9422215223312378, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/18617/do-interaction-free-measurements-require-a-physical-collapse-or-splitting-in-ord?answertab=active
# Do interaction-free measurements require a physical collapse or splitting in order to be truly interaction free?

Interaction-free quantum experiments like Renninger's experiment or the Elitzur-Vaidman bomb tester are often taken to be examples of interaction-free measurements of a system. Unfortunately, such assumptions presuppose the ability to post-select in the future just to make sense. Interpretationally speaking, it is hard to see how post-selection can possibly be made without some form of physical collapse of the wave function or a preferred physical splitting of the wave function into branches. Without either a physical collapse or splitting, is it possible to gain information about properties of a system whose preparation we are totally ignorant of, without an actual interaction with it?

Basically, does the idea of interaction-free measurements only make sense within some interpretations of quantum mechanics and not others? Is there a philosophical reading of the two-state formalism which does not presuppose a collapse or a splitting? In the two-state formalism, does the necessity of normalizing the overall probability factor to 1 entail the ontological reality of the other outcomes because we have to sum up over their probabilities to get the rescaling factor? The other outcomes where the bomb did in fact go off?

-

## 1 Answer

(diagram omitted)

The Elitzur-Vaidman bomb tester isn't really an interaction-free measurement. Analyze it using consistent histories. Suppose initially, for the three possible bomb states, we start off with the mixture diag(p, 1-p, 0) for dud, workable but unexploded, and exploded respectively. Let $P_c$ correspond to the projector of a photon detected at C. Let $P_l$ correspond to the projector that the photon traveled along the lower path, and $1-P_l$ that it travelled along the upper path. Consider the chain operators $C_1 \equiv P_c P_l$ and $C_2 \equiv P_c (1-P_l)$. Note that $\mathrm{Tr}[C_1 \rho C_2^\dagger] = p/4 \neq 0$. The consistency conditions are not satisfied. In a realm where we know the photon was detected at C, we can't say whether or not it took the lower path. It's not really interaction free after all. That presupposes the photon didn't take the lower path.

-
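For readers who want to check the quoted number, here is a minimal numerical sketch of that consistency-condition calculation. The toy interferometer model, the beam-splitter phase convention, and the choice of which output port counts as detector C are my own assumptions rather than anything stated in the answer; the point is only that the off-diagonal term of the decoherence functional has magnitude $p/4$ (its sign depends on the convention), so the two path histories fail to decohere whenever $p \neq 0$.

```python
# Consistency check for the Elitzur-Vaidman setup in a toy Mach-Zehnder model.
# Conventions here (beam-splitter phases, which port is "detector C") are assumptions.
import numpy as np

p = 0.3                                        # probability that the bomb is a dud

# Photon basis: 0 = upper arm, 1 = lower arm, 2 = absorbed by the bomb
BS = np.array([[1, 1j, 0],
               [1j, 1, 0],
               [0, 0, np.sqrt(2)]], dtype=complex) / np.sqrt(2)   # 50/50 beam splitter

U_dud = np.eye(3, dtype=complex)               # dud: photon passes the lower arm freely
U_live = np.array([[1, 0, 0],                  # live: lower-arm amplitude is moved to
                   [0, 0, 1],                  # the "absorbed" state (a swap, so unitary)
                   [0, 1, 0]], dtype=complex)

P_lower = np.diag([0, 1, 0]).astype(complex)   # projector: photon in the lower arm
P_C = np.diag([0, 1, 0]).astype(complex)       # projector: photon at "detector C",
                                               # taken here to be output port 1 after BS2
psi0 = np.array([1, 0, 0], dtype=complex)      # photon enters the upper input port

def chain(P_path, U_bomb):
    """Chain operator: took the given arm, then was detected at C."""
    return P_C @ BS @ U_bomb @ P_path @ BS

# Off-diagonal term of the decoherence functional, averaged over the bomb mixture
D = 0
for w, U_bomb in [(p, U_dud), (1 - p, U_live)]:
    c1 = chain(P_lower, U_bomb) @ psi0               # history: lower arm, detected at C
    c2 = chain(np.eye(3) - P_lower, U_bomb) @ psi0   # history: upper arm, detected at C
    D += w * np.vdot(c2, c1)                         # Tr[C1 rho C2^dagger] for this branch

print(D)   # 0.075 = p/4 for p = 0.3, so the consistency condition fails
```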
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9287291765213013, "perplexity_flag": "middle"}
http://mathhelpforum.com/algebra/69265-simplifying-expressions-into-different-forms.html
# Thread: 1. ## Simplifying Expressions Into Different Forms :(

Okay so I have this problem that says: If we simplify the expression $x^4y^4-x^6/(xy)^2$ to the form $x^Py^Q-x^Ry^S$ what are the terms P, Q, R, and S? I'm struggling on how to get started to figure it out. My first idea was to make the expression $x^4y^4-x^6/(xy)(xy)$

2. Originally Posted by jemadd2: Okay so I have this problem that says: If we simplify the expression $x^4y^4-x^6/(xy)^2$ to the form $x^Py^Q-x^Ry^S$ what are the terms P, Q, R, and S? I'm struggling on how to get started to figure it out. My first idea was to make the expression $x^4y^4-x^6/(xy)(xy)$

Re-write your second term as $x^6 x^{-2}y^{-2}$ and use the law of exponents.

3. Originally Posted by danny arrigo: Re-write your second term as $x^6 x^{-2}y^{-2}$ and use the law of exponents.

So $x^4y^4-x^6/(xy)^2$ turns into $x^6x^{-2}y^{-2}$? How does that work? Sorry, I'm confused.

4. Originally Posted by jemadd2: So $x^4y^4-x^6/(xy)^2$ turns into $x^6x^{-2}y^{-2}$? How does that work? Sorry, I'm confused.

Yes, in general $\frac{1}{x^n} = x^{-n}$ and $(xy)^n = x^n y^n$
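Reading the original expression as $\frac{x^4y^4-x^6}{(xy)^2}$ and carrying danny arrigo's hint through, in case it helps: $\frac{x^4y^4-x^6}{x^2y^2} = x^4y^4x^{-2}y^{-2} - x^6x^{-2}y^{-2} = x^2y^2 - x^4y^{-2}$, so $P=2$, $Q=2$, $R=4$, $S=-2$.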
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9413905143737793, "perplexity_flag": "middle"}
http://psychology.wikia.com/wiki/Rank_correlation
Rank correlation

In statistics, rank correlation is the study of relationships between different rankings on the same set of items. It deals with the measurement of correspondence between two rankings, and the calculation of the significance of the correspondence.

Correlation coefficients

Suppose we rank a group of eight people by height and by weight:

| Person | A | B | C | D | E | F | G | H |
|--------|---|---|---|---|---|---|---|---|
| Rank by Height | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
| Rank by Weight | 3 | 4 | 1 | 2 | 5 | 7 | 8 | 6 |

We can see that there is some correlation between the two rankings but that the correlation is far from perfect, and we would like some way of objectively measuring the degree of correspondence. In the 1940s Maurice Kendall developed a coefficient, τ, for this purpose that has the following properties:

• If the agreement between the two rankings is perfect, i.e. the two rankings are the same, the value of the coefficient is equal to 1.
• If the disagreement between the two rankings is perfect, i.e. one ranking is the reverse of the other, the coefficient is equal to -1.
• For all other arrangements the value lies between -1 and 1, and increasing values imply increasing agreement between the rankings.
• If the two rankings are independent, the coefficient is 0 on average (a value of 0 on its own does not guarantee independence).

It is defined by

$\tau = \frac{2P}{\frac{1}{2}n(n-1)} - 1$

where n is the number of items, and P is a quantity derived from the rankings as follows: In the Weight ranking above, the first entry, 3, has five higher ranks to the right of it; the contribution to P of this entry is 5. Moving to the second entry, 4, we see that there are four higher ranks to the right of it and the contribution to P is 4. Continuing this way, we find that P = 5 + 4 + 5 + 4 + 3 + 1 + 0 + 0 = 22. Thus $\tau= \frac{44}{28}-1 = 0.57$. This result indicates that there is strong agreement between the rankings, as expected.

References

• Kendall, M. (1948). Rank Correlation Methods. Charles Griffin & Company Limited.
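A short computational check of the worked example above (the variable names are my own choice):

```python
# Kendall's tau for the height/weight example, computed exactly as described:
# for each entry of the weight ranking, count the higher ranks to its right.
weight_ranks = [3, 4, 1, 2, 5, 7, 8, 6]   # listed in height order (height ranks are 1..8)
n = len(weight_ranks)

P = sum(sum(1 for later in weight_ranks[i + 1:] if later > r)
        for i, r in enumerate(weight_ranks))

tau = 2 * P / (n * (n - 1) / 2) - 1
print(P, round(tau, 2))   # 22 0.57
```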
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.887722373008728, "perplexity_flag": "middle"}