Columns: url (string, length 17–172) · text (string, length 44–1.14M) · metadata (string, length 820–832)
http://mathoverflow.net/questions/51727?sort=oldest
Presheaves are locally sheaves? On the nLab it says that a presheaf is locally isomorphic to a sheaf. What do they mean by locally isomorphic? Their definition of locally isomorphic is given in terms of Grothendieck topologies, which I think is overkill. When I first read the nLab page, I thought that it might mean that every presheaf, when restricted to a small enough open set, is a sheaf, but I have doubts now because I can't find a proof in the literature and I can't prove it myself. - 2 Given that they defined a sheaf in terms of Grothendieck topologies, it's not surprising that their other definitions involve them. If you're not dealing with such things I guess you could drop the Grothendieck bit. And the statement that presheaves are locally isomorphic to sheaves is almost tautological, given their definition of local isomorphism = isomorphism after passing to sheaves. – George Lowther Jan 11 2011 at 3:35 But, having said that, I don't think this question is really suitable here. Maybe math.stackexchange would be a better place? – George Lowther Jan 11 2011 at 3:37 Given a presheaf $F$, the sections $s\in Sh(F)(U)$ of the associated sheaf are locally sections of the presheaf, i.e. there exists a covering $U=\cup_i U_i$ such that each restriction $s_{|U_i}$ comes from a section of the presheaf (via the reflection map $r_{U_i}: F(U_i)\to Sh(F)(U_i)$). – Buschi Sergio Jan 11 2011 at 14:13 4 The statement in the nLab entry about local isomorphism is correct, with the definition of local isomorphism as given at the page linked to: the canonical morphism from a presheaf to its sheafification is a morphism that becomes an isomorphism under sheafification. Such morphisms are traditionally called local isomorphisms. And yes, this is a special case of the general theory of left exact reflective localizations. – Urs Schreiber Jan 11 2011 at 18:16 1 -1: Daniel, did you ever think to look up the page local isomorphism on the nLab: ncatlab.org/nlab/show/local+isomorphism ? Also, the definition in terms of Grothendieck topologies is a necessary complication of the theory WRT local isomorphisms (or else we're left with only the trivial case). See my comment on Clark Barwick's answer. – Harry Gindi Jan 12 2011 at 13:24 3 Answers Dear Daniel, the reason you couldn't find a proof of your statement nor locate one in the literature is that it is false; so you were quite right to "have doubts now"! Here are two (essentially equivalent) statements that hopefully clarify the situation. I) Given a presheaf $\mathcal F$ on a topological space $X$, it is not true that there exists a non-empty open subset $U\subset X$ such that the restriction $\mathcal F |U$ is a sheaf. For example take $X=\mathbb R$ and define the presheaf $\mathcal F$ by $\mathcal F(V)= \mathbb Z$ for all open $V\subset \mathbb R$ (the constant presheaf on $\mathbb R$ with values in $\mathbb Z$). Since every open $U$ contains disjoint open subsets, the restriction $\mathcal F |U$ is never a sheaf. II) Given a presheaf $\mathcal F$ on a topological space $X$ and its sheafification $\mathcal F \to \mathcal F'$, it is not true that there exists a non-empty open subset $U\subset X$ such that the restricted morphism $\mathcal F |U \to \mathcal F'|U$ is an isomorphism of presheaves.
In the preceding example the sheafification $\mathcal F'$ is the sheaf of locally constant $\mathbb Z$-valued functions, and again for every $U\subset \mathbb R$ you will find disjoint open intervals $I_1,I_2 \subset U$ for which $\mathcal F(I_1\sqcup I_2)= \mathbb Z \neq \mathcal F'(I_1\sqcup I_2)= \mathbb Z^2$. So the restricted morphism $\mathcal F |U \to \mathcal F'|U$ is not an isomorphism of presheaves. Conclusion: I find it ambiguous, as proved by this very question, to call a morphism of sheaves a "local isomorphism" if it is an isomorphism on the stalks. I don't know how widespread this usage is, but in my opinion people using it should warn their readers if they decide to adopt it. On the other hand, I must concede that everybody (myself included) calls $\mathcal F'$ a constant sheaf. This terminology also seems a little misleading, but it is firmly entrenched now and is here to stay. An answer to Roy's question: He asks (in his answer below) for an example of a presheaf all of whose restrictions to open subsets are non-separated. [Recall that a presheaf $\mathcal F$ is said to be separated if, given a covering $U=\cup U_i$ of an open set $U$ by open subsets $U_i$, you can deduce for two sections $f,g\in \mathcal F (U)$ that $f=g$ as soon as you know that $f| U_i=g| U_i$ for all $i$. This is equivalent to saying that, if $\mathcal F'$ denotes the sheafification of $\mathcal F$, all morphisms $\mathcal F (U) \to \mathcal F'(U)$ are injective.] Here is the example. On the topological space $\mathbb R$ consider the sheaf of continuous functions $\mathcal C$, its subpresheaf $\mathcal C_b$ of continuous bounded functions (Caution: this is not a sheaf!) and the quotient presheaf $\mathcal F= \mathcal C / \mathcal C_b$, i.e. for $V$ open in $\mathbb R$, $\mathcal F(V)=\mathcal C (V)/ \mathcal C_b (V)$. It is then clear that for all non-empty open $V\subset \mathbb R$ we have $\mathcal F(V) \neq 0$, but for the sheafification $\mathcal F'$ we have $\mathcal F'(V)= 0$ (because every continuous function is locally bounded!). And this is the example requested by Roy: for every non-empty $U$ the restriction $\mathcal F |U$ is a non-separated presheaf on $U$: $\mathcal F |U \neq 0$ certainly does not inject into $\mathcal F'|U =0$ - 1 An isomorphism on the level of stalks should probably be called an "infinitesimal isomorphism." – Kevin Ventullo Jan 11 2011 at 17:38 1 Harry, all manifolds of the same dimension are locally diffeomorphic, independently of any morphisms between them. On the other hand, a submersion of a manifold onto another one of lower dimension has local sections about each point, but the manifolds are certainly not locally diffeomorphic. Finally, I have never heard the notion of objects being globally morphic. – Georges Elencwajg Jan 12 2011 at 14:18 1 Dear Georges, two manifolds are locally diffeomorphic iff there exists a local diffeomorphism from one to the other (whence globally morphic). – Harry Gindi Jan 12 2011 at 15:21 2 Dear Harry, since our discussion hinges on terminology, I have nothing to add: we just seem to have different definitions. Thank you for your interest in this post and for sharing your point of view. – Georges Elencwajg Jan 12 2011 at 17:29 1 @Harry: it seems that your view on what "local isomorphism" means keeps coming up, and it seems that no one else thinks that it should mean what you are suggesting. Requiring the existence of a morphism for saying that two objects are "locally isomorphic" is overkill.
In particular, in most cases, this will imply that those objects are actually isomorphic. Think of sheaves; if there is a morphism $\mathcal F\to \mathcal G$ that's an isomorphism on the stalks, then it is an isomorphism. – Sándor Kovács Jan 14 2011 at 0:32 Here's one way to answer your question. Consider the category $\mathbf{PSh}(X)$ of presheaves (of sets) on a topological space $X$. A map $F\to G$ of $\mathbf{PSh}(X)$ is said to be a local isomorphism if for every point $x\in X$, the induced map $F_x\to G_x$ on stalks is a bijection. Denote by $W$ the class of local isomorphisms. Now the category $\mathbf{Sh}(X)$ of sheaves on $X$ is equivalent to the localization $W^{-1}\mathbf{PSh}(X)$. In particular, for any presheaf $F$, there is a local isomorphism $F\to F'$, where $F'$ is a sheaf. - 4 @Daniel: Clark's answer extends to the general case of an arbitrary base category with Grothendieck topology $\tau$ by defining $W_\tau$ to be $Sh_\tau^{-1}(Iso(Psh(C)))$. The rather more interesting part of this question, however, is that there exists a direct characterization of these "systems of local isomorphisms". It is then a theorem that Grothendieck topologies on $C$ are in canonical bijection with systems of local isomorphisms on $Psh(C)$. – Harry Gindi Jan 11 2011 at 11:38 Arrgh, I wish to delete this, but do not know how. So I will make it into a question. Georges' nice answer is a presheaf that violates the existence axiom s2 on every nbhd of a point. Is there an example that violates the uniqueness axiom s1 on every nbhd of some point? - 1 Dear Roy, I have answered your interesting question in an addendum to my original answer. Please do not delete your question above, else readers will be disoriented by my orphaned answer! – Georges Elencwajg Jan 13 2011 at 21:19 Thank you Georges! If I understand correctly, in your example every element becomes zero near the point, but there is no set on which all do at once. It seems so clear after seeing it. I.e. this is all that is needed to make all the stalks zero, which makes the sheaf (but not the presheaf) zero. – roy smith Jan 14 2011 at 22:13
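To spell out the key computation in Georges' counterexample above, here is a short worked check (my own sketch; it uses only the standard gluing axiom for sheaves) of why the constant presheaf cannot be a sheaf on any open set $U$ containing two disjoint non-empty open subsets:

```latex
% Sketch: the constant presheaf F(V) = Z fails the gluing axiom.
% Take disjoint non-empty open sets I_1, I_2 inside U.  Since the overlap
% I_1 \cap I_2 is empty, every pair of sections over I_1 and I_2 is
% automatically compatible, so for any sheaf G the gluing axiom forces
%   G(I_1 \sqcup I_2) \cong G(I_1) \times G(I_2).
% For the constant presheaf F this would require Z \cong Z^2, which fails:
% the pair of sections (0, 1) has no gluing.  Hence F is not a sheaf on U,
% and the comparison with the sheafification F' is not an isomorphism there.
\[
  \mathcal F(I_1 \sqcup I_2) \;=\; \mathbb Z
  \;\neq\;
  \mathbb Z^2 \;=\; \mathcal F(I_1) \times \mathcal F(I_2)
  \;\cong\; \mathcal F'(I_1 \sqcup I_2).
\]
```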
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 73, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9407923817634583, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/255372/what-is-a-complex-name/255582
# What is a Complex Name? On Page 38, Elementary Set Theory with a Universal Set, Randall Holmes (2012), which can be found here. We give a semi-formal definition of complex names (this is a variation on Bertrand Russell's Theory of Descriptions): Definition. A sentence $\psi [(\text{the }y\text{ such that }\phi)/x]$ is defined as $$\begin{align*}&\big((\text{there is exactly one }y\text{ such that }\phi)\text{ implies }(\text{for all }y, \phi\text{ implies }\psi[y/x])\big)\\&\text{ and }\\&\Big(\big(\text{not}(\text{there is exactly one }y\text{ such that }\phi)\big)\text{ implies }\\&\qquad\big(\text{for all }x,(x\text{ is the empty set})\text{ implies }\psi\big)\Big)\;.\end{align*}$$ Renaming of bound variables may be needed. The definition of the form "$\phi[y/x]$" is: Definition. When $\phi$ is a sentence and $y$ is a variable, we define $\phi[y/x]$ as the result of substituting $y$ for $x$ throughout $\phi$, but only in case there are no bound occurrences of $x$ or $y$ in $\phi$. (We note for later, when we allow the construction of complex names $a$ which might contain bound variables, that $\phi[a/x]$ is only defined if no bound variable of $a$ occurs in $\phi$ (free or bound) and vice versa). I can't understand why $\psi [(\text{the }y\text{ such that }\phi)/x]$ is defined as it is. Especially, "((not(there is exactly one $y$ such that $\phi$)) implies (for all $x$, ($x$ is the empty set) implies $\psi$))" seems to come out of nowhere. Feel free to retag this question; I'm not sure whether other disciplines, like elementary set theory or linguistics, are more closely related to it. - – RParadox Dec 10 '12 at 11:33 @RParadox: Thank you for your link. The problem is that it's equally elusive. – Metta World Peace Dec 10 '12 at 11:46 ## 2 Answers Good question. A guess coming up. General issue: How should we regard expressions of the form "the $\varphi$" or better "the $y$ such that $\varphi(y)$"? Option one: as mere "syntactic sugar" that can be parsed away. This is Russell's line. "The $y$ such that $\varphi(y)$" isn't really a complex name, but vanishes on analysis, because (i) $\psi$(the $y$ such that $\varphi(y)$) is equivalent to (ii) there is at least one thing which is $\varphi$ and at most one thing which is $\varphi$ and whatever is $\varphi$ is $\psi$. Option two: descriptions are complex names. "The $y$ such that $\varphi(y)$" is a complex name of the one and only thing that is $\varphi$ if there is such a thing, and takes a default value, say the empty set, if there isn't. This was Frege's line. Both treatments are logically workable. Or we can mix them. Which seems to be what Holmes is doing here. We do parsing away (à la Russell), but treat the cases where there is and where there isn't a unique $\varphi$ differently, in effect supplying a default value when there isn't (à la Frege). So, roughly speaking, $\psi$(the $y$ such that $\varphi(y)$) says that whatever is $\varphi$ is $\psi$ if there is a unique $\varphi$, but becomes [equivalent to] $\psi(\emptyset)$ when there is no unique $\varphi$. But I am making this up as I go along, you understand: caveat lector! - Thank you for your excellent answer. But as a layman, I've no idea how these two competing treatments are "logically workable", which makes me unable to fully understand your argument. Could you please recommend something with a textbook treatment of Russell's and Frege's approaches at an introductory level?
– Metta World Peace Dec 10 '12 at 22:08 – RParadox Dec 11 '12 at 9:59 Usually plato.stanford.edu is a good resource, see plato.stanford.edu/entries/descriptions for example. – RParadox Dec 11 '12 at 10:02 @RParadox: Thank you for your suggestions. – Metta World Peace Dec 11 '12 at 17:13 The way Holmes presents this matter is not clear at all. How are the theory of descriptions and logic related? A good example is the word "and". We use "and" in the English language and there is the symbol of symbolic logic "$\wedge$". Consider the mapping $f$ from "and" to "$\wedge$" and $g$ from "$\wedge$" to "and". $f:$ "and" $\rightarrow "\wedge"$ $g: "\wedge" \rightarrow$ "and" Now, Frege and Russell introduced symbolic logic so that we can clearly distinguish between "and" and "$\wedge$", because they are not at all the same. Consider the expression "two and two is four". The expression is best translated as 2+2=4. Translation really means taking the first expression and putting it into a proper system of logic. For instance, here the word "and" was translated into the obvious symbol for addition "+", and not "$\wedge$", although a naive translator would not have known what "and" should stand for. This matter is not at all trivial, and is not linguistics but logic proper (philosophy if you will). We want to know how the symbols "+" and "$\wedge$" operate. This study is what we call logic in the first place. For instance, a few days ago people downvoted my elementary proof of logic, probably because they thought real mathematicians use lots of strange symbols. However, when we are concerned with logic, we can't be so presumptuous. We can't simply throw around symbols and hope that it will all make sense in the end. Where this kind of analysis comes from is thinking about propositions. What does it mean to talk about anything? Well, if we talk about a thing, we are referring to its existence or non-existence. Which is why a mathematical expression will start with the phrase $\exists x ...$ or $\nexists x ...$ In his theory of denoting, Russell explains why every statement refers to all other things: 1. Definition of all: $C(E) \leftrightarrow \forall x C(x)$ 2. Definition of nothing: $C(N) \leftrightarrow \forall x \neg C(x)$ 3. Definition of exists: $C(S)\leftrightarrow \neg \forall x \neg C(x)$ Here E = "Everything", N = "Nothing", S = "Something". Taken from Wikipedia. See also http://en.wikipedia.org/wiki/Theory_of_descriptions What this achieves is that it shows a certain map, as explained above, but not for "and", rather for the expression "exists". So we could say we have explained the map $h:$ "exists x" $\rightarrow "\exists x"$, although there are some remaining issues. What one should realize is that all of mathematics is essentially built on this theory, although very few mathematicians realize it. Holmes's sentence is an awkward variant of this theory of descriptions. You arrive at it by applying the given definitions. A much better way to understand the operations is to look at the axioms, see metamath: PL, and play around with them. - "Holmes's sentence is an awkward variant of this theory of descriptions. You arrive at it by applying the given definitions." A variant, but plainly not arrived at by applying Russell's definitions. Which is why the OP asked the question. – Peter Smith Dec 10 '12 at 19:32 The OP asked the question because the problem is the definition, and the exposition does not explain anything properly. What does it mean to apply predicate logic and what do the notations mean?
Shouldn't a book on logic be clear, so that everyone with common sense should be able to follow it? The OP described the analysis of the expression as linguistics (which is actually logic). Perhaps everyone describing logic in this way, should call it something else. The m-logic: reasoning exclusively for mathematicians. Everyone else might want to learn logic. – RParadox Dec 10 '12 at 20:28 The OP did not describe the analysis of the expression as linguistics. The OP suggested that the discipline of linguistics might be more closely related to the question than logic is. A book on any subject should ideally be clear. Clarity, however, does not necessarily mean that ‘everyone with common sense should be able to follow it’: common sense is no substitute for an adequate background. – Brian M. Scott Dec 10 '12 at 22:11 This could be called the elitist belief of mathematicians, that their science has to be some kind of black art. However, often, when pressed with a difficult philosophical question, they will not have an answer. Why do we even write $\exists x: \varphi(x)$, what is a function really? and so on. The questions and answers on this site are very representative of these notions. In the end, all these notions depend on certain beliefs. The very idea of logic, is that there is a process of reasoning which can be easily followed. – RParadox Dec 11 '12 at 9:50 If I have a proof, it is expected from me that people who are in the field can understand it and verify it. But if we are talking about logic, there is no such knowledge which can be assumed. Using abstract algebra to prove an elementary theorem in logic is just non-sensical. We are talking about the most fundamental notions, such as those in predicate logic. And anyone who uses complicated language to express elementary concepts is misusing the language. This is certainly true in this case. – RParadox Dec 11 '12 at 9:54
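Returning to the definition quoted at the top of this question: a small worked paraphrase (my own illustration, not taken from Holmes's text) may show why the second clause is there at all. The two clauses simply split on whether the description succeeds in picking out a unique object, with the empty set serving as a Frege-style default value when it does not:

```latex
% Paraphrase of Holmes's two-clause definition (illustration only).
% Clause 1: if there is exactly one y with phi, then
%   "for all y, phi implies psi[y/x]"  just says  psi[a/x],
%   where a is that unique y, so the description acts like a name for a.
% Clause 2: if there is no unique such y, then
%   "for all x, (x is the empty set) implies psi"  just says  psi[\emptyset/x],
%   i.e. the description silently denotes the default object \emptyset.
\[
\psi[(\text{the } y \text{ such that } \phi)/x] \;\equiv\;
\begin{cases}
\psi[a/x] & \text{if } a \text{ is the unique } y \text{ satisfying } \phi,\\
\psi[\emptyset/x] & \text{if there is no unique such } y.
\end{cases}
\]
```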
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 64, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9541618227958679, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/29593?sort=oldest
## Non-isomorphic graphs of given order. ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) It is well discussed in many graph theory texts that it is somewhat hard to distinguish non-isomorphic graphs with large order. But as to the construction of all the non-isomorphic graphs of any given order not as much is said. So, it follows logically to look for an algorithm or method that finds all these graphs. A Google search shows that a paper by P. O. de Wet gives a simple construction that yields approximately $\sqrt{T_n}$ non-isomorphic graphs of order n. ( ${T_n}$ being the number of labeled graphs of order n.) So, I have the followings to ponder over: (1) Are there such algorithms or has there been an improvement on the aforementioned algorithm? (2) Where can I find a collection of non-isomorphic graphs of a given order? If you allow me, I would also like to extend my question to connected graphs. Many thanks. (I am a beginner in Graph theory, so please give answers in not-very-specialized terms.) - 4 This is sequence 88 in Sloane's OEIS, which lists many references. research.att.com/~njas/sequences/A000088 – Timothy Chow Jun 28 2010 at 15:58 Timothy, much obliged! For me this is more than a comment. – To be cont'd Jun 28 2010 at 16:46 ## 5 Answers Acknowledging Timothy’s comment, let me answer the question. For a diagrammatic list of the non-isomorphic graphs (all in pdfs): I quote “The topologies were computed using the nauty program by Brendan McKay and the layouts created with graphviz. I wrote python programs to interface these and produce the pdfs.” - ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. The nauty software contains the "geng" program, which enumerates all nonisomorphic graphs of a given order, or only connected ones, or selected on a wide range of other criteria. The method is tuned for practical speed rather than simplicity or theoretical bounds. The author Brendan McKay also has a page where you can download nonisomorphic (connected) graphs up to 10 vertices. - Nice answer. But I can't open them. I tried a lot on Windows XP. – To be cont'd Jun 26 2010 at 15:47 Support for gtools is limited to Unix. How sad for me. – To be cont'd Jun 26 2010 at 17:27 I have got them to work using cygwin on windows – Dave Pritchard Sep 26 2010 at 20:11 You can use nauty inside of Sage. – Graphth Dec 8 at 16:51 Sage also has graph theory tools here. For example: ````for g in graphs(4): print g.spectrum() ```` - This is it though it took me some time. – To be cont'd Jun 28 2010 at 8:18 Sage has nauty built in. It's much faster to do: for g in graphs.nauty_geng("4"): print g.spectrum() Or, if you only care about connected, "4 -c". – Graphth Jul 25 at 14:25 See: Combinatorial algorithms: an update, Herbert S. Wilf, Albert Nijenhuis SIAM, 1989. Chapter 8: Generating Random Graphs. The chapter gives an algorithm for producing an undirected graph uniformly over all graphs of size $n$. It is based on Polya counting. Computing the enumerating polynomial depends on some group theory that is time consuming (I don't know the complexity class, but I'll just conjecture it is most likely exponential space on $n$). But it is a guarantee of uniform distribution. 
Unfortunately I don't know of a way (I haven't heard of a way) to derandomize this to create an unranking algorithm (to give a mapping from the naturals to the set of unlabeled graphs). The algorithm presented in your link (by de Wet) is cute (I mean that in the sense that it is cleverly simple, does not lie, but doesn't really give the meat of it, what it means to have a list of non-isomorphic graphs). The graphs created there have a very particular structure (two paths with an arbitrary subset of edges between the paths, plus some small widgets on one end of each path to break symmetry). Taking all subsets is a good trick, but having two paths is pretty uncommon, and $\frac{\sqrt{T_n}}{T_n}$ goes to 0 as $n$ grows. As to practicality, in addition to the suggestions of nauty and Sage, there's also Mathematica (commercial), which has a list (that you can manipulate) of graphs up to size 11. - Thank you for your answers. I also want to compute the number of all the non-isomorphic graphs on a given number of vertices, the non-isomorphic connected graphs, and all the non-isomorphic maximal planar graphs. I want to compute the numbers and draw the graphs in the MATLAB language. Could I have the program code? I think the pdf documents are very good. However, the web sites cannot be opened. - Try using Sage. – To be cont'd Sep 29 2010 at 7:11
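To make the suggestions above concrete, here is a short Sage sketch (my own illustration, assuming a working Sage installation with nauty's geng available, as the answers above describe) that counts the non-isomorphic graphs, and the connected ones, of small order:

```python
# Sage sketch: enumerate non-isomorphic graphs of a given order.
# graphs() and graphs.nauty_geng() are the generators mentioned above.

def count_graphs(n):
    """Count all non-isomorphic simple graphs on n vertices."""
    return sum(1 for g in graphs(n))            # built-in exhaustive generator

def count_connected(n):
    """Count only the connected ones, delegating to nauty's geng."""
    # The "-c" switch restricts geng to connected graphs, as noted above.
    return sum(1 for g in graphs.nauty_geng("%d -c" % n))

for n in range(1, 6):
    print(n, count_graphs(n), count_connected(n))
# Expected counts (OEIS A000088 and A001349): 1 1, 2 1, 4 2, 11 6, 34 21.
```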
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9406315684318542, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2012/02/02/maxwells-equations-integral-form/
# The Unapologetic Mathematician ## Maxwell’s Equations (Integral Form) It is sometimes easier to understand Maxwell’s equations in their integral form; the version we outlined last time is the differential form. For Gauss’ law and Gauss’ law for magnetism, we’ve actually already done this. First, we write them in differential form: $\displaystyle\begin{aligned}\nabla\cdot E&=\frac{1}{\epsilon_0}\rho\\\nabla\cdot B&=0\end{aligned}$ We pick any region $V$ we want and integrate both sides of each equation over that region: $\displaystyle\begin{aligned}\int\limits_V\nabla\cdot E\,dV&=\int\limits_V\frac{1}{\epsilon_0}\rho\,dV\\\int\limits_V\nabla\cdot B\,dV&=\int\limits_V0\,dV\end{aligned}$ On the left-hand sides we can use the divergence theorem, while the right sides can simply be evaluated: $\displaystyle\begin{aligned}\int\limits_{\partial V}E\cdot dS&=\frac{1}{\epsilon_0}Q(V)\\\int\limits_{\partial V}B\cdot dS&=0\end{aligned}$ where $Q(V)$ is the total charge contained within the region $V$. Gauss’ law tells us that the flux of the electric field out through a closed surface is (basically) equal to the charge contained inside the surface, while Gauss’ law for magnetism tells us that there is no such thing as a magnetic charge. Faraday’s law was basically given to us in integral form, but we can get it back from the differential form: $\displaystyle\nabla\times E=-\frac{\partial B}{\partial t}$ We pick any surface $S$ and integrate the flux of both sides through it: $\displaystyle\int\limits_S\nabla\times E\cdot dS=\int\limits_S-\frac{\partial B}{\partial t}\cdot dS$ On the left we can use Stokes’ theorem, while on the right we can pull the derivative outside the integral: $\displaystyle\int\limits_{\partial S}E\cdot dr=-\frac{\partial}{\partial t}\Phi_S(B)$ where $\Phi_S(B)$ is the flux of the magnetic field $B$ through the surface $S$. Faraday’s law tells us that a changing magnetic field induces a current around a circuit. A similar analysis helps with Ampère’s law: $\displaystyle\nabla\times B=\mu_0J+\epsilon_0\mu_0\frac{\partial E}{\partial t}$ We pick a surface and integrate: $\displaystyle\int\limits_S\nabla\times B\cdot dS=\int\limits_S\mu_0J\cdot dS+\int\limits_S\epsilon_0\mu_0\frac{\partial E}{\partial t}\cdot dS$ Then we simplify each side. $\displaystyle\int\limits_{\partial S}B\cdot dr=\mu_0I_S+\epsilon_0\mu_0\frac{\partial}{\partial t}\Phi_S(E)$ where $\Phi_S(E)$ is the flux of the electric field $E$ through the surface $S$, and $I_S$ is the total current flowing through the surface $S$. Ampère’s law tells us that a flowing current induces a magnetic field around the current, and Maxwell’s correction tells us that a changing electric field behaves just like a current made of moving charges. We collect these together into the integral form of Maxwell’s equations: $\displaystyle\begin{aligned}\int\limits_{\partial V}E\cdot dS&=\frac{1}{\epsilon_0}Q(V)\\\int\limits_{\partial V}B\cdot dS&=0\\\int\limits_{\partial S}E\cdot dr&=-\frac{\partial}{\partial t}\Phi_S(B)\\\int\limits_{\partial S}B\cdot dr&=\mu_0I_S+\epsilon_0\mu_0\frac{\partial}{\partial t}\Phi_S(E)\end{aligned}$ ## 5 Comments » 1. [...] law, we’re already done, since it’s exactly the third of Maxwell’s equations in integral form. So far, so [...] Pingback by | February 3, 2012 | Reply 2. who is maxwell and why did he decide to come with his equations that are difficult to understand Comment by TPL | May 17, 2012 | Reply • Blame the universe’s electromagnetism for being most easily calculable with vector calculus. 
Comment by | February 12, 2013 | Reply 3. That note is clear and well prepared Comment by Habamenshi Pierre Claver | November 23, 2012 | Reply 4. Why are these equations called Maxwell's equations when none of them were derived or proved by Maxwell? All of them were already present. Comment by abdur rahim | December 26, 2012 | Reply
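As a quick numerical companion to the post above, here is a small SymPy sketch (my own illustration; the field below is made up purely for the check, and SymPy is assumed to be installed) verifying Stokes' theorem on a unit square, which is the step used above to pass from the differential to the integral form of Faraday's and Ampère's laws:

```python
# SymPy sketch: check Stokes' theorem, the tool used above to turn the
# differential form of Faraday's/Ampere's law into the integral form.
# The planar field E below is made up purely for this check.
import sympy as sp

x, y = sp.symbols('x y')

E1, E2 = -y**3, x**3                         # E = (E1, E2, 0)
curl_z = sp.diff(E2, x) - sp.diff(E1, y)     # z-component of curl E

# Flux of curl(E) through the unit square [0,1] x [0,1].
flux = sp.integrate(sp.integrate(curl_z, (x, 0, 1)), (y, 0, 1))

# Circulation of E around the boundary, traversed counterclockwise.
bottom = sp.integrate(E1.subs(y, 0), (x, 0, 1))    # along (x, 0)
right  = sp.integrate(E2.subs(x, 1), (y, 0, 1))    # along (1, y)
top    = -sp.integrate(E1.subs(y, 1), (x, 0, 1))   # along (x, 1), reversed
left   = -sp.integrate(E2.subs(x, 0), (y, 0, 1))   # along (0, y), reversed
circulation = bottom + right + top + left

print(flux, circulation)   # both are 2: the surface and boundary integrals agree
```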
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 22, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.923995852470398, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/215419/eigen-vector-proof-ci-a-if-a-has-all-non-0-eigenvetors
Eigenvector proof: $A=cI$ if every non-zero vector is an eigenvector of $A$. A homework question I'm having difficulty with: Prove that if every non-zero vector is an eigenvector of $A$, then $A = cI$ for some real number $c$. Either a solution or a hint would be nice :) - 2 Answers You need to show that $A$ can have only one eigenvalue. Suppose $x$ and $y$ are linearly independent vectors. Then there exist $\lambda$ and $\mu$ such that $Ax = \lambda x$ and $Ay = \mu y$, and we know that $A(x + y) = \lambda x + \mu y$. Because we also know that $x + y$ is non-zero (by linear independence), it is an eigenvector of $A$, i.e., there exists $\nu$ such that $A(x + y) = \nu(x + y)$. Set this equal to the result obtained earlier: $\nu(x + y) = \lambda x + \mu y$. By linear independence of $x$ and $y$, we must have $\nu = \lambda$ and $\nu = \mu$. Therefore $\lambda = \mu$. - If $\alpha_1,\alpha_2$ are linearly independent vectors, assume $A(\alpha_1)=k_1\alpha_1$ and $A(\alpha_2)=k_2\alpha_2$; but $\alpha_1+\alpha_2$ is also an eigenvector of $A$, so $A(\alpha_1+\alpha_2)=k(\alpha_1+\alpha_2)=k_1\alpha_1+k_2\alpha_2$. We get $(k-k_1)\alpha_1+(k-k_2)\alpha_2=0$, hence $k_1=k_2=k$. So the eigenvalues of $A$ are all equal, and since every vector is an eigenvector, $A$ is diagonalizable with a single eigenvalue $c$, so $A=cI$. -
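As a small numerical illustration of the statement (my own sketch, assuming NumPy is installed; it is not a substitute for the proofs above):

```python
# NumPy sketch: for A = cI every non-zero vector is an eigenvector, while
# for a matrix that is not a scalar multiple of I one can exhibit a vector
# whose image is not parallel to it.
import numpy as np

def is_eigenvector(A, v, tol=1e-10):
    """True if A v is a scalar multiple of v (v assumed non-zero)."""
    w = A @ v
    # v and w are parallel exactly when [v w] has rank at most 1.
    return np.linalg.matrix_rank(np.column_stack([v, w]), tol=tol) <= 1

rng = np.random.default_rng(0)

A_scalar = 3.0 * np.eye(3)                   # A = cI with c = 3
A_other  = np.diag([1.0, 2.0, 3.0])          # not a scalar multiple of I

for _ in range(5):
    v = rng.standard_normal(3)
    assert is_eigenvector(A_scalar, v)       # always true for cI

print(is_eigenvector(A_other, np.array([1.0, 1.0, 0.0])))  # False: (1, 2, 0) is not parallel to (1, 1, 0)
```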
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 33, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9380632042884827, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2008/02/19/integration-gives-signed-areas/?like=1&source=post_flair&_wpnonce=4ca8fc8c32
# The Unapologetic Mathematician ## Integration gives signed areas I haven't gotten much time to work on the promised deconstruction, so I'll punt to a math post I wrote up earlier. Okay, let's look back and see what integration is really calculating. We started in on integration by trying to find the area between the horizontal axis and the graph of a positive function. But what happens as we extend the formalism of integration to handle more general situations? What if the function $f$ we integrate is negative? Then $-f$ is positive, and $\int_a^b-f(x)dx$ is the area between the horizontal axis and the graph of $-f$. But moving from $f$ to $-f$ is just a reflection through the horizontal axis. The horizontal axis stays in the same place, and it seems the area should be the same. But by the basic rules of integration we spun off at the end of yesterday's post, we see that $\displaystyle\int\limits_a^bf(x)dx=-\int\limits_a^b-f(x)dx$ That is, we don't get the same answer; we get its negative. So, integration counts areas below the horizontal axis as negative. We could also see this from the Riemann sums, where we replace all the function evaluations with their negatives, and factor out a $-1$ from the whole sum. How else could we extend the formalism of integration? What if we ran it "backwards", from the right endpoint of our interval to the left? That is, let's take an "interval" $\left[b,a\right]$ with $a<b$. Then when we partition the interval we should get a string of partition points decreasing as we go along. Then when we set up the Riemann sum we'll get negative values for each $x_i-x_{i-1}$. We can factor out all these signs to give an overall negative sign, along with a Riemann sum for the integral over $\left[a,b\right]$. The upshot is that we can integrate over an interval from right to left at the cost of introducing an overall negative sign. We can handle this by attaching a sign to an interval, just like we did to points yesterday. We write $\left[b,a\right]^-=\left[a,b\right]$. Then when we integrate over a signed interval, we take its sign into account. Notice that if we integrate over both $\left[a,b\right]$ and $\left[a,b\right]^-$ the two parts cancel each other out, and we get $0$. Posted by John Armstrong | Analysis, Calculus ## 7 Comments » 1. I am not sure how relevant you may find this but you might be interested in adding some "identities" (on definite integrals) that I have posted here and some more here. Comment by | February 20, 2008 | Reply 2. They're interesting, Vishal, and I'll recommend interested readers go to your weblog to read about them, but they feel to me a lot more useful to find actual answers, and less useful for the theory. Of course, if it turns out to be useful for my exposition, I'll gladly come back to them. Comment by | February 20, 2008 | Reply 3. Hi Thank you for this analysis. I see too many lecture notes that contain whoppers like "to find the area under the graph, you find the integral". This dooms students to incorrect answers later… But it seems to me that what you have is more complicated than it needs to be, especially for the 'generally interested lay audience'. You say: So, integration counts areas below the horizontal axis as negative. No, the integration of a function that sits below the horizontal axis is negative.
There is a semantic problem here, where "area" can be taken to mean "a space enclosed by a boundary" or the mathematical quantity measured in meters squared, or similar. To find the area of that space, you simply need to find the absolute value of the integral. In your previous post, you have: Now we've just got a bunch of rectangles, and we can add up their areas… So if the rectangles are below the horizontal axis, the areas of the bunch of rectangles will each be positive – we just refer to the function values as positive values, since an area can only be positive. Thanks again for the stimulus. Comment by | February 20, 2008 | Reply 4. Zac: That's another way to look at it, yes. To find the "actual" area, you have to take the absolute value of the area of each part. That much is exactly right. But my point here is that integration as a technique actually doesn't give us area, except in the case of the graph of a positive function integrated over an interval from left to right. What integration alone tells us is signed area. Now, there's a really great way to put both area and signed area on an equal footing, but that will come later, once we get the basic tools set up. Comment by | February 20, 2008 | Reply 5. John, a comment to the theme rather than the actual post. I hope you'll keep up the series. I'm too busy with school to get a chance to read them carefully right now, but you can bet I'll be poring over them this summer. All that I've seen so far is enormously enticing to this second year undergrad who should've taken honors calc and is trying to make up for it. Comment by rick | February 20, 2008 | Reply 6. Oh I'll be keeping on going, rick, but I'll warn you to pay more attention to your class than to my own take, since I'm doing a really high-level view on things and emphasizing a lot more of an abstract point of view, especially as I move forwards from here. Comment by | February 20, 2008 | Reply 7. [...] from the definition we can see the same "additivity" (using signed intervals) in the region of integration that the Riemann integral [...] Pingback by | March 15, 2008 | Reply
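The two sign conventions described in the post are easy to see numerically. Here is a small Python sketch (my own illustration, not part of the original post) computing left-endpoint Riemann sums: integrating $-f$ flips the sign, and running the partition from right to left flips it again.

```python
# Riemann-sum sketch for the two sign conventions discussed in the post:
# (1) integrating -f negates the result, and (2) integrating "backwards"
# over [b, a] negates it as well.  Illustration only.

def riemann_sum(f, a, b, n=100000):
    """Left-endpoint Riemann sum of f over the signed interval from a to b."""
    dx = (b - a) / n                          # negative when b < a
    return sum(f(a + i * dx) for i in range(n)) * dx

f = lambda x: x * x                           # integral over [0, 1] is 1/3

print(riemann_sum(f, 0, 1))                   # ~  0.333...
print(riemann_sum(lambda x: -f(x), 0, 1))     # ~ -0.333...  (reflected graph)
print(riemann_sum(f, 1, 0))                   # ~ -0.333...  (right-to-left)
```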
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 16, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9301432967185974, "perplexity_flag": "head"}
http://mathhelpforum.com/differential-geometry/110515-differentiability-question-print.html
# Differentiability Question Printable View • October 25th 2009, 08:51 PM JohnLeee Differentiability Question http://i34.tinypic.com/wu5k414.png I don't understand how to prove this considering f(x) is not specifically given....(Wondering) • October 25th 2009, 09:13 PM redsoxfan325 Quote: Originally Posted by JohnLeee http://i34.tinypic.com/wu5k44.png I don't understand how to prove this considering f(x) is not specifically given....(Wondering) At zero, $f(0)\leq ||0||^a\implies f(0)=0$. So $\frac{|f(x)-f(0)|}{||x-0||}\leq||x||^{a-1}$. Take the limit of both sides as $x\to0$ and you get $f'(0)=0$. If $a=1$, all you know is that $|f'(0)|\leq1$. • October 25th 2009, 09:16 PM Jose27 Remember that for a function to be diff. at $0$ there exists a linear transformation $D_f(0)$ such that $\vert \frac{f(h)-f(0)-D_f(0)h}{ \Vert h \Vert } \vert \rightarrow 0$ as $h \rightarrow 0$. Take $D_f(0) \equiv 0$ then $\vert \frac{f(h)-f(0)}{ \Vert h \Vert } \vert \leq \frac{ \Vert h \Vert ^a}{ \Vert h \Vert } = \Vert h \Vert ^{a-1}$ and since $a-1>0$ we have that this goes to $0$ as $h$ goes to $0$. This proves that $f$ is diff. at $0$ and $D_f(0) \equiv 0$. When $a=1$ we can only bound by $1$, but I can't think of a counter-example at the moment for this case. • October 25th 2009, 09:19 PM redsoxfan325 A good counterexample is $f:\mathbb{R}\longrightarrow\mathbb{R}$ with $f(x)=x$. • October 25th 2009, 09:27 PM Jose27 Quote: Originally Posted by redsoxfan325 A good counterexample is $f:\mathbb{R}\longrightarrow\mathbb{R}$ with $f(x)=x$. But $f$ is differentiable at $0$, I think we need a function such that $\vert f(x) \vert \leq \Vert x \Vert$ and it's not diff. at $0$ (If such a counter-example exists) • October 25th 2009, 09:31 PM redsoxfan325 Oh, in that case how about $x\sin(1/x)$? All times are GMT -8. The time now is 12:12 PM.
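A quick numerical illustration of the two cases discussed in this thread (my own sketch; the particular functions are just sample choices satisfying the hypothesis $|f(x)|\leq\|x\|^a$): for $a>1$ the difference quotient at $0$ dies out, while for $a=1$ the proposed counterexample $x\sin(1/x)$ keeps oscillating.

```python
# Numerical illustration of the thread above (illustration only):
# if |f(x)| <= |x|^a with a > 1, the difference quotient f(x)/x -> 0,
# so f'(0) = 0; for a = 1 the bound allows f(x) = x*sin(1/x), whose
# difference quotient sin(1/x) has no limit at 0.
import math

def f_smooth(x, a=1.5):
    return abs(x) ** a            # satisfies |f(x)| <= |x|^a with a > 1

def f_counter(x):
    return x * math.sin(1.0 / x)  # satisfies |f(x)| <= |x|, not differentiable at 0

for x in [10.0 ** (-k) for k in range(1, 6)]:
    q_smooth = (f_smooth(x) - 0.0) / x        # tends to 0 like x^(a-1)
    q_counter = (f_counter(x) - 0.0) / x      # equals sin(1/x), keeps oscillating
    print(f"x={x:.0e}  quotient(a=1.5)={q_smooth:.4f}  quotient(x sin(1/x))={q_counter:+.4f}")
```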
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 30, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9416674375534058, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-algebra/55654-prove-inverse-upper-triangular-matrix.html
# Thread: 1. ## prove inverse of upper triangular matrix Can you help me with the following question, please? Prove that the inverse of a non-singular $N \times N$ upper triangular matrix is an upper triangular matrix. 2. We can assume that the matrix $A$ is upper triangular and invertible. Since $A^{-1}=\frac{1}{\det(A)}\cdot \operatorname{adj}(A)$, we can prove that $A^{-1}$ is upper triangular by showing that the adjoint is upper triangular, or equivalently that the matrix of cofactors is lower triangular. Do this by showing every cofactor $C_{ij}$ with $i<j$ (above the diagonal) is 0. Since $C_{ij}=(-1)^{i+j}M_{ij}$, it suffices to show that each minor $M_{ij}$ with $i<j$ is 0. Start by letting $B_{ij}$ be the matrix we get when the $i$th row and $j$th column of $A$ are deleted. Can you go further? There is a handy theorem which we won't prove, but will use, which says: "If $A$ is an $n\times n$ triangular matrix then $\det(A)$ is the product of the entries on the main diagonal of the matrix." What I mean is $\det(A)=a_{11}a_{22}\cdots a_{nn}$
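A quick numerical sanity check of the claim (my own sketch, assuming NumPy; it is not a substitute for the cofactor argument above):

```python
# NumPy sketch: the inverse of a random non-singular upper triangular matrix
# is again upper triangular (checked numerically, up to round-off).
import numpy as np

rng = np.random.default_rng(1)
N = 6

A = np.triu(rng.standard_normal((N, N)))   # random upper triangular matrix
A += np.eye(N) * N                          # push the diagonal away from 0 so A is non-singular

A_inv = np.linalg.inv(A)
below_diagonal = np.tril(A_inv, k=-1)       # strictly lower triangular part of the inverse

print(np.allclose(below_diagonal, 0))       # True: the inverse is upper triangular
```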
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9230335354804993, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/tagged/stress-energy-tensor+general-relativity
# Tagged Questions 1answer 42 views ### Does the actual curvature of spacetime hold energy? My understanding of GR is that curvature of spacetime reflects the density of energy-matter. Does the curvature itself have energy? Or if energy is assigned to curvature it simply reflects the energy ... 1answer 83 views ### Stress energy tensor of a perfect fluid and four-velocity In the following demonstration, there is an error, but I cannot find where. (I explicitely put the $c^2$ to keep track of units). We consider a metric $g_{\mu\nu}$ with a signature $(-, +, +, +)$ : ... 1answer 50 views ### Does non-mass-energy generate a gravitational field? At a very basic level I know that gravity isn't generated by mass but rather the stress-energy tensor and when I wave my hands a lot it seems like that implies that energy in $E^2 = (pc)^2 + (mc^2)^2$ ... 1answer 174 views ### Scalar field stress energy tensor Can anyone explain why $T_{\mu \nu} = \frac{2}{\sqrt{-g}} \frac{\delta \mathcal{L}_M}{\delta g^{\mu \nu}}$, other than justifying it from the einstein field equations? 3answers 134 views ### Having trouble seeing the similarity between these two energy-momentum tensors Leonard Suskind gives the following formulation of the energy-momentum tensor in his Stanford lectures on GR (#10, I believe): T_{\mu \nu}=\partial_{\mu}\phi \partial_{\nu}\phi-\frac{1}{2}g_{\mu ... 2answers 219 views ### fourth rank tensor for stress energy The Weyl tensor equates the Riemann tensor in vacuum $$C_{\mu \nu \eta \lambda} = R_{\mu \nu \eta \lambda}$$ So it makes me wonder about the tensor T_{\mu \nu \eta \lambda} = C_{\mu \nu \eta ... 4answers 210 views ### Formulation of general relativity EDIT: I think I can pinpoint my confusion a bit better. Here comes my updated question (I'm not sure what the standard way of doing things is - please let me know if I should delete the old version). ... 1answer 520 views ### What is the stress energy tensor? I'm trying to understand the Einstein Field equation equipped only with training in Riemannian geometry. My question is very simple although I cant extract the answer from the wikipedia page: Is the ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8934468030929565, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/29894-intergration-201-a.html
# Thread: 1. ## Integration 201 Integrate (2x-3)/((9-x^2)^1/2) dx. I believe I need to break this down into 2x/(9-x^2)^1/2 dx - 3/(9-x^2)^1/2 dx. I can't afford to not understand these problems, though. 2. ## substitution integrals (trig) Originally Posted by Bust2000 Integrate (2x-3)/((9-x^2)^1/2) dx. I believe I need to break this down into 2x/(9-x^2)^1/2 dx - 3/(9-x^2)^1/2 dx. I can't afford to not understand these problems, though. At first I saw this problem and thought that the denominator could be written as the difference of two squares and then split by partial fractions, which gives two integrals that are of the form "derivative of a function over the function" and are therefore logarithms... however this is rubbish as far as you are concerned because of that sneaky little square root I failed to note first time round!! What you need is a sneaky little substitution: when you have integrals similar to this, all you need to know is the trig identity $\cos^2\theta+\sin^2\theta=1$ (and the similar cosh identity is useful for others with - instead of +). You should notice that with this substitution (try x proportional to cos) you can get the whole root to cancel with the other contribution (remember if you sub x=f(y) then you must replace dx=f'(y)dy), which gives a nice easy integral... If you have any problems with this lot let me know and I'll give you a few more clues (or if you want to check your answer I've got it on my scratch pad next to me), but I don't want to spoil your fun of learning this stuff. Besides, it's fun to play the sadist who used to make me work things out for myself!!! 3. Hello, Bust2000! Integrate: . $\int\frac{2x-3}{(9-x^2)^{\frac{1}{2}}}\,dx$ Believe I need to break this down into: . $\int\frac{2x}{(9-x^2)^{\frac{1}{2}}}\,dx-\int\frac{3}{(9-x^2)^{\frac{1}{2}}}\,dx$ . Right! The first integral is: . $\int 2x(9-x^2)^{-\frac{1}{2}}\,dx$ . . Let $u\:=\:9-x^2\quad\Rightarrow\quad du\:=\:-2x\,dx\quad\hdots\;\;\text{etc.}$ The second integral is: . $\int\frac{dx}{\sqrt{9-x^2}}\quad\hdots\quad\text{arcsine}$
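For readers who want to verify the final answer, here is a short SymPy sketch (my own addition; it assumes SymPy is installed) confirming the two pieces suggested above:

```python
# SymPy check of the split suggested above:
#   int 2x/sqrt(9-x^2) dx = -2*sqrt(9-x^2)     (substitution u = 9 - x^2)
#   int  3/sqrt(9-x^2) dx =  3*asin(x/3)       (arcsine form)
import sympy as sp

x = sp.symbols('x')
integrand = (2*x - 3) / sp.sqrt(9 - x**2)

antiderivative = sp.integrate(integrand, x)
print(sp.simplify(antiderivative))
# Expect something equivalent to -2*sqrt(9 - x**2) - 3*asin(x/3), up to a constant.

# Verify by differentiating back:
print(sp.simplify(sp.diff(antiderivative, x) - integrand))   # 0
```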
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9189532399177551, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/36462/where-does-the-g-force-that-pilots-experience-come-from/36463
# Where does the “g” force that pilots experience come from? I understand that it has to do with acceleration. Say a pilot does a quick maneuver and experiences a force of 5g. What exactly is happening here? And what is this force relative to? If someone can show an example with some calculations that would be really helpful. Thank you - ## 5 Answers It comes from the wings actually. His body wants to move in a free fall parabola and the wings make the plane move some other way forcing the pilot on a different path. NASA's vomit comet plane makes parabolic flights causing 20 seconds of weightlessness. The opposite occurs when a fighter pilot does a split-S or barrel rolls, where the control surfaces of the plane force it on a track requiring up to 9g of acceleration being felt through the seat. - Thank you for your answer. Ok so the body wants to fall a certain way but the plane is going another way so that results in the force? If someone can explain it using a simplified example with some calculations that would be very nice because I am having a hard time grasping this concept because I don't understand why would you feel that force relative to your seat. Does inertia play a role in this? – Aman Sep 15 '12 at 0:09 Actually the force doesn't come from the wings, it comes from the seat the pilot is sitting on. – user1631 Sep 18 '12 at 16:20 1 And where does the force from the seat come from? Eventually you will end up at the wings. – ja72 Sep 18 '12 at 16:55 There are some simple diagrams and definitions here The lift action of air on the wings as well as the thrust of the engines or propellers apply a force on the plane. That force will cause the plane to accelerate unless it exactly balances gravity and drag. Since the the pilot is strapped into the plane he or she feels the force caused by the acceleration of the seat and/or straps. That force divided, by the weight of the pilot to make it relative to 1, is the g force. - The g-forces you feel are caused by inertia. Inertia is the basic tendency of all matter to resist any change of motion, whether it be a change of speed or of direction. Because of that, when the plane turns, your body still wants to keep going straight ahead. As a result, you feel as if you are being pushed towards the outside of the curve. The plane itself also wants to keep going straight ahead, but it has wings and control surfaces that can apply a force to overcome its inertia and make it curve. As your body does not have such features, it is your seat, seatbelt or the walls of the plane that will apply the force you feel. The actual force experienced can be positive, zero or negative, depending on the trajectory. During level flight you feel the normal 1 g. If the plane pulls up, your seat pushes you up to follow the plane, and you feel more than 1 g. This is positive g-force. If the plane pulls the other way (down), the g-force you feel will be less than the normal 1 g. It can go to zero, or even go negative (so you're thrown towards the top of the plane), depending on the path of the plane. NASA's Vomit Comet flies on a special path that keeps the g-force at zero for up to 30 seconds. - Suppose you jump off a tall building. When you hit the ground you feel a high "g force" for the obvious reason that the ground decelerates you rather rapidly. Now suppose you put a large balloon on your landing point. This time the compression of the air slows you more gradually, brings you to a stop and eventually you bounce up again. 
During this time you feel a "g force" that might, for example, be 5g. The force is ultimately provided by the Earth as a whole. The air in the balloon pushes on you and the ground pushes on the balloon. Now consider the aeroplane. When the plane pulls out of the dive it's the air rushing over the wings that brings the dive to a halt and accelerates the plane upwards, and during this time the pilot (and plane) feel a "g force" just as you do when landing on the balloon. The force isn't just compression of the air as in a balloon: it's more complicated, as it's the air flow over the wings, but essentially it's the same as you landing on the balloon. However, it's harder to see what is ultimately providing the push. With the balloon it was the ground under the balloon. With the aeroplane, what is supporting the plane is the atmospheric pressure of the air around it. The plane's wings compress the air that's supporting them, and this generates pressure waves that spread into the surrounding atmosphere. - Let me simplify it. We know that $F=mg$, where $g$ is the acceleration due to gravity. Weight is measured in units of force (newtons), so if your mass is 50 kg, a gravitational force of your mass times 9.8 m/s² is acting on you. This force is always being resisted (by the ground pushing back on us), and that is why we can stand on the surface of the earth. Imagine you are freely falling toward the earth. Every second you pick up 9.8 m/s of speed. While we are falling like this, we feel weightless, because we are not resisting the pull at all. But when we are quickly climbing up in the sky, and the acceleration of the climb is higher than the gravitational pull (9.8 m/s²), then you feel more weight than your normal weight. This is because we are accelerating against the acceleration due to gravity. If it is 5 times as large, then you will feel 5g. This can be measured easily. Tie a 5 kg stone to a weighing scale and drop it from a height of 1.5 m. (Drop it on some shock-absorbing material, like a bed or something.) If you watch the scale reading while it is free falling, it will read 0. If you quickly lift this weighing scale upwards and watch the reading, it will show an increase in the weight, i.e. more than 5 kg. The reason is that we are moving the object against gravity, and so the reading increases. -
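Since the questioner asked for an example with some calculations, here is a tiny Python sketch (my own worked example, with made-up but typical figures) computing the load factor, i.e. the g-force, for two common situations: pulling out of a dive along a circular arc, and a level banked turn.

```python
# Tiny worked example of g-force ("load factor") calculations.
# Figures are made up but typical; the g-force is the support force on the
# pilot divided by his normal weight m*g.
import math

g = 9.81                    # m/s^2

# 1) Pulling out of a dive on a circular arc of radius r at speed v:
#    the seat must supply m*v^2/r upward *plus* m*g to cancel gravity,
#    so the load factor is n = 1 + v^2 / (g * r).
v = 150.0                   # m/s (about 540 km/h)
r = 600.0                   # m
n_pullup = 1 + v**2 / (g * r)
print(f"pull-up load factor: {n_pullup:.1f} g")    # about 4.8 g

# 2) Level turn at bank angle theta: the lift must both hold the plane up
#    and curve its path, giving n = 1 / cos(theta).
theta = math.radians(75)
n_turn = 1 / math.cos(theta)
print(f"75-degree banked turn: {n_turn:.1f} g")    # about 3.9 g
```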
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9518530368804932, "perplexity_flag": "head"}
http://mathoverflow.net/questions/104905/do-semi-continuous-functions-generate-bounded-borel-measurable-functions-as-a-c/104946
## Do semi-continuous functions generate bounded Borel measurable functions as a $C^*$-algebra? This question is related to Question 2 of my previous posting. Question. Let $\mu$ be a Radon measure on a compact Hausdorff space $\Omega$ and $L^{\infty}(\Omega,\mu)$ the set of essentially bounded Borel measurable functions on $\Omega$. Suppose that $S$ is the set of bounded lower or upper semi-continuous functions on $\Omega$. Does $S$ generate $L^{\infty}(\Omega,\mu)$ as a $C^*$-algebra? It suffices to consider whether the indicator function of any Borel set is obtained from $S$ by algebraic operations and (essential-supremum) norm limits. If necessary, you may assume $\Omega$ to be second countable. Thank you. - ## 2 Answers No. The bounded Baire class one functions on $[0,1]$ are stable under uniform limits and hence constitute a C*-algebra. This C*-algebra contains every semicontinuous function on $[0,1]$. Every function in $L^\infty[0,1]$ is equal almost everywhere to a Baire class two function, but not necessarily to a Baire class one function. (The previous version of my answer neglected this essential point.) See http://www.encyclopediaofmath.org/index.php/Baire_classes - Thank you very much Nik for your clear answer and the reference! – Masayoshi Kaneda Aug 18 at 10:28 Sure. General rule of thumb --- ignore my first answer, wait for the correction ... – Nik Weaver Aug 18 at 14:58 - Any upper or lower semicontinuous function is continuous almost everywhere in the sense of Baire category (since it is a pointwise limit of a sequence of continuous functions, at least when $\Omega$ is compact metrizable). The algebra of Baire-a.e. continuous functions is itself a $C^\ast$-algebra. So the answer to your question is no, once we show that there exists a function $f \in L^\infty$ that is not $\mu$-equivalent to a Baire-a.e. continuous function. For an example we may take any indicator function of a set $S$ such that both $\mathrm{supp} \, \mu\restriction S$ and $\mathrm{supp} \, \mu\restriction (\Omega \setminus S)$ equal $\Omega$. - Thank you very much Alexander for your answer! – Masayoshi Kaneda Aug 18 at 10:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8870944380760193, "perplexity_flag": "head"}
http://mathoverflow.net/questions/51863?sort=newest
## Does Riemann map depend continuously on the domain? I've always taken this for granted until recently. In the simplest case, suppose we are given a Jordan curve $C \subseteq \mathbb{C}$ containing a neighborhood of $\bar{0}$ in its interior, together with a parametrization $\gamma_1:S^1 \rightarrow C$. Is it true that for all $\varepsilon >0$ there exists $\delta >0$ such that for any Jordan curve $C'$ with a parametrization $\gamma_2:S^1 \rightarrow C'$ satisfying $||\gamma_1-\gamma_2||<\delta$ in the uniform norm, the Riemann maps $R, R'$ from $\mathbb{D}$ to the interiors of $C, C'$ that fix the origin and have positive real derivatives at $\bar{0}$ are at most $\varepsilon$ apart? - ## 2 Answers Here's a conceptual proof of why this is true, up to things which are intuitively obvious and not hard to prove: In the unit disk, almost every Brownian path hits the boundary. The hitting measure is proportional to arc length. In two dimensions, a conformal map takes trajectories of Brownian paths to trajectories of Brownian paths: just the time parametrization changes. (This is a consequence of the fact that conformal maps take harmonic functions to harmonic functions; harmonic functions are the functions whose expectation is invariant under Brownian motion.) It follows that the pushforward of arc length of the unit disk via the Riemann mapping is the hitting probability for Brownian paths starting at the image of the origin. Your question is equivalent to asking whether the measure of intervals in your parametrized Jordan curves is uniformly continuous with respect to the uniform topology on parametrized Jordan curves. It's intuitively obvious, as well as true and not hard to prove (further explanation below), that a Brownian path starting at a point $z$ inside a Jordan domain near the boundary is likely to hit the boundary nearby. This fact quickly implies the continuity that you need: follow Brownian motion until it gets within $2 \epsilon$ of the initial boundary curve, so it is between $\epsilon$ and $3 \epsilon$ of the perturbed curve. When Brownian motion is continued, most of it can't shift very far before hitting. (Note: given a Jordan curve, you must take $\epsilon$ small enough that short intervals as measured by hitting measure are also short on the curve, to be able to conclude that the Riemann mapping does not move very far when you perturb the curve.) There are a variety of ways to prove that a Brownian path starting near the boundary is likely to hit the boundary nearby; one way is to lift continuously to a branch of the map $\log(z-z_0)$, where $z_0$ is the closest point to the boundary. Now the random walk takes place in an arbitrarily long strip of width no more than $2 \pi$; it has little chance of remaining in the strip long enough to move far along its length. This follows from the fact that a Brownian path in 1 dimension has a large probability of going outside an interval of length $2 \pi$ after a certain length of time. Another way to prove that Brownian paths are likely to hit nearby on the curve is to make use of the estimate for the Poincaré metric inside a domain: it varies by no more than a factor of 2 from 1/(minimum distance to the boundary). With this estimate, you can show that for a large Poincaré disk centered about $z$ near a boundary point $z_0$, most of its arc length gets squeezed near to $z_0$.
Side note: Brouwer proved (in his intuitionistic framework) that every function that is everywhere defined is continuous, so from this point of view Caratheodory's theorem about continuity at the boundary implies continuity. However, one needs to check that Caratheodory's theorem is true intuitionistically; Brouwer later rejected his famous fixed-point theorem on these grounds. - @Bill Thurston -- Thanks a lot for the insights and intuition! I eventually came up with a rigorous proof last night and by now it's verified. Although the proof does not involve Brownian motions (it's cross-cuts, extremal length and convergence on compact sets combined in a funny way). Your comment certainly motivated me in believing this is true! – Conan Wu Jan 14 2011 at 1:34 1 @Conan Wu: I'm glad you worked out your own proof. I should mention that I think of the two points of view (cross-cuts and Brownian motion) as very closely related: they can be converted back and forth. What you find most satisfying mostly has to do with background and experience. Another method: there is a beautiful set of connections (cut locus) <--> (convex hull of stereographic image) <--> (developable hyperbolic surface with given boundary) <--quasi-equivalent--> Riemann mapping. The "bending measure" is a good way to parametrize the actual shapes, or <--> the Schwarzian derivative. – Bill Thurston Jan 14 2011 at 1:54 - Yes. This is a classical result of geometric function theory due to Carathéodory. The theorem and a fairly straightforward proof can be found, for instance, in the Hurwitz-Courant Funktionentheorie (in the part written by Courant). Edit. Theory of Functions of a Complex Variable by Markushevich might be a more accessible reference (see Volume 3, Theorem 2.1). Edit 2. By the way, there exist sequences of domains $\{G_n\subset\mathbb C:\ n\in\mathbb N\}$ such that $$\limsup_{n\to\infty}\ \mbox{dist}(\partial G_n, \partial G)>0$$ but the corresponding Riemann maps $\phi_n:\mathbb D\to G_n$ converge uniformly in $\mathbb D$ to the Riemann map $\phi:\mathbb D\to G$. For example, let $G_n$ be a union of two disjoint rectangles $G'$ and $G''$ connected by a 'thin' rectangle of fixed length $l$ and width $h_n=1/n$ (picture omitted). Let $z_0\in G'$. Then the sequence of the conformal maps $f_n:G_n\to\mathbb D$ which satisfy the conditions $$f_n(z_0)=0,\qquad f'_n(z_0)>0,$$ converges uniformly in $G'$ to the conformal map $f:G'\to\mathbb D$ satisfying the same condition. Moreover, the sequence of the inverse maps $\phi_n:\mathbb D\to G_n$ converges uniformly in $\mathbb D$ to $\phi=f^{-1}$. The general Carathéodory theorem gives a criterion for the convergence of the Riemann maps in terms of the corresponding domains. - If you meant Carathéodory's theorem on extending Riemann maps to the boundary when the boundary is a Jordan curve, then it's not the problem asked: Carathéodory gives a parametrization of the Jordan curve; furthermore, the distance between the Carathéodory parametrizations of the curves is equal to the distance between their corresponding Riemann maps. However, the question can be stated as: given a pair of parametrizations of the curves that are uniformly close, does it follow that the Carathéodory parametrizations are close?
– Conan Wu Jan 12 2011 at 23:09 The result I referred to can be used in the proof of Carathéodory's theorem on extending Riemann maps to the boundary. Roughly speaking it says that if a sequence of simply connected domains $(G_n)$ converges to a simply connected domain $G$ then the corresponding Riemann maps $(f_n)$ converge uniformly to the Riemann map $f$ associated with $G$. – Andrey Rekalo Jan 13 2011 at 0:13 Thank you for your response! It took me some time to locate a copy of the book. However, when I look into the proof of Theorem 2.1, it seems that what he means by converging "uniformly inside the domain" is that it converges uniformly on compact subsets. Goluzin uses the same terminology in the classic book "Geometric theory of functions of a complex variable", and the same theorem (i.e. domains converging to a unique kernel imply Riemann maps converging uniformly on compact sets) can be found there. (And there is a clear definition of what he means by "inside"; see p. 11 on uniform convergence.) – Conan Wu Jan 13 2011 at 19:52 You are absolutely right, this is uniform convergence on compact subsets. I forgot to mention the excellent book by Goluzin. A good find! – Andrey Rekalo Jan 13 2011 at 20:29 Well, then this does not give an answer to the question, right? I.e. it gives no uniform δ making the Riemann maps at most ϵ apart on the whole disk. However I think I have (finally) produced a proof now! (by using cross-cuts, extremal length and this classical result in a not-so-direct way). In any case, thanks again~ – Conan Wu Jan 13 2011 at 20:38
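Thurston's Brownian-motion picture is easy to experiment with numerically. Below is a rough Monte Carlo sketch (my own illustration, with a made-up step size and starting point, not part of the thread): approximate Brownian motion in the unit disk by a small-step random walk started at an interior point, record where each path first leaves the disk, and compare the empirical hitting distribution with the harmonic measure of that point, which for the disk is known in closed form (the Poisson kernel).

```python
import numpy as np

rng = np.random.default_rng(0)

def exit_angle(z0: complex, step: float = 0.02) -> float:
    """Random-walk approximation of Brownian motion started at z0;
    returns the argument of the point where the path first leaves the unit disk."""
    z = z0
    while abs(z) < 1.0:
        z += step * complex(rng.standard_normal(), rng.standard_normal())
    return np.angle(z)

z0 = 0.5 + 0.0j                     # illustrative interior starting point
angles = np.array([exit_angle(z0) for _ in range(1000)])

# Compare the empirical exit density with the harmonic measure of z0,
# i.e. the Poisson kernel (1 - |z0|^2) / (2*pi*|e^{i t} - z0|^2).
edges = np.linspace(-np.pi, np.pi, 13)
empirical, _ = np.histogram(angles, bins=edges, density=True)
mids = (edges[:-1] + edges[1:]) / 2
poisson = (1 - abs(z0) ** 2) / (2 * np.pi * abs(np.exp(1j * mids) - z0) ** 2)
print(np.round(empirical, 2))
print(np.round(poisson, 2))
```

The agreement of the two rows is exactly the statement that the pushforward of boundary arc length under the Riemann map is the Brownian hitting measure; perturbing the starting point (or, by conformal invariance, the domain) only moves the histogram slightly.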
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 46, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9221407175064087, "perplexity_flag": "head"}
http://mathoverflow.net/questions/82826/haar-measure-on-infinite-dimensional-lie-groups
## Haar measure on infinite dimensional Lie groups? Hi. Is there a Haar measure or equivalent on infinite dimensional Lie groups? I've been playing around with $Diff(S^1)$, and at least a direct approach seems quite hopeless. It goes something like this: Define an element of the group by "Euler coordinates", $g \doteq \prod\limits_{i=-\infty}^{\infty} e^{\omega^i X_i}$, with $\left[ X_i ,X_j \right] = (j-i)X_{i+j}$. Now I could define a (left invariant) Maurer-Cartan form as $\Omega_L \doteq g^{-1} dg = X_i \otimes \theta^i$, where $\theta^i = \mathcal L^i_j d\omega^j$. Then the Haar measure is $d\mu (g) \doteq ||\mathcal L || \bigwedge\limits_i d\omega^i$. Elements of $\mathcal L$ can be written as $\mathcal L^i_j = \left( \prod\limits_{n=\infty}^{j+1} \exp(-\omega^n \,\mathrm{ad}X_n) \right)^i_j$. Clearly the determinant $||\mathcal L ||$ will be horrible... is there any hope for a manageable explicit expression? I couldn't find any literature on the subject (yet), so I'd appreciate any hints in the right direction. EDIT: umm, and of course the whole question of the existence of such a measure should probably be addressed... EDIT 2: I realized that the question setup is a bit misleading: I'm actually looking for a measure on the Virasoro group (with zero central charge), i.e. the Lie group corresponding to the algebra above... maybe the Shavgulidze measure has something to do with it, I don't know... - 1 I think usually people restrict to fairly "tame" subspaces of infinite-dimensional groups in order to construct an actual measure that's useful. I don't know the literature as well as I'd like but my impression was that the kind of measures you want simply don't exist until you specialize to more modest spaces. – Ryan Budney Dec 6 2011 at 21:56 2 I think you might need to give up some of the properties of a measure (so there might be some kind of left-invariant form, in a weak sense, but not a measure per se). The reason is that a separated measurable group - roughly speaking, a measure space equipped with compatible group topology - carries a natural topology with respect to which it is a topological group, and that topological group has a "completion" to a locally compact group. (This is in Section 62 of Halmos's book on Measure Theory.) – Yemon Choi Dec 6 2011 at 22:58 2 IIRC if there was such a thing quantum field theory would be a lot easier. – Steve Huntsman Dec 7 2011 at 3:27 Small correction to my comment: "compatible group topology" should have been "compatible group structure". [The point is that the Weil topology can be defined in terms of the measure algebra structure] – Yemon Choi Dec 7 2011 at 4:26 1 Doug Pickrell did some work in this direction. You might want to check out arxiv.org/pdf/funct-an/9612009v1. If I remember correctly, this paper appeared as part of a book/monograph. – Antun Milas Dec 15 2011 at 20:57 ## 1 Answer There is something called the Shavgulidze, or the Malliavin-Shavgulidze measure on Diff of smooth manifolds. You can find a discussion in Differentiable measures and the Malliavin calculus (p. 397, available on Google Books). It is not quite invariant, but quasi-invariant. - Thanks for pointing that out! I can't get my hands on the book, and Google won't show me some important pages... but at least I have something to google, and I've found some references. – H. Arponen Dec 7 2011 at 9:03
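The determinant $||\mathcal L||$ is already problematic to define in infinitely many coordinates, but the Maurer–Cartan recipe in the question is easy to sanity-check in a finite-dimensional toy case. Here is a small sketch of my own (not from the thread, and only an analogy) for the affine group of maps $x \mapsto ax+b$ with $a>0$: the components of $g^{-1}dg$ are $da/a$ and $db/a$, so the candidate left Haar measure is $da\,db/a^2$, and the code checks its left invariance by verifying that the Jacobian of left translation cancels the density.

```python
import sympy as sp

a, b, ah, bh = sp.symbols('a b a_h b_h', positive=True)

# Left translation by h = (a_h, b_h) on the affine group x -> a*x + b:
# h.g corresponds to the coordinates (a_h*a, a_h*b + b_h).
La, Lb = ah * a, ah * b + bh

# Jacobian determinant of left translation in the coordinates (a, b).
J = sp.Matrix([[sp.diff(La, a), sp.diff(La, b)],
               [sp.diff(Lb, a), sp.diff(Lb, b)]]).det()

# Candidate left Haar density coming from the Maurer-Cartan form: da db / a^2.
rho = 1 / a**2

# Invariance check: rho(h.g) * |Jacobian| should equal rho(g).
print(sp.simplify(rho.subs({a: La, b: Lb}) * J - rho))   # -> 0
```

No analogous finite determinant exists for the Virasoro group, which is consistent with the comments above that a genuinely invariant measure should not be expected there, only quasi-invariant ones such as the Malliavin–Shavgulidze measure mentioned in the answer.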
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9422798156738281, "perplexity_flag": "middle"}
http://mathhelpforum.com/statistics/199212-simple-statistics-combination-permutation-problem.html
# Thread: 1. ## Simple statistics Combination/Permutation problem Hello, I've recently joined this forum to get help in mathematics. I am going to be studying in an engineering field next year as an undergrad. Over the summer, I've taken up computer programming as a hobby and I'm having a hard time with some statistical programs I'm trying to make. Here is my simple problem: How many different combinations of two numbers can add up to make 8? Each number can be no greater than 6. This is exactly the problem of finding the probability of rolling an 8 with two dice. I know that the answer to this question is 5, as in there are 5 different combinations of 2 numbers that will add up to equal 8. For example: 1. 2 + 6 2. 6 + 2 3. 3 + 5 4. 5 + 3 5. 4 + 4 Therefore, there is a 5/36 chance of rolling an 8 when rolling 2 dice with 6 faces each. I want to extract some basic principle or formula out of this so that I can do the same thing with more dice and more or fewer faces on each die. Thank you very much, I appreciate any responses. (I'm not here just to leech on others. I plan on contributing to the help threads where I'm qualified to actually help, like calculus.) 2. ## Re: Simple statistics Combination/Permutation problem Originally Posted by matzematze How many different combinations of two numbers can add up to make 8? Each number can be no greater than 6. [...] If you go to this webpage and scroll down to the expanded form, you will see one term $5x^8$. The coefficients tell us the number of ways to add the dice. $4x^5$ means we get five in four ways. If you now change the power from $[~~]^2$ to $[~~]^4$, those new coefficients tell us the number of ways to add four dice. 3. ## Re: Simple statistics Combination/Permutation problem Ah, that's brilliant. I have no idea how that works, but it works. Thanks. 4. ## Re: Simple statistics Combination/Permutation problem Is there a name for this procedure that I can Google in order to learn more on how it works?
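The procedure in post #2 is the method of (ordinary) generating functions: the number of ways to roll a total of $s$ with $n$ dice having $f$ faces is the coefficient of $x^s$ in $(x + x^2 + \cdots + x^f)^n$. A short Python sketch of my own (not from the thread) that computes these coefficients by repeated polynomial multiplication:

```python
def dice_counts(num_dice: int, faces: int = 6) -> list[int]:
    """Coefficient list of (x + x^2 + ... + x^faces)^num_dice:
    counts[s] = number of ways the dice can sum to s."""
    counts = [1]                          # the polynomial "1" (zero dice rolled)
    for _ in range(num_dice):
        new = [0] * (len(counts) + faces)
        for s, c in enumerate(counts):    # multiply by (x + x^2 + ... + x^faces)
            for v in range(1, faces + 1):
                new[s + v] += c
        counts = new
    return counts

two = dice_counts(2)
print(two[8], "/", 6**2)        # 5 / 36, matching the thread
print(dice_counts(4)[14])       # ways to roll 14 with four dice: 146
```

Searching for "generating functions" (the question in post #4) turns up the general theory; the same loop works unchanged for any number of dice and any number of faces.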
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9152020812034607, "perplexity_flag": "head"}
http://mathoverflow.net/questions/41609/have-all-numbers-with-sufficiently-many-zeros-been-proven-transcendental/41760
## Have all numbers with “sufficiently many zeros” been proven transcendental? Any number less than 1 can be expressed in base $g$ as $\sum _{k=1}^\infty {\frac {D_k}{g^k}}$, where $D_k$ is the value of the $k^{th}$ digit. If we were interested in only the non-zero digits of this number, we could equivalently express it as $\sum _{k=1}^\infty {\frac {C_k}{g^{Z(k)}}}$, where $Z(k)$ is the position of the $k^{th}$ non-zero digit base $g$ and $C_k$ is the value of that digit (i.e. $C_k = D_{Z(k)}$). Now, consider all the numbers of this form $(\sum _{k=1}^\infty {\frac {C_k}{g^{Z(k)}}})$ where the function $Z(k)$ eventually dominates any polynomial. Is there a proof that any number of this form is transcendental? So far, I have found a paper demonstrating this result for the case $g=2$; it can be found here: http://www.escholarship.org/uc/item/44t5s388?display=all - 3 This was also posted on MSE: math.stackexchange.com/questions/6321/… . – Joseph O'Rourke Oct 9 2010 at 16:37 2 Roth's theorem. – Felipe Voloch Oct 9 2010 at 17:05 7 Roth's theorem implies that it is transcendental if $Z(k+1)>(2+\epsilon)Z(k)$ infinitely often. Unfortunately, $Z(k)$ eventually dominating any polynomial is not sufficient to guarantee this. Even $Z(k)=2^k$ doesn't grow fast enough for Roth's theorem to apply. – George Lowther Oct 9 2010 at 17:23 6 But the $p$-adic version of Roth's theorem (due to Ridout in the version one needs here) does the trick if $Z(k+1) > (1+ \epsilon) Z(k)$ infinitely often and, in particular, for $Z(k)=2^k$. – Mike Bennett Oct 9 2010 at 22:24 1 I see the correct theorem now. The paper is Rational approximations to algebraic numbers (dx.doi.org/10.1112/S0025579300001182). Not free access, but it is quoted in An explicit version of the theorem of Roth-Ridout (seminariomatematico.dm.unito.it/rendiconti/…) Theorem 2 (Ridout's theorem). This does indeed do the job! Many thanks for that. In fact you only need the denominators to have factors from some finite set of primes, which covers $Z(k)$ growing exponentially and any base. Still, the case for $Z(k)$ merely dominating any polynomial seems to be open. – George Lowther Oct 11 2010 at 1:25 ## 2 Answers I don't know of a paper proving the result, but I can prove it for you now. In fact, the methods in the paper you link generalize to an arbitrary base $g > 2$. The authors of the paper don't seem to think that it generalizes quite so easily, as in the Open Problems section they state that "For bases b > 2 there is the problem of having more than two possible digits. What kinds of bounds might be placed on counts of 1's and 2's for ternary expansions of algebraic numbers?". Hopefully I have not made any major mistakes... [Edit: A paper by Bugeaud, On the b-ary expansion of an algebraic number, available from his homepage, gives lower bounds on the number of nonzero digits in an irrational algebraic number. There, he references the paper linked in the question, saying "Apparently, their approach does not extend to a base b with b ≥ 3". However, he has just responded to this question, agreeing that the method does indeed generalize. So I'm more confident about my proof now.] Use $\#(x,N)$ to denote the number of nonzero base-$g$ digits in the expansion of $x$, up to and including the $N$th digit after the 'decimal' point; then what you are asking for is implied by the following.
If $x$ is irrational and satisfies a rational polynomial of degree $D$ then $\#(x,N) \ge cN^{1/D}$ for a positive constant $c$ and all $N$.

First, I'll introduce some notation similar to that used in the linked paper. Use $r_1(n)$ to denote the $n$th base-$g$ digit of $x$, so that $0 \le r_1(n) \le g-1$ and $$x=\sum_nr_1(n)g^{-n}.$$ It's enough to consider $1 \le x < 2$, so I'll do that throughout. Then $r_1(n)=0$ for $n < 0$ and $r_1(0) = 1$. Also use $r_d(n)$ to denote $$r_d(n)=\sum_{p_1+p_2+\cdots+p_d=n}r_1(p_1)r_1(p_2)\cdots r_1(p_d)=\sum_{j+k=n}r_1(j)r_{d-1}(k)$$ This satisfies the inequalities $r_d(n) \ge r_1(0)r_{d-1}(n) = r_{d-1}(n)$ and $$\sum_{n\le N}r_d(n)\le(g-1)^d \#(x,N)^d\le(g-1)^d(N+1)^d.\qquad\qquad(1)$$ Also, raising $x$ to the $d$th power gives $$x^d=\sum_nr_d(n)g^{-n},$$ which differs from the base-$g$ expansion of $x^d$ only because $r_d(n)$ can exceed $g$. We also introduce notation for the expansion of $x^d$ with the digits shifted to the left $R$ places and truncated to leave the fractional part, $$T_d(R)=\sum_{n\ge1}r_d(R+n)g^{-n},$$ so that $g^Rx^d-T_d(R)$ is an integer. This can also be bounded, using (1), $$\begin{array}{rl} \displaystyle T_d(R)&\displaystyle\le\sum_{n\ge1}(g-1)^d(R+n+1)^dg^{-n}\\ &\displaystyle\le\sum_{n\ge1}(g-1)^d(R+1)^d(n+1)^dg^{-n}\\ &\displaystyle\le C_d(R+1)^d \end{array}$$ where $C_d=\sum_{n\ge1}(g-1)^d(n+1)^dg^{-n}$ is a constant independent of $R$.

Now suppose that $x$ satisfies an integer polynomial of degree $D > 1$, $$A_Dx^D+A_{D-1}x^{D-1}+\cdots+A_1x+A_0=0$$ with $A_D > 0$. It follows that $$T(R)\equiv\sum_{d=1}^D A_dT_d(R)$$ is an integer for each $R$. The following is similar to Theorem 3.1 in the linked paper.

Lemma 1: For all sufficiently large $N$, there exists $n \in (N/(D+1),N)$ with $r_1(n) > 0$.

Proof: This is a consequence of Liouville's theorem for rational approximation. If the statement were false then setting $m=\lfloor N/(D+1)\rfloor$, $q=g^{m}$ and $p/q=\sum_{n=0}^mr_1(n)g^{-n}$ gives infinitely many approximations $\vert x-p/q\vert=q^{-D}o(1)$ as $N$ increases, contradicting Liouville's theorem.

In Lemma 1, Roth's theorem could have been used to reduce the $D+1$ term to $2+\epsilon$. In fact, Ridout's theorem as discussed in the comments can be used to reduce it even further to $1+\epsilon$. This isn't needed here, so I just used the more elementary Liouville's theorem. Lemma 6.1 from the linked paper generalizes to base $b$, and puts upper bounds on the number of times at which $T(n)$ can be nonzero.

Lemma 2: For large enough $N$, setting $K=\lceil 2D\log_g N\rceil$ gives $$\sum_{1\le R\le N-K}T_d(R) < (g-1)^{d-1} \#(x,N)^d+1$$ for $1 \le d \le D$ and so, $$\sum_{1\le R\le N-K}\vert T(R)\vert\le\sum_{d=1}^D\vert A_d\vert ((g-1)^{d-1} \#(x,N)^D+1)$$ Proof: Using similar inequalities to the proof used in the linked paper, $$\begin{array}{rl} \displaystyle\sum_{1\le R\le N-K}T_d(R) &\displaystyle=\sum_{m\ge1}g^{-m}\sum_{R\le N-K}r_d(R+m)\\ &\displaystyle\le\sum_{m=1}^Kg^{-m}\sum_{R\le N}r_d(R)+g^{-K}\sum_{m > K}g^{K-m}\sum_{R\le N-K}r_d(R+m)\\ &\displaystyle \le \frac{1}{g-1}\sum_{R\le N}r_d(R)+g^{-K}\sum_{K\le R\le N}T_d(R)\\ &\displaystyle\le(g-1)^{d-1}\#(x,N)^d+g^{-K}NC_d(N+1)^d. \end{array}$$ The final term is bounded by $C_d(N+1)^{d+1}/N^{2D}$, which will be less than 1 for $N$ large.

Lemma 6.2 also generalizes, which gives blocks where $T(R)$ is nonzero.

Lemma 3: Let $R_0 < R_1$ be positive integers with $r_{D-1}(R) = 0$ for all $R \in (R_0,R_1]$ and $T(R_1) > 0$. Then $T(R) > 0$ for all $R \in [R_0,R_1]$.
Proof: We have the following relation for $T$, $$T(R-1)=\frac{1}{g}T(R)+\frac{1}{g}\sum_{d=1}^D A_dr_d(R).$$ As $r_d(n) \ge r_{d-1}(n)$, the hypothesis implies that $r_d(R) = 0$ for all $1 \le d \le D-1$ and $R \in (R_0,R_1]$. Therefore, $$T(R-1)=\frac{1}{g}T(R)+\frac{1}{g}A_Dr_D(R)\ge \frac{1}{g}T(R).$$ Assuming inductively that $T(R) > 0$ gives $T(R-1) > 0$.

Putting this together gives the result (Theorem 7.2 in the linked paper).

Theorem 4: There is a constant $c$ such that, for all sufficiently large $N$, $$\#(x,N)>cN^{1/D}$$ Proof: Suppose not; then for any $\delta > 0$, there are infinitely many $N$ with $\#(x,N) < \delta N^{1/D}$ and, using (1), $$\sum_{n\le N}r_{D-1}(n)\le \delta N^{1-1/D}\qquad\qquad(2)$$ In particular, the proportion of integers $R$ with $r_{D-1}(R) > 0$ goes to 0. Let $0 = R_1 < R_2 <\cdots< R_M \le N$ be those integers in the range $[0,N]$ with $r_{D-1}(R_k) > 0$ and set $R_{M+1} = N$. Then (2) gives $M+1 \le \delta N^{1-1/D}$, and $r_d(R) = 0$ for $d \le D-1$ and $R$ in any of the ranges $(R_i,R_{i+1})$. So, $T_d(R-1) = g^{R-R_{i+1}}T_d(R_{i+1}-1)$. Fixing $\epsilon > 0$ and letting $I$ denote the numbers $i$ with $R_{i+1}-R_i > \epsilon N^{1/D}$ gives $$\sum_{i\in I}(R_{i+1}-R_i)\ge N - (M+1)\epsilon N^{1/D}\ge N(1-\epsilon \delta).$$ So, the intervals $(R_i,R_{i+1})$ larger than $\epsilon N^{1/D}$ cover most of the interval $[0,N]$, as long as $\epsilon\delta$ is small enough. If $R$ is in the range $(R_i,R_{i+1}-D\log_g N)$ and $r_D(R) > 0$ then $T(R-1) > 0$: $$T(R-1)\ge \frac{1}{g}A_D-g^{R-R_{i+1}}\sum_{d=1}^{D-1}\vert A_d\vert T_d(R_{i+1}-1) \ge\frac1g-N^{-D}\sum_{d=1}^{D-1}\vert A_d\vert C_d R_{i+1}^d$$ which is positive, so long as $N$ is chosen large enough. Assuming that $N$ is large enough, by Lemma 1, for each $i$ in $I$, there is $$j\in\left(\frac{1}{D+1}(R_{i+1}-R_i-D\log_gN),R_{i+1}-R_i-D\log_gN\right)$$ with $r_1(j) > 0$. Then, $r_D(R_i+j) \ge r_{D-1}(R_i)r_1(j)$ is positive, so $T(R_i+j-1) > 0$. Lemma 3 implies that $T(R_i+j)$ is positive for all $0 \le j< (R_{i+1}-R_i-D\log_gN)/(D+1)$. Hence $$\sum_{1\le n< N-2D\log_g N}\vert T(n)\vert\ge\frac{1}{D+1}\sum_{i\in I}(R_{i+1}-R_i-2D\log_gN) \ge\frac{N(1-\epsilon\delta)}{D+1}-2\delta N^{1-1/D}\log_gN$$ This contradicts Lemma 2, which gives, for $N$ large, $$\sum_{1\le n< N-2D\log_g N}\vert T(n)\vert =O(\delta^D N).$$ - I must apologize and correct what I wrote in an earlier paper: the method does extend to base > 2. Using Liouville's inequality, you lose a bit in the constant $c$, but the great advantage is that everything can be made fully explicit. - +1. Many thanks for your response! Unfortunately, with less than 50 rep you can't add a comment to my answer, but I'll edit it to refer to this response. – George Lowther Oct 11 2010 at 15:36
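The lower bound $\#(x,N) \ge cN^{1/D}$ can be watched failing numerically for a lacunary number such as $x=\sum_{k\ge1} g^{-2^k}$, whose nonzero-digit count grows only like $\log_2 N$. The sketch below (my own illustration, not part of the answer) uses exact integer arithmetic to count the nonzero base-$g$ digits among the first $N$ digits and compares the count with $N^{1/D}$ for a few small degrees $D$.

```python
def nonzero_digit_count(N: int, g: int = 3) -> int:
    """#(x, N) for x = sum_{k>=1} g^(-2^k): the number of nonzero digits among
    the first N base-g digits, computed with exact integer arithmetic."""
    truncation = sum(g ** (N - 2 ** k) for k in range(1, N.bit_length() + 1) if 2 ** k <= N)
    count = 0
    while truncation:
        truncation, digit = divmod(truncation, g)
        count += digit != 0
    return count

N = 4096
print("#(x, N) =", nonzero_digit_count(N))          # 12, i.e. about log2(N)
for D in (2, 3, 5):
    print(f"N^(1/{D}) ~", round(N ** (1 / D), 1))    # 64.0, 16.0, 5.3
```

Since $\log_2 N$ is eventually smaller than $cN^{1/D}$ for every fixed $c>0$ and every degree $D$, Theorem 4 rules out $x$ being algebraic of any degree, matching the Ridout-based discussion in the comments.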
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 20, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.919136106967926, "perplexity_flag": "head"}
http://en.wikipedia.org/wiki/Projection_(set_theory)
# Projection (set theory) In set theory, a projection is one of two closely related types of functions or operations, namely: • A set-theoretic operation typified by the jth projection map, written $\mathrm{proj}_{j}\!$, that takes an element $\vec{x} = (x_1,\ \ldots,\ x_j,\ \ldots,\ x_k)$ of the Cartesian product $(X_1 \times \cdots \times X_j \times \cdots \times X_k)$ to the value $\mathrm{proj}_{j}(\vec{x}) = x_j$. • A function that sends an element x to its equivalence class under a specified equivalence relation E. The result of the mapping is written as $[x]$ when E is understood, or written as $[x]_E$ when it is necessary to make E explicit.
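As a concrete illustration (mine, not part of the article), the two notions look like this in code: the coordinate projection extracts the $j$th component of a tuple, while the quotient projection sends an element to its equivalence class, here for the hypothetical relation "same value mod 3" on a small set.

```python
def proj(j: int, x: tuple):
    """The j-th coordinate projection on a Cartesian product (1-indexed, as above)."""
    return x[j - 1]

def equivalence_class(x, elements, related):
    """[x]_E: the equivalence class of x under the relation 'related'."""
    return sorted(y for y in elements if related(x, y))

print(proj(2, ("a", "b", "c")))                       # 'b'

same_mod_3 = lambda u, v: u % 3 == v % 3              # an equivalence relation on 0..9
print(equivalence_class(4, range(10), same_mod_3))    # [1, 4, 7]
```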
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9269440174102783, "perplexity_flag": "middle"}
http://stats.stackexchange.com/questions/21032/problem-with-singular-covariance-matrices-when-doing-gaussian-process-regression
# Problem with singular covariance matrices when doing Gaussian process regression I'm working with Gaussian process regression. I have started testing different covariance functions and compositions to see what type of data they could describe best. I made my own implementation in Java. My problem: most of the covariance functions I use result in a singular covariance matrix which is not invertible. 1. Shouldn't the proposed covariance functions/estimators produce only invertible matrices? 2. Are there methods or hints for regularizing the matrices? Or can that be done by using other values or ranges as inputs? Maybe the introduction of error terms would help as well? Most of my problems occur with integer $x$ inputs to the Brownian motion covariance function $k(x,x') = \min(x,x')$. When I use this, the matrix is always singular. Thank you - ## 1 Answer If all covariance functions give you a singular matrix, it could be that some of your data points are identical, which gives two identical rows/columns in the matrix. To regularise the matrix, just add a ridge on the principal diagonal (as in ridge regression), which is used in Gaussian process regression as a noise term. Note that using a composition of covariance functions or an additive combination can lead to over-fitting the marginal likelihood in evidence based model selection due to the increased number of hyper-parameters, and so can give worse results than a more basic covariance function, even though the basic covariance function is less suitable for modelling the data. - I read in some publications about estimating the inverse for singular matrices. Is that a standard problem in Gaussian process regression, or why is there so much literature about numerical problems with the covariance matrices? – Andreas Jan 13 '12 at 10:35 1 The matrix is often ill-conditioned, so there are indeed a few tricks for inverting it more reliably; there is a good discussion of this in Rasmussen and Williams' book. However, I have found that I normally only run into problems when model selection tries to make a very bland RBF covariance to model an essentially linear decision boundary, so you could argue it was model mis-specification? I haven't used GP regression very much, so it is hard to know whether it crops up more often there. – Dikran Marsupial Jan 13 '12 at 15:24 "there is a good discussion of this in Rasmussen and Williams' book" - May I ask where exactly? – Andreas Jan 16 '12 at 9:56 1 The key trick to limiting numerical instability is on page 45 (equation 3.26). – Dikran Marsupial Jan 16 '12 at 11:04
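A minimal numpy sketch of the ridge/jitter advice from the answer (my own illustration; the inputs, targets and noise level are made up): build the Brownian-motion covariance $k(x,x')=\min(x,x')$ on integer inputs, then add a small noise term $\sigma^2 I$ to the diagonal before factorising. Working through a Cholesky factor, rather than an explicit inverse, is the approach Rasmussen and Williams recommend.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 5.0])    # note the duplicated input at 5.0
K = np.minimum.outer(x, x)                       # Brownian motion kernel min(x, x')

print(np.linalg.cond(K))        # enormous condition number: duplicate rows/columns

sigma2 = 1e-2                   # noise variance, the "ridge"/jitter on the diagonal
K_noisy = K + sigma2 * np.eye(len(x))

L = np.linalg.cholesky(K_noisy)                        # now succeeds
y = np.sin(x)                                          # toy targets
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))    # (K + sigma^2 I)^{-1} y
print(alpha)
```

In exact arithmetic $\min(x,x')$ on distinct positive inputs is actually non-singular, but duplicated (or nearly duplicated) inputs and finite precision make the jitter a practical necessity, and it coincides with the noise term that usually belongs in the GP model anyway.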
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9151065349578857, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/CP-symmetry
# CP violation

In particle physics, CP violation (CP standing for Charge Parity) is a violation of the postulated CP-symmetry (or Charge conjugation Parity symmetry): the combination of C-symmetry (charge conjugation symmetry) and P-symmetry (parity symmetry). CP-symmetry states that the laws of physics should be the same if a particle were interchanged with its antiparticle (C symmetry), and then left and right were swapped (P symmetry). The discovery of CP violation in 1964 in the decays of neutral kaons resulted in the Nobel Prize in Physics in 1980 for its discoverers James Cronin and Val Fitch. It plays an important role both in the attempts of cosmology to explain the dominance of matter over antimatter in the present Universe, and in the study of weak interactions in particle physics.

## CP-symmetry

CP-symmetry, often called just CP, is the product of two symmetries: C for charge conjugation, which transforms a particle into its antiparticle, and P for parity, which creates the mirror image of a physical system. The strong interaction and electromagnetic interaction seem to be invariant under the combined CP transformation operation, but this symmetry is slightly violated during certain types of weak decay. Historically, CP-symmetry was proposed to restore order after the discovery of parity violation in the 1950s. The idea behind parity symmetry is that the equations of particle physics are invariant under mirror inversion. This leads to the prediction that the mirror image of a reaction (such as a chemical reaction or radioactive decay) occurs at the same rate as the original reaction. Parity symmetry appears to be valid for all reactions involving electromagnetism and strong interactions. Until 1956, parity conservation was believed to be one of the fundamental geometric conservation laws (along with conservation of energy and conservation of momentum). However, in 1956 a careful critical review of the existing experimental data by theoretical physicists Tsung-Dao Lee and Chen Ning Yang revealed that while parity conservation had been verified in decays by the strong or electromagnetic interactions, it was untested in the weak interaction. They proposed several possible direct experimental tests. The first test based on beta decay of Cobalt-60 nuclei was carried out in 1956 by a group led by Chien-Shiung Wu, and demonstrated conclusively that weak interactions violate the P symmetry or, as the analogy goes, some reactions did not occur as often as their mirror image. Overall, the symmetry of a quantum mechanical system can be restored if another symmetry S can be found such that the combined symmetry PS remains unbroken. This rather subtle point about the structure of Hilbert space was realized shortly after the discovery of P violation, and it was proposed that charge conjugation was the desired symmetry to restore order. Simply speaking, charge conjugation is a simple symmetry between particles and antiparticles, and so CP-symmetry was proposed in 1957 by Lev Landau as the true symmetry between matter and antimatter. In other words a process in which all particles are exchanged with their antiparticles was assumed to be equivalent to the mirror image of the original process.
### CP violation in the Standard Model

"Direct" CP violation is allowed in the Standard Model if a complex phase appears in the CKM matrix describing quark mixing, or the PMNS matrix describing neutrino mixing. In such a scheme, a necessary condition for the appearance of the complex phase, and thus for CP violation, is the presence of at least three generations of quarks. The reason why this causes CP violation is not immediately obvious, but can be seen as follows. Consider any given particles (or sets of particles) $a$ and $b$, and their antiparticles $\tilde{a}$ and $\tilde{b}$. Now consider the processes $a \rightarrow b$ and the corresponding antiparticle process $\tilde{a} \rightarrow \tilde{b}$, and denote their amplitudes $M$ and $\tilde{M}$ respectively. In the absence of CP violation, these two amplitudes must be the same complex number. We can separate the magnitude and phase by writing $M=|M|e^{i\theta}$. If a phase term is introduced from (e.g.) the CKM matrix, denote it $e^{i\phi}$. Note that $\tilde{M}$ contains the conjugate matrix to $M$, so it picks up a phase term $e^{-i\phi}$. Now we have: $M=|M|e^{i\theta}e^{i\phi}$ $\tilde{M}=|M|e^{i\theta}e^{-i\phi}$ However, physically measurable reaction rates are proportional to $|M|^{2}$, so far nothing is different. However, consider that there are two different routes for $a \rightarrow b$. Now we have: $M = |M_{1}|e^{i\theta_{1}}e^{i\phi_{1}} + |M_{2}|e^{i\theta_{2}}e^{i\phi_{2}}$ $\tilde{M} = |M_{1}|e^{i\theta_{1}}e^{-i\phi_{1}} + |M_{2}|e^{i\theta_{2}}e^{-i\phi_{2}}$ Some further calculation gives: $|M|^{2}-|\tilde{M}|^{2}=-4|M_{1}||M_{2}|\sin(\theta_{1}-\theta_{2})\sin(\phi_{1}-\phi_{2})$ Thus, we see that a complex phase gives rise to processes that proceed at different rates for particles and antiparticles, and CP is violated.

## Experimental status

### Indirect CP violation

In 1964, James Cronin and Val Fitch, with coworkers, provided clear evidence (which was first announced at the 12th ICHEP conference in Dubna) that CP-symmetry could be broken. This work won them the 1980 Nobel Prize. This discovery showed that weak interactions violate not only the charge-conjugation symmetry C between particles and antiparticles and the P or parity symmetry, but also their combination. The discovery shocked particle physics and opened the door to questions still at the core of particle physics and of cosmology today. The lack of an exact CP-symmetry, but also the fact that it is so nearly a symmetry, created a great puzzle. Only a weaker version of the symmetry could be preserved by physical phenomena, which was CPT symmetry. Besides C and P, there is a third operation, time reversal (T), which corresponds to reversal of motion. Invariance under time reversal implies that whenever a motion is allowed by the laws of physics, the reversed motion is also an allowed one. The combination of CPT is thought to constitute an exact symmetry of all types of fundamental interactions. Because of the CPT symmetry, a violation of the CP-symmetry is equivalent to a violation of the T symmetry. CP violation implied nonconservation of T, provided that the long-held CPT theorem was valid. In this theorem, regarded as one of the basic principles of quantum field theory, charge conjugation, parity, and time reversal are applied together.
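Returning to the phase-counting argument in the Standard Model subsection above, the interference identity is easy to verify numerically. The sketch below (my own check, with arbitrary made-up magnitudes and phases, not taken from the article) builds the two amplitudes for $a \to b$ and $\tilde a \to \tilde b$ and confirms that $|M|^{2}-|\tilde{M}|^{2}=-4|M_{1}||M_{2}|\sin(\theta_{1}-\theta_{2})\sin(\phi_{1}-\phi_{2})$.

```python
import cmath, math

# Arbitrary illustrative magnitudes, CP-even phases (theta) and CP-odd phases (phi).
m1, m2 = 1.3, 0.7
th1, th2 = 0.4, 1.9
ph1, ph2 = 0.25, -0.6

M     = m1 * cmath.exp(1j * (th1 + ph1)) + m2 * cmath.exp(1j * (th2 + ph2))
M_bar = m1 * cmath.exp(1j * (th1 - ph1)) + m2 * cmath.exp(1j * (th2 - ph2))

lhs = abs(M) ** 2 - abs(M_bar) ** 2
rhs = -4 * m1 * m2 * math.sin(th1 - th2) * math.sin(ph1 - ph2)
print(lhs, rhs)   # the two numbers agree (up to floating-point rounding)
```

The asymmetry vanishes if either kind of phase difference is zero, which is why an observable rate difference needs two interfering routes that differ in both the CP-conserving and the CP-violating phases.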
### Direct CP violation

Kaon oscillation box diagram (figure omitted): the two box diagrams are the Feynman diagrams providing the leading contributions to the amplitude of $K^0$–$\overline{K^0}$ oscillation. The kind of CP violation discovered in 1964 was linked to the fact that neutral kaons can transform into their antiparticles (in which each quark is replaced with the other's antiquark) and vice versa, but such transformation does not occur with exactly the same probability in both directions; this is called indirect CP violation. Despite many searches, no other manifestation of CP violation was discovered until the 1990s, when the NA31 experiment at CERN suggested evidence for CP violation in the decay process of the very same neutral kaons (direct CP violation). The observation was somewhat controversial, and final proof for it came in 1999 from the KTeV experiment at Fermilab and the NA48 experiment at CERN.[1] In 2001, a new generation of experiments, including the BaBar Experiment at the Stanford Linear Accelerator Center (SLAC) and the Belle Experiment at the High Energy Accelerator Research Organisation (KEK) in Japan, observed direct CP violation in a different system, namely in decays of the B mesons.[2] By now a large number of CP violation processes in B meson decays have been discovered. Before these "B-factory" experiments, there was a logical possibility that all CP violation was confined to kaon physics. However, this raised the question of why it's not extended to the strong force, and furthermore, why this is not predicted in the unextended Standard Model, despite the model being undeniably accurate with "normal" phenomena. In 2011, a first indication of CP violation in decays of neutral D mesons was reported by the LHCb experiment at CERN.

## Strong CP problem

Why is the strong nuclear interaction force CP-invariant? There is no experimentally known violation of the CP-symmetry in quantum chromodynamics. As there is no known reason for it to be conserved in QCD specifically, this is a "fine tuning" problem known as the Strong CP problem. QCD does not violate the CP-symmetry as easily as the electroweak theory; unlike the electroweak theory in which the gauge fields couple to chiral currents constructed from the fermionic fields, the gluons couple to vector currents. Experiments do not indicate any CP violation in the QCD sector. For example, a generic CP violation in the strongly interacting sector would create the electric dipole moment of the neutron which would be comparable to 10^−18 e·m, while the experimental upper bound is roughly one trillionth that size. This is a problem because, after all, there are natural terms in the QCD Lagrangian that are able to break the CP-symmetry. ${\mathcal L} = -\frac{1}{4} F_{\mu\nu}F^{\mu\nu}-\frac{n_f g^2\theta}{32\pi^2} F_{\mu\nu}\tilde F^{\mu\nu}+\bar \psi(i\gamma^\mu D_\mu - m e^{i\theta'\gamma_5})\psi$ For a nonzero choice of the θ angle and the chiral quark mass phase θ′ one expects the CP-symmetry to be violated.
One usually assumes that the chiral quark mass phase can be converted to a contribution to the total effective $\scriptstyle{\tilde\theta}$ angle, but it remains to be explained why this angle is extremely small instead of being of order one; the particular value of the θ angle that must be very close to zero (in this case) is an example of a fine-tuning problem in physics, and is typically solved by physics beyond the Standard Model. There are several proposed solutions to solve the strong CP problem. The most well-known is Peccei–Quinn theory, involving new scalar particles called axions. A newer, more radical approach not requiring the axion is a theory involving two time dimensions first proposed in 1998 by Bars, Deliduman, and Andreev.[3] The strong CP problem may also be solved within a theory of quantum gravity.

### Little CP problem

The little CP problem is a term coined by Lisa Randall. It refers to an issue related to the enhanced new physics contributions to the neutron EDM in flavor anarchic models.[4]

## CP violation and the matter–antimatter imbalance

Main article: Baryogenesis

Why does the universe have so much more matter than antimatter? The universe is made chiefly of matter, rather than consisting of equal parts of matter and antimatter as might be expected. It can be demonstrated that, to create an imbalance in matter and antimatter from an initial condition of balance, the Sakharov conditions must be satisfied, one of which is the existence of CP violation during the extreme conditions of the first seconds after the Big Bang. Explanations which do not involve CP violation are less plausible, since they rely on the assumption that the matter–antimatter imbalance was present at the beginning, or on other admittedly exotic assumptions. The Big Bang should have produced equal amounts of matter and antimatter if CP-symmetry was preserved; as such, there should have been total cancellation of both—protons should have cancelled with antiprotons, electrons with positrons, neutrons with antineutrons, and so on. This would have resulted in a sea of radiation in the universe with no matter. Since this is not the case, after the Big Bang, physical laws must have acted differently for matter and antimatter, i.e. violating CP-symmetry. The Standard Model contains only two ways to break CP-symmetry. The first of these, discussed above, is in the QCD Lagrangian, and has not been found experimentally; but one would expect this to lead to either no CP violation or a CP violation that is many, many orders of magnitude too large. The second of these, involving the weak force, has been experimentally verified, but can account for only a small portion of CP violation. It is predicted to be sufficient for a net mass of normal matter equivalent to only a single galaxy in the known universe. Since the Standard Model does not accurately predict this discrepancy, it would seem that the current Standard Model has gaps (other than the obvious one of gravity and related matters) or physics is otherwise in error. Moreover, experiments to probe these CP-related gaps may require the practically impossible-to-obtain energies that may be necessary to probe the gravity-related gaps (see Planck mass).[citation needed]

## Notes

1. NA48 Collaboration, V. Fanti, A. Lai, D. Marras, L. Musa, et al. (1999).
"A new measurement of direct CP violation in two pion decays of the neutral kaon". 465 (1–4): 335–348. arXiv:hep-ex/9909022. Bibcode:1999PhLB..465..335F. doi:10.1016/S0370-2693(99)01030-8. 2. Rodgers, Peter (August 2001). "Where did all the antimatter go?". Phys. World (Bristol, England) (magazine) (Bristol: Institute of Physics): 11. Retrieved 2009-01-22  More than one of `|magazine=` and `|journal=` specified (help) 3. I. Bars; C. Deliduman; O. Andreev (1998). "Gauged Duality, Conformal Symmetry, and Spacetime with Two Times". 58 (6): 066004. arXiv:hep-th/9803188. Bibcode:1998PhRvD..58f6004B. doi:10.1103/PhysRevD.58.066004. 4. Kadosh, Avihay; Pallante, Elisabetta (2011). "CP violation and FCNC in a warped A4 flavor model". Journal of High Energy Physics 2011 (6). arXiv:1101.5420. doi:10.1007/JHEP06(2011)121. ## References • Sozzi, M.S. (2008). Discrete symmetries and CP violation. Oxford University Press. ISBN 978-0-19-929666-8. • G. C. Branco, L. Lavoura and J. P. Silva (1999). CP violation. Clarendon Press. ISBN 0-19-850399-7. • I. Bigi and A. Sanda (1999). CP violation. Cambridge University Press. ISBN 0-521-44349-0. • Michael Beyer, ed. (2002). CP Violation in Particle, Nuclear and Astrophysics. Springer. ISBN 3-540-43705-3.  (A collection of essays introducing the subject, with an emphasis on experimental results.) • L. Wolfenstein (1989). CP violation. North–Holland Publishing. ISBN 0-444-88081-X.  (A compilation of reprints of numerous important papers on the topic, including papers by T.D. Lee, Cronin, Fitch, Kobayashi and Maskawa, and many others.) • David J. Griffiths (1987). Introduction to Elementary Particles. John Wiley & Sons. ISBN 0-471-60386-4. • Bigi, I. (1997). "CP Violation — An Essential Mystery in Nature's Grand Design". 12: 269–336. arXiv:hep-ph/9712475. Bibcode:1997hep.ph...12475B. doi:10.1080/01422419808228861. • Mark Trodden (1998). "Electroweak Baryogenesis". 71 (5): 1463. arXiv:hep-ph/9803479. Bibcode:1999RvMP...71.1463T. doi:10.1103/RevModPhys.71.1463. • Davide Castelvecchi. "What is direct CP-violation?". SLAC. Retrieved 2009-07-01.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 22, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.921856701374054, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Harmonic_series_(mathematics)
# Harmonic series (mathematics)

In mathematics, the harmonic series is the divergent infinite series: $\sum_{n=1}^\infty\,\frac{1}{n} \;\;=\;\; 1 \,+\, \frac{1}{2} \,+\, \frac{1}{3} \,+\, \frac{1}{4} \,+\, \frac{1}{5} \,+\, \cdots.\!$ Its name derives from the concept of overtones, or harmonics in music: the wavelengths of the overtones of a vibrating string are 1/2, 1/3, 1/4, etc., of the string's fundamental wavelength. Every term of the series after the first is the harmonic mean of the neighboring terms; the phrase harmonic mean likewise derives from music.

## History

The fact that the harmonic series diverges was first proven in the 14th century by Nicole Oresme, but this achievement fell into obscurity. Proofs were given in the 17th century by Pietro Mengoli, Johann Bernoulli, and Jacob Bernoulli. Historically, harmonic sequences have had a certain popularity with architects. This was so particularly in the Baroque period, when architects used them to establish the proportions of floor plans, of elevations, and to establish harmonic relationships between both interior and exterior architectural details of churches and palaces.[1]

## Paradoxes

The harmonic series is counterintuitive to students first encountering it, because it is a divergent series though the limit of the nth term as n goes to infinity is zero. The divergence of the harmonic series is also the source of some apparent paradoxes. One example of these is the "worm on the rubber band".[2] Suppose that a worm crawls along a 1 metre rubber band and, after each minute, the rubber band is uniformly stretched by an additional 1 metre. If the worm travels 1 centimetre per minute, will the worm ever reach the end of the rubber band? The answer, counterintuitively, is "yes", for after n minutes, the ratio of the distance travelled by the worm to the total length of the rubber band is $\frac{1}{100}\sum_{k=1}^n\frac{1}{k}.$ Because the series gets arbitrarily large as n becomes larger, eventually this ratio must exceed 1, which implies that the worm reaches the end of the rubber band. The value of n at which this occurs must be extremely large, however, approximately $e^{100}$, a number exceeding $10^{40}$. Although the harmonic series does diverge, it does so very slowly. Another example is: given a collection of identical dominoes, it is clearly possible to stack them at the edge of a table so that they hang over the edge of the table without falling. The counterintuitive result is that one can stack them in such a way as to make the overhang arbitrarily large, provided there are enough dominoes.[2][3]

## Divergence

There are several well-known proofs of the divergence of the harmonic series. Two of them are given below.

### Comparison test

One way to prove divergence is to compare the harmonic series with another divergent series: $\begin{align} & 1 \;\;+\;\; \frac{1}{2} \;\;+\;\; \frac{1}{3} \,+\, \frac{1}{4} \;\;+\;\; \frac{1}{5} \,+\, \frac{1}{6} \,+\, \frac{1}{7} \,+\, \frac{1}{8} \;\;+\;\; \frac{1}{9} \,+\, \cdots \\[12pt] >\;\;\; & 1 \;\;+\;\; \frac{1}{2} \;\;+\;\; \frac{1}{4} \,+\, \frac{1}{4} \;\;+\;\; \frac{1}{8} \,+\, \frac{1}{8} \,+\, \frac{1}{8} \,+\, \frac{1}{8} \;\;+\;\; \frac{1}{16} \,+\, \cdots.
\end{align}$ Each term of the harmonic series is greater than or equal to the corresponding term of the second series, and therefore the sum of the harmonic series must be greater than the sum of the second series. However, the sum of the second series is infinite: $\begin{align} & 1 + \left(\frac{1}{2}\right) + \left(\frac{1}{4}+\frac{1}{4}\right) + \left(\frac{1}{8} + \frac{1}{8} + \frac{1}{8} + \frac{1}{8}\right) + \left(\frac{1}{16}+\cdots+\frac{1}{16}\right) + \cdots \\[12pt] =\;\; & 1 \;\;+\;\; \frac{1}{2} \;\;+\;\; \frac{1}{2} \;\;+\;\; \frac{1}{2} \;\;+\;\; \frac{1}{2} \;\;+\;\; \cdots \;\;=\;\; \infty. \end{align}$ It follows (by the comparison test) that the sum of the harmonic series must be infinite as well. More precisely, the comparison above proves that $\sum_{n=1}^{2^k} \,\frac{1}{n} \;\geq\; 1 + \frac{k}{2}$ for every positive integer k. This proof, due to Nicole Oresme, is considered by some a high point of medieval mathematics. It is still a standard proof taught in mathematics classes today. Cauchy's condensation test is a generalization of this argument.

### Integral test

It is possible to prove that the harmonic series diverges by comparing its sum with an improper integral. Specifically, consider the arrangement of rectangles shown in the figure to the right. Each rectangle is 1 unit wide and 1 / n units high, so the total area of the rectangles is the sum of the harmonic series: $\begin{array}{c} \text{area of}\\ \text{rectangles} \end{array} = 1 \,+\, \frac{1}{2} \,+\, \frac{1}{3} \,+\, \frac{1}{4} \,+\, \frac{1}{5} \,+\, \cdots.$ However, the total area under the curve y = 1 / x from 1 to infinity is given by an improper integral: $\begin{array}{c} \text{area under}\\ \text{curve} \end{array} = \int_1^\infty\frac{1}{x}\,dx \;=\; \infty.$ Since this area is entirely contained within the rectangles, the total area of the rectangles must be infinite as well. More precisely, this proves that $\sum_{n=1}^k \, \frac{1}{n} \;>\; \int_1^{k+1} \frac{1}{x}\,dx \;=\; \ln(k+1).$ The generalization of this argument is known as the integral test.

## Rate of divergence

The harmonic series diverges very slowly. For example, the sum of the first $10^{43}$ terms is less than 100.[4] This is because the partial sums of the series have logarithmic growth. In particular, $\sum_{n=1}^k\,\frac{1}{n} \;=\; \ln k + \gamma + \varepsilon_k$ where $\gamma$ is the Euler–Mascheroni constant and $\varepsilon_k \sim \frac{1}{2k}$, which approaches 0 as $k$ goes to infinity. Leonhard Euler proved both this and also the more striking fact that the sum which includes only the reciprocals of primes also diverges, i.e. $\sum_{p\text{ prime }}\frac1p = \frac12 + \frac13 + \frac15 + \frac17 + \frac1{11} + \frac1{13} + \frac1{17} +\cdots = \infty.$

## Partial sums

The nth partial sum of the diverging harmonic series, $H_n = \sum_{k = 1}^n \frac{1}{k},\!$ is called the nth harmonic number. The difference between the nth harmonic number and the natural logarithm of n converges to the Euler–Mascheroni constant. The difference between distinct harmonic numbers is never an integer. No harmonic numbers are integers, except for n = 1.[5]
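The logarithmic growth described above is easy to see numerically. A small sketch (my own illustration, not part of the article) comparing the partial sums $H_n$ with $\ln n + \gamma$:

```python
import math

def harmonic(n: int) -> float:
    """Partial sum H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / k for k in range(1, n + 1))

gamma = 0.5772156649015329          # Euler-Mascheroni constant
for n in (10, 1000, 100000):
    print(n, round(harmonic(n), 6), round(math.log(n) + gamma, 6))
# The two columns agree to roughly 1/(2n); reaching H_n = 100 would
# require about e**(100 - gamma), i.e. roughly 1.5e43 terms.
```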
The series $\sum_{n = 1}^\infty \frac{(-1)^{n + 1}}{n} \;=\; 1 \,-\, \frac{1}{2} \,+\, \frac{1}{3} \,-\, \frac{1}{4} \,+\, \frac{1}{5} \,-\, \cdots$ is known as the alternating harmonic series. This series converges by the alternating series test. In particular, the sum is equal to the natural logarithm of 2: $1 \,-\, \frac{1}{2} \,+\, \frac{1}{3} \,-\, \frac{1}{4} \,+\, \frac{1}{5} \,-\, \cdots \;=\; \ln 2.$ This formula is a special case of the Mercator series, the Taylor series for the natural logarithm. A related series can be derived from the Taylor series for the arctangent: $\sum_{n = 0}^\infty \frac{(-1)^{n}}{2n+1} \;\;=\;\; 1 \,-\, \frac{1}{3} \,+\, \frac{1}{5} \,-\, \frac{1}{7} \,+\, \cdots \;\;=\;\; \frac{\pi}{4}.$ This is known as the Leibniz formula for pi. ### General harmonic series The general harmonic series is of the form $\sum_{n=0}^{\infty}\frac{1}{an+b} ,\!$ where $a \ne 0$ and $b$ are real numbers. By the comparison test, all general harmonic series diverge. [6] ### P-series A generalization of the harmonic series is the p-series (or hyperharmonic series), defined as: $\sum_{n=1}^{\infty}\frac{1}{n^p},\!$ for any positive real number p. When p = 1, the p-series is the harmonic series, which diverges. Either the integral test or the Cauchy condensation test shows that the p-series converges for all p > 1 (in which case it is called the over-harmonic series) and diverges for all p ≤ 1. If p > 1 then the sum of the p-series is ζ(p), i.e., the Riemann zeta function evaluated at p. ### φ-series For any convex, real-valued function φ such that $\limsup_{u\to 0^{+}}\frac{\varphi(\frac{u}{2})}{\varphi(u)}< \frac{1}{2}$ the series ∑n≥1 φ(n−1) is convergent. ### Random harmonic series The random harmonic series $\sum_{n=1}^{\infty}\frac{s_{n}}{n},\!$ where the sn are independent, identically distributed random variables taking the values +1 and −1 with equal probability 1/2, is a well-known example in probability theory for a series of random variables that converges with probability 1. The fact of this convergence is an easy consequence of either the Kolmogorov three-series theorem or of the closely related Kolmogorov maximal inequality. Byron Schmuland of the University of Alberta further examined[7][8] the properties of the random harmonic series, and showed that the convergent is a random variable with some interesting properties. In particular, the probability density function of this random variable evaluated at +2 or at −2 takes on the value 0.124999999999999999999999999999999999999999764…, differing from 1/8 by less than 10−42. Schmuland's paper explains why this probability is so close to, but not exactly, 1/8. The exact value of this probability is given by the infinite cosine product integral $C_2$[9] divided by π. ### Depleted harmonic series Main article: Kempner series The depleted harmonic series where all of the terms in which the digit 9 appears anywhere in the denominator are removed can be shown to converge and its value is less than 80.[10] In fact when terms containing any particular string of digits are removed the series converges. ## See also Wikimedia Commons has media related to: Harmonic series ## References 1. George L. Hersey, Architecture and Geometry in the Age of the Baroque, p 11-12 and p37-51. 2. ^ a b Graham, Ronald; Knuth, Donald E.; Patashnik, Oren (1989), Concrete Mathematics (2nd ed.), Addison-Wesley, pp. 258–264, ISBN 978-0-201-55802-9 3. Sharp, R.T. (1954), "Problem 52: Overhanging dominoes", Pi Mu Epsilon Journal: 411–412 4. 
"Sloane's A082912 : Sum of a(n) terms of harmonic series is > 10^n", . OEIS Foundation. 5. "Random Harmonic Series", American Mathematical Monthly 110, 407-416, May 2003
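As a numerical footnote to the "Rate of divergence" section above (an added illustration, not part of the article or its references; the value of γ is truncated and the sample values of k are arbitrary), the following short Python sketch compares the partial sums H_k with ln k + γ:

```
import math

GAMMA = 0.5772156649015329  # Euler–Mascheroni constant (truncated)

def harmonic(k):
    """Return the k-th partial sum H_k = 1 + 1/2 + ... + 1/k."""
    return sum(1.0 / n for n in range(1, k + 1))

for k in (10, 1000, 100000):
    h = harmonic(k)
    approx = math.log(k) + GAMMA
    print(f"H_{k} = {h:.6f}   ln k + gamma = {approx:.6f}   difference = {h - approx:.2e}")

# The difference behaves like 1/(2k), as stated in the article, while reaching
# H_k = 100 would require roughly e^(100 - gamma), about 1.5e43 terms.
```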
http://physics.stackexchange.com/questions/48372/is-time-speeding-up-due-to-the-expansion-of-space/48376
# Is time speeding up due to the expansion of space? If we just look at our local galactic cluster, if all of the galaxies that are a part of it are moving away from each other, and so the overall 'density' of the strength of gravity in the cluster is decreasing over time, does that mean that time itself is also speeding up? If this is the case, would it be on a noticeable scale, and if so, would there be any way would could measure it happening (seeing as we'd still be experiencing everything at exactly the same rate since the things/people around us would be just as affected by the weakening strength of gravity)? Let me know if it isn't clear what I'm asking and I'll try to make it more concise, also, be gentle, I'm a layman ;) - 1 Essentially a duplicate of physics.stackexchange.com/a/43073/2451 and physics.stackexchange.com/q/37629/2451 – Qmechanic♦ Jan 5 at 0:40 The way I read this, it is not a duplicate. Correct me if I'm wrong, but isn't this asking whether time is speeding up because we are sitting in an ever-shallowing potential well? (cf the slowing of time close to a black hole) – Chris White Jan 5 at 3:50 ## 2 Answers machinemessiah, if you understood hwlin's answer please accept it. It is elegant, correct and really says all that needs saying. However, Wouter raised a reasonable concern that you or others reading this might not be familiar with the concepts taken for granted in hwlin's answer. In that case I offer what may be an intelligible answer, which also addresses the breakdown of the cosmological assumption made by hwlin (and everyone else). Apologies if it is too wordy. :) A key insight of relativity is that any discussion of physics must be rooted in operational procedures for performing measurements. All invisible concepts in physics - space, time, energy, fields, etc. - are shorthands, or powerful organising principles, which govern relationships between measurements which are made by some procedure, however informal such procedures may be. (Have you ever seen space? I haven't. But I have noticed that I routinely avoid banging my head against the wall, though, when the circumstance warrants it, such contact can be easily arranged. These and many other routine observations support the notion of space as a powerful organising principle. Usually the observations are so routine that we don't even think of them as observations, but observations they are.) So to your question: is time speeding up? How would you measure such a speed-up? You are correct that if everything else is speeding up in the same way then there is no way to measure a speed-up. Because of this the idea of a global speedup of time is operationally meaningless. It could only make sense to some god-like observer standing outside of our (portion of) spacetime looking in. This is essentially what hwlin told you. You can always come up with a notion of time in cosmology that just "trucks along," without changing pace, and this cosmic clock serves as a good definition for time: it is simple, robust, ticks all the necessary boxes, makes the equations very nice and is hard to disagree with. Another key insight of relativity is that every reference frame is as good as any other. So for intance, every observer has the right to construct their own clock and call the measurement "time." Though there is a complete democracy of observers, each with their own definition of time, the laws of relativity tell how any two definitions of time, measured by any two well constructed clocks are related. 
(A bad clock would be a grandfather clock leaning over sideways, for example. No one is required to pay attention to a bad clock.) So now we can ask how clocks in the local cluster of galaxies are related to clocks in the distant universe. This is where it gets interesting. Cosmologists tend to make an approximation that the whole universe is homogenous. That is, all of the matter is spread evenly throughout the universe. This approximation dramatically simplifies the equations! If the universe is not homogenous then you need to specify a bunch of numbers at every point of space and keep track of how they are all changing. If instead you pretend that every point in space is equivalent then you only need to keep track of one set of numbers! hwlin's reply was in the context of this approximation. It is correct insofar as the homogeneity assumption is correct. Of course the universe is not homogenous, but on cosmological scales it comes pretty close. hwlin's answer, addressed at these scales, is completely appropriate. Unfortunately you mentioned galaxy clusters which, by definition, violate the homogeneity assumption. So let's see what we can say about the local neighbourhood, where the cosmological assumption breaks down. Suppose that the cluster is neighboured by a void (perhaps the Local Void? I don't know my extra-galactic geography). In this case clocks inside the cluster run slower than clocks outside the cluster. How to measure this? We can arrange that somebody inside the cluster, "lower down in the gravity well," sends a pulse of light once every second - by their clock - to an observer situated outside the galaxy cluster. In this case there is an observable difference: the observer outside the cluster receives the pulses at intervals greater than one second. He says the the clock of the guy inside is running slower. This is the well known gravitational redshift, and is well established by now. It was observed at two different heights on the Earth in the Pound-Rebka experiment. An observer inside the cluster could attempt to observe the faint light emitted by extra-galactic hydrogen, which has a characteristic frequency determined by quantum mechanics. What we would see is a subtle blue-shift of radiation coming from the void. This measurement is similar in principle to the measurement of the Lyman-alpha forest. It is also similar to the integrated Sachs-Wolfe effect (thanks Chris White), which measures the photons from the cosmic microwave background rather than nearby voids. I'm not sure if present day observations are able to see it. If the cluster lost mass for some reason and the mass didn't go into the void, then you could in principle measure a slight decrease of the blueshift. If the mass went into the void it becomes a very complicated thing that depends on the distribution of matter where you are looking. In any case the expansion of the universe wouldn't cause this as the local cluster is held together by the gravity of the galaxies inside it, and the cosmic expansion really only separates unbound systems. - 1 Regarding seeing the blueshift of photons leaving a void: I can't recall if this has been done directly (I bet exoplanet RV instruments have the sensitivity, but the effect is degenerate with peculiar motion), but we do have the integrated Sachs-Wolfe effect, which basically measures how the depth of the potential well changes over the travel time of the photon. – Chris White Jan 7 at 7:53 There it is! I was banging my head trying to think of the name. 
Sachs-Wolfe, Sachs-Wolfe... Thank you. :) – Michael Brown Jan 7 at 7:56 No, time is not speeding up. When we talk of "the universe speeding up," we mean that distant galaxy clusters are receding at an ever-increasing rate. Time is not. The easiest way to see that is to compare the metric of cosmology $ds^2=-dt^2 + a^2(t)dr^2$ with the metric of special relativity $ds^2=-dt^2 + dr^2$. You see that the only difference is that in cosmology, there is this factor $a(t)$. $a(t)$ accounts for the expansion of space; but it does NOT multiply the time component. Thus, cosmological time is basically just trucking along. I should mention that this is really due to our definition of cosmological time. We could invent some crazy time coordinate that speeds up. In fact, we could define space coordinates that expand even in perfectly static space. Such definitions are not useful and not used in cosmology. Our definition is a good one, because a (comoving) clock on Earth measures the same time as the one in our equations. - 1 Note: edited your answer to remove the incorrect factor of $a^2$ from the SR metric. Otherwise your explanation is perfect! – Michael Brown Jan 5 at 3:21 Ah, thanks for catching that typo! – hwlin Jan 5 at 3:32 1 I'm not sure how much of a layman the OP is, but perhaps some expansion is useful/necessary on the subject of metrics and maybe even on the symbols used (depending on the exact knowledge of the OP). For people familiar with these concepts and symbols it's a very clear and concise explanation though. +1 – Wouter Jan 5 at 11:55
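As a rough numerical companion to the gravitational-redshift discussion above (this is an added illustration, not part of the original answers; it uses the standard Schwarzschild rate factor $\sqrt{1-2GM/rc^2}$, rounded SI constants, and the commonly quoted ~22.5 m height of the Pound–Rebka experiment, none of which appear explicitly in the thread):

```
import math

# Illustrative constants (SI), rounded.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M = 5.972e24         # mass of the Earth, kg
R = 6.371e6          # radius of the Earth, m

def clock_rate(r):
    """Rate of a static clock at radius r relative to a clock far from the mass."""
    return math.sqrt(1.0 - 2.0 * G * M / (r * c**2))

print(f"clock rate at Earth's surface: {clock_rate(R):.15f}")  # slightly below 1

# Pound–Rebka-style height difference of ~22.5 m: to first order the lower clock
# runs slower by the change in Newtonian potential divided by c^2 (i.e. g*h/c^2).
h = 22.5
frac = G * M * (1.0 / R - 1.0 / (R + h)) / c**2
print(f"fractional rate difference over {h} m: {frac:.3e}")  # about 2.5e-15
```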
http://en.wikipedia.org/wiki/Low_pass_filter
# Low-pass filter (Redirected from Low pass filter) A low-pass filter is an electronic filter that passes low-frequency signals and attenuates (reduces the amplitude of) signals with frequencies higher than the cutoff frequency. The actual amount of attenuation for each frequency varies from filter to filter. It is sometimes called a high-cut filter, or treble cut filter when used in audio applications. A low-pass filter is the opposite of a high-pass filter. A band-pass filter is a combination of a low-pass and a high-pass. Low-pass filters exist in many different forms, including electronic circuits (such as a hiss filter used in audio), anti-aliasing filters for conditioning signals prior to analog-to-digital conversion, digital filters for smoothing sets of data, acoustic barriers, blurring of images, and so on. The moving average operation used in fields such as finance is a particular kind of low-pass filter, and can be analyzed with the same signal processing techniques as are used for other low-pass filters. Low-pass filters provide a smoother form of a signal, removing the short-term fluctuations, and leaving the longer-term trend. An optical filter could correctly be called low-pass, but conventionally is described as "longpass" (low frequency is long wavelength), to avoid confusion. ## Examples of low-pass filters ### Acoustic A stiff physical barrier tends to reflect higher sound frequencies, and so acts as a low-pass filter for transmitting sound. When music is playing in another room, the low notes are easily heard, while the high notes are attenuated. ### Electronic In an electronic low-pass RC filter for voltage signals, high frequencies contained in the input signal are attenuated but the filter has little attenuation below its cutoff frequency which is determined by its RC time constant. For current signals, a similar circuit using a resistor and capacitor in parallel works in a similar manner. See current divider discussed in more detail below. Electronic low-pass filters are used on input signals to subwoofers and other types of loudspeakers, to block high pitches that they can't efficiently reproduce. Radio transmitters use low-pass filters to block harmonic emissions which might cause interference with other communications. The tone knob found on many electric guitars is a low-pass filter used to reduce the amount of treble in the sound. An integrator is another example of a single time constant low-pass filter.[1] Telephone lines fitted with DSL splitters use low-pass and high-pass filters to separate DSL and POTS signals sharing the same pair of wires. Low-pass filters also play a significant role in the sculpting of sound for electronic music as created by analogue synthesisers. See subtractive synthesis. ## Ideal and real filters The sinc function, the impulse response of an ideal low-pass filter. An ideal low-pass filter completely eliminates all frequencies above the cutoff frequency while passing those below unchanged; its frequency response is a rectangular function and is a brick-wall filter. The transition region present in practical filters does not exist in an ideal filter. An ideal low-pass filter can be realized mathematically (theoretically) by multiplying a signal by the rectangular function in the frequency domain or, equivalently, convolution with its impulse response, a sinc function, in the time domain. 
However, the ideal filter is impossible to realize without also having signals of infinite extent in time, and so generally needs to be approximated for real ongoing signals, because the sinc function's support region extends to all past and future times. The filter would therefore need to have infinite delay, or knowledge of the infinite future and past, in order to perform the convolution. It is effectively realizable for pre-recorded digital signals by assuming extensions of zero into the past and future, or more typically by making the signal repetitive and using Fourier analysis. Real filters for real-time applications approximate the ideal filter by truncating and windowing the infinite impulse response to make a finite impulse response; applying that filter requires delaying the signal for a moderate period of time, allowing the computation to "see" a little bit into the future. This delay is manifested as phase shift. Greater accuracy in approximation requires a longer delay. An ideal low-pass filter results in ringing artifacts via the Gibbs phenomenon. These can be reduced or worsened by choice of windowing function, and the design and choice of real filters involves understanding and minimizing these artifacts. For example, "simple truncation [of sinc] causes severe ringing artifacts," in signal reconstruction, and to reduce these artifacts one uses window functions "which drop off more smoothly at the edges."[2] The Whittaker–Shannon interpolation formula describes how to use a perfect low-pass filter to reconstruct a continuous signal from a sampled digital signal. Real digital-to-analog converters use real filter approximations. ## Continuous-time low-pass filters The gain-magnitude frequency response of a first-order (one-pole) low-pass filter. Power gain is shown in decibels (i.e., a 3 dB decline reflects an additional half-power attenuation). Angular frequency is shown on a logarithmic scale in units of radians per second. There are many different types of filter circuits, with different responses to changing frequency. The frequency response of a filter is generally represented using a Bode plot, and the filter is characterized by its cutoff frequency and rate of frequency rolloff. In all cases, at the cutoff frequency, the filter attenuates the input power by half or 3 dB. So the order of the filter determines the amount of additional attenuation for frequencies higher than the cutoff frequency. • A first-order filter, for example, will reduce the signal amplitude by half (so power reduces by a factor of 4), or 6 dB, every time the frequency doubles (goes up one octave); more precisely, the power rolloff approaches 20 dB per decade in the limit of high frequency. The magnitude Bode plot for a first-order filter looks like a horizontal line below the cutoff frequency, and a diagonal line above the cutoff frequency. There is also a "knee curve" at the boundary between the two, which smoothly transitions between the two straight line regions. If the transfer function of a first-order low-pass filter has a zero as well as a pole, the Bode plot will flatten out again, at some maximum attenuation of high frequencies; such an effect is caused for example by a little bit of the input leaking around the one-pole filter; this one-pole–one-zero filter is still a first-order low-pass. See Pole–zero plot and RC circuit. • A second-order filter attenuates higher frequencies more steeply. 
The Bode plot for this type of filter resembles that of a first-order filter, except that it falls off more quickly. For example, a second-order Butterworth filter will reduce the signal amplitude to one fourth its original level every time the frequency doubles (so power decreases by 12 dB per octave, or 40 dB per decade). Other all-pole second-order filters may roll off at different rates initially depending on their Q factor, but approach the same final rate of 12 dB per octave; as with the first-order filters, zeroes in the transfer function can change the high-frequency asymptote. See RLC circuit. • Third- and higher-order filters are defined similarly. In general, the final rate of power rolloff for an order-$\scriptstyle n$ all-pole filter is $\scriptstyle 6n$ dB per octave (i.e., $\scriptstyle 20n$ dB per decade). On any Butterworth filter, if one extends the horizontal line to the right and the diagonal line to the upper-left (the asymptotes of the function), they will intersect at exactly the "cutoff frequency". The frequency response at the cutoff frequency in a first-order filter is 3 dB below the horizontal line. The various types of filters (Butterworth filter, Chebyshev filter, Bessel filter, etc.) all have different-looking "knee curves". Many second-order filters are designed to have "peaking" or resonance, causing their frequency response at the cutoff frequency to be above the horizontal line. Furthermore, the actual frequency at which this peaking occurs can be predicted without calculus, as shown by Cartwright[3] et al. See electronic filter for other types. The meanings of 'low' and 'high' – that is, the cutoff frequency – depend on the characteristics of the filter. The term "low-pass filter" merely refers to the shape of the filter's response; a high-pass filter could be built that cuts off at a lower frequency than any low-pass filter—it is their responses that set them apart. Electronic circuits can be devised for any desired frequency range, right up through microwave frequencies (above 1 GHz) and higher. ### Laplace notation Continuous-time filters can also be described in terms of the Laplace transform of their impulse response in a way that allows all of the characteristics of the filter to be easily analyzed by considering the pattern of poles and zeros of the Laplace transform in the complex plane (in discrete time, one can similarly consider the Z-transform of the impulse response). For example, a first-order low-pass filter can be described in Laplace notation as $\frac{\text{Output}}{\text{Input}} = K \frac{1}{1 + s \tau}$ where s is the Laplace transform variable, τ is the filter time constant, and K is the filter passband gain. ## Electronic low-pass filters ### Passive electronic realization Passive, first order low-pass RC filter One simple electrical circuit that will serve as a low-pass filter consists of a resistor in series with a load, and a capacitor in parallel with the load. The capacitor exhibits reactance, and blocks low-frequency signals, causing them to go through the load instead. At higher frequencies the reactance drops, and the capacitor effectively functions as a short circuit. The combination of resistance and capacitance gives the time constant of the filter $\scriptstyle \tau \;=\; RC$ (represented by the Greek letter tau). 
The break frequency, also called the turnover frequency or cutoff frequency (in hertz), is determined by the time constant: $f_\mathrm{c} = {1 \over 2 \pi \tau } = {1 \over 2 \pi R C}$ or equivalently (in radians per second): $\omega_\mathrm{c} = {1 \over \tau} = {1 \over R C}$ One way to understand this circuit is to focus on the time the capacitor takes to charge. It takes time to charge or discharge the capacitor through that resistor: • At low frequencies, there is plenty of time for the capacitor to charge up to practically the same voltage as the input voltage. • At high frequencies, the capacitor only has time to charge up a small amount before the input switches direction. The output goes up and down only a small fraction of the amount the input goes up and down. At double the frequency, there's only time for it to charge up half the amount. Another way to understand this circuit is with the idea of reactance at a particular frequency: • Since DC cannot flow through the capacitor, DC input must "flow out" the path marked $\scriptstyle V_\mathrm{out}$ (analogous to removing the capacitor). • Since AC flows very well through the capacitor — almost as well as it flows through solid wire — AC input "flows out" through the capacitor, effectively short circuiting to ground (analogous to replacing the capacitor with just a wire). The capacitor is not an "on/off" object (like the block or pass fluidic explanation above). The capacitor will variably act between these two extremes. It is the Bode plot and frequency response that show this variability. ### Active electronic realization An active low-pass filter Another type of electrical circuit is an active low-pass filter. In the operational amplifier circuit shown in the figure, the cutoff frequency (in hertz) is defined as: $f_{\text{c}} = \frac{1}{2 \pi R_2 C}$ or equivalently (in radians per second): $\omega_{\text{c}} = \frac{1}{R_2 C}$ The gain in the passband is −R2/R1, and the stopband drops off at −6 dB per octave (that is −20 dB per decade) as it is a first-order filter. ### Discrete-time realization For another method of conversion from continuous- to discrete-time, see Bilinear transform. Many digital filters are designed to give low-pass characteristics. Both infinite impulse response and finite impulse response low pass filters as well as filters using fourier transforms are widely used. #### Simple infinite impulse response filter The effect of an infinite impulse response low-pass filter can be simulated on a computer by analyzing an RC filter's behavior in the time domain, and then discretizing the model. A simple low-pass RC filter From the circuit diagram to the right, according to Kirchhoff's Laws and the definition of capacitance: $v_{\text{in}}(t) - v_{\text{out}}(t) = R \; i(t)$ () $Q_c(t) = C \, v_{\text{out}}(t)$ () $i(t) = \frac{\operatorname{d} Q_c}{\operatorname{d} t}$ () where $Q_c(t)$ is the charge stored in the capacitor at time $\scriptstyle t$. Substituting equation Q into equation I gives $\scriptstyle i(t) \;=\; C \frac{\operatorname{d}v_{\text{out}}}{\operatorname{d}t}$, which can be substituted into equation V so that: $v_{\text{in}}(t) - v_{\text{out}}(t) = RC \frac{\operatorname{d}v_{\text{out}}}{\operatorname{d}t}$ This equation can be discretized. For simplicity, assume that samples of the input and output are taken at evenly-spaced points in time separated by $\scriptstyle \Delta_T$ time. 
Let the samples of $\scriptstyle v_{\text{in}}$ be represented by the sequence $\scriptstyle (x_1,\, x_2,\, \ldots,\, x_n)$, and let $\scriptstyle v_{\text{out}}$ be represented by the sequence $\scriptstyle (y_1,\, y_2,\, \ldots,\, y_n)$ which correspond to the same points in time. Making these substitutions: $x_i - y_i = RC \, \frac{y_{i}-y_{i-1}}{\Delta_T}$ And rearranging terms gives the recurrence relation $y_i = \overbrace{x_i \left( \frac{\Delta_T}{RC + \Delta_T} \right)}^{\text{Input contribution}} + \overbrace{y_{i-1} \left( \frac{RC}{RC + \Delta_T} \right)}^{\text{Inertia from previous output}}.$ That is, this discrete-time implementation of a simple RC low-pass filter is the exponentially-weighted moving average $y_i = \alpha x_i + (1 - \alpha) y_{i-1} \qquad \text{where} \qquad \alpha \triangleq \frac{\Delta_T}{RC + \Delta_T}$ By definition, the smoothing factor $\scriptstyle 0 \;\leq\; \alpha \;\leq\; 1$. The expression for $\scriptstyle \alpha$ yields the equivalent time constant $\scriptstyle RC$ in terms of the sampling period $\scriptstyle \Delta_T$ and smoothing factor $\scriptstyle \alpha$: $RC = \Delta_T \left( \frac{1 - \alpha}{\alpha} \right)$ If $\scriptstyle \alpha \;=\; 0.5$, then the $\scriptstyle RC$ time constant is equal to the sampling period. If $\scriptstyle \alpha \;\ll\; 0.5$, then $\scriptstyle RC$ is significantly larger than the sampling interval, and $\scriptstyle \Delta_T \;\approx\; \alpha RC$. The filter recurrence relation provides a way to determine the output samples in terms of the input samples and the preceding output. The following pseudocode algorithm will simulate the effect of a low-pass filter on a series of digital samples: ``` // Return RC low-pass filter output samples, given input samples, // time interval dt, and time constant RC function lowpass(real[0..n] x, real dt, real RC) var real[0..n] y var real α := dt / (RC + dt) y[0] := x[0] for i from 1 to n y[i] := α * x[i] + (1-α) * y[i-1] return y ``` The loop that calculates each of the n outputs can be refactored into the equivalent: ``` for i from 1 to n y[i] := y[i-1] + α * (x[i] - y[i-1]) ``` That is, the change from one filter output to the next is proportional to the difference between the previous output and the next input. This exponential smoothing property matches the exponential decay seen in the continuous-time system. As expected, as the time constant $\scriptstyle RC$ increases, the discrete-time smoothing parameter $\scriptstyle \alpha$ decreases, and the output samples $\scriptstyle (y_1,\, y_2,\, \ldots,\, y_n)$ respond more slowly to a change in the input samples $\scriptstyle (x_1,\, x_2,\, \ldots,\, x_n)$; the system will have more inertia. This filter is an infinite-impulse-response (IIR) single-pole lowpass filter. #### Finite impulse response Finite impulse response filters can be built that approximate to the ideal sinc time domain response. In practice the time domain response must be time truncated and is often of a simplified shape; in the simplest case, a running average can be used giving a square time response.[4] ## References 1. Sedra, Adel; Smith, Kenneth C. (1991). Microelectronic Circuits, 3 ed. Saunders College Publishing. p. 60. ISBN 0-03-051648-X. 2. Signal recovery from noise in electronic instrumentation - T H Whilmshurst
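For readers who want to run the discrete-time filter described above, here is a direct Python transcription of the article's pseudocode (an added sketch; the sampling rate, cutoff frequency and test signal are arbitrary choices, not taken from the article):

```
import math

def lowpass(x, dt, RC):
    """Discrete-time RC low-pass filter: y[i] = a*x[i] + (1-a)*y[i-1], a = dt/(RC+dt)."""
    alpha = dt / (RC + dt)
    y = [x[0]]
    for xi in x[1:]:
        y.append(y[-1] + alpha * (xi - y[-1]))  # refactored form from the article
    return y

# Example: a 10 Hz sine plus a 300 Hz "hiss", sampled at 2 kHz,
# filtered with a cutoff f_c = 1/(2*pi*RC) = 50 Hz.
dt = 1.0 / 2000.0
RC = 1.0 / (2 * math.pi * 50.0)
t = [i * dt for i in range(2000)]
x = [math.sin(2 * math.pi * 10 * ti) + 0.5 * math.sin(2 * math.pi * 300 * ti) for ti in t]
y = lowpass(x, dt, RC)
print(f"cutoff frequency: {1.0 / (2 * math.pi * RC):.1f} Hz")
print(f"input peak-to-peak  ~ {max(x) - min(x):.2f}")
print(f"output peak-to-peak ~ {max(y) - min(y):.2f}")
```

With these (arbitrary) numbers the 300 Hz component is attenuated by roughly a factor of six, following the first-order magnitude response $|H| = 1/\sqrt{1+(f/f_c)^2}$, while the 10 Hz component passes almost unchanged.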
http://math.stackexchange.com/questions/191020/a-question-about-ideals-in-operator-algebras
# A question about ideals in operator algebras Is it true that in a unital C*-algebra $A$ every closed left ideal $L\subset A$ is an intersection of all maximal left ideals which contain $L$? I know that $L$ is the left-kernel of some state but I am not sure whether this helps. Edit: This is true. See Banach Algebras and the General Theory of *-Algebras: Volume 2, *-Algebras; Theorem 10.5.2(b). You may delete this question. - 2 Rather than delete the question you should add the answer as an answer! (And, after a couple of days, accept it) – Mariano Suárez-Alvarez♦ Sep 4 '12 at 18:19
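Added remark (an illustration only, not part of the original post): in the commutative unital case $A=C(X)$ with $X$ compact Hausdorff, every closed ideal has the form $I_F=\{f\in C(X):f|_F=0\}$ for a closed set $F\subseteq X$, and $I_F=\bigcap_{x\in F}I_{\{x\}}$ displays it as an intersection of the maximal ideals $I_{\{x\}}=\ker(\mathrm{ev}_x)$. The cited Theorem 10.5.2(b) can be read as the noncommutative, one-sided analogue of this Gelfand-type fact.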
http://math.stackexchange.com/questions/174697/why-do-we-distinguish-the-continuous-spectrum-and-the-residual-spectrum
# Why do we distinguish the continuous spectrum and the residual spectrum? As we know, continuous spectrum and residual spectrum are two cases in the spectrum of an operator, which only appear in infinite dimension. If $T$ is a operator from Banach space $X$ to $X$, $aI-T$ is injective, and $R(aI-T)$ is not $X$. If $R(aI-T)$ is dense in $X$, then $a$ belongs to the continuous specturm, it not, $a$ belongs to the residual spectrum. I want to know why do we care about whether $R(aI-T)$ is dense, thanks. - It seems as we just emphasize this difference for the operator, but not for the Banach algebra, why? – Strongart Jul 29 '12 at 6:03 ## 2 Answers The point $\lambda\in\mathbb{C}$ belongs to the spectrum of operator $T$ if the operator $T_\lambda:=T-\lambda I$ is not invertible. What can prevent $T_\lambda$ from being invertible? Recall that we are working in Banach space $X$ so invertibility is equivalent to bijectivity. Thus we need to study reasons why operator $T_\lambda$ can't be bijective. We can distinguish two cases: • the operator $T_\lambda$ is not injective • operator $T_\lambda$ is injective but not surjective Now we discuss these cases. 1) The first one is the most common. In this case $\mathrm{Ker}(T_\lambda)$ is non-trivial so $T_\lambda$ is not invertible, and we say that $\lambda$ is in the point spectrum. If $X$ is finite dimensional this is the only possible case for operator not to be bijective. The reason is that an injective operator on a finite dimensional space is automatically surjective. But in case $X$ is infinite dimensional there are examples of injective but not surjective operators! 2) In the second case we have injective but not surjective operators. This means that the image of the operator $\mathrm{Im}(T)$ (which is a linear subspace) is not the whole space $X$. If $X$ is finite dimensional it is impossible for the operator $T_\lambda$ to be injective but not surjective, so this is not the case. If $X$ is infinite dimensional there two possibilities for the subspace $\mathrm{Im}(T_\lambda)$ not to be the whole $X$. Here we have two cases: 2.1) $\overline{\mathrm{Im}(T_\lambda)}=X$, speaking informally $T_\lambda$ is "almost surjective". In this case we say that $\lambda$ is in continuous spectrum. 2.2) $\overline{\mathrm{Im}(T_\lambda)}\neq X$, speaking informally $T_\lambda$ is "essentially non-surjective", even the closure of its image is a proper subspace of $X$! In this case we say that $\lambda$ is in the residual spectrum. There are other classifications of points of the spectrum, but this one is the most common. - 2 Yes, this is nice, but I don't quite understand how this answers the question which is "why do we distinguish cases 2.1) and 2.2)?" – t.b. Jul 25 '12 at 2:31 1 Well... this explanation shows that the two cases are different, so it is natural that they are distinguished! – Mariano Suárez-Alvarez♦ Jul 25 '12 at 2:48 Theo, thanks for your edit. I see that my English is so poor... As for your question, I think we distinguish this cases because the measure "non-surjectiveness" of our operator. May this a bad explanation, this is just how I understand that. – Norbert Jul 25 '12 at 9:16 Oh, my English is also poor. In fact, I want to know why do we emphasize the "almost surjective" from the "nonsurjective". – Strongart Jul 25 '12 at 15:29 1 That is a good reason, but I do not think it is enough. Maybe the following claim is helpful: T is invertible iff T is bounded below and dense range. 
– Strongart Jul 26 '12 at 15:25 As pointed out by M. Reed, B. Simon in section VI.3 of their "Methods of Modern Mathematical Physics: Functional Analysis": The reason that we single out the residual spectrum is that it does not occur for a large class of operators, for example, for self-adjoint operators. - Yes, it is also true for normal operators. – Strongart Jul 30 '12 at 15:30
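Added illustration (not part of the original question or answers): a standard pair of examples showing that cases 2.1 and 2.2 both genuinely occur. For the unilateral right shift $S(x_1,x_2,\dots)=(0,x_1,x_2,\dots)$ on $\ell^2$, $S$ is injective but $\overline{\mathrm{Im}(S)}=\{x\in\ell^2:x_1=0\}\neq\ell^2$, so $0$ belongs to the residual spectrum of $S$. By contrast, for the multiplication operator $(Mf)(t)=t\,f(t)$ on $L^2[0,1]$ and any $\lambda\in[0,1]$, the operator $M-\lambda I$ is injective with dense range but not surjective, so every such $\lambda$ lies in the continuous spectrum. Since self-adjoint (indeed normal) operators never have residual spectrum, case 2.2 is precisely the part that disappears for the operators mentioned in the Reed–Simon quote above.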
http://www.nag.com/numeric/CL/nagdoc_cl23/html/S/s19acc.html
# NAG Library Function Documentnag_kelvin_ker (s19acc) ## 1  Purpose nag_kelvin_ker (s19acc) returns a value for the Kelvin function $\mathrm{ker}x$. ## 2  Specification #include <nag.h> #include <nags.h> double nag_kelvin_ker (double x, NagError *fail) ## 3  Description nag_kelvin_ker (s19acc) evaluates an approximation to the Kelvin function $\mathrm{ker}x$. The function is based on several Chebyshev expansions. For large $x$, $\mathrm{ker}x$ is so small that it cannot be computed without underflow and the function evaluation fails. ## 4  References Abramowitz M and Stegun I A (1972) Handbook of Mathematical Functions (3rd Edition) Dover Publications ## 5  Arguments 1:     x – doubleInput On entry: the argument $x$ of the function. Constraint: ${\mathbf{x}}>0$. 2:     fail – NagError *Input/Output The NAG error argument (see Section 3.6 in the Essential Introduction). ## 6  Error Indicators and Warnings NE_REAL_ARG_GT On entry, ${\mathbf{x}}=〈\mathit{\text{value}}〉$. Constraint: ${\mathbf{x}}\le 〈\mathit{\text{value}}〉$. x is too large, the result underflows and the function returns zero. NE_REAL_ARG_LE On entry, x must not be less than or equal to 0.0: ${\mathbf{x}}=〈\mathit{\text{value}}〉$. The function is undefined and returns zero. ## 7  Accuracy Let $E$ be the absolute error in the result, $\epsilon $ be the relative error in the result and $\delta $ be the relative error in the argument. If $\delta $ is somewhat larger than the machine precision, then we have $E\simeq \left|x\left({\mathrm{ker}}_{1}x+{\mathrm{kei}}_{1}x\right)/\sqrt{2}\right|\delta $, $\epsilon \simeq \left|x\left({\mathrm{ker}}_{1}x+{\mathrm{kei}}_{1}x\right)/\sqrt{2}\mathrm{ker}x\right|\delta $. For very small $x$, the relative error amplification factor is approximately given by $1/\left|\mathrm{log}x\right|$, which implies a strong attenuation of relative error. However, $\epsilon $ in general cannot be less than the machine precision. For small $x$, errors are damped by the function and hence are limited by the machine precision. For medium and large $x$, the error behaviour, like the function itself, is oscillatory, and hence only the absolute accuracy for the function can be maintained. For this range of $x$, the amplitude of the absolute error decays like $\sqrt{\pi x/2}{e}^{-x/\sqrt{2}}$ which implies a strong attenuation of error. Eventually, $\mathrm{ker}x$, which asymptotically behaves like $\sqrt{\pi /2x}{e}^{-x/\sqrt{2}}$, becomes so small that it cannot be calculated without causing underflow, and the function returns zero. Note that for large $x$ the errors are dominated by those of the math library function exp. ## 8  Further Comments Underflow may occur for a few values of $x$ close to the zeros of $\mathrm{ker}x$, which causes a failure NE_REAL_ARG_GT. ## 9  Example The following program reads values of the argument $x$ from a file, evaluates the function at each value of $x$ and prints the results. ### 9.1  Program Text Program Text (s19acce.c) ### 9.2  Program Data Program Data (s19acce.d) ### 9.3  Program Results Program Results (s19acce.r)
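The following Python sketch is added here for illustration only; it is not part of the NAG document. Assuming SciPy is available, scipy.special.ker evaluates the same Kelvin function and can be used to see the decay described in Section 7 (the function oscillates in sign within roughly the envelope $\sqrt{\pi/2x}\,e^{-x/\sqrt{2}}$, which is why underflow eventually occurs for large x):

```
import math
from scipy.special import ker  # SciPy's Kelvin function ker(x)

for x in (0.5, 1.0, 5.0, 10.0, 20.0):
    envelope = math.sqrt(math.pi / (2 * x)) * math.exp(-x / math.sqrt(2))
    print(f"x = {x:5.1f}   ker(x) = {ker(x): .6e}   decay envelope ~ {envelope:.3e}")
```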
http://sbseminar.wordpress.com/2007/08/11/algebraic-topology-of-finite-topological-spaces/
## Algebraic topology of finite topological spaces August 11, 2007 Posted by Noah Snyder in Algebraic Topology, fun problems. trackback Here’s a fun question that was floating around Mathcamp last week: find a finite topological space which has a nontrivial fundamental group. One answer to this question after the jump. One example is a space S with 4 points, two of which are open and two of which are closed. First, consider the line with the origin doubled. Now quotient out by setting all positive points equal to each other, and all negative points equal to each other. This gives a four point space S. There’s a map from the circle to S given by sending your favorite two points on the circle to the closed points, and the two open intervals between them to the open points. It is not difficult to see that this cannot be extended to the disc. A better proof is to exhibit S’ the universal cover of S. The space S’ looks like: The points in the middle column are closed. The points in the other two columns are open, and the closure of any such point contains the two nearest points in the middle column. S’ is not contractable, but any compact (i.e. finite) subset of it is contractable, so it is simply connected. Hence $\pi_1(S) \cong \mathbb{Z}$ since the deck transformations of S’ just come from shifting up and down. Here are two more fun problems: find all the homology and homotopy groups of this 4 point space. ## Comments» 1. James - August 11, 2007 For general topological spaces, you wouldn’t expect the usual fundamental group defined in terms of paths to still classify covering spaces. But I think I remember hearing that there is still a Grothendieck-style definition of a fundamental group that classifies finite-degree covering spaces. In the special case of a CW complex, this would be the profinite completion of the usual fundamental group. I don’t know if possible to do it without the finite-degree restriction. Maybe it’s just what comes out of Grothendieck’s formalism, which was creating with algebraic fundamental groups in mind. 2. Noah Snyder - August 12, 2007 Acording to Wikipedia (and to Eric, a camper and our resident expert on point-set topology) a space has a universal cover if and only if it is path-connected, locally path-connected, and semi-locally simply connected. All of these conditions are easy to check for S. 3. Ben Webster - August 12, 2007 James- There’s a more categorical way of thinking about the fundamental group: if you have any reasonable notion of “covering space,” then one can take the category of covering spaces of a given one. Call this $\mathcal C$. For any reasonable notion of covering space (in particular, the usual topological one), one has a functor $\mathcal{C}\to \mathsf{Set}$ sending a cover to the fiber over a generic point. This functor is even monoidal for the “tensor product” on coverings given by fiber product. By analogy with the Tannakian formalism, one can define the fundamental group of $X$ for a given notion of covering to be the automorphism group of this forgetful functor. If $X$ has a universal cover, then you can check that you’ll get back the usual fundamental group. If you restrict to finite covers, you’ll get the profinite completion of the fundamental group. If you switch to the algebraic category, you should get Grothendieck’s algebraic fundamental group. The reason that you get a profinite group here is that the algebraic restriction forces you to only consider finite covers. 4. James - August 13, 2007 Right. 
What I meant was that I remember hearing that you always have a Galois category (i.e. the finite discrete version of a Tannakian category) for any topological space whatsoever. And so even though you can’t always define pi_1, you can always define its completion, or rather what would be its completion if pi_1 existed. 5. carnahan - August 20, 2007 It looks like any sufficiently subdivided CW complex can be rendered as a locally finite topological space in the same way as you did with the circle. In particular, you should be able to get any finitely presented group as $\pi_1$ of a finite topological space. Are there interesting questions about finite “homotopy types”? It’s not clear that this adds anything new to algebraic topology. Incidentally, there are multiple algebraic categories (e.g., tame, etale, Nisnevich), coming from different notions of cover, and they yield very different fundamental groups. 6. Todd Trimble - August 20, 2007 Cool “postcards” from mathcamp, Noah! Your entry here got me thinking: There is an equivalence of categories O: FinTop –> FinPreOrd between finite topological spaces and finite preorders, where the order –> in O(X) is defined by x –> y iff x is contained in the closure of y. For Noah’s 4-point example S, the associated preorder O(S) looks like a d a d with b and c both pointing to a and to d (no other relations). On the other hand, one can take the classifying space of a finite preorder B: FinPreOrd –> Top as usual, by taking geometric realization of the nerve of the preorder (considered as a category). On Noah’s example S, the classifying space of the associated preorder, BO(S), is a circle S^1. The map S^1 –> S that Noah defined generalizes: for finite topological spaces X, I believe I can define a continuous map BO(X) –> X, almost as a piece of pure category theory. In the end, it comes down to defining a continuous map Aff(n) –> D(n) from the n-dimensional affine simplex to the finite topological space with n+1 points represented by the preorder Delta_n = (0 –> 1 –> … –> n). I’ll leave this to the imagination for now (details available on request). Then, does anyone know what can be said of this map BO(X) –> X in terms of homotopy? For example, does pi_1 induce an isomorphism? What happens with higher homotopy groups? 7. Eric - August 21, 2007 If X is T_0 (I haven’t checked whether it still works for non-T_0 spaces, the map BO(X) -> X (which is a quotient map) turns out to have a nice universal property: any map Y -> X lifts to BO(X), as long as Y is sufficiently nice (metrizable or a CW complex, say; the actual condition is hereditary perfect normality). Furthermore, the lift is unique up to a homotopy such that every stage of the homotopy is a lift. It’s easy to see that this implies that the map induces isomorphisms on all homotopy groups. You can either use this to show it also induces isomorphisms on homology, or you can prove that directly by induction on the number of points and Mayer-Vietoris. Any barycentric subdivision of a simplicial complex C is BO(X), where X is the poset of faces of C ordered by inclusion. Thus every finite simplicial complex has a finite “model”. 8. Todd Trimble - August 21, 2007 Thanks, Eric — very useful reply. I think the weak homotopy equivalence for finite T_0 spaces implies the same holds for all finite spaces: A finite space X is T_0 iff its associated preorder is a poset, and every preorder P is equivalent as a category to a (unique up to isomorphism) poset P’, with P’ a retract of P. 
It’s well known that the categorical equivalence implies BP and BP’ are homotopy equivalent. On the other hand, the equivalence P ~ P’ means there is a preorder map (0 –> 1) = 2 –> hom(P, P) sending 1 to the identity and 0 to a factoring through P’. Now switch to the topological picture, and pull back along the evident continuous map I = [0, 1] –> 2 to conclude that P and P’ are homotopy equivalent as spaces. It now follows from naturality of BO(X) –> X that this map is a weak homotopy equivalence for all finite X. 9. Benjamin Steinberg - June 1, 2012 This is way late, but McCord showed any finite simplicial complex is weakly equivalent to its poset of faces with the Alexandrov topology so you can get any finitely presented group. In particular the nerve of a poset is weakly equivalent to the poset. %d bloggers like this:
http://math.stackexchange.com/questions/87452/which-chapters-of-euclids-elements-would-be-helpful-for-drawing-a-grid?answertab=active
# Which chapters of Euclid's elements would be helpful for drawing a grid? I am drawing a $19 \times 19$ grid on my desk. For aesthetic purposes, I don't want to use a ruler. Rather, I want to use Euclidean theorems to 'prove' to myself that such and such line meets at a right angle. I have already marked out the four points that determine the edges of the roughly square rectangle that will contain the grid. I imagine there is some chapter of the elements that would contain a proof that two lines are perpendicular and at right angles to one another. I imagine myself being able to apply that theorem to each intersection on the grid, piecemeal, in order to 'grow' it, starting at an arbitrary edge, or perhaps starting at all four and working toward the center. Of course, this is all for fun, and because I love the thinking style of the Elements. But how would I use the book to do that? What process would I use for 'proving' 90 degree perpendicularity, taking each intersection in turn, like an automaton? - 2 I have to admit that I am seriously toying with making a youtube clip on "How to draw a straight line without a ruler using Euclid's elements" where the book is used as a ruler. – Phira Dec 1 '11 at 17:46 (I'm assuming you are using a straight edge, a.k.a. an unmarked ruler, to aid in the drawing of straight lines.) I can't see you needing anything outside of the first book. Note that the Elements contains many instructions for producing an object with desired properties, and also a proof that they work, so that as long as you follow the instructions, you will produce the desired object without any further argument needed. Assuming, that is, that you can draw a perfectly straight line, a perfect circle, etc. – Arthur Fischer Dec 1 '11 at 18:16 @Phira I'm using a knotted cord, ancient Mesoamerican-style. – ixtmixilix Dec 1 '11 at 21:13 i'm not actually sure this requires euclid's elements. it seems to me that i just needed the parallel postulate. i couldn't write a proof of that, though. not yet anyway. i'll have to put it in my list of things to consider. – ixtmixilix Feb 3 '12 at 2:18
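Not part of the original thread, but as a numerical sketch of the kind of Book I construction being discussed (e.g. I.10 bisects a segment and I.11 erects a perpendicular, using only circles and straight lines): intersecting two equal circles centred on the endpoints of a segment gives two points whose join is perpendicular to it. The coordinates below are made up for the example.

```
import math

def circle_intersections(p, q, r):
    """Intersection points of two radius-r circles centred at p and q (assumed to meet)."""
    (px, py), (qx, qy) = p, q
    d = math.hypot(qx - px, qy - py)
    mx, my = (px + qx) / 2, (py + qy) / 2        # midpoint of the segment
    h = math.sqrt(r * r - (d / 2) ** 2)          # half-distance between the two meeting points
    ux, uy = (qy - py) / d, -(qx - px) / d       # unit normal to the segment
    return (mx + h * ux, my + h * uy), (mx - h * ux, my - h * uy)

# Two marked corner points of the grid (hypothetical coordinates).
A, B = (0.0, 0.0), (4.0, 1.0)
r = 1.2 * math.hypot(B[0] - A[0], B[1] - A[1])   # any radius bigger than half the distance works
P, Q = circle_intersections(A, B, r)

# The line PQ is the perpendicular bisector of AB: check the dot product is ~0.
ab = (B[0] - A[0], B[1] - A[1])
pq = (Q[0] - P[0], Q[1] - P[1])
print("dot product:", ab[0] * pq[0] + ab[1] * pq[1])  # approximately 0, i.e. a right angle
```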
http://math.stackexchange.com/questions/242402/product-lebesgue-measure-on-infinite-dimensional-spaces
# (Product) Lebesgue measure on infinite dimensional spaces? I am trying to understand measure construction procedures on infinite-dimensional spaces. Why is it not possible in general to construct Lebesgue measure on $\mathbb{R}^\mathbb{N}$ or $\mathbb{R}^\mathbb{R}$? - 3 – Samuel Nov 22 '12 at 3:41 Thanks. This is a very nice link! – Learner Nov 22 '12 at 3:44 @Samuel: Note that the proof in the link needs modification, since it is written for Banach spaces, but the idea is the same: any open set contains infinitely many translates of a smaller open set. – Nate Eldredge Nov 22 '12 at 3:55 1 @NateEldredge How would you modify this proof if we drop the Banach assumption? – Thomas E. Nov 22 '12 at 7:56 ## 3 Answers I will prove it for $\mathbb R^{\mathbb N}$. A proof for all Banach spaces is given at http://en.wikipedia.org/wiki/There_is_no_infinite-dimensional_Lebesgue_measure#Proof_of_the_theorem . Consider the cube $(-1,1]^{\mathbb N}$. It can be partitioned into infinitely many translated copies of the cube $(0,1]^{\mathbb N}$, so if we want them all to have the same volume, and the cube $(-1,1]^{\mathbb N}$ to have finite volume, then the volume of each must be $0$. Now, we can cover the entire space $\mathbb R^{\mathbb N}$ with countably many copies of the cube $(0,1]^{\mathbb N}$, so the entire space $\mathbb R^{\mathbb N}$ must have measure $0$, and thus all subsets must also have measure $0$. Thus the only finite translation-invariant complete measure on $\mathbb R^{\mathbb N}$ is the trivial measure $0$. - First constructions of Lebesgue measure" on~$\mathbb{R}^{\infty}$ can be found in papers: [1] Baker R., Lebesgue measure" on~$\mathbb{R}^{\infty}$, \textit{Proc. Amer. Math. Soc.}, vol. 113, no. 4, 1991, pp.1023--1029. [2] Baker R., Lebesgue measure" on $\mathbb{R}^{ \infty}$. II. \textit{Proc. Amer. Math. Soc.} vol. 132, no. 9, 2003, pp. 2577--2591. Some generalizations of Baker constructions can be found in the following articles: [3] G.Pantsulaia , On ordinary and Standard Lebesgue Measures on $R^{\infty}$, \textit{Bull. Polish Acad. Sci.} 73(3) (2009), 209-222. [4] G.Pantsulaia , On a standard product of an arbitrary family of -finite Borel measures with domain in Polish spaces, \textit{Theory Stoch. Process,} vol. 16(32), 2010, no 1, p.84-93. [5] G.Pantsulaia , On ordinary and standard products of infinite family of $\sigma$ -finite measures and some of their applications. \textit{Acta Math. Sin. (Engl. Ser.)} 27 (2011), no. 3, 477--496 - Thanks for the references! – Learner Dec 29 '12 at 6:21 I am not quite sure what you mean. But the construction of Lebesgue measure on $X$ I learnt is by using the Riesz representation theorem to identify Lebesgue measure as one continuous linear functional over $C(X)$. But this requires the $X$ to be locally compact, and if $X$ is a vector space, the only locally compact ones are $\mathbb{R}^n$. - The following old article of Oxtoby [Oxtoby J.C., Invariant measures in groups which are not locally compact, Trans. AMS., 60 (1946), 215--237] contains general constructions of left-invariant quasi-finite Borel measures on Polish groups that are not locally compact. This article contains answers to all questions stated above. – Gogi Pantsulaia Dec 30 '12 at 6:30
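Added remark spelling out one step of the first answer (not from the original posts; the notation $e_n$ for the $n$-th coordinate vector is introduced here): an explicit family of pairwise disjoint translates of $(0,1]^{\mathbb N}$ inside $(-1,1]^{\mathbb N}$ is $C_n=(0,1]^{\mathbb N}-e_n$, $n\in\mathbb N$. A point of $C_n$ has its $n$-th entry in $(-1,0]$ and every other entry in $(0,1]$, so $C_n\cap C_m=\varnothing$ whenever $n\neq m$. Hence if a translation-invariant measure gave $(-1,1]^{\mathbb N}$ finite measure, countable additivity over the $C_n$ would force $\mu\big((0,1]^{\mathbb N}\big)=0$.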
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 0, "mathjax_asciimath": 3, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8760204911231995, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/47086?sort=oldest
## Some questions about causal structure of space-time.

Let $(\hat{M},\hat{g})$ be the conformal compactification of a space-time $(M,g)$. Let $I^+$ be the conformal null infinity and $J^{-}(I^+)$ be its causal past. Then the spacetime will be called "asymptotically strongly predictable" if there exists a subset $\hat{V}$ of $\hat{M}$ such that $(\hat{V},\hat{g})$ is globally hyperbolic and contains the closure in $\hat{M}$ of $M \cap J^{-}(I^+)$. Define as a "black hole" the region $B=M \setminus J^{-}(I^+)$.

1. I think from the above it should imply that there cannot be a causal curve which starts in the region $B$ and enters $M \cap \hat{V}$. Is this correct, and if yes then how does one prove it? (In specific examples what happens is that such a curve cannot remain always causal if it has to do such a journey. But I guess this is not a good way of thinking, since the restriction is purely topological.)

1. Under the above conditions, when does it also follow that $(M \cap \hat{V},\hat{g})$ and/or $(M\cap \hat{V},g)$ is globally hyperbolic?

2. What is the meaning of "trace of a terminally indecomposable past set"? Is there any special significance if a compact set happens to be the trace of a terminally indecomposable past set of a maximal future directed time-like curve? (Reading around I get the impression that such compact sets somehow specify physically reasonable initial conditions.)

-

## 1 Answer

The 1st statement is false. You need to add "future directed", else a past causal curve can of course leave the black hole region, since time-reversed, it is just a white-hole. For the correct statement, the proof is immediate following the usual causal relations of Penrose and Kronheimer: if $\exists$ such a causal curve, there must $\exists p,q$ such that $p\in B$ and $q\in \hat{V}$ with $q \in J^+(p)$. Which is equivalent to $p \in J^-(q)$. Now using that the causal relations form an ordering, you have that $$p \in J^-(q), q\in J^-(I^+) \implies p \in J^-(I^+)$$ contradicting the assumption that $p\in B$.

(As a side remark, this is very, very basic stuff. If the question were just about this fact, I would've voted to close. You should read E. H. Kronheimer and R. Penrose (1967). On the structure of causal spaces. Mathematical Proceedings of the Cambridge Philosophical Society, 63.)

(Second side remark: I don't think the word topological means what you think it means. Pure topology is not enough to constrain causal structure, beyond some trivial things about Euler characteristic and completely compact space-times.)

For the second question (labeled 1 again by the Markdown software), it is important to remember that the causal structure of space-time is conformally invariant, and that global hyperbolicity is an intrinsic notion rather than extrinsic. To say it another way, you need to remember that

1. If $(V,g)$ is a globally hyperbolic space-time domain, then for any conformal change of metric $g\to \hat{g}$, $(V,\hat{g})$ is also globally hyperbolic. (This is immediate using the Cauchy surface definition, since conformal changes preserve causal relations; using the compact causal diamond definition, you need to remember that continuous functions attain their maxima and minima on a compact set.)

2. If $(V,g)$ is a globally hyperbolic space-time domain, and $\phi$ is an isometric embedding of $(V,g)\to (M,\hat{g})$, then the restriction $(\phi(V),\hat{g})$ is also globally hyperbolic.
So, $\hat V$ embeds into $\hat V \cap M$ isometrically by the identity map. And $(\hat V, \hat g)$ is globally hyperbolic, so by the above two points its image must be globally hyperbolic. (By the way, the above two observations are trivial consequences of the definitions for global hyperbolicity. You should be able to prove them yourself.)

For your third question (as a side remark: the question is actually pretty poorly posed. If you are going to ask about terminology, please at least provide a reference on which paper it is in which you found the phrase that is confusing you; my crystal ball tells me that you are asking about Christodoulou's 1999 CQG paper "On the global initial value problem and the issue of singularities", but most other people won't have a convenient divination device in their office.), it helps to note that the original phrase is about the trace of a terminally indecomposable past set on a space-like hypersurface. And as such it means the same thing generally meant when you take the trace of any object onto a hypersurface: as a set you want to consider the intersection of whatever object you are talking about with the hypersurface, and you want to inherit any geometric structure by pushing forward with the restriction operator. In context the condition that "the trace of a TIP on a space-like hypersurface is compact" means that the intersection of the TIP with the space-like hypersurface is a compact set in the induced topology of the hypersurface.

The intuition is that in Minkowski space (or any small perturbations of it), the only TIP associated to maximal time-like geodesics is the whole space-time. And in particular, its trace on any Cauchy hypersurface cannot be compact. (For maximal time-like curves, you can get particle horizons if the curves become asymptotically null, but it is easy to see that those TIPs correspond to a half-space in Minkowski space that lies below a null hyperplane, so the same conclusion holds.) In general, if you have a compactification of space-time with a future time-like infinity (the terminus of all maximally extended, complete future time-like curves), then the TIP associated to it must have non-compact trace on any Cauchy hypersurface.

So in fact, I think your parenthetical impression is completely wrong. That a TIP has compact trace should suggest to you that the maximal time-like curve you are looking at is incomplete, which should suggest to you that its terminus is a "singularity". (You should compare this picture with the usual picture in the Penrose singularity theorem.) Which is why Christodoulou formulated his version of the weak cosmic censorship conjecture in terms of such sets.

(It would also do you good to work through the paper of Geroch, Kronheimer and Penrose, "Ideal Points in Space-time", Proc. Roy. Soc. London (1972).)

- @ I guess somehow I couldn't communicate the point that was confusing me about the first question. If $(\hat{M},\hat{g})$ is the conformal compactification of $(M,g)$ then I am a bit confused about the notation of saying $M\cap J^{-}(I^+)$, since the two sets among which the intersection is being taken don't belong in the same space. If say $\phi$ is the conformal isometric embedding map from $(M,g)$ to $(\hat{M},\hat{g})$, then do these notations imply $\phi(M)\cap J^{-}(I^+)$ and $\phi(M)\setminus J^{-}(I^+)$? {After that the ordering relation argument is obvious} – Anirbit Nov 23 2010 at 22:15

Similarly I find your statement about $\hat{V}$ isometrically embedding into $\hat{V} \cap M$ a bit confusing.
I again seem to run into the same confusion with notation. I would think that $(\phi(M) \cap \hat{V},\hat{g})$ is also globally hyperbolic, but for reasons not completely clear to me. – Anirbit Nov 23 2010 at 22:26

- Ah, so it is just a problem with notation. By typical abuse of notation, $M$ is identified with $\phi(M)$ and $g$ with $\phi_*g$, where $\phi:M\to \hat{M}$ is the map from the manifold $M$ to the manifold with boundary $\hat{M}$ such that $\hat{M}$ is a compactification of $M$. So you are right technically, that one should speak, instead of $M\cap J^-(I^+)$, of the set $\phi^{-1}( \phi(M) \cap J^-(I^+))$, but seeing that written out you'll probably agree that notation-wise, it is pedantic and cumbersome, and doesn't really convey useful additional meaning. – Willie Wong Nov 23 2010 at 22:35

- So by the trace of a TIP on a Cauchy surface one means the intersection of the TIP with the Cauchy surface? Can you explain your comment about the maximal time-like curve being incomplete if this intersection is compact? (I guess I am missing some basic differential geometry here) – Anirbit Nov 23 2010 at 22:40

- Yes. That's generally what is meant by the trace. About the maximal time-like curve: think Schwarzschild. A complete time-like geodesic must hit future time infinity, and its past will be the entire domain of outer communications, hence non-compact. So any time-like geodesic whose past, when intersected against a Cauchy slice, is compact, must be one that terminates at the singularity. Note the word "intuition" used in my answer: this is in no way precise, just a simple guess based on the cases we do understand. (Incomplete is used here to mean inextensible but with finite affine parameter.) – Willie Wong Nov 23 2010 at 23:44
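A one-line check of the conformal-invariance point made in the answer above (this is a standard computation, recorded here only as a reminder): if $\hat g=\Omega^2 g$ for some positive function $\Omega$, then for any tangent vector $X$

$$\hat g(X,X)=\Omega^2\,g(X,X),$$

so $X$ is timelike (respectively null, spacelike) for $\hat g$ exactly when it is for $g$. Consequently the two metrics have the same causal curves, the same sets $J^{\pm}$, and the same Cauchy surfaces, which is why global hyperbolicity depends only on the conformal class of the metric.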
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 53, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9430459141731262, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/69302/list
Return to Answer

The following proof was inspired by Fedor Petrov's and Gjergji Zaimi's argument, but it is simpler.

By a scaling argument we may assume $a_i\in[1,A]$, $b_i\in[1,B]$. The inequality can be rewritten as $$x^{1/p}y^{1/q} \leq (A^pB^q-1)\sum_{i=1}^n a_ib_i,$$ where $$x:=p(AB^q-B)\sum_{i=1}^na_i^p\qquad\text{and}\qquad y:=q(BA^p-A)\sum_{i=1}^nb_i^q.$$ By Young's inequality $x^{1/p}y^{1/q}\le \frac{x}{p}+\frac{y}{q}$, the above follows from $$\frac{x}{p}+\frac{y}{q}\leq (A^pB^q-1)\sum_{i=1}^n a_ib_i.$$ Therefore it suffices to show, for any $i$, $$(AB^q-B)a_i^p+(BA^p-A)b_i^q\leq (A^pB^q-1)a_ib_i.$$ The difference LHS-RHS is a convex function of $a_i$ and $b_i$, hence we can assume that $a_i\in\{1,A\}$, $b_i\in\{1,B\}$. The inequality becomes an identity when exactly one of $a_i$ and $b_i$ equals 1, while in the other two cases it is equivalent to $$(1-A^{1-p})(1-B^{1-q})\leq(A-1)(B-1)\leq (A^p-A)(B^q-B).$$ By convexity again, $$1-A^{1-p}\leq(p-1)(A-1)\leq A^p-A,$$ $$1-B^{1-q}\leq(q-1)(B-1)\leq B^q-B,$$ whence the required inequality follows upon noting that $(p-1)(q-1)=1$.
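A quick numerical spot check of the displayed inequality (with random data; this is only a sanity check, not part of the proof, and all names and parameter values below are arbitrary choices):

```python
# Spot-check: x^(1/p) * y^(1/q) <= (A^p B^q - 1) * sum(a_i b_i)
# for a_i in [1,A], b_i in [1,B] and conjugate exponents p, q.
import random

def check_once(n=50, A=3.0, B=5.0, p=1.7):
    q = p / (p - 1.0)                         # conjugate exponent: 1/p + 1/q = 1
    a = [random.uniform(1.0, A) for _ in range(n)]
    b = [random.uniform(1.0, B) for _ in range(n)]
    x = p * (A * B**q - B) * sum(ai**p for ai in a)
    y = q * (B * A**p - A) * sum(bi**q for bi in b)
    lhs = x**(1.0 / p) * y**(1.0 / q)
    rhs = (A**p * B**q - 1.0) * sum(ai * bi for ai, bi in zip(a, b))
    return lhs <= rhs + 1e-9                  # tolerance for floating-point error

if __name__ == "__main__":
    print(all(check_once() for _ in range(1000)))   # expected output: True
```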
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 28, "mathjax_display_tex": 20, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9456861019134521, "perplexity_flag": "head"}
http://mathoverflow.net/questions/88017/sylow-subgroups-of-projective-general-linear-groups
## Sylow subgroups of projective general linear groups

What is known about the Sylow 2-subgroups of $\rm{PGL}_n(\mathbb{F}_q)$, where $q$ is any prime power? For example, according to Theorem 7.9 in Isaacs's Character Theory, these Sylow 2-subgroups cannot be generalised quaternion. Is there a classification of all these Sylows available?

Concretely, let me make the following semi-conjecture, which, if true, would make me very happy: The irreducible complex representations of Sylow 2-subgroups of $\rm{PGL}_n(\mathbb{F}_q)$ have trivial Schur index. Is this known? Any references to literature that discuss such questions would be very welcome. If anyone has information on Sylow $l$-subgroups for arbitrary $l$, I would also be very interested.

- 3 R. Carter and P. Fong wrote a paper about the structure of Sylow $q$-subgroups of finite groups of Lie type when $q$ is a prime other than the natural characteristic of the group. Whether it would answer a question as precise as your semi-conjecture is another matter. – Geoff Robinson Feb 9 2012 at 19:34
- 2 For $l$ odd and different to $p$, see this paper by A. Weir: jstor.org/pss/2033424. Notice that they closely resemble the Sylow subgroups of symmetric groups. I don't know how much this helps with the representation theory, though. – Colin Reid Feb 9 2012 at 23:34

## 3 Answers

The irreducibles of the Sylow 2-subgroups of ${\rm GL}(n,q)$, $q$ odd, have indeed trivial Schur indices:

Let $S$ be such a Sylow 2-subgroup. First, observe that the natural module $(\mathbb{F}_q)^n$ splits into a sum of simple modules $U_1\oplus \dotsb \oplus U_l$, and the dimension of each simple module is a power of $2$. This shows that $S \cong S_1 \times \dotsb \times S_l$, where the $S_i$'s are Sylow 2-subgroups of a ${\rm GL}(2^k, q)$. Thus, w.l.o.g. we may assume that $n=2^k$. Then I use induction on $k$. First note that if $S$ is a Sylow 2-subgroup of ${\rm GL}(2^k, q)$, then $$ T=\lbrace \begin{pmatrix} s & \\ & t \end{pmatrix} \mid s, t\in S \rbrace \cup \lbrace \begin{pmatrix} & s \\ t & \end{pmatrix} \mid s,t\in S\rbrace \cong S\wr C_2 $$ is a Sylow 2-subgroup of ${\rm GL}(2^{k+1}, q)$, except when $k=0$ and $q\equiv 3\mod 4$. This follows from the description in Derek Holt's answer, but it can also be seen directly by observing that $T$ has the right order. Write $N=S\times S$, so that $T= C_2 N$, and let $\chi\in {\rm Irr} (T)$. Three cases have to be considered:

1. We have $\chi_N \in {\rm Irr} (N)$. By induction, $\chi_N$ has trivial Schur index. By Lemma 10.4 in Isaacs' character theory book, for example, it follows that $\chi$ has trivial Schur index.
2. $\chi_N = \theta + \theta^g$, where $\theta$ and $\theta^g$ are not Galois conjugate. Then $\chi = \theta^T$ and $\theta$ have the same field of values and the same Schur index (again, see Lemma 10.4 in Isaacs).
3. $\chi_N = \theta + \theta^g$, where $\theta$ and $\theta^g$ are Galois conjugate. This means that $\chi=\theta^T$, but $|\mathbb{Q}(\theta):\mathbb{Q}(\chi)|=2$. Now a representation $R\colon N\to {\rm GL}(\chi(1), \mathbb{Q}(\chi) )$ affording the character $\theta+\theta^g$ can be extended to a representation over the same field, since $N$ has a complement (of order 2) in $T$.

To get the induction going when $q\equiv 3\mod 4$, one has to check that the Sylow 2-subgroup of ${\rm GL}(2,q)$ has trivial Schur indices, but that is clear since it's a semidihedral group.
- Sorry, it took me a while to get round to processing your post. This seems to work nicely, thank you! – Alex Bartel Feb 15 2012 at 19:33

For $q$ odd, it is not difficult to describe the structure of a Sylow 2-subgroup of $G={\rm GL}(n,q)$, so let me do that.

If $q \equiv 1 \bmod 4$, then the subgroup of monomial matrices of $G$ contains a Sylow $2$-subgroup of $G$, which is a wreath product of a cyclic group of order $t$ (where $t$ is the 2-part of $q-1$), and a Sylow 2-subgroup of the symmetric group $S_n$. The Sylow subgroups of $S_n$ are of course themselves built up as wreath products of cyclic groups.

It is a little more complicated when $q \equiv 3 \bmod 4$. In that case, let $t$ be the 2-part of $q^2-1$. Then, if $n$ is even, a Sylow 2-subgroup of $G$ is a wreath product of a Sylow 2-subgroup of ${\rm GL}(2,q)$, which is semidihedral of order $2t$, with a Sylow 2-subgroup of $S_{n/2}$. If $n$ is odd, then it is a direct product of a Sylow 2-subgroup of ${\rm GL}(n-1,q)$ with a cyclic group of order 2.

To get a Sylow 2-subgroup of ${\rm PGL}(n,q)$, you have to factor out the cyclic scalar subgroup, which is a diagonal of the base group of the wreath product. For $q$ even, the upper unitriangular matrices form a Sylow 2-subgroup of ${\rm PGL}(n,q)$.

I don't know much about Schur indices but I did some quick computations in Magma, and I found that for Sylow 2-subgroups of ${\rm GL}(n,q)$ for small $n,q$, going up to $(n,q)=(6,5)$ and $(8,3)$, the Schur indices of all irreducible representations are indeed 1, which seems to provide good experimental evidence for your conjecture. You might find it easier to try and prove it for ${\rm GL}(n,q)$, since the structure of the Sylow 2-subgroups can be described so explicitly, at least for odd $q$.

Added later. For odd prime $r$ not dividing $q$, let $d$ be minimal such that $r$ divides $q^d-1$, let $t$ be the $r$-part of $q^d-1$, and let $m = \lfloor n/d \rfloor$. Then a Sylow $r$-subgroup of ${\rm GL}(n,q)$ is a wreath product of a cyclic group of order $t$ with a Sylow $r$-subgroup of $S_m$. For all of the classical groups, Sylow subgroups in coprime characteristic arise in a similar way, as wreath products of a group coming from a base case with a Sylow subgroup of a symmetric group. But of course there are lots of little complications for the individual types. I had a student a few years ago write Magma code to construct all of these, so I can produce very explicit descriptions!

- This is very nice and explicit, thank you! I will have to think about whether this description allows me to get a handle on the Schur indices. Does this generalise somehow to other Sylows? – Alex Bartel Feb 9 2012 at 22:54
- Yes, I've added something to my post about other primes. – Derek Holt Feb 9 2012 at 23:58
- Is there a good reference for these facts? The paper by Weir mentioned in the comments above does not do the p=2 case. – Steve D Feb 10 2012 at 1:02
- 2 That is covered by the paper mentioned by Geoff Robinson: R. Carter and P. Fong. The Sylow 2-subgroups of the finite classical groups. Journal of Algebra, 1:139–151, 1964. But for the Sylow 2-subgroups of ${\rm GL}_n(q)$, just by calculating their orders, you can see that the description I gave effectively reduces the problem to the dimension 1 or 2 case.
– Derek Holt Feb 10 2012 at 1:29 To reinforce Geoff Robinson's cautious comment, I'd encourage you to dig into a volume of the ongoing treatise by Gorenstein-Lyons-Solomon The Classification of the Finite Simple Groups, Number 3 (Amer. Math. Soc., 1998): Section 3.3 on "Equicharacteristic Sylow Structure" and Section 4.10 on "Cross-characteristic Sylow Structure", supplemented by references to the literature. The question raised concerns just one of the adjoint-type groups of Lie type, but even here the difficulty is apparent in the treatment by G-L-S of standard structural matters. As their section headings indicate, there are fundamental differences depending on whether or not the prime r (such as 2) involved in the Sylow subgroup is the natural/defining prime for the Lie-type group (having q as a power). But in either case it's already nontrivial to compute the r-rank and some of the commutator structure. The references (as of 1998) seem to cover the essential literature, including the work of Aschbacher and the paper by Carter and Fong mentioned by Geoff. All of this is above my pay grade, as they say, so I can only wish you luck. - I will be sure to check out this book, thank you! – Alex Bartel Feb 9 2012 at 22:55
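As a concrete illustration of Derek Holt's structural description for the case $q\equiv 3 \bmod 4$ (just a worked order count for the smallest case): for $q=3$ and $n=2$,

$$|{\rm GL}(2,3)|=(3^2-1)(3^2-3)=8\cdot 6=48=2^4\cdot 3,$$

so a Sylow 2-subgroup of ${\rm GL}(2,3)$ has order $16$; and indeed $t$, the 2-part of $q^2-1=8$, equals $8$, so the semidihedral group of order $2t=16$ described above has exactly the right order.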
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 92, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9265527725219727, "perplexity_flag": "head"}
http://quant.stackexchange.com/questions/219/what-is-the-intuition-behind-cointegration/222
# What is the intuition behind cointegration?

What is the intuition behind cointegration? What does the Dickey-Fuller test do to test for it? Ideally, a non-technical explanation would be appreciated. Say you need to explain it to an investor and justify why your pairs trading strategy should make him rich!

-

## 5 Answers

The standard story (also told by @vonjd) is of "The Drunk and Her Dog". This is based on "A Drunk and Her Dog: An Illustration of Cointegration and Error Correction" (1994). The story is itself based on the standard illustration for a random walk known as the "drunkard's walk".

The Dickey-Fuller test is used to check for a unit root. It can be used as part of the general Engle-Granger two-step method (although it isn't the only option). In this case, while the two assets themselves are not stationary, you are looking to see if the residuals from a regression of the two assets are stationary. Most people prefer another approach, the Johansen test, which uses a VECM model.

The intuition behind pairs trading is that two cointegrated instruments will follow the same long-run path (since they presumably have some common factor, such as they are both oil companies and are heavily influenced by the price of oil), and any deviations will ultimately return back to the mean. Needless to say, pairs trading (or any other form of statistical arbitrage) is still a risky endeavor, as should be clear by the performance of arbitrage funds.

- Nice.... What about an intuition for multiple time series and simultaneous cointegration? – user40 Feb 7 '11 at 16:27
- +1: I didn't know this article - thank you for the link! – vonjd Feb 7 '11 at 16:32
- – RockScience Jan 2 '12 at 7:28

This one is quite easy: Think of a man walking his dog. He will go along and his dog will stroll along running back and forth. Man and dog are mathematically "cointegrated". As an investor you bet that the dog is coming back to his master or that the leash has only a certain length.

- Okay... Maybe a little bit more technical please! Does it mean that in cointegration one is concerned about the difference between two price time-series? – user40 Feb 7 '11 at 16:23
- 5 I find it unfair to downvote this answer. It was asked to give a non-technical explanation! This one an investor will understand! The same association came to Shane's mind... – vonjd Feb 7 '11 at 16:27
- 2 I agree completely and gave you a +1. Your answer was spot on for the question. – Shane Feb 7 '11 at 16:29
- 1 This is really fantastic. – Jase Dec 28 '12 at 16:35

Two time series $X_1$ and $X_2$ are cointegrated if a linear combination $aX_1+bX_2$ is stationary, i.e. it has constant mean, standard deviation and autocorrelation function, for some $a$ and $b$. In other words, the two series never stray very far from one another. Cointegration might provide a more robust measure of the linkage between two financial quantities than correlation, which is very unstable in practice.

I have borrowed the following two examples from Wilmott's Frequently Asked Questions in Quantitative Finance; one may be typical for a hedge fund trader and another illustrates the job of a mutual fund manager.

A. Suppose you have two stocks $S_1$ and $S_2$ and you find that $S_1 − 3 S_2$ is stationary, so that this combination never strays too far from its mean. If one day this ‘spread’ is particularly large then you would have sound statistical reasons for thinking the spread might shortly reduce, giving you a possible source of statistical arbitrage profit.
This can be the basis for pairs trading.

B. Suppose we find that the S&P500 index is cointegrated with a portfolio of 15 stocks. We can then use these fifteen stocks to track the index. The error in this tracking portfolio will have constant mean and standard deviation, so should not wander too far from its average. This is clearly easier than using all 500 stocks for the tracking (when, of course, the tracking error would be zero).

-

The somewhat tongue-in-cheek blog post http://www.portfolioprobe.com/2010/10/18/american-tv-does-cointegration/ includes the example of two classes of shares on the same company. In this case you have two assets that are essentially the same but with a few details different. The buying and selling of these assets will make the prices fluctuate from each other. However they are unlikely to stray too far from each other because there will be arbitrageurs that will bring the prices back together. Arbitrage is the leash in the human-canine analogy.

But there is a difference between cointegration and high correlation. I'm guessing that a lot of pairs trading based on "cointegration" is actually based on high correlation. The difference is risk: if two assets are truly cointegrated, then they will eventually snap back towards each other; two assets that have a history of high correlation need not snap back together.

-

Here is an empirical strategy to test for cointegration. FIRST, check whether both $X_t$ and $Y_t$ contain a unit root.

• If they are both stationary then model $Y_t$ or $X_t$ in levels (and nothing is wrong).
• If one of the two is $I(1)$ (non-stationary in levels), then take differences to ensure stationarity.
• If they are both non-stationary, and hence $I(1)$, then test for co-integration:

1. If the residuals are $I(0)$, then we speak of the presence of cointegration. Then estimate an ECM: first fit $Y_t = \beta_0 + \beta_1 X_t + \eta_t$, obtaining $\hat{\beta}_0$ and $\hat{\beta}_1$, and use these in $\Delta Y_t = \Delta X_t'\phi - \psi(Y_{t-1}-\hat{\beta}_0 - \hat{\beta}_1X_{t-1}) + \varepsilon_t$. When $\varepsilon_t$ is white noise, the estimates of both $\psi$ and $\phi$ are asymptotically valid.
2. If the residuals are $I(1)$ then we speak of spurious regression. In that case you should model both variables by taking first differences.

-

- – chrisaycock♦ Apr 13 '12 at 2:33
- That is true, but I think that the strategy is useful in answering (for a part) the two questions, right? – JohnAndrews Apr 13 '12 at 2:36
- – chrisaycock♦ Apr 13 '12 at 2:49
- OK, I will try to customize my answers more in the following... – JohnAndrews Apr 13 '12 at 2:51
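A minimal sketch of the Engle-Granger two-step idea described in this thread, in Python. This assumes the `statsmodels` package is available for the augmented Dickey-Fuller test; the simulated series and all parameter choices are made up for illustration, and a serious pairs-trading test would need far more care (proper Engle-Granger critical values, lag selection, out-of-sample validation).

```python
# Engle-Granger two-step sketch on a simulated cointegrated pair (illustrative only).
import numpy as np
from statsmodels.tsa.stattools import adfuller  # augmented Dickey-Fuller test

rng = np.random.default_rng(0)

# X is a random walk (I(1)); Y = 2*X + stationary noise, so X and Y are cointegrated.
n = 1000
x = np.cumsum(rng.normal(size=n))
y = 2.0 * x + rng.normal(scale=0.5, size=n)

# Step 1: regress Y on X (with intercept) by ordinary least squares.
beta, alpha = np.polyfit(x, y, 1)
spread = y - (alpha + beta * x)

# Step 2: test the residual spread for a unit root.  A small p-value suggests the
# spread is stationary, i.e. the two series are cointegrated.
adf_stat, p_value = adfuller(spread)[:2]
print(f"spread = Y - {beta:.2f}*X - {alpha:.2f}, ADF stat = {adf_stat:.2f}, p-value = {p_value:.4f}")
```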
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9477694034576416, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/general-relativity?page=4&sort=faq&pagesize=50
# Tagged Questions A theory that describes how matter produces and responds to the geometry of space and time. It was first published by Einstein in 1915 and is currently used to study the structure and evolution of the universe, as well as having practical applications like GPS. 3answers 276 views ### How close can two extremal black holes with the same charge can get? Here's a puzzle I have been pondering over. If we have two extremal black holes with the same charge, the electrostatic repulsion between them ought to cancel the gravitational attraction between ... 4answers 313 views ### Can a photon get emitted without a receiver? It is generally agreed upon that electromagnetic waves from an emitter does not have to connect to a receiver, but how can we be sure this is a fact? The problem is that we can never observe non ... 2answers 353 views ### Why is Mendel Sachs's work not taken seriously? Or is it? Back in college I remember coming across a few books in the physics library by Mendel Sachs. Examples are: General Relativity and Matter Quantum Mechanics and Gravity Quantum Mechanics from General ... 2answers 89 views ### Gravitational distortion of an object's diameter, at a distance, Does the curvature of space-time cause objects to look smaller than they really are? What is the relationship between the optical distortion and the mass of the objects? 2answers 207 views ### What do you feel when crossing the event horizon? I have heard the claim over and over that you won't feel anything when crossing the event horizon as the curvature is not very large. But the fundamental fact remains that information cannot pass ... 3answers 1k views ### Harold White's work on the Alcubierre warp drive I've read a bit on Harold White's recent work. (A paper on Nasa's site) I haven't been able to find any comments by people claiming to know anything about the physics involved. Is this really serious? ... 7answers 1k views ### According to General Relativity, Does The Past “Exist”? I'm curious about just what is meant by time being another dimension, like the three (observable) spatial dimensions. Does this imply, according to General Relativity, that the past and the future ... 1answer 117 views ### geometry inside the event horizon I'm trying to understand intuitively the geometry as it would look to an observer entering the event horizon of a schwarszchild black hole. I would appreciate any insights or corrections to the above. ... 1answer 115 views ### models for astrophysical relativistic jets from compact objects what is the simplest way to understand the physics of relativistic jets? we know that they have axial symmetry with very tight angular spread, presumably aligned with the axis of rotation of the ... 3answers 835 views ### Is it possible/correct to describe electromagnetism using curved space(-time)? Comparing the simples form of the forces of both phenomena: the law of Newton for gravitation $V\propto \frac{1}{r}$, and the Coulomb law for electrostatics $V\propto \frac{1}{r}$, one might think ... 1answer 101 views ### Would dense matter around a black hole event horizon eventually form a secondary black hole? [duplicate] Possible Duplicate: Black hole formation as seen by a distant observer Given that matter can never cross the event horizon of a black hole (from an external observer point of view), if a ... 4answers 561 views ### How long would it take to travel through a wormhole? 
Assuming wormholes exist and you put some matter into one, how long would it take to reach the other end versus how far apart the two ends are? Basically, by how much does a wormhole stretch ... 3answers 205 views ### Mechanism for the gravitational field generated by photons This question follows from a schooling I received in this thread. I figured that photons do not interact with gravity, except when they've spontaneously converted into a particle-antiparticle pair. ... 5answers 792 views ### Is the law of conservation of energy still valid? Is the law of conservation of energy still valid or have there been experiments showing that energy could be created or lost? 1answer 224 views ### metric signature explanation Can anyone explain what metric signature is? I have a basic knowledge regarding tensors, btw. Also, how is it related to fundamental understanding of general relativity? Thanks. 1answer 296 views ### Blandford-Znajek process: Why/how does the current flow along the magnetic field lines Related: How would a black hole power plant work? I have put a bit of commentary enumerating my confusions in parentheses I read in Black Holes and Time Warps (Kip Thorne), that quasars can generate ... 2answers 176 views ### curvature tensor component capable of doing work on $T_{\mu \nu}$ I'm wondering what part of the curvature tensor is able to do work (and hence transfer energy) in matter. I'm wondering if this tensor: http://en.wikipedia.org/wiki/Stress-energy-momentum_pseudotensor ... 1answer 150 views ### Can a super-extremal charged black hole be made out of electrons only? In a previous Question it was argued that it would be impossible to add enough charge to a black hole to make it pass the extremal black hole limit since adding charge would increase the mass of the ... 3answers 224 views ### Is there a single metric for a given system? Let imagine a tunnel that connect two distant places at the globe (eastern-western or north-south) There are a lot of posible "distances" or metrics, defined by maps, routes, "as the crow flies", ... 3answers 437 views ### Anti-matter repelled by gravity - is it a serious hypothesis? [duplicate] Possible Duplicate: Why would Antimatter behave differently via Gravity? Regarding the following statement in this article: Most important of these is whether ordinary gravity attracts ... 0answers 115 views ### Is it mathematically possible or topologically allowable for cutouts, or cavities, to exist in a 3-manifold? A few weeks back, I posted a related question, Could metric expansion create holes, or cavities in the fabric of spacetime?, asking if metric stretching could create cutouts in the spacetime manifold. ... 2answers 154 views ### Newton's third law and General relativity Is Newton's third law valid at the General Relativity? Newton's second law, the force exerted by body 2 on body 1 is: $$F_{12}$$ The force exerted by body 1 on body 2 is: $$F_{21}$$ According to ... 4answers 614 views ### Can black holes form in a finite amount of time? One thing I know about black holes is that an object gets closer to the event horizon, gravitation time dilation make it move more slower from an outside perspective, so that it looks like it take an ... 2answers 171 views ### What is the definition of a timelike and spacelike singularity? What is the definition of a timelike and spacelike singularity? Trying to find, but haven't yet, what the definitions are. 1answer 158 views ### Can a deformable object “swim” in curved space-time? 
[duplicate] Possible Duplicate: Swimming in Spacetime - apparent conserved quantity violation It is well known that a deformable object can perform a finite rotation in space by performing deformations ... 3answers 432 views ### Is the total energy of the universe constant? If total energy is conserved just transformed and never newly created, is there a sum of all energies that is constant? Why is it probably not that easy? 2answers 246 views ### How can one reconcile the temperature of a black hole with asymptotic flatness? A stationary observer very close to the horizon of a black hole is immersed in a thermal bath of temperature that diverges as the horizon is approached. $$T^{-1} = 4\pi \sqrt{2M(r-2M)}$$ The ... 8answers 448 views ### Gravity theories with the equivalence principle but different from GR Einstein's general relativity assumes the equivalence of acceleration and gravitation. Is there a general class of gravity theories that have this property but disagree with general relativity? Will ... 2answers 398 views ### Can we have a black hole without a singularity? Assuming we have a sufficiently small and massive object such that it's escape velocity is greater than the speed of light, isn't this a black hole? It has an event horizon that light cannot escape, ... 1answer 114 views ### Source term of the Einstein field equation My copy of Feynman's "Six Not-So-Easy Pieces" has an interesting introduction by Roger Penrose. In that introduction (copyright 1997 according to the copyright page), Penrose complains that Feynman's ... 3answers 203 views ### From the perspective of an observer inside a black hole's horizon, where does the energy for Hawking radiation come from? Would energy be seen to "flow" to the outside of the black hole? Through what mechanism? 2answers 278 views ### Conical spacetime of cosmic string Inspired by: Angular deficit The 2+1 spacetime is easier for me to visualize, so let's use that here. (so I guess the cosmic string is now just a 'point' in space, but a 'line' in spacetime) Edward ... 2answers 794 views ### Why does pressure act as a source for the gravitational field? I'm asking for a qualitative explanation if there is one. My own answer doesn't work. I would have guessed it's because when a gas has pressure the kinetic energy adds to the rest mass of a given ... 2answers 181 views ### How (or why) equivalence principle led to Einstein field equations? If equivalence principle was origin of general relativity what was the process that this principle led Einstein to developed his theory of general relativity? 1answer 141 views ### Does the curvature of space-time cause objects to look smaller than they really are? What's the difference between looking at a star from a black hole and looking at it from empty space? My guess is that the curvature of space-time distorts the wavelength of light thus changing the ... 1answer 175 views ### Is the quantization of gravity necessary for a quantum theory of gravity? Part II (At the suggestion of the user markovchain, I have decided to take a very large edit/addition to the original question, and ask it as a separate question altogether.) Here it is: I have since ... 1answer 95 views ### Flat space metrics This question concerns the metric of a flat space: $$ds^2=dr^2+cr^2\,\,d\theta^2$$ where $c$ is a constant. Why is it necessary to set $c=1$ to avoid singularities and to restrict $r\ge 0$? Thanks. 3answers 230 views ### Can spacetime exist in the absence of matter and energy? 
I'm pretty sure Ernst Mach would have said that spacetime cannot exist without matter in it. But I'm also pretty sure that a black hole can be described as a self-sustaining gravitational field, ... 3answers 354 views ### How does faster than light travel violate causality? Let's say I have two planets that are one hundred thousand lightyears away from each other. I and my immortal friend on the other planet want to communicate, with a strong laser and a tachyon ... 0answers 45 views ### Kerr solution for finite collapse time The Kerr black hole solutions gives an analytic continuation that is asymptotically flat. Some people have argued that this is another universe, but others state that the analytic continuation ... 2answers 300 views ### What are the units of the quantities in the Einstein field equation? The Einstein field equations (EFE) may be written in the form: $$R_{\mu\nu}-\frac {1}{2}g_{\mu\nu}R+g_{\mu\nu}\Lambda=\frac {8\pi G}{c^4}T_{\mu\nu}$$ where the units of the gravitational constant $G$ ... 1answer 193 views ### Falling into a black hole I've heard it mentioned many times that "nothing special" happens for an infalling observer who crosses the event horizon of a black hole, but I've never been completely satisfied with that statement. ... 1answer 282 views ### Is 4-volume element a scalar or a pseudoscalar in special relativity? In general relativity 4-volume element $\mathrm{d}^4 x = \mathrm{d} x^0\mathrm{d} x^1 \mathrm{d} x^2\mathrm{d} x^3$ is clearly a pseudoscalar (or scalar density) of weight 1 since it transforms as ... 2answers 131 views ### Effect of gravity at near-lightspeeds Let's say I'm in a space station, hurtling towards our galaxy nearly close to the speed of light. From my reference frame, I see the galaxy coming towards my ship at the same speed. I pass the Sun, ... 2answers 332 views ### Deriving Birkhoff's Theorem I am trying to derive Birkhoff's theorem in GR as an exercise: a spherically symmetric gravitational field is static in the vacuum area. I managed to prove that $g_{00}$ is independent of t in the ... 4answers 401 views ### what is the difference between a blackhole and a point particle Theoretically, What is the difference between a black hole and a point particle of certain nonzero mass. Of-course the former exists while its not clear whether the later exists or not, but both have ... 2answers 716 views ### What does “foliation” mean in the context of a “foliation of spacetime?” I've seen foliation used in the context of "foliation of spacetime" here and elsewhere in papers and such. Generally defined in reference to a "sequence of spatial hypersurfaces." But I don't know ... 0answers 95 views ### Would warp bubbles emit gravitational Cerenkov radiation in general relativity? Inspired by the gravtiomagnetic analogy, I would expect that just as a charged tachyon would emit normal (electromagetic) Cerenkov radiation, any mass-carrying warp drive would emit gravitational ... 2answers 191 views ### About space-time and its four dimensions I explained to someone I know about General Relativity (as much as I know). He said that he didn't see how it could be correct. He argued: How is 4-dimensional space-time space different to ... 2answers 250 views ### Gravitational wave energy Electromagnetic energy can be related to it's frequency via $E=h\nu$. Is there a comparable relationship between gravitational wave energy and frequency?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9350301027297974, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/840/the-core-question-of-topology
## The core question of topology

As I see it, the core question of topology is to figure out whether a homeomorphism exists between two topological spaces. To answer this question, one defines various properties of a space such as connectedness, compactness, the fundamental group, Betti numbers etc. However, it seems that these properties can at best be used to distinguish two spaces - i.e. if X is a space with property Q, and Y is a space without property Q, then we can say with certainty that X and Y are not homeomorphic.

My question is this: given two arbitrary spaces, how does one show that they are homeomorphic, without explicitly showing a homeomorphism?

- 5 I'm not sure the premise of this question is valid any more than the core question of group theory is to figure out whether an isomorphism exists between two groups. – Qiaochu Yuan Oct 17 2009 at 6:37
- 4 You might rephrase this question, removing the claim that this is the core question of topology, especially in the light of negative answers below. – Scott Morrison♦ Oct 17 2009 at 14:02
- Thanks so much guys, very helpful! – Tejus Oct 18 2009 at 8:39
- I agree, the question is too general and not properly phrased. But thanks for the responses, very helpful to see the big picture of what topology is all about. – Tejus Oct 18 2009 at 8:43
- What about homotopy? – Harry Gindi Jan 6 2010 at 7:41

## 8 Answers

As others have noted, it's hopeless to try to answer this question for general topological spaces. However, there are a few positive results if you assume, say, that X and Y are both simply connected closed manifolds of a given dimension. For example, Freedman showed that if X and Y are oriented and have dimension four, then to check whether they're homeomorphic you just need to compute (i) the bilinear "intersection" forms on H^2(X;Z) and H^2(Y;Z) induced by the cup product; and (ii) a Z/2-valued invariant called the Kirby-Siebenmann invariant. The invariant in (ii) obstructs the existence of a smooth structure, so if you happened to know that both X and Y were smooth manifolds (hence that their Kirby-Siebenmann invariants vanished) you'd just have to look at their intersection forms to determine whether they're homeomorphic (however a great many examples show that this wouldn't suffice to show that they're diffeomorphic).

In higher dimensions, Smale's h-cobordism theorem shows that two simply connected smooth manifolds are diffeomorphic as soon as there is a cobordism between them for which the inclusion of both manifolds is a homotopy equivalence. Checking this criterion can still be subtle, but work of Wall and Barden shows that in the simply-connected 5-dimensional case it suffices to check that there's an isomorphism on second homology H2 which preserves both (i) the second Stiefel-Whitney classes, and (ii) a certain "linking form" on the torsion subgroup of H2.

If you drop the simply-connected assumption, things get rather harder--indeed if n>3 then any finitely presented group is the fundamental group of a closed n-manifold (which can be constructed in a canonical way given a presentation), and Markov (son of the probabilist) showed that the impossibility of algorithmically distinguishing whether two presentations yield the same group translates to the impossibility of algorithmically classifying manifolds.
Even assuming you already knew the fundamental groups were isomorphic, there are still complications beyond what happens in the simply-connected case, but these can sometimes be overcome with the s-cobordism theorem.

In a somewhat different direction, in dimension 3 one can represent manifolds by link diagrams, and Kirby showed that two such manifolds are diffeomorphic (which in dimension 3 is equivalent to homeomorphic) iff you can get from one diagram to the other by a sequence of moves of a certain kind. (see Kirby calculus in Wikipedia; similar statements exist in dimension 4). I suppose that one could argue that this isn't an example of what you were looking for, since if one felt like it one could extract diffeomorphisms from the moves in a fairly explicit way, and one can't (AFAIK) just directly extract some invariants from the diagrams which completely determine whether the moves exist.

- Thank you! That does help a lot actually. – Tejus Oct 18 2009 at 8:42
- Could you provide a reference that homotopy equivalence which pulls back tangent bundles is a diffeomorphism? Does it somehow follow from the H-cobordism theorem? – Jason DeVito Nov 16 2009 at 19:05
- Smale did not show that "two smooth manifolds are diffeomorphic as soon as there is a homotopy equivalence between them which pulls back the tangent bundle on one to the tangent bundle of the other", because this is not true. I think, exotic 7-spheres should give a counterexample. – Igor Belegradek Feb 28 2010 at 3:42
- Sorry--in a hasty effort to find a clean statement in the literature without using cobordism language I overlooked some obviously-rather-important parts of the hypothesis of Theorem 7.1 of Smale's "On the structure of manifolds"...namely that the manifolds need to have vanishing cohomology in degrees above around half the dimension (so obviously they can't be closed, among other serious restrictions). I've edited the error – Mike Usher Mar 1 2010 at 17:39
- In your comment you sound like the h-cobordism theorem does NOT apply to manifolds that are closed. In fact it applies to closed simply-connected manifolds of dimension >4: two such manifolds are h-cobordant iff they are diffeomorphic. – Igor Belegradek Mar 1 2010 at 22:41

Maybe this is my algebraic topology bias but I'm not sure there's anything one can say about this question in general--there are just too many topological spaces to try to classify them in any sense. If you only want to know whether two spaces are homotopy equivalent, you can do a lot better. For example, if X and Y are simply connected CW complexes, you can (in principle) show they are homotopy equivalent without writing down any map from X to Y, by computing k-invariants.

- 3 What are k-invariants? – Kevin Lin Feb 27 2010 at 8:38
- I'll describe these in the simplest case (where $\pi_1$ acts trivially on all homotopy groups). In that case for a (nice) space X we have a Postnikov tower $$X \to \cdots \to P_2 \to P_1 \to P_0$$ Each of the maps $P_{i+1} \to P_i$ is a fibration with fiber $K( \pi_{i+1}X, i+1)$. Under the hypothesis, one can show that each of these is (up to homotopy) a principal bundle and classified by a k-invariant $P_i \to K( \pi_{i+1}X, i+2)$, i.e. a certain cohomology class. This is described in Hatcher's Alg. Top. book. The general case is more complicated.
One needs twisted cohomology. – Chris Schommer-Pries Mar 1 2010 at 13:25

- Thanks, Chris. In brief, the k-invariants are algebraic data which tell you how to solve the extension problems as you go up the Postnikov tower. You can read a bit about them near the end of section 4.3 in Hatcher's book (p. 412 in the online copy). – Reid Barton Mar 1 2010 at 18:02

Among topological spaces simplicial complexes are very nice. But even then we run into problems answering your question. Determining whether two finite simplicial complexes are homeomorphic is an undecidable problem. That means there is no algorithm that can tell you if two finite simplicial complexes are homeomorphic, in finite time. Note that these are particularly nice topological spaces and in general topological spaces can be horrendous. I'm not an expert, but considering this I would say that in general the answer to your question would be: we can't.

- I would guess that there is a semi-algorithm that will tell you reliably if two simplicial complexes are homeomorphic - which is what the question asks for. (Though it probably does this by exhibiting an explicit homeomorphism, which the question rules out!) – HW Jan 6 2010 at 17:43

Like the other responders, I find your question a bit too general to address sensibly. However, I'll give one example of a way to prove two spaces are homeomorphic without providing a homeomorphism: Let M be a connected manifold and $f: M \to B$ a submersion to another manifold. Then any two fibers of f are homeomorphic, but it can be very hard to extract an explicit homeomorphism from this data. Rather than requiring that f be a submersion it is enough to require that the critical points of f have codimension 2 in B. For example, this is the easiest way to show that any two smooth hypersurfaces in CP^n of the same degree are homeomorphic.

-

Sometimes we can uniquely categorise a space $X$. We can then find a (preferably) finite list of topological properties, such that any space $Y$ that satisfies these properties must be homeomorphic to $X$. Such characterisations exist for many classical spaces like the Cantor set $C$, the rationals $Q$, the irrationals $P$, the Cantor set minus a point, the real line, the plane $R^2$, the Hilbert cube $I^N$, etc. In that case, in the proof of such theorems, we do show a homeomorphism exists, but once we have this theorem, other mathematicians need not find explicit homeomorphisms any more. I have found such theorems to be quite useful. Of course, only sufficiently nice and/or simple spaces can be characterised in this way, and the reach of such a method is quite limited, as there are far more spaces than there are such nice lists of properties. But using such theorems, topologists could show that all completely metrisable separable topological linear spaces are homeomorphic, e.g.

- 1 Aren't there some neat results from Hilbert manifold theory (due to Chapman perhaps) related to your point. I vaguely remember that for Hilbert cube manifolds proper homotopy equivalence implies homeomorphism (perhaps faulty memory)?? – Tim Porter Feb 27 2010 at 8:00

You often need to put some assumptions on your spaces to have a sensible answer to this question; there are just too many terrible spaces out there, and you can write down a host of topological invariants that detect differences between weird spaces but never fully answer the question. One of the most studied classification attempts is the study of the classification of smooth closed manifolds.
This leads to a lot of topics like surgery theory and Morse theory that allow you to give constructive procedures to build any manifold by elementary moves, and so the main question becomes one of extracting invariants. Homotopy type is one invariant, and homology groups can be extracted from it. And then you're led into questions like the Poincare conjecture, or topological quantum field theories, or the classification of simply-connected 4-manifolds, et cetera, et cetera.

-

The only way I can think of giving a meaningful answer is by listing examples, interpreting this as a "big list" question. I'll give an example of proving homeomorphism to S3 non-constructively.

• Surgery along a framed link in S3 gives rise to a 3-manifold M, and a presentation for the fundamental group π of M. If π turns out to be the trivial group (you might prove this by the Todd-Coxeter process or something), the Poincare conjecture tells us that M is homeomorphic to S3. To exhibit that homeomorphism might be painful, because the proofs of the Kirby theorem are non-constructive and give no algorithm to simplify surgery presentations of 3-manifolds.

I'm using the fact that there is a unique 3-manifold with trivial fundamental group (whose proof is non-constructive) and then finding an arbitrarily complicated construction to give you a manifold with those properties (and there are many variations on that theme). These constructions, based on surgery or some other violent operation on the manifold, give no hint of how a homeomorphism might look or how one might try to find one.

- 2 For 3-manifolds there is an effective algorithm to do what you're talking about. There are standard procedures to construct a triangulation of a 3-manifold obtained by surgery on a link in $S^3$. Then you apply the Rubinstein 3-sphere recognition algorithm to that triangulation, and you're done. It has exponential run-time in the number of tetrahedra in the 3-manifold triangulation and that in turn looks something like the number of crossings in your diagram times a function that measures the size of the surgery coefficients. – Ryan Budney Jan 6 2010 at 9:23

I would respectfully suggest that there are other important problems in topology. One thing that the Wikipedia article linked doesn't spend much time on is the "placement problem". Instead of attempting a definition, here are the first two examples that spring to my mind: classify curves in a fixed Riemann surface or classify curves in the three-sphere, each time up to isotopy. The first leads to the study of the mapping class group and perhaps Teichmuller spaces. The second leads one towards knot theory. Notice that if $\alpha$ and $\beta$ are curves in a surface $S$ then deciding if the pairs $(S, \alpha)$ and $(S, \beta)$ are homeomorphic reduces to the classification of surfaces and so is "easy". The mapping class group still manages to be important, however!

-
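As a worked illustration of the Freedman criterion quoted in the first answer (a standard example, stated here without proof): the closed simply connected 4-manifolds $S^2\times S^2$ and $\mathbb{CP}^2\#\overline{\mathbb{CP}}^2$ both have $H^2\cong\mathbb{Z}^2$ and vanishing Kirby-Siebenmann invariant, but their intersection forms

$$\begin{pmatrix}0&1\\1&0\end{pmatrix}\qquad\text{and}\qquad\begin{pmatrix}1&0\\0&-1\end{pmatrix}$$

are not isomorphic over $\mathbb{Z}$ (the first is even, the second is odd), so the two manifolds are not homeomorphic. Conversely, any simply connected closed topological 4-manifold whose intersection form is isomorphic to the hyperbolic form above is homeomorphic to $S^2\times S^2$.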
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9458312392234802, "perplexity_flag": "head"}
http://mathhelpforum.com/differential-geometry/151654-sequences-step-functions.html
# Thread:

1. ## Sequences of Step Functions

Suppose $\{ s_{n} \}$ is a sequence of increasing step functions which approach a function $f$ on an unbounded interval $I$, and $f(x) \geq 1$ a.e. on $I$. I am asked to show that $\{ \int s_{n} \}$ diverges, where this is the Lebesgue integral. Here's what I have so far: [Edit: Never mind, I think I have it. I'll post what I did below, though, in case anybody is a) curious or b) sees a problem with my solution, since it doesn't go into a whole lot of detail.]

Suppose $\{ \int_{I} s_{n} \}$ converges to $M$. We may assume w.l.o.g. that $I$ is of the form $[a, \infty)$, since we may extend the following argument to the other possibilities: $(a, \infty)$, $(-\infty, a]$, $(-\infty, a)$, and $(-\infty, \infty)$. First, we select any $\delta > 0$ and consider the interval $I_{0} = [a + \delta, a + \delta + 3M]$. I claim that there is some $N$ such that for all $x \in I_{0}$, $n \geq N \Rightarrow s_{n}(x) \geq \frac{1}{2}$. For suppose this is false; then there is some point at which $s_{N}(x) < \frac{1}{2}$ for all $N$, contradicting our assumptions. We then know that $\displaystyle \frac{1}{2} \cdot 3M \leq \int^{a + \delta + 3M}_{a + \delta} s_{N}$, which contradicts the fact that $\displaystyle \lim_{m \rightarrow \infty} \int_{I} s_{m} \leq \int^{a + \delta + 3M}_{a + \delta} s_{N}$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 19, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9528423547744751, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/294221/geometry-internal-angle-sum-of-triangle
# Geometry: internal angle sum of a triangle

Problem: Let A, B, C be three non-collinear points. Let D, E, F be points on the respective interiors of segments BC, AC and AB. Let θ, φ and ψ be the measures of the respective angles ∠BFC, ∠CDA and ∠AEB. Prove IAS(ABC) < θ + φ + ψ < 540 - IAS(ABC). (IAS means internal angle sum.)

Now I'm supposed to use the external angle inequality, which says that the measure of an exterior angle of a triangle is greater than that of either opposite interior angle. I'm not sure how to do it; I've been struggling with it for hours. Oh, I forgot to mention that this is still in absolute geometry, so we can't use the fact that the angles of a triangle add up to 180°. -

$\theta = \angle BFC \ge \angle BAC$, $\varphi = \angle CDA \ge \angle CBA$, etc? – achille hui Feb 4 at 5:06

Umm, OK, but that doesn't help me too much. – user60887 Feb 4 at 5:08

$\psi = \angle AEB \ge \angle BCA$; add them up, and don't you get $\theta + \varphi + \psi \ge \operatorname{IAS}(ABC)$? – achille hui Feb 4 at 5:18

So I assume θ = ∠BFC ≥ ∠BAC, φ = ∠CDA ≥ ∠CBA, and ψ = ∠AEB ≥ ∠ACB. OK, so IAS(ABC) ≤ θ + φ + ψ. I get that part. – user60887 Feb 4 at 5:26

This is precisely the "external angle inequality" you mentioned. For the other part, look at the angles $\angle CFA, \angle ADB, \angle BEC$. You should draw everything on a piece of paper and view them as if you are doing Euclidean geometry, then the assignment of angles will become obvious. – achille hui Feb 4 at 5:33
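For completeness, here is one way to finish the second inequality along the lines of the last hint (my own expansion, not part of the original thread). Since $D$, $E$, $F$ lie in the interiors of the respective segments, each angle named in the hint forms a linear pair with one of $\theta, \varphi, \psi$, and the external angle inequality applies:
$$\begin{aligned}
180^\circ - \theta &= \angle CFA > \angle ABC &&\text{(exterior angle of } \triangle FBC \text{ at } F\text{)},\\
180^\circ - \varphi &= \angle ADB > \angle BCA &&\text{(exterior angle of } \triangle ADC \text{ at } D\text{)},\\
180^\circ - \psi &= \angle BEC > \angle CAB &&\text{(exterior angle of } \triangle AEB \text{ at } E\text{)}.
\end{aligned}$$
Adding the three lines gives $540 - (\theta + \varphi + \psi) > \operatorname{IAS}(ABC)$, that is, $\theta + \varphi + \psi < 540 - \operatorname{IAS}(ABC)$; the first inequality is the strict form of the estimates already discussed in the comments.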
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.897292971611023, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/52190/ring-automorphism-in-an-algebraically-closed-field
# Ring automorphism in an algebraically closed field Let $K$ be an algebraically closed field and let $\mathfrak{m}$ be a maximal ideal of $K[x_{1},..,x_{n}]$. How to show there is a ring automorphism $f$ of $K[x_{1},..,x_{n}]$ such that: $f(\mathfrak{m}) = (x_1, x_{2},..,x_{n})$ here $()$ denotes the ideal generated by. - 1 – Dylan Moreland Jul 18 '11 at 16:22 ## 2 Answers I think, since $K$ is algebraically closed, all maximal ideals are of the form $m=(X_1-a_1,\dots,X_n-a_n)$. To specify a ring homomorphism from $K[X_1,\dots,X_n]$ to any other (commutative) ring $A$, you only need to specify a ring homomorphism $\rho:K\rightarrow A$ and $n$ points in $A$ that the $X_i$ will be mapped to. So in your case you can take the following ring homomorphism : $\rho:K\hookrightarrow K[X_1,\dots,X_n]$ and send $X_i$ to $X_i+a_i$. This defines a unique ring homomorphism $K[X_1,\dots,X_n]\rightarrow K[X_1,\dots,X_n]$, and it's an isomorphism (you can construct its inverse in the same manner). This sends your maximal ideal $m=(X_1-a_1,\dots,X_n-a_n)$ to $(X_1,\dots,X_n)$. - It is a consequence of Hilbert Nullstellensatz that if $K$ is algebraically closed then ${\bf m}=(x_1-a_1,\ldots,x_n-a_n)$ for some $(a_i)\in K^n$. Then $f(x_i)=x_i+a_i$ does the trick. -
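To make the last step explicit (a one-line check, not spelled out in the answers above): with $f$ defined on generators by $f(x_i) = x_i + a_i$ and $f|_K = \mathrm{id}_K$, one has
$$f(x_i - a_i) = (x_i + a_i) - a_i = x_i \qquad (1 \le i \le n),$$
so $f(\mathfrak{m}) = f\big((x_1 - a_1, \dots, x_n - a_n)\big) = (x_1, \dots, x_n)$, and the inverse automorphism is given by $x_i \mapsto x_i - a_i$.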
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8983277082443237, "perplexity_flag": "head"}
http://www.haskell.org/haskellwiki/index.php?title=User:Michiexile/MATH198/Lecture_5&redirect=no
# User:Michiexile/MATH198/Lecture 5 ## Contents ### 1 Cartesian Closed Categories and typed lambda-calculus A category is said to have pairwise products if for any objects A,B, there is a product object $A\times B$. A category is said to have pairwise coproducts if for any objects A,B, there is a coproduct object A + B. Recall when we talked about internal homs in Lecture 2. We can now define what we mean, formally, by the concept: Definition An object C in a category D is an internal hom object or an exponential object $[A\to B]$ or BA if it comes equipped with an arrow $ev: [A\to B] \times A \to B$, called the evaluation arrow, such that for any other arrow $f: C\times A\to B$, there is a unique arrow $\lambda f: C\to [A\to B]$ such that the composite $C\times A\to^{\lambda f\times 1_A} [A\to B]\times A\to^{ev} B$ is f. The idea here is that with something in an exponential object, and something in the source of the arrows we imagine live inside the exponential, we can produce the evaluation of the arrow at the source to produce something in the target. Using global elements, this reasoning comes through in a more natural manner: given $f: 1\to [A\to B]$ and $x: 1\to A$ we can produce the global element $f(x) = ev \circ f\times x: 1\to B$. Furthermore, we can always produce something in the exponential whenever we have something that looks as if it should be there. And with this we can define Definition A category C is a Cartesian Closed Category or a CCC if: 1. C has a terminal object 1 2. Each pair of objects $A, B\in C_0$ has a product $A\times B$ and projections $p_1:A\times B\to A$, $p_2:A\times B\to B$. 3. For every pair $A, B\in C_0$ of objects, there is an exponential object $[A\to B]$ with an evaluation map $[A\to B]\times A\to B$. #### 1.1 Currying Note that the exponential as described here is exactly what we need in order to discuss the Haskell concept of multi-parameter functions. If we consider the type of a binary function in Haskell: `binFunction :: a -> a -> a` This function really lives in the Haskell type a -> (a -> a) , and thus is an element in the repeated exponential object $[A \to [A\to A]]$. Evaluating once gives us a single-parameter function, the first parameter consumed by the first evaluation, and we can evaluate a second time, feeding in the second parameter to get an end result from the function. On the other hand, we can feed in both values at once, and get `binFunction' :: (a,a) -> a` which lives in the exponential object $[A\times A\to A]$. These are genuinely different objects, but they seem to do the same thing: consume two distinct values to produce a third value. The resolution of the difference lies, again, in a recognition from Set theory: there is an isomorphism $Hom(S, Hom(T, V)) = Hom(S\times T, V)$ which we can use as inspiration for an isomorphism $Hom(S,[T\to V]) = Hom(S\times T, V)$ valid in Cartesian Closed Categories. #### 1.2 Typed lambda-calculus The lambda-calculus, and later the typed lambda-calculus both act as foundational bases for computer science, and computer programming in particular. The idea in both is that everything is a function, and we can reduce the act of programming to function application; which in turn can be analyzed using expression rewriting rules that encapsulate the act of computation in a sequence of formal rewrites. Definition A typed lambda-calculus is a formal theory with types, terms, variables and equations. Each term a has a type A associated to it, and we write a:A or $a\in A$. 
The system is subject to a sequence of rules:

1. There is a type 1. Hence, the empty lambda calculus is excluded.
2. If A,B are types, then so are $A\times B$ and $[A\to B]$. These are, initially, just additional symbols, not imbued with the associations we usually give the symbols used.
3. There is a term $*:1$. Hence, the lambda calculus without any terms is excluded.
4. For each type A, there is an infinite (countable) supply of terms $x_A^i:A$.
5. If $a:A$, $b:B$ are terms, then there is a term $(a,b):A\times B$.
6. If $c:A\times B$ then there are terms $proj_1(c):A$, $proj_2(c):B$.
7. If $a:A$ and $f:[A\to B]$, then there is a term $fa:B$.
8. If $x:A$ is a variable and $\phi(x):B$ is a term, then there is a term $\lambda_{x\in A}\phi(x):[A\to B]$. Note that here, $\phi(x)$ is a meta-expression, meaning we have SOME lambda-calculus expression that may include the variable x.
9. There is a relation $a =_X a'$ for each set of variables X that occur freely in either a or a'. This relation is reflexive, symmetric and transitive. Recall that a variable is free in a term if it is not in the scope of a λ-expression naming that variable.
10. If $a:1$ then $a =_{\emptyset} *$. In other words, up to lambda-calculus equality, there is only one value of type 1.
11. If $X\subseteq Y$, then $a =_X a'$ implies $a =_Y a'$. Binding more variables gives less freedom, not more, and thus cannot suddenly make equal expressions differ.
12. $a =_X a'$ implies $fa =_X fa'$.
13. $f =_X f'$ implies $fa =_X f'a$. So equality plays nicely with function application.
14. $\phi(x) =_{X\cup \{x\}} \phi'(x)$ implies $\lambda_{x\in A}\phi(x) =_X \lambda_{x\in A}\phi'(x)$. Equality behaves well with respect to binding variables.
15. $proj_1(a,b) =_X a$, $proj_2(a,b) =_X b$, $c =_X (proj_1(c),proj_2(c))$ for all a,b,c,X.
16. $(\lambda_{x\in A}\phi(x))a =_X \phi(a)$ if a is substitutable for x in $\phi(x)$, where $\phi(a)$ is what we get by substituting each occurrence of x by a in $\phi(x)$. A term is substitutable for another if by performing the substitution, no occurrence of any variable in the term becomes bound.
17. $\lambda_{x\in A}f x =_X f$, provided $x\not\in X$.
18. $\lambda_{x\in A}\phi(x) =_X \lambda_{x'\in A}\phi(x')$ if x' is substitutable for x in $\phi(x)$ and each variable is not free in the other expression.

Note that $=_X$ is just a symbol. The axioms above give it properties that work a lot like equality, but two lambda-calculus-equal terms are not equal unless they are identical. However, $a =_X b$ tells us that in any model of this lambda calculus - where terms, types, etc. are replaced with actual things (mathematical objects, say, or a programming language semantics embedding typed lambda calculus) - the things given by translating a and b into the model should end up being equal. Any actual realization of typed lambda calculus is bound to have more rules and equalities than the ones listed here.

With these axioms in front of us, however, we can see how lambda calculus and Cartesian Closed Categories fit together: we can go back and forth between the two concepts in a natural manner.

##### 1.2.1 Lambda to CCC

Given a typed lambda calculus L, we can define a CCC C(L). Its objects are the types of L. An arrow from A to B is an equivalence class (under $=_{\{x\}}$) of terms of type B, free in a single variable x:A. We need the equivalence classes because for any variable x:A, we want $\lambda_xx: 1\to [A\to A]$ to be the global element of $[A\to A]$ corresponding to the identity arrow. Hence, that variable must itself correspond to an identity arrow.
And then the rules for the various constructions enumerated in the axioms correspond closely to what we need to prove the resulting category to be cartesian closed.

##### 1.2.2 CCC to Lambda

To go in the other direction, starting out with a Cartesian Closed Category and finding a typed lambda calculus corresponding to it, we construct its internal language. Given a CCC C, we can assume that we have chosen, somehow, one actual product for each finite set of factors. Thus, both all products and all projections are well defined entities, with no remaining choice to determine them. The types of the internal language L(C) are just the objects of C. The existence of products, exponentials and a terminal object covers axioms 1-2. We can assume the existence of variables for each type, and the remaining axioms correspond to the definition and behaviour of the terms available.

Using the properties of a CCC, it is at this point possible to prove a resulting equivalence of categories C(L(C)) = C, and similarly, with suitable definitions for what it means for formal languages to be equivalent, one can also prove for a typed lambda-calculus L that L(C(L)) = L. More on this subject can be found in:

• Lambek & Scott: Aspects of higher order categorical logic and Introduction to higher order categorical logic

More importantly, the advantage of stating λ-calculus in terms of a CCC instead of in terms of terms and rewriting rules is that you can escape worrying about variable clashes, alpha reductions and composability - the categorical translation ignores, at least superficially, the variables, reduces terms with morphisms that have equality built in, and provides associative composition for free.

At this point, I'd recommend reading more on Wikipedia [1] and [2], as well as in Lambek & Scott: Introduction to Higher Order Categorical Logic. The book by Lambek & Scott goes into great depth on these issues, but may be less than friendly to a novice.

### 2 Limits and colimits

One design pattern, as it were, that we have seen occur over and over in the definitions we've seen so far is for there to be some object, such that for every other object around, certain morphisms have unique existence. We saw it in terminal and initial objects, where there's a unique map from or to every other object. And in products/coproducts, where a well-behaved map capturing any pair of maps has unique existence. And finally, above, in the CCC characterization of the internal hom, we had a similar uniqueness requirement for the lambda map.

One thing we can notice is that the isomorphism theorems for all these cases look very similar to each other: in each isomorphism proof, we produce the uniquely existing morphisms, and prove that their uniqueness and their other properties force the maps to really be isomorphisms. Now, category theory has a philosophy slightly similar to design patterns - if we see something happening over and over, we'll want to generalize it. And there are generalizations available for these!

#### 2.1 Diagrams, cones and limits

Definition A diagram D of the shape of an index category J (often finite or countable), in a category C, is just a functor $D:J\to C$. Objects in J will be denoted by $i,j,k,\dots$ and their images in C by $D_i,D_j,D_k,\dots$. This underlines that when we talk about diagrams, we tend to think of them less as just functors, and more as their images - the important part of a diagram D is the objects and their layout in C, and not the process of going to C from D.
Definition A cone over a diagram D in a category C is some object C equipped with a family $c_i:C\to D_i$ of arrows, one for each object in J, such that for each arrow $\alpha:i\to j$ in J, the following diagram commutes, or in equations, $D_\alpha c_i = c_j$.

A morphism $f:(C,c_i)\to(C',c'_i)$ of cones is an arrow $f:C\to C'$ such that each triangle commutes, or in equations, such that $c_j = c'_j f$. This defines a category of cones, that we shall denote by Cone(D). And we define, hereby:

Definition The limit of a diagram D in a category C is a terminal object in Cone(D). We often denote a limit by $p_i:\lim_{\leftarrow_j} D_j \to D_i$, so that the map from the limit object $\lim_{\leftarrow_j} D_j$ to one of the diagram objects $D_i$ is denoted by $p_i$.

The limit being terminal in the category of cones nails down once and for all the uniqueness of any map into it, and the isomorphism of any two terminal objects carries over to a proof once and for all for the limit case. Specifically, since the morphisms of cones are morphisms in C, and composition is carried straight over, proving a map is an isomorphism in the cone category implies it is one in the target category as well.

Definition A category C has all (finite) limits if all diagrams (of finite shape) have limit objects defined for them.

#### 2.2 Limits we've already seen

The terminal object of a category is the limit object of an empty diagram. Indeed, it is an object, with no specified maps to no other objects, such that every other object that also maps to the same empty set of objects - which is to say all other objects - has a uniquely determined map to the limit object.

The product of some set of objects is the limit object of the diagram containing all these objects and no arrows; a diagram of the shape of a discrete category. The condition here becomes the requirement of maps to all factors so any other cone factors through these maps.

To express the exponential as a limit, we need to go to a different category than the one we started in. Take the category with objects given by morphisms $X\times Y\to Z$ for fixed objects Y,Z, and morphisms given by morphisms $X\times Y\to X'\times Y$ commuting with the 'objects' they run between and fixing Y. The exponential is a terminal object in this category.

Adding further arrows to diagrams amounts to adding further conditions on the products, as the maps from the product to the diagram objects need to factor through any arrows present in the diagram. These added relations, however, are exactly what trips things up in Haskell. The idealized Haskell category does not have even all finite limits. At the core of the issue here is the lack of dependent types: there is no way for the type system to guarantee equations, and hence only the trivial limits - the products - can be guaranteed by the Haskell type checker. In order to get that kind of guarantee, the type checker would need an implementation of dependent types, something that can be simulated in several ways, but is not (yet) an actual part of Haskell. Other languages, however, cover this - most notably Epigram, Agda and Cayenne - of which the latter is much more strongly influenced by constructive type theory and category theory even than Haskell.

The kind of equations that show up in a limit, however, could be thought of as invariants for the type - and thus something that can be tested for. The resulting equations can be plugged into a testing framework such as QuickCheck to verify that the invariants hold under the functions applied.
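To make the last remark concrete, here is a minimal Haskell sketch (my own illustration, not part of the original lecture notes): it treats the pair type as the product in the idealized Haskell category, builds the mediating arrow of a cone over the two-object discrete diagram, and states the commuting-triangle equations as a QuickCheck property. The names `fanout` and `prop_factorsThroughProjections` are ad hoc; `fanout` is the same operation as `(&&&)` from `Control.Arrow`.

```haskell
import Test.QuickCheck

-- The product of a and b in (idealized) Hask is the pair type (a, b),
-- with projections fst and snd.

-- Given a cone (f, g) on an object c over the discrete two-object diagram,
-- this is the mediating arrow into the product.
fanout :: (c -> a) -> (c -> b) -> c -> (a, b)
fanout f g x = (f x, g x)

-- The commuting triangles of the cone, stated as a testable invariant
-- for two sample legs f and g, in the spirit of the QuickCheck remark above.
prop_factorsThroughProjections :: Int -> Bool
prop_factorsThroughProjections x =
  fst (h x) == f x && snd (h x) == g x
  where
    f = (+ 1) :: Int -> Int      -- one leg of the cone
    g = show  :: Int -> String   -- the other leg
    h = fanout f g               -- the mediating arrow into the product

main :: IO ()
main = quickCheck prop_factorsThroughProjections
```

Note that uniqueness of the mediating arrow is not something such a test can establish; that half of the universal property is exactly the part that needs the categorical (or parametricity) argument rather than a finite set of checks.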
#### 2.3 Colimits

The dual concept to a limit is defined using the dual to the cones:

Definition A cocone over a diagram $D:J\to C$ is an object C with arrows $c_j:D_j\to C$ such that for each arrow $\alpha:i\to j$ in J, the following diagram commutes, or in equations, such that $c_j D_\alpha = c_i$. A morphism $f:(C,c_i)\to (C',c'_i)$ of cocones is an arrow $f:C\to C'$ such that each triangle commutes, or in equations, such that $f c_j = c'_j$.

Just as with the category of cones, this yields a category of cocones, that we denote by Cocone(D), and with this we define:

Definition The colimit of a diagram $D:J\to C$ is an initial object in Cocone(D). We denote the colimit by $i_i:D_i\to\lim_{\rightarrow_j}D_j$, so that the map from one of the diagram objects $D_i$ to the colimit object $\lim_{\rightarrow_j}D_j$ is denoted by $i_i$.

Again, the isomorphism results for coproducts and initial objects follow from that for the colimit, and the same proof ends up working for all colimits. And again, we say that a category has (finite) colimits if every (finite) diagram admits a colimit.

#### 2.4 Colimits we've already seen

The initial object is the colimit of the empty diagram. The coproduct is the colimit of the discrete diagram. For both of these, the argument is almost identical to the one in the limits section above.

### 3 Homework

Credit will be given for up to 4 of the 6 exercises.

1. Prove that currying/uncurrying are isomorphisms in a CCC. Hint: the map $f\mapsto\lambda f$ is a map $Hom(C\times A, B)\to Hom(C,[A\to B])$.
2. Prove that in a CCC, $\lambda ev = 1_{[A\to B]}: [A\to B] \to [A\to B]$.
3. What is the limit of a diagram of the shape of the category 2?
4. Is the category of Sets a CCC? Prove it.
5. Is the category of vector spaces a CCC? Prove it.
6. * Implement a typed lambda calculus as an EDSL in Haskell.
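As a companion to exercises 1 and 2, here is a hedged illustration in idealized Hask rather than a proof in an arbitrary CCC. The names `ev`, `lambda` and `unlambda` are my own; `lambda` and `unlambda` are just `curry` and `uncurry` written out, playing the roles of the two directions of the isomorphism $Hom(C\times A, B) \cong Hom(C, [A\to B])$.

```haskell
-- In Hask, (c, a) plays the role of the product C x A, and (a -> b) the role
-- of the exponential [A -> B]; function application is the evaluation arrow.
ev :: (a -> b, a) -> b
ev (f, x) = f x

-- The currying bijection Hom(C x A, B) ~ Hom(C, [A -> B]) and its inverse.
lambda :: ((c, a) -> b) -> (c -> (a -> b))
lambda f c a = f (c, a)

unlambda :: (c -> (a -> b)) -> ((c, a) -> b)
unlambda g (c, a) = g c a

-- Exercise 2 in this notation: lambda ev should act as the identity on
-- [A -> B]; pointwise, lambda ev f a = ev (f, a) = f a.
main :: IO ()
main = do
  print (lambda ev (+ 3) (4 :: Int))             -- 7, the same as (+ 3) 4
  print (unlambda (lambda fst) (1 :: Int, 'x'))  -- 1, the round trip is the identity here
```

Up to the usual caveats about Hask (partiality, seq), `lambda` and `unlambda` are mutually inverse, which is the content of exercise 1; the categorical proof replaces these pointwise checks with the uniqueness clause in the definition of the exponential.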
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 55, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9105414748191833, "perplexity_flag": "middle"}
http://all-science-fair-projects.com/science_fair_projects_encyclopedia/Polarization
# Polarization

This article treats polarization in electrodynamics. Other articles treat polarization in electrostatics, polarization in politics and polarization in psychology. In electrodynamics, polarization is a property of waves, such as light and other electromagnetic radiation. Unlike more familiar wave phenomena such as waves on water or sound waves, electromagnetic waves are three-dimensional, and it is their vector nature that gives rise to the phenomenon of polarization.

## Theory

### Basics: plane waves

The simplest manifestation of polarization to visualize is that of a plane wave, which is a good approximation to most light waves. A plane wave is one where the direction of the magnetic and electric fields are confined to a plane perpendicular to the propagation direction. Simply because the plane is two-dimensional, the electric vector in the plane at a point in space can be decomposed into two orthogonal components. Call these the x and y components (following the conventions of analytic geometry). For a simple harmonic wave, where the amplitude of the electric vector varies in a sinusoidal manner, the two components have exactly the same frequency. However, these components have two other defining characteristics that can differ. First, the two components may not have the same amplitude. Second, the two components may not have the same phase, that is they may not reach their maxima and minima at the same time in the fixed plane we are talking about. By considering the shape traced out in a fixed plane by the electric vector as such a plane wave passes over it (a Lissajous figure), we obtain a description of the polarization state. The following figures show some examples of the evolution of the electric field vector (blue) with time, along with its x and y components (red/left and green/right) and the path made by the vector in the plane (purple):

[Figure panels: Linear, Circular, Elliptical]

Consider first the special case (left) where the two orthogonal components are in phase. In this case the strength of the two components are always equal or related by a constant ratio, so the direction of the electric vector (the vector sum of these two components) will always fall on a single line in the plane. We call this special case linear polarization. The direction of this line will depend on the relative amplitude of the two components. This direction can be in any angle in the plane, but the direction never varies. Now consider another special case (center), where the two orthogonal components have exactly the same amplitude and are exactly ninety degrees out of phase. In this case one component is zero when the other component is at maximum or minimum amplitude. Notice that there are two possible phase relationships that satisfy this requirement. The x component can be ninety degrees ahead of the y component or it can be ninety degrees behind the y component. In this special case the electric vector in the plane formed by summing the two components will rotate in a circle. We call this special case circular polarization.
The direction of rotation will depend on which of the two phase relationships exists. We call these cases right-hand circular polarization and left-hand circular polarization, depending on which way the electric vector rotates. All the other cases, that is where the two components are not in phase and either do not have the same amplitude and/or are not ninety degrees out of phase (e.g. right) are called elliptical polarization because the sum electric vector in the plane will trace out an ellipse (the "polarization ellipse"). ### Incoherent radiation In nature, electromagnetic radiation is often produced by a large ensemble of individual radiators, producing waves independently of each other. This type of light is termed incoherent. In general there is no single frequency but rather a spectrum of different frequencies present, and even if filtered to an arbitrarily narrow frequency range, there may not be a consistent state of polarization. However, this does not mean that polarization is only a feature of coherent radiation. Incoherent radiation may show statistical correlation between the components of the electric field, which can be interpreted as partial polarization. In general it is possible to describe an observed wave field as the sum of a completely incoherent part (no correlations) and a completely polarized part. One may then describe the light in terms of the degree of polarization, and the parameters of the polarization ellipse. ### Parameterizing polarization For ease of visualization, polarization states are often specified in terms of the polarization ellipse, specifically its orientation and elongation. A common parameterization uses the azimuth angle, ψ (the angle between the major semi-axis of the ellipse and the x-axis) and the ellipticity, ε (the ratio of the two semi-axes). Ellipticity is used in preference to the more common geometrical concept of eccentricity, which is of limited physical meaning in the case of polarization. An ellipticity of zero corresponds to linear polarization and an ellipticity of 1 corresponds to circular polarization. The arctangent of the ellipticity, χ = tan−1 ε (the "ellipticity angle"), is also commonly used. An example is shown in the diagram to the right. Full information on a completely polarized state is also provided by the amplitude and phase of oscillations in two components of the electric field vector in the plane of polarization. This representation was used above to show how different states of polarization are possible. The amplitude and phase information can be conveniently represented as a two-dimensional complex vector (the Jones vector): $\mathbf{e} = \begin{bmatrix} a_1 e^{i \theta_1} \\ a_2 e^{i \theta_2} \end{bmatrix} .$ Notice that the product of a Jones vector with a complex number of unit modulus gives a different Jones vector representing the same ellipse, and thus the same state of polarization. The physical electric field, as the real part of the Jones vector, would be altered but the polarization state itself is independent of absolute phase. Note also that the basis vectors used to represent the Jones vector need not represent linear polarization states (i.e. be real). In general any two orthogonal states can be used, where an orthogonal vector pair is formally defined as one having a zero inner product. 
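As a small worked example (my own addition; assuming unit amplitude and the linear x-y basis used above), the states discussed so far correspond to the Jones vectors

$\mathbf{e}_{x} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \qquad \mathbf{e}_{45^\circ} = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ 1 \end{bmatrix}, \qquad \mathbf{e}_{\pm} = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ \pm i \end{bmatrix},$

for horizontal linear, 45° linear, and the two circular polarizations respectively (which sign corresponds to which handedness depends on the phase convention chosen). The two circular vectors are orthogonal in the sense just defined, since $\mathbf{e}_{+}^\dagger \mathbf{e}_{-} = \tfrac{1}{2}\big(1 \cdot 1 + (-i)(-i)\big) = 0$.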
A common choice is left and right circular polarizations, for example to model the different propagation of waves in two such components in circularly birefringent media (see below) or signal paths of coherent detectors sensitive to circular polarization. Reflection of a plane wave from a surface perpendicular to the page. The p-components of the waves are in the page, while the s components are perpendicular to it. Regardless of whether polarization ellipses are represented using geometric parameters or Jones vectors, implicit in the parameterization is the orientation of the coordinate frame. This permits a degree of freedom, namely rotation about the propagation direction. When considering light that is propagating parallel to the surface of the Earth, the terms "horizontal" and "vertical" polarization are often used, with the former being associated with the first component of the Jones vector, or zero azimuth angle. On the other hand, in astronomy the equatorial coordinate system is generally used instead, with the zero azimuth (or position angle, as it is more commonly called in astronomy to avoid confusion with the horizontal coordinate system) corresponding to due north. Another coordinate system frequently used relates to the plane made by the propagation direction and a vector normal to the plane of a reflecting surface. This is illustrated in the diagram to the right. The components of the electric field parallel and perpendicular to this plane are termed "p-like" (parallel) and "s-like" (senkrecht, i.e. perpendicular in German). Alternative terms are pi-polarized, tangential plane polarized, vertically polarized, or a transverse-magnetic (TM) wave for the p-component; and sigma-polarized, sagittal plane polarized, horizontally polarized, or a transverse-electric (TE) wave for the s-component. In the case of partially polarized radiation, the Jones vector varies in time and space in a way that differs from the constant rate of phase rotation of monochromatic, purely polarized waves. In this case, the wave field is likely stochastic, and only statistical information can be gathered about the variations and correlations between components of the electric field. This information is embodied in the coherency matrix: $\mathbf{\Psi} = \left\langle\mathbf{e} \mathbf{e}^\dagger \right\rangle\,$ $=\left\langle\begin{bmatrix} e_1 e_1 & e_1 e_2^* \\ e_2 e_1^* & e_2 e_2 \end{bmatrix} \right\rangle$ $=\left\langle\begin{bmatrix} a_1^2 & a_1 a_2 e^{i (\theta_1-\theta_2)} \\ a_1 a_2 e^{-i (\theta_1-\theta_2)}& a_2^2 \end{bmatrix} \right\rangle$ where angular brackets denote averaging over many wave cycles. Several variants of the coherency matrix have been proposed: the Wiener coherency matrix and the spectral coherency matrix of Richard Barakat measure the coherence of a spectral decomposition of the signal, while the Wolf coherency matrix averages over all time/frequencies. The coherency matrix contains all of the information on polarization that is obtainable using second order statistics. It can be decomposed into the sum of two idempotent matrices, corresponding to the eigenvectors of the coherency matrix, each representing a polarization state that is orthogonal to the other. An alternative decomposition is into completely polarized (zero determinant) and unpolarized (scaled identity matrix) components. In either case, the operation of summing the components corresponds to the incoherent superposition of waves from the two components. 
The latter case gives rise to the concept of the "degree of polarization", i.e. the fraction of the total intensity contributed by the completely polarized component. The coherency matrix is not easy to visualize, and it is therefore common to describe incoherent or partially polarized radiation in terms of its total intensity (I), (fractional) degree of polarization (p), and the shape parameters of the polarization ellipse. An alternative and mathematically convenient description is given by the Stokes parameters, introduced by George Gabriel Stokes in 1852. The relationship of the Stokes parameters to intensity and polarization ellipse parameters is shown in the equations and figure below. $S_0 = I \,$ $S_1 = I p \cos 2\psi \cos 2\chi\,$ $S_2 = I p \sin 2\psi \cos 2\chi\,$ $S_3 = I p \sin 2\chi\,$ Here Ip, 2ψ and 2χ are the spherical coordinates of the polarization state in the three-dimensional space of the last three Stokes parameters. Note the factors of two before ψ and χ corresponding respectively to the facts that any polarization ellipse is indistinguishable from one rotated by 180°, or one with the semi-axis lengths swapped accompanied by a 90° rotation. The Stokes parameters are sometimes denoted I, Q, U and V. The Stokes parameters contain all of the information of the coherency matrix, and are related to it linearly by means of the identity matrix plus the three Pauli matrices: $\mathbf{\Psi} = \frac{1}{2}\sum_{j=0}^3 S_j \mathbf{\sigma_j},\;\mbox{where}$ $\begin{matrix} \mathbf{\sigma_0} &=& \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} & \mathbf{\sigma_1} &=& \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} \\ \\ \mathbf{\sigma_2} &=& \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} & \mathbf{\sigma_3} &=& \begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix} \end{matrix}$ Mathematically, the factor of two relating physical angles to their counterparts in Stokes space derives from the use of second-order moments and correlations, and incorporates the loss of information due to absolute phase invariance. The figure above makes use of a convenient representation of the last three Stokes parameters as components in a three-dimensional vector space. This space is closely related to the Poincaré sphere, which is the spherical surface occupied by completely polarized states in the space of the vector $\mathbf{u} = \frac{1}{S_0}\begin{bmatrix} S_1\\S_2\\S_3\end{bmatrix}.$ All four Stokes parameters can also be combined into the four-dimensional Stokes vector, which can be interpreted as four-vectors of Minkowski space. In this case, all physically realizable polarization states correspond to time-like, future-directed vectors. ### Propagation, reflection and scattering In a vacuum, the components of the electric field propagate at the speed of light, so that the phase of the wave varies in space in time while the polarization state does not. That is: $\mathbf{e}(z+\Delta z,t+\Delta t) = \mathbf{e}(z, t) e^{i k (c\Delta t - \Delta z)},$ where k is the wavenumber and positive z is the direction of propagation. As noted above, the physical electric vector is the real part of the Jones vector. When electromagnetic waves interact with matter, their propagation is altered. If this depends on the polarization states of the waves, then their polarization may also be altered. In many types of media, electromagnetic waves are decomposed into two orthogonal components that encounter different propagation effects. 
A similar situation occurs in the signal processing paths of detection systems that record the electric field directly. Such effects are most easily characterized in the form of a complex 2×2 transformation matrix called the Jones matrix: $\mathbf{e'} = \mathbf{J}\mathbf{e}.$ In general the Jones matrix of a medium depends on the frequency of the waves. For propagation effects in two orthogonal modes, the Jones matrix can be written as: $\mathbf{J} = \mathbf{T} \begin{bmatrix} g_1 & 0 \\ 0 & g_2 \end{bmatrix} \mathbf{T}^{-1},$ where g1 and g2 are complex numbers representing the change in amplitude and phase caused in each of the two propagation modes, and T is a unitary matrix representing a change of basis from these propagation modes to the linear system used for the Jones vectors. For those media in which the amplitudes are unchanged but a differential phase delay occurs, the Jones matrix is unitary, while those affecting amplitude without phase have Hermitian Jones matrices. In fact, since any matrix may be written as the product of unitary and positive Hermitian matrices, any sequence of linear propagation effects, no matter how complex, can be written as the a product of these two basic types of transformations. Paths taken by vectors in the Poincaré sphere under birefringence. The propagation modes (=rotation axes) are shown with red, blue and yellow lines, the initial vectors by thick black lines, and the paths they take by colored ellipses (which represent circles in three dimensions). Media in which the two modes accrue a differential delay are called birefringent. Well known manifestations of this effect appear in optical wave plates/retarders (linear modes) and in Faraday rotation/optical rotation (circular modes). An easily visualized example is one where the propagation modes are linear, and the incoming radiation is linearly polarized at a 45° angle to the modes. As the phase difference starts to appear, the polarization becomes elliptical, eventually changing to purely circular polarization (90° phase difference), then to elliptical and eventually linear polarization (180° phase) with an azimuth angle perpendicular to the original direction, then through circular again (270° phase), then elliptical with the original azimuth angle, and finally back to the original linearly polarized state (360&deg phase) where the cycle begins anew. In general the situation is more complicated and can be characterized as a rotation in the Poincaré sphere about the axis defined by the propagation modes (this is a consequence of the isomorphism of SU(2) with SO(3)). Examples for linear (blue), circular (red) and elliptical (yellow) birefringence are shown in the figure on the left. The total intensity and degree of polarization are unaffected. If the path length in the birefringent medium is sufficient, plane waves will exit the material with a significantly different propagation direction, due to refraction. For example, this is the case with macroscopic crystals of calcite, which present the viewer with two offset, orthogonally polarized images of whatever is viewed through them. It was this effect that provided the first discovery of polarization, by Erasmus Bartholinus in 1669. In addition, the phase shift, and thus the change in polarization state, is usually frequency dependent, which, in combination with dichroism, often gives rise to bright colors and rainbow-like effects. Media in which the amplitude of waves propagating in one of the modes is reduced are called dichroic. 
Devices that block nearly all of the radiation in one mode are known as polarizing filters or simply "polarizers". In terms of the Stokes parameters, the total intensity is reduced while vectors in the Poincaré sphere are "dragged" towards the direction of the favored mode. Mathematically, under the treatment of the Stokes parameters as a Minkowski 4-vector, the transformation is a scaled Lorentz boost (due to the isomorphism of SL(2,C) and the restricted Lorentz group, SO(3,1)). Just as the Lorentz transformation preserves the proper time, the quantity $\det \mathbf{\Psi} = S_0^2 - S_1^2 - S_2^2 - S_3^2$ is invariant within a multiplicative scalar constant under Jones matrix transformations (dichroic and/or birefringent). In birefringent and dichroic media, in addition to writing a Jones matrix for the net effect of passing through a particular path in a given medium, the evolution of the polarization state along that path can be characterized as the (matrix) product of an infinite series of infinitesimal steps, each operating on the state produced by all earlier matrices. In a uniform medium each step is the same, and one may write $\mathbf{J} = Je^{\mathbf{D}},$ where J is an overall (real) gain/loss factor. Here D is a traceless matrix such that $\alpha\mathbf{D}\mathbf{e}$ gives the derivative of $\mathbf{e}$ with respect to z. If D is Hermitian the effect is dichroism, while a unitary matrix models birefringence. The matrix D can be expressed as a linear combination of the Pauli matrices, where real coefficients give Hermitian matrices and imaginary coefficients give unitary matrices. The Jones matrix in each case may therefore be written with the convenient construction: $\begin{matrix} \mathbf{J_b} &=& J_be^{\beta \mathbf{\sigma}\cdot\mathbf{\hat{n}}} \\ \\ \mathbf{J_r} &=& J_re^{\phi i\mathbf{\sigma}\cdot\mathbf{\hat{m}}}, \end{matrix}$ where σ is a 3-vector composed of the Pauli matrices (used here as generators for the Lie group SL(2,C)) and n and m are real 3-vectors on the Poincaré sphere corresponding to one of the propagation modes of the medium. The effects in that space correspond to a Lorentz boost of velocity parameter 2β along the given direction, or a rotation of angle 2φ about the given axis. These transformations may also be written as biquaternions (quaternions with complex elements), where the elements are related to the Jones matrix in the same way that the Stokes parameters are related to the coherency matrix. They may then be applied in pre- and post-multiplication to the quaternion representation of the coherency matrix, with the usual exploitation of the quaternion exponential for performing rotations and boosts taking a form equivalent to the matrix exponential equations above (See: Quaternion rotation). In addition to birefringence and dichroism in extended media, polarization effects describable using Jones matrices can also occur at the (reflective) interface between two materials of different refractive index. These effects are treated by the Fresnel equations. Part of the wave is transmitted and part is reflected, with the ratio depending on the angle of incidence and the angle of refraction. In addition, if the plane of the reflecting surface is not aligned with the plane of propagation of the wave, the polarization of the two parts is altered. In general, the Jones matrices of the reflection and transmission are real and diagonal, making the effect similar to that of a simple linear polarizer.
For unpolarized light striking a surface at a certain optimum angle of incidence known as Brewster's angle, the reflected wave will be completely s-polarized. Certain effects do not produce linear transformations of the Jones vector, and thus cannot be described with (constant) Jones matrices. For these cases it is usual instead to use a 4×4 matrix that acts upon the Stokes 4-vector. Such matrices were first used by Paul Soleillet in 1929, although they have come to be known as Mueller matrices. While every Jones matrix has a Mueller matrix, the reverse is not true. Mueller matrices are frequently used to study the effects of the scattering of waves from complex surfaces or ensembles of particles. ## Polarization in nature, science, and technology ### Observing polarization effects in everyday life All light which reflects off a flat surface is at least partially polarized. You can take a polarizing filter and hold it at 90 degrees to the reflection, and it will be reduced or eliminated. Polarizing filters remove light polarized at 90 degrees to the filter. This is why you can take 2 polarizers and lay them atop one another at 90 degree angles to each other and no light will pass through. Polarized light can be observed all around you if you know what it is and what to look for. (the lenses of Polaroid® sunglasses will work to demonstrate). While viewing through the filter, rotate it, and if linear or elliptically polarized light is present the degree of illumination will change. Polarization by scattering is observed as light passes through our atmosphere. The scattered light often produces a glare in the skies. Photographers know that this partial polarization of scattered light produces a washed-out sky. An easy first phenomenon to observe is at sunset to view the horizon at a 90° angle from the sunset. Another easily observed effect is the drastic reduction in brightness of images of the sky and clouds reflected from horizontal surfaces, which is the reason why polarizing filters are often used in sunglasses. Also frequently visible through polarizing sunglasses are rainbow-like patterns caused by color-dependent birefringent effects, for example in toughened glass (e.g. car windows) or items made from transparent plastics. The role played by polarization in the operation of liquid crystal displays (LCDs) is also frequently apparent to the wearer of polarizing sunglasses, which may reduce the contrast or even make the display unreadable. In fact, the naked human eye is weakly sensitive to polarization, without the need for intervening filters. See: Haidinger's brush. ### Biology Many animals are apparently capable of perceiving the polarization of light, which is generally used for navigational purposes, since the linear polarization of sky light is always perpendicular to the direction of the sun. This ability is very common among the insects, including bees, which use this information to orient their communicative dances. Polarization sensitivity has also been observed in species of octopus, squid, cuttlefish, and mantis shrimp. The rapidly changing, vividly colored skin patterns of cuttlefish, used for communication, also incorporate polarization patterns, and mantis shrimp are known to have polarization selective reflective tissue. Sky polarization can also be perceived by some vertebrates, including pigeons, for which the ability is but one of many aids to homing. 
### Geology The property of (linear) birefringence is widespread in crystalline minerals, and indeed was pivotal in the initial discovery of polarization. In mineralogy, this property is frequently exploited using polarization microscopes, for the purpose of identifying minerals. See pleochroism. ### Chemistry Polarization is principally of importance in chemistry due to the circular dichroism and "optical rotation" (circular birefringence) exhibited by optically active (chiral) molecules. ### Astronomy See: Polarization in astronomy In many areas of astronomy, the study of polarized electromagnetic radiation from outer space is of great importance. Although not usually a factor in the thermal radiation of stars, polarization is also present in radiation from coherent astronomical sources (e.g. hydroxyl or methanol masers), and incoherent sources such as the large radio lobes in active galaxies, and pulsar radio radiation (which may, it is speculated, sometimes be coherent), and is also imposed upon starlight by scattering from interstellar dust. Apart from providing information on sources of radiation and scattering, polarization also probes the interstellar magnetic field via Faraday rotation. The polarization of the cosmic microwave background is being used to study the physics of the very early universe. ### Technology Technological applications of polarization are extremely widespread. Perhaps the most commonly encountered example is the liquid crystal display. All radio transmitters and receivers are intrinsically polarized, special use of which is made in radar. In engineering, the relationship between strain and birefringence motivates the use of polarization in characterizing the distribution of stress and strain in prototypes. Electronically controlled birefringent devices are used in combination with polarizing filters as modulators in fiber optics. Polarizing filters are also used in photography. They can deepen the color of a blue sky and eliminate reflections from windows. Sky polarization has been exploited in the "sky compass ", which was used in the 1950s when navigating near the poles of the Earth's magnetic field when neither the sun nor stars were visible (e.g. under daytime cloud or twilight). It has been suggested, controversially, that the Vikings exploited a similar device (the "sunstone ") in their extensive expeditions across the North Atlantic in the 9th - 11th centuries, before the arrival of the magnetic compass in Europe in the 12th century. Related to the sky compass is the "polar clock ", invented by Charles Wheatstone in the late 19th century. ## References • Principles of Optics, M. Born & E. Wolf, Cambridge University Press, 7th edition 1999, ISBN 0521642221 • Fundamentals of polarized light : a statistical optics approach, C. Brosseau, Wiley, 1998, ISBN 0-471-14302-2 • Polarized Light, Production and Use, William A. Shurcliff, Harvard University Press, 1962. • Optics, Eugene Hecht, Addison Wesley, 4th edition 2002, hardcover, ISBN 0-8053-8566-5 • Polarised Light in Science and Nature, D. Pye, Institute of Physics Publishing, 2001, ISBN 0750306734 • Polarized Light in Nature, G. P. Können, Translated by G. A. 
Beerling, Cambridge University Press, 1985, hardcover, ISBN 0-521-25862-6

## External links

• polarization.com: Polarized Light in Nature and Technology
• Polarized Light Digital Image Gallery: Microscopic images made using polarization effects
• The relationship between photon spin and polarization
• A virtual polarization microscope
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 16, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.906795084476471, "perplexity_flag": "middle"}
http://math.berkeley.edu/~oeding/RTG/index.html
# RTG Workshop

U.C. Berkeley, September 26-29, 2012

Invited lecturers: Giorgio Ottaviani, Jan Draisma, and Andrew Snowden.

The U.C. Berkeley NSF RTG in representation theory, geometry, and combinatorics will host a 4-day workshop September 26-29, 2012, featuring three minicourses (abstracts), as well as talks by some participants. There are no registration fees, but individuals interested in attending are kindly requested to register (here). There is funding available for graduate students and postdocs; persons from under-represented groups are especially encouraged to apply. The deadline to apply for funding is August 13, 2012. If you have any questions, do not hesitate to contact the organizers: Luke Oeding, Noah Giansiracusa.

## Schedule

Wednesday, September 26, 2012

| Time | Location | Event |
|---|---|---|
| 9:00AM - 9:30AM | Alcove | Coffee/Tea |
| 9:30AM - 9:45AM | Sibley Auditorium | Welcome |
| 9:45AM - 11:00AM | Sibley Auditorium | Giorgio Ottaviani: Tensor decomposition and tensor rank (slides) |
| 11:00AM - 12:30PM | Sibley Auditorium | Jan Draisma |
| 12:30PM - 2:00PM | Northside | Lunch |
| 2:00PM - 3:30PM | Sibley Auditorium | Andrew Snowden: Twisted Commutative Algebras (slides) |
| 3:30PM - 4:00PM | Alcove | Coffee/Tea |
| 4:00PM - 4:30PM | Sibley Auditorium | Andrew Critch: Algebraic constraints on MPS-entangled qubits (slides) |
| 4:30PM - 5:00PM | Sibley Auditorium | Carmeliza Navasca: New algorithms based on the reduced least-squares functional of the canonical polyadic decomposition (slides) |
| 5:00PM - 5:30PM | Sibley Auditorium | Shenglong Hu |
| 6:30PM - 8:30PM | Cha-Am | Conference Dinner at Cha-Am, 1543 Shattuck Avenue, Berkeley (website) |

Thursday, September 27, 2012

| Time | Location | Event |
|---|---|---|
| 9:00AM - 9:30AM | Alcove | Coffee/Tea |
| 9:30AM - 11:00AM | Sibley Auditorium | Giorgio Ottaviani: Non abelian apolarity and applications (slides) |
| 11:00AM - 11:30AM | Sibley Auditorium | Galya Dobrovolska |
| 11:30AM - 12:00PM | Sibley Auditorium | Robert Krone |
| 12:00PM - 1:30PM | Northside | Lunch |
| 1:30PM - 2:00PM | Sibley Auditorium | Daniel Litt |
| 2:00PM - 2:30PM | Sibley Auditorium | Gus Schrader |
| 2:30PM - 3:00PM | Sibley Auditorium | Qingchun Ren |
| 3:00PM - 4:00PM | Evans 1015 | Coffee/Tea |
| 4:10PM - 5:00PM | Evans 60 | Jan Draisma |

Friday, September 28, 2012

| Time | Location | Event |
|---|---|---|
| 9:00AM - 9:30AM | Evans 1015 | Coffee/Tea |
| 9:30AM - 11:00AM | Evans 1015 | Giorgio Ottaviani: The complexity of the Matrix Multiplication Algorithm (slides) |
| 11:00AM - 12:30PM | Evans 1015 | Andrew Snowden: Delta Modules (slides) |
| 12:30PM - 2:00PM | Northside | Lunch |
| 2:00PM - 2:30PM | Evans 1015 | Shamil Shakirov: Undulation invariant of plane curves (slides) |
| 2:30PM - 3:00PM | Evans 1015 | Claudiu Raicu: Representation stability for syzygies of line bundles on Segre--Veronese varieties (slides) |
| 3:00PM - 3:30PM | Evans 1015 | Alexandru Constantinescu: Kalai's conjecture and beyond (slides) |
| 3:30PM - 4:00PM | Evans 1015 | Coffee/Tea |
| 4:00PM - 4:30PM | Evans 1015 | Daniel Halpern-Leistner: The derived category of a GIT quotient (slides) |
| 4:30PM - 5:00PM | Evans 1015 | Will Traves: From Pascal's Mystic Hexagon Theorem to secant varieties (slides) |

Saturday, September 29, 2012

| Time | Location | Event |
|---|---|---|
| 9:00AM - 9:30AM | Evans 1015 | Coffee/Tea |
| 9:30AM - 11:00AM | Evans 1015 | Andrew Snowden: Syzygies of Segre embeddings and related varieties (slides) |
| 11:00AM - 12:30PM | Evans 1015 | Jan Draisma: Tensors of bounded rank (slides) |
| 12:30PM - 2:00PM | Northside | Lunch |
| 2:00PM - 2:30PM | Evans 1015 | Pablo Solis: A wonderful embedding of the loop group (slides) |
| 2:30PM - 3:00PM | Evans 1015 | Greg Blekherman: Real Waring Rank (slides) |
| 3:00PM - 3:30PM | Evans 1015 | Jason Morton |
| 3:30PM - 4:00PM | Evans 1015 | Tea/Coffee |
| 4:00PM - 5:00PM | Evans 1015 | Ravi Vakil: Stabilization of discriminants in the Grothendieck ring (slides) |

# Abstracts (we're using mathjax)

Let $$V_1, \dots, V_n$$ be complex vector spaces.
Inside the tensor product $$V_1 \otimes \dots \otimes V_n$$, there are many interesting subvarieties:  the Segre variety of pure tensors; higher subspace varieties; secant and tangent varieties to these varities; etc.  As one varies the $$V_i$$ and $$n$$, the coordinate rings, defining ideals and syzygy modules of these varieties each form an algebraic structure called a Delta-module.  Delta-modules are reasonable objects --- for instance, finitely generated Delta-modules are noetherian and have rational Hilbert series, in a suitable sense --- and thus provide a tool to study these varieties.  I plan to discuss the theory of Delta-modules, other closely related algebraic structures and how they can be used to study varieties like those mentioned above. Systems of polynomial equations in infinitely many variables arise naturally in many areas of applied algebraic geometry. For instance, they may be limits of systems in finitely many variables that describe statistical models where some of the parameters tend to infinity. Typically, these infinite-dimensional systems have a lot of symmetry. In these lectures I will explain how to exploit this symmetry to obtain finiteness results, both theoretical and computational. In particular, I will give a detailed exposition of joint work with Kuttler showing that tensors of bounded (border) rank are defined in bounded degree. Beautiful combinatorics of well-quasi-ordered sets plays a fundamental role. Every matrix of rank $$r$$ can be decomposed as the sum of exactly $$r$$ matrices of rank one. This elementary fact is the starting point of a broad theory that generalizes this decomposition and the concept of rank to tensors (multidimensional matrices) and to polynomials (symmetric tensors). The roots of this theory were developed in the 19th century by Sylvester and others, who studied the notion of apolarity and the Waring decomposition of a polynomial. In recent years tensor decomposition has found many striking applications in several fields like signal processing and phylogenetics. We review classical apolarity, Sylvester's algorithm and its modern generalizations in the setting of vector bundles (non abelian apolarity). We discuss the open problem to compute the rank of a general tensor and uniqueness of tensor decomposition. We apply this machinery to the complexity of Matrix Multiplication with the aid of representation theory. Matrix product states (MPS) are tensors which, as models of 1-dimensional quantum spin systems, approximate the ground states of gapped local Hamiltonians. To classify such states of matter, we are interested in the projective variety of entangled spin systems which can arise as MPS. Algebraically, MPS models bear a similarity to hidden Markov models (HMM), statistical models used in fields as diverse as natural language processing, genomics, and aeronautics. In this talk, I will explain how reparametrization techniques from [C,2012] for binary HMM, including various classical results on trace algebras, aid in computing algebraic constraints on MPS states, and exhibit two interesting hypersurfaces of entangled qubit systems found by this method. (This work is joint with Jason Morton.) We study the reduced least-squares functional of the canonical polyadic decomposition through elimination of one factor matrix. An analysis of the reduced functional gives several equivalent optimization problems, like a Rayleigh quotient or a projection. 
These formulations are the basis of several new algorithms: the centroid projection method for efficient computation of suboptimal solutions and two fixed point iterations for approximating the best rank-one and best rank-R decompositions under certain non-degeneracy conditions. (This is joint work with S. Kindermann.) E-eigenvalues of tensors was introduced in 2005. For a regular tensor, its E-eigenvalues are exactly the roots of its E-characteristic polynomial. The degree of the E-characteristic polynomials of generic tensors can be computed through Chern classes. The coefficients of the E-characteristic polynomial are invariants under the action of the orthogonal linear group. The constant term of the E-characteristic polynomial has a resultant formula. For tensors of dimension two, explicit formulae for the coefficients of the E-characteristic polynomials are given. Especially, the leading coefficient is a power of a sum of squares. I will derive a result of Brion on Kronecker coefficients from a computation of Fourier-Malgange transform of some local systems on projective space corresponding to representations of the symmetric group. I will also indicate an elementary proof of this result of Brion. A symmetric ideal in the polynomial ring of a countable number of variables is an ideal that is invariant under any permutations of the variables. While such ideals are usually not finitely generated, Aschenbrenner and Hillar proved that such ideals are finitely generated if you are allowed to apply permutations to the generators, and in fact there is a notion of Gröbner bases of these ideals. With Chris Hillar and Anton Leykin, I am working on an algorithm for calculating such Gröbner bases, and implementing this algorithm in Macaulay2. We are also exploring how these methods can be generalized to other group actions, and possible applications of these algorithms. I'll describe the geometry of the space of line bundles on plane curves satisfying various cohomological conditions. In particular, I'll describe a generalization of the 2000 result of Beauville that the relative Jacobian of the universal smooth degree d plane curve is unirational -- I'll show that all "cohomological loci" on the relative Jacobian are unirational as well. I'll also recover Beauville's result via different methods. We use these methods to describe, for example, the space of degree s line bundles on degree d curves globally generated by 2 sections, that is, maps from plane curves to $$\mathbb{P}^1$$. Time permitting, I'll discuss other "motivic" invariants of these cohomological loci, e.g. Poincare and Hodge polynomials. Kummer varieties are quotients of abelian varieties by the map sending an element to its inverse. Over the complex numbers, the second order theta functions embed a Kummer variety in $$\mathbb{P}^{2^g-1}$$. In this talk we focus on the special case of 3-dimensional Kummers. It turns out that Kummer threefolds in $$\mathbb{P}^7$$ can be described as the singular locus of a certain quartic hypersurface called the Coble quartic. We obtain an explicit expression for this quartic polynomial, with its coefficients expressed in terms of second order theta constants. We'll also discuss the universal Kummer threefold, a 9-dimensional variety that represents the total space of the 6-dimensional family of Kummer threefolds, and present some results on the equations defining this variety. This is the second part of our discussion on universal Kummer threefolds. 
The coefficients of Coble quartics lie in $$\mathbb{P}^{14}$$. The Zariski closure is the Gopel variety. It can be reembedded into $$\mathbb{P}^{134}$$, where elements of $$Sp_6(F_2)$$ act by signed permutations in the coordinates. In this way, the Gopel variety becomes the intersection of $$\mathbb{P}^{14}$$ and a toric variety inside $$\mathbb{P}^{134}$$. This makes it more convenient for studying the tropicalization of the Gopel variety. Finally, this leads to a version of universal Kummer threefold in $$\mathbb{P}^7\times \mathbb{P}^8$$, where the moduli part is parametrized by 7 points in $$\mathbb{P}^2$$. A classical problem in algebraic geometry is to determine, whether a given plane curve has tangent lines of multiplicity four or higher. A.Cayley proved in 19th century that there exists an invariant of degree 6(r-3)(3r-2) that vanishes if and only if such lines exist. Because of extremely high degree, he did not give any explicit formula for these invariants. Using modern methods, we are now able to give an explicit formula for undulation invariants for quartic (r=4) and quintic (r=5) curves. . I will discuss the multivariate version of Church and Farb's notion of representation stability and explain how it applies to the syzygies of line bundles on products of projective spaces. I will give bounds for when stabilization occurs and show that these bounds are sometimes sharp by describing the linear syzygies for a family of line bundles on Segre varieties. Part of the motivation for this work comes from the fact that Ein and Lazarsfeld's conjecture on the asymptotic vanishing of syzygies for arbitrary varieties reduces to the case of line bundles on a product of (at most three) projective spaces. Starting from an unpublished conjecture of Kalai and from a conjecture of Eisenbud, Green and Harris, we study several problems relating h-vectors of Cohen-Macaulay, flag simplicial complexes and face vectors of simplicial complexes. We will first sketch a proof of Kalai's conjecture for vertex decomposable complexes. We then state two new conjectures which arise naturally from this proof, and provide some evidence in their support. These results are part of a joint work with Matteo Varbaro, cf. ArXiv:1004.0170 The derived category of an algebraic variety encodes all of the information about sheaf cohomology and lots of topological information. For a variety $$X$$ acted on by a reductive group, one can consider the derived category of equivariant coherent sheaves on $$X$$, or the derived category of a GIT quotient of $$X$$. I will describe a new method of studying the derived category of a GIT quotient by identifying it with a subcategory the equivariant category. This is analogous to a classical description of the cohomology of a GIT quotient due to F. Kirwan, L. Jeffrey, and others. I will apply this technique to produce examples of non-isomorphic varieties which have equivalent derived categories, and to produce examples of automorphisms of derived categories which don't come from automorphisms of the variety itself. I describe the wonderful compacti cation of loop groups. These compacti cations are obtained by adding normal-crossing boundary divisors to the group LG of loops in a reductive group $$G$$ (or more accurately, to the semi-direct product $$\mathbb{C}^* \times LG$$ in a manner equivariant for the left and right $$\mathbb{C}^* \times LG$$-actions. 
The analogue for a torus group $$T$$ is the theory of toric varieties; for an adjoint group $$G$$, this is the wonderful compactications of De Concini and Procesi. The loop group analogue is suggested by work of Faltings in relation to the compacti cation of moduli of $$G$$-bundles over nodal curves. Using the loop analogue one can construct a 'wonderful' completion of the moduli stack of $$G$$-bundles over nodal curves. I'll discuss an application of secants to Segre varieties in a constructive geometry problem that has roots in Pascal's Mystic Hexagon Theorem. While symmetric tensor decomposition is usually studied over the complex numbers, the situation for real symmetric tensors is more complicated. I will discuss a proof of the conjecture of Comon and Ottaviani that typical real Waring ranks of bivariate forms of degree $$d$$ take all integer values between $$\lfloor \frac{d+2}{2}\rfloor$$ and $$d$$. That is for all $$d$$ and all $$\lfloor \frac{d+2}{2}\rfloor \leq m \leq d$$ there exists a bivariate form $$f$$ such that $$f$$ can be written as a linear combination of $$m$$ $$d$$-th powers of real linear forms and no fewer, and additionally all forms in an open neighborhood of $$f$$ also possess this property. Equivalently, for all $$d$$ and any $$\lfloor \frac{d+2}{2}\rfloor \leq m \leq d$$ there exists a symmetric real bivariate tensor $$t$$ of order $$d$$ such that $$t$$ can be written as a linear combination of $$m$$ symmetric real tensors of rank 1 and no fewer, and additionally all tensors in an open neighborhood of $$t$$ also possess this property. arXiv:1205.3257 I'll talk about some potential applications of high-dimensional and infinite-dimensional algebraic geometry arising in the study of quantum many-body systems. As with Matrix Product States, we can use what we've learned in algebraic statistics to say something about the models currently used to study such systems. The importance of the thermodynamic limit provides strong motivation for understanding the infinite-dimensional case. We consider the limiting behavior'' of discriminants, by which we mean informally the closure of the locus in some parameter space of some type of object where the objects have certain singularities. We are looking for the kind of stabilization of algebraic structure as the "problem gets large", of the sort you will have seen in the lectures of Draisma and Snowden. We focus on the space of partially labeled points on a variety $$X$$, and linear systems on $$X$$. These are connected --- we use the first to understand the second. We describe their classes in the "ring of motives", as the number of points gets large, or as the line bundle gets very positive. They stabilize in an appropriate sense, and their stabilization can be described in terms of the motivic zeta values. The results extend parallel results in both arithmetic and topology. I will also present a conjecture (on motivic stabilization of symmetric powers'') suggested by our work. Although it is true in important cases, Daniel Litt has shown that it contradicts other hoped-for statements. This is joint work with Melanie Wood. (This is less technical than it sounds, and I will define everything from scratch.) Organized by: Luke Oeding, Noah Giansiracusa and the UC Berkeley RTG in Representation Theory, Geometry, and Combinatorics.
http://www.ma.utexas.edu/mediawiki/index.php/Semilinear_equations
# Semilinear equations ### From Mwiki An equation is called semilinear if it consists of the sum of a well understood linear term plus a lower order nonlinear term. For elliptic and parabolic equations, the two effective possibilities for the linear term is to be either the fractional Laplacian or the fractional heat equation. Some equations which technically do not satisfy the definition above are still considered semilinear. For example evolution equations of the form $u_t + (-\Delta)^s u + H(x,u,Du) = 0$ can be thought of as semilinear equations even if $s<1/2$. ## Some common semilinear equations ### Stationary equations with zeroth order nonlinearity Adding a zeroth order term to the right hand side to either the Laplace equation or the fractional Laplace equation is probably the theme for which the largest number of papers have been written on PDEs. $(-\Delta)^s u = f(u).$ If $f$ is $C^\infty$ and some initial regularity can be shown to the solution $u$ (like $L^\infty$), then the solution $u$ will also be $C^\infty$, which can be shown by a standard bootstrapping. Natural question to ask about this type of equations are about the existence of nontrivial global solutions that vanish at infinity, positivity of solutions, symmetries, etc... Depending on the structure of the nonlinearity $f(u)$, different results are obtained [1] [2] [3] [4] [5] [6] [7]. ### Reaction diffusion equations This general class refers to the equations we get by adding a zeroth order term to the right hand side of a heat equation. For the fractional case, it would look like $u_t + (-\Delta)^s u = f(u).$ The case $f(u) = u(1-u)$ corresponds to the KPP/Fisher equation. For this and other related models, it makes sense to study solutions restricted to $0 \leq u \leq 1$. The research centers around traveling waves, their stability, limits, asymptotic behavior [8], etc... Solutions are trivially $C^\infty$ so there is no issue about regularity. ### Burgers equation with fractional diffusion It refers to the parabolic equation for a function on the real line $u:[0,+\infty) \times \R \to \R$, $u_t + u \ u_x + (-\Delta)^s u = 0$ The equation is known to be well posed if $s \geq 1/2$ and to develop shocks if $s<1/2$ [9]. Still, if $s \in (0,1/2)$, the solution regularizes for large enough times[10][11]. It refers to the parabolic equation for a scalar function on the plane $\theta:[0,+\infty) \times \R^2 \to \R$, $\theta_t + u \cdot \nabla \theta + (-\Delta)^s \theta = 0$ where $u = R^\perp \theta$ (and $R$ is the Riesz transform). The equation is well posed if $s \geq 1/2$. The well posedness in the case $s < 1/2$ is a major open problem. It is believed that solving the supercritical SQG equation could possibly help understand 3D Navier-Stokes equation. ### Conservation laws with fractional diffusion (aka "fractal conservation laws") It refers to parabolic equations of the form $u_t + \mathrm{div } F(u) + (-\Delta)^s u = 0.$ The Cauchy problem is known to be well posed classically if $s > 1/2$ [12]. For $s<1/2$ there are viscosity solutions that are not $C^1$. The critical case $s=1/2$ appears not to be written anywhere. However, it can be solved following the same method as for the Hamilton-Jacobi equations with fractional diffusion (below) [13] or the modulus of continuity approach [11]. 
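The fractional diffusion term $(-\Delta)^s$ that appears in all of the equations above is easy to experiment with numerically on a periodic domain, since it acts as multiplication by $|k|^{2s}$ in Fourier space. The sketch below is not part of the original wiki page: it time-steps the fractional Burgers equation with a crude forward-Euler pseudo-spectral scheme, and the grid size, exponent $s$, time step and initial data are arbitrary choices made only for illustration (this is not a production solver).

```python
import numpy as np

# Minimal periodic pseudo-spectral sketch of the fractional Burgers equation
#   u_t + u u_x + (-Delta)^s u = 0,
# using the Fourier symbol |k|^(2s) for the fractional Laplacian.
# Illustrative only: forward Euler with a small time step, no dealiasing.

N, s, dt, steps = 256, 0.75, 1e-4, 2000
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
k = np.fft.fftfreq(N, d=1.0 / N)           # integer wavenumbers on [0, 2*pi)
symbol = np.abs(k) ** (2 * s)              # Fourier multiplier of (-Delta)^s

u = np.sin(x) + 0.5 * np.cos(2 * x)        # smooth, arbitrary initial data
for _ in range(steps):
    u_hat = np.fft.fft(u)
    u_x = np.real(np.fft.ifft(1j * k * u_hat))        # spectral derivative
    frac_lap = np.real(np.fft.ifft(symbol * u_hat))   # fractional diffusion
    u = u + dt * (-u * u_x - frac_lap)

print("max |u| after time", steps * dt, "=", np.abs(u).max())
```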
### Hamilton-Jacobi equation with fractional diffusion It refers to the parabolic equation $u_t + H(\nabla u) + (-\Delta)^s u = 0.$ For Lipschitz initial data, the Cauchy problem always has a viscosity solution which is Lipschitz in space.[12] The problem is well posed classically if $s \geq 1/2$. For $s<1/2$ there are viscosity solutions that are not $C^1$. The subcritical case $s>1/2$ can be solved with classical bootstrapping.[12] The critical case $s=1/2$ was solved using the regularity results for drift-diffusion equations.[13] ## References 1. ↑ Ou, Biao; Li, Congming; Chen, Wenxiong (2006), "Classification of solutions for an integral equation", 59 (3): 330–343, doi:10.1002/cpa.20116, ISSN 0010-3640 2. ↑ Cabre, X.; Sire, Yannick (2010), "Nonlinear equations for fractional Laplacians I: Regularity, maximum principles, and Hamiltonian estimates", Arxiv preprint arXiv:1012.0867 3. ↑ Cabré, Xavier; Cinti, E. (2010), "Energy estimates and 1-D symmetry for nonlinear equations involving the half-Laplacian", Discrete and Continuous Dynamical Systems (DCDS-A) 28 (3): 1179–1206 4. ↑ Frank, R.L.; Lenzmann, E. (2010), "Uniqueness and Nondegeneracy of Ground States for $(-\Delta)^s Q+ Q-Q^{\alpha+1}= 0$ in $\R$", Arxiv preprint arXiv:1009.4042 5. ↑ Felmer, P.; Quaas, A.; Tan, J., Positive Solutions Of Nonlinear Schrodinger Equation With The Fractional Laplacian. 6. ↑ Sire, Yannick; Valdinoci, E. (2009), "Fractional Laplacian phase transitions and boundary reactions: a geometric inequality and a symmetry result", Journal of Functional Analysis (Elsevier) 256 (6): 1842–1864, ISSN 0022-1236 7. ↑ Palatucci, G.; Valdinoci, E.; Savin, O. (2011), "Local and global minimizers for a variational energy involving a fractional norm", Arxiv preprint arXiv:1104.1725 8. ↑ Cabré, Xavier; Roquejoffre, Jean-Michel (2009), "Propagation de fronts dans les équations de Fisher-KPP avec diffusion fractionnaire", Comptes Rendus Mathématique. Académie des Sciences. Paris 347 (23): 1361–1366, doi:10.1016/j.crma.2009.10.012, ISSN 1631-073X 9. ↑ Kiselev, Alexander; Nazarov, Fedor; Shterenberg, Roman (2008), "Blow up and regularity for fractal Burgers equation", Dynamics of Partial Differential Equations 5 (3): 211–240, ISSN 1548-159X 10. ↑ Chan, Chi Hin; Czubak, Magdalena; Silvestre, Luis (2010), "Eventual regularization of the slightly supercritical fractional Burgers equation", Discrete and Continuous Dynamical Systems. Series A 27 (2): 847–861, doi:10.3934/dcds.2010.27.847, ISSN 1078-0947 11. ↑ 11.0 11.1 Kiselev, A. (to appear), "Nonlocal maximum principles for active scalars", Advances in Mathematics 12. ↑ 12.0 12.1 12.2 Droniou, Jérôme; Imbert, Cyril (2006), "Fractal first-order partial differential equations", Archive for Rational Mechanics and Analysis 182 (2): 299–331, doi:10.1007/s00205-006-0429-2, ISSN 0003-9527 13. ↑ 13.0 13.1 Silvestre, Luis (2011), "On the differentiability of the solution to the Hamilton-Jacobi equation with critical fractional diffusion", Advances in Mathematics 226 (2): 2020–2039, doi:10.1016/j.aim.2010.09.007, ISSN 0001-8708
http://mathoverflow.net/questions/117147?sort=newest
## Confused about orbits ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) I am trying to apply the main theorem of this paper to a certain kind of graph and keep getting confused. The theorem uses $rank(Aut\Gamma)$ which is defined as "the number of $Aut \Gamma$ orbits on the set $V(\Gamma) \times V(\Gamma)$". ($\Gamma$ is a graph). Now, my graph $\Gamma$ is built in this way: take a clique of $c$ vertices, labelled $\{1,2,\ldots,c\}$ and add $s=\binom{c-1}{2}$ additional vertices, each of which is connected to a different pair of two vertices from $\{2,\ldots,c\}$. Question: What is $rank(Aut\Gamma)$? My answer is 9 because the automorphism group is (apparently) $S_{c-1} \times S_{s}$ and there are 3 orbits for it. However, when I plug 9 into the theorem I get a contradiction with the rest of it (which involves objects I have a better grasp of so I am pretty sure I got the rest right). Therefore, I suspect that my answer to the above question is wrong and I am in dire need of some enlightenment. EDIT: Let's assume $c \geq 4$ to rule out sporadic cases. - You count three orbits of the group on $V(\Gamma)$. And on $V(\Gamma)\times V(\Gamma)$? – Mariano Suárez-Alvarez Dec 24 at 15:38 Unless I misunderstood the description of the graphs, your automorphism group seems to be wrong, for example when $c=3$, the automorphism group of $\Gamma$ is $S_2\times S_2$, not $S_2\times S_1$ as your formula posits and for $c\geq 4$, the automorphism group seems to be just $S_{c-1}$. – ARupinski Dec 24 at 15:49 @Mariano: Like I said, I get 9, but it seems to be wrong... – Felix Goldberg Dec 24 at 15:56 I don't understand why you say that «you count 3 orbits but you plug in $9$». – Mariano Suárez-Alvarez Dec 24 at 15:59 @ARupinski: Let's assume $c \geq 4$ (I'll update the question too). – Felix Goldberg Dec 24 at 16:02 show 3 more comments ## 2 Answers The automorphism group of this graph is $S_{c-1}$. Note that the vertex 1 in your clique cannot be moved anywhere (look at the degrees). On the other hand, a permutation of the remaining vertices in {2,...,c} induces a permutation on these $s$ vertices $P$. The 14 orbitals (aka orbits on $V\times V$) are as follows: 1. {(1,1)} 2. {(x,x) | x in {2..c}} 3. {(p,p) | p in P} 4. {(x,y) | x,y in {2..c}, x not equal to y} 5. {(p,q) | p,q in P, the corresponding pairs of elements of {2..c} do not intersect} 6. {(p,q) | p,q in P, the corresponding pairs of elements of {2..c} intersect in one element} 7. {(1,x) | x in {2..c}} 8. {(x,1) | x in {2..c}} 9. {(1,p) | p in P} 10. {(p,1) | p in P} 11. {(x,p) | x in {2..c}, p in P, x in p} 12. {(p,x) | x in {2..c}, p in P, x in p} 13. {(x,p) | x in {2..c}, p in P, x not in p} 14. {(p,x) | x in {2..c}, p in P, x not in p} - Thanks! That's very nice. – Felix Goldberg Dec 24 at 19:07 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. First, let me point out that I think ARupinski is right and $\rm{Aut}(\Gamma)$ is simply $S_{c-1}$. (This could be relevant.) Next, note that there are at least 11 orbits on $V(\Gamma)\times V(\Gamma)$. There is an obvious partition into 9 parts (coming from the 3 orbits on $V(\Gamma)$) each belonging to a different orbit, but 2 of these split into two orbits, a diagonal part and a non-diagonal part. For example $(2,2)$ is in a different orbit than $(2,3)$. In fact, there are even more orbits than this. 
For example, the part corresponding to $s\times s$ splits even further : ({1,2},{2,3}) is not in the same orbit as ({1,2},{3,4}). The first is an ordered pair of vertices having a neighbour in common, while the second is a pair of vertices with no neighbour in common. EDIT: As Dima Pasechnik explained, two of the other parts also split further in two, for a total of 14 orbits. -
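Since the accepted answer reduces everything to the action of $S_{c-1}$ on ordered pairs of vertices, the orbital count is easy to confirm by brute force for a small case. The following sketch is my own check, not from the thread: it takes the hypothetical value $c = 5$, builds the vertex set as the clique $\{1,\ldots,c\}$ plus one extra vertex for each pair from $\{2,\ldots,c\}$, and counts the orbits of $S_{c-1}$ on $V(\Gamma)\times V(\Gamma)$. It should print 14, matching the list above.

```python
from itertools import combinations, permutations, product

c = 5
clique = list(range(1, c + 1))
# label each extra vertex by the pair of clique vertices it is attached to
pairs = [frozenset(p) for p in combinations(range(2, c + 1), 2)]
V = clique + pairs

def act(g, v):
    """Apply a permutation g of {2,...,c} (vertex 1 is fixed) to a vertex."""
    if v == 1:
        return 1
    if isinstance(v, int):
        return g[v]
    return frozenset(g[x] for x in v)

# the full group S_{c-1}, acting on {2,...,c}
perms = [dict(zip(range(2, c + 1), images))
         for images in permutations(range(2, c + 1))]

orbits = []
seen = set()
for v, w in product(V, repeat=2):
    if (v, w) in seen:
        continue
    orbit = {(act(g, v), act(g, w)) for g in perms}
    seen |= orbit
    orbits.append(orbit)

print(len(orbits))  # expected: 14 for c = 5
```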
http://math.stackexchange.com/questions/53698/determining-the-peak-speed-of-an-accelerating-decelerating-body-between-two-poin
# Determining the peak speed of an accelerating/decelerating body between two points

A point moves from point A to point B. Both points are known, and so is the distance between them. It starts with a known speed of $V_A$, accelerates (with known constant acceleration $a$) until it reaches $V_X$ (unknown), then starts decelerating (with known constant acceleration $-a$) until it reaches the final point B with a known speed of $V_B$. So we know: $V_A$, $V_B$, $a$, and the distance between A and B. How can we find $V_X$?

- Two different uses of $a$ here. Perhaps $V_A$ and $V_B$ would be better. – Henry Jul 25 '11 at 16:58
- Did it. Thanks. – St0rM Jul 25 '11 at 17:01

## 2 Answers

We write down some equations, in a semi-mechanical way, and then solve them. Let $D$ be the (known) total distance travelled. It is natural to introduce some additional variables. The intuition probably works best if we use time. So let $s$ be the length of time that we accelerated, and $t$ the length of time that we decelerated. The (average) acceleration is the change in velocity, divided by elapsed time. Thus by looking at the acceleration and deceleration phases separately, we obtain the equations $$a=\frac{V_X-V_A}{s} \qquad \text{and}\qquad a=\frac{V_X-V_B}{t}\qquad\qquad \text{(Equations 1)}$$ The net displacement while accelerating under constant acceleration is the average velocity times elapsed time. So while accelerating we covered a distance $(1/2)(V_X+V_A)s$. In the same way, we can see that the total distance covered while decelerating is $(1/2)(V_X+V_B)t$. But the total distance covered was $D$. If we multiply by $2$ to clear fractions, we obtain $$(V_X+V_A)s +(V_X+V_B)t=2D \qquad\qquad \text{(Equation 2)}$$ Note that from Equations 1 we have $$s=\frac{V_X-V_A}{a} \qquad \text{and}\qquad t=\frac{V_X-V_B}{a}\qquad\qquad \text{(Equations 3)}$$ Substitute for $s$ and $t$ in Equation 2, and multiply through by $a$ to clear denominators. We obtain $$(V_X+V_A)(V_X-V_A) +(V_X+V_B)(V_X-V_B)=2aD.$$ The left-hand side simplifies to $2V_X^2 -V_A^2-V_B^2$. We conclude that $$2V_X^2=2aD+V_A^2+V_B^2$$ and therefore $$V_X=\sqrt{aD+(V_A^2+V_B^2)/2}.$$

Comment: The algebra turned out to be pretty simple. With other choices of variable, it might have seemed more complicated. An important simplifying device was to choose notation that treats the acceleration phase and the deceleration phase symmetrically. That saved half the work. Moreover, it was the preserved symmetry that made the equations "clean" and easy to work with. Symmetry is your friend.

Hint: You can calculate the time to reach a speed of $V_x$ and a further time to reach $V_B$, so you can find the total distance traveled as a function of $V_x$. Set this distance equal to $|B-A|$ and solve for $V_x$.

- Tried that, but it didn't work; I get an identity. Maybe I made some mistake, my math is not strong enough... – St0rM Jul 25 '11 at 17:06
- @StorM: Fine: but why not edit your question to show what you have done so far? – Henry Jul 25 '11 at 18:22
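For completeness, the closed form derived in the accepted answer is trivial to evaluate numerically; the sketch below just wraps it in a function, and the numeric inputs are made up purely for illustration.

```python
import math

# Direct use of the closed-form result derived above:
#   V_X = sqrt(a*D + (V_A^2 + V_B^2) / 2)
def peak_speed(v_a: float, v_b: float, a: float, distance: float) -> float:
    return math.sqrt(a * distance + (v_a ** 2 + v_b ** 2) / 2)

# Hypothetical numbers: start at 2 m/s, end at 4 m/s, a = 1 m/s^2, D = 30 m.
print(peak_speed(2.0, 4.0, 1.0, 30.0))  # ~6.32 m/s
```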
http://stats.stackexchange.com/questions/7853/justification-of-one-tailed-hypothesis-testing/7861
Justification of one-tailed hypothesis testing I understand two-tailed hypothesis testing. You have $H_0 : \theta = \theta_0$ (vs. $H_1 = \neg H_0 : \theta \ne \theta_0$). The p-value is the probability that $\theta$ generates data at least as extreme as what was observed. I don't understand one-tailed hypothesis testing. Here, $H_0 : \theta\le\theta_0$ (vs. $H_1 = \neg H_0 : \theta > \theta_0$). The definition of p-value shouldn't have changed from above: it should still be the probability that $\theta$ generates data at least as extreme as what was observed. But we don't know $\theta$, only that it's upper-bounded by $\theta_0$. So instead, I see texts telling us to assume that $\theta = \theta_0$ (not $\theta \le \theta_0$ as per $H_0$) and calculate the probability that this generates data at least as extreme as what was observed, but only on one end. This seems to have nothing to do with the hypotheses, technically. Now, I understand that this is frequentist hypothesis testing, and that frequentists place no priors on their $\theta$s. But shouldn't that just mean the hypotheses are then impossible to accept or reject, rather than shoehorning the above calculation into the picture? - – robin girard Mar 13 '11 at 6:24 3 Answers That's a thoughtful question. Many texts (perhaps for pedagogical reasons) paper over this issue. What's really going on is that $H_0$ is a composite "hypothesis" in your one-sided situation: it's actually a set of hypotheses, not a single one. It is necessary that for every possible hypothesis in $H_0$, the chance of the test statistic falling in the critical region must be less than or equal to the test size. Moreover, if the test is actually to achieve its nominal size (which is a good thing for achieving high power), then the supremum of these chances (taken over all the null hypotheses) should equal the nominal size. In practice, for simple one-parameter tests of location involving certain "nice" families of distributions, this supremum is attained for the hypothesis with parameter $\theta_0$. Thus, as a practical matter, all computation focuses on this one distribution. But we mustn't forget about the rest of the set $H_0$: that is a crucial distinction between two-sided and one-sided tests (and between "simple" and "composite" tests in general). This subtly influences the interpretation of results of one-sided tests. When the null is rejected, we can say the evidence points against the true state of nature being any of the distributions in $H_0$. When the null is not rejected, we can only say there exists a distribution in $H_0$ which is "consistent" with the observed data. We are not saying that all distributions in $H_0$ are consistent with the data: far from it! Many of them may yield extremely low likelihoods. - I see the p-value as the maximum probability of a type I error. If $\theta \ll \theta_0$, the probability of a type I error rate may be effectively zero, but so be it. When looking at the test from a minimax perspective, an adversary would never draw from deep in the 'interior' of the null hypothesis anyway, and the power should not be affected. For simple situations (the t-test, for example) it is possible to construct a test with a guaranteed maximum type I rate, allowing such one sided null hypotheses. - You would use a one-sided hypothesis test if only results in one direction are supportive of the conclusion you are trying to reach. Think of this in terms of the question you are asking. 
Suppose, for example, you want to see whether obesity leads to increased risk of heart attack. You gather your data, which might consist of 10 obese people and 10 non-obese people. Now let's say that, due to unrecorded confounding factors, poor experimental design, or just plain bad luck, you observe that only 2 of the 10 obese people have heart attacks, compared to 8 of the non-obese people. Now if you were to conduct a 2-sided hypothesis test on this data, you would conclude that there was a statistically significant association (p ~ 0.02)between obesity and heart attack risk. However, the association would be in the opposite direction to that which you were actually expecting to see, hence the test result would be misleading. (In real life, an experiment that produced such a counterintuitive result could lead to further questions that are interesting in themselves: for example, the data collection process might need to be improved, or there might be previously-unknown risk factors at work, or maybe conventional wisdom is simply mistaken. But these issues aren't really related to the narrow question of what sort of hypothesis test to use.) -
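whuber's point that the null is composite and that the size of the test is governed by the supremum over $H_0$ can be seen in a small simulation. The sketch below is my own illustration, using a one-sided $z$-test with assumed parameters $n=25$, $\sigma=1$ and $\alpha=0.05$: it estimates the rejection rate for several values of $\mu$ inside $H_0:\mu\le 0$, and the rate only approaches $\alpha$ at the boundary value $\mu=0$.

```python
import numpy as np

# Monte Carlo sketch: for the composite null H0: mu <= 0 tested with a
# one-sided z-test (n = 25, sigma = 1, alpha = 0.05), every distribution in
# H0 has rejection probability <= alpha, with the supremum attained at mu = 0.
rng = np.random.default_rng(0)
n, reps = 25, 100_000
z_crit = 1.6449                      # 0.95 quantile of the standard normal

for mu in (-1.0, -0.5, -0.1, 0.0):   # all of these belong to H0
    xbar = rng.normal(mu, 1.0, size=(reps, n)).mean(axis=1)
    z = xbar * np.sqrt(n)            # test statistic when sigma = 1
    print(f"mu = {mu:5.2f}   rejection rate = {np.mean(z > z_crit):.4f}")
# rates are near 0 for mu well below 0 and approach 0.05 only at mu = 0
```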
http://mathoverflow.net/questions/110083/understanding-discrete-cosine-transformation/110124
## Understanding Discrete Cosine Transformation

I'm currently working on some software and a key component is the 2D DCT. But my question is more general, as I'm trying to understand the DCT in general, let's say from an engineer's point of view. For a start, I know that there are 8 types of DCT, and that many authors use different notation, sometimes even different parameterization, but that doesn't matter as I'm not going to implement the DCT, I only want to understand it. I will stick to the formula taken from http://www.cs.cf.ac.uk/Dave/Multimedia/node231.html. The DCT is defined as follows: $$F(u) = \left( \frac{2}{N} \right)^{\frac{1}{2}} \sum_{i=0}^{N-1} \Lambda(i)\cos\left[ \frac{\pi\cdot u}{2N}(2i+1) \right] f(i)$$ Here $N$ is the number of samples, $i$ is the index of a particular sample and $f(i)$ its value. $\Lambda$ is only a weight coefficient, so it does not affect the principle of the DCT. What I'm struggling to understand are the values $u$ and $F(u)$. I know that the DCT transforms data to the frequency domain, but I have not found the meaning of these values. My guess is that $u$ is a particular frequency and $F(u)$ is the amount of this frequency in the data, e.g. for a signal with an 8 kHz component (for example a whistle), the DCT would return $0$ for all values of $u$ and some large value for $u=8000$ (this is an ideal case, I know it is an oversimplified example). I've also deduced that the maximum frequency in the DCT result is limited by the sampling, e.g. for sound sampled at 44.1 kHz there won't be any coefficient for a frequency higher than 22.05 kHz, due to the Nyquist criterion. So are my conclusions right, or completely off track? Thanks in advance.

- Your question would probably fit one of the other sites mentioned in the FAQ. – Douglas Zare Oct 19 at 12:53

## 2 Answers

You're very close. $u$ corresponds to frequency and $|F(u)|$ is the frequency content in the signal. Let me explain the relation between the variable $u$ and the frequency it corresponds to. If a signal is sampled with time period $T_{p}$, then the maximum frequency it can successfully represent is $1/(2T_{p})$. Here $f=1/T_{p}$ is the sampling rate, which in the case of an audio signal is typically 44.1 kHz (so that it can represent a 22 kHz signal, which is close to the hearing limit of the human ear). Which frequencies it can resolve then depends on $N$, i.e. the number of samples that you take. The frequencies take the discrete values $0, \frac{f}{2N}, \frac{2f}{2N}, \ldots, \frac{(N-1)f}{2N}$, and these frequencies correspond to $u=0,1,2,\ldots,N-1$. Frequencies beyond that alias back to one of those frequencies. So the more samples you take, the more frequencies you can resolve.

This question would better suit being asked on the dsp stack I believe.
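To make the index-to-frequency mapping concrete, here is a small sketch of my own (not from the thread) using SciPy's DCT-II with orthonormal scaling; its normalization differs slightly from the formula quoted above, but that does not affect where the energy concentrates. The sampling rate of 8000 Hz, the block length $N=256$ and the 1000 Hz test tone are assumed values chosen only for the example: the tone should concentrate at, or immediately around, $u = 1000\cdot 2N/f = 64$, and in general bin $u$ corresponds to frequency $u f/(2N)$.

```python
import numpy as np
from scipy.fft import dct

# Illustrative numbers: fs = 8000 Hz sampling rate, N = 256 samples,
# a pure 1000 Hz cosine should concentrate around u = 1000 * 2N / fs = 64.
fs, N, f0 = 8000, 256, 1000.0
t = np.arange(N) / fs
x = np.cos(2 * np.pi * f0 * t)

F = dct(x, type=2, norm='ortho')
u_peak = int(np.argmax(np.abs(F)))
print("peak at u =", u_peak, "-> frequency", u_peak * fs / (2 * N), "Hz")
```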
http://crypto.stackexchange.com/questions/699/understanding-crc?answertab=oldest
# Understanding CRC

There are zillions of articles describing CRC. What can I read to understand more deeply what's really going on? Both from an algebraic perspective and a bit-manipulation perspective, I'd like to understand it well enough to have an intuitive feel for it. (Also see Brute forcing CRC-32 )

## 3 Answers

You can have a look at the Wikipedia page on the mathematics of CRC. Among freely available resources, see also chapter 2 of the Handbook of Applied Cryptography. The two main ways to view a CRC-32 are:

• It is a linear operation in the vector space $\mathbb{Z}_2^{32}$. This means that $CRC(A \oplus B) = CRC(A) \oplus CRC(B)$ ("$\oplus$" is XOR).
• It is a reduction modulo a given polynomial in $\mathbb{Z}_2[X]$ (a polynomial of degree 32 for a CRC-32).

Either way, some background on linear algebra and finite fields is what you need (i.e. enough math knowledge to recognize the two things I wrote above as a sufficient description of what is to know about CRC). I quite like this book: A Course in Number Theory and Cryptography; but I recognize that it has a relatively steep learning curve, and most of it is about interesting stuff which has little to do with CRC. I have heard a few good reports on that other book but I have not read it.

- A CRC-32 is a (linear) mapping from $\mathbb Z_2^*$ to $\mathbb Z_2^{32}$, not an operation in $\mathbb Z_2^{32}$, or am I understanding this wrong? – Paŭlo Ebermann♦ Sep 16 '11 at 19:32

If you have the time to spend to really understand CRC's, I would recommend learning from an Error-Correction Coding book. CRC's and (cyclic) Error Correction codes are intimately related, and I've found that in all the literature I've seen, Error Correction Coding texts are the gentlest and most-direct way to learn the math background (linear algebra and finite fields) you need to understand CRC's.
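The linearity mentioned in the first answer is easy to check experimentally. The sketch below is my own illustration: it uses a bare-bones bitwise CRC-32 with zero initial value and no final XOR (the pure polynomial-reduction view; real-world CRC-32 implementations such as zlib's add an init/final XOR, which makes the map affine rather than strictly linear), and verifies $CRC(A\oplus B)=CRC(A)\oplus CRC(B)$ on two equal-length byte strings.

```python
# Toy, "raw" CRC-32: reflected polynomial 0xEDB88320, zero init, no final XOR.
# With these conventions the map message -> CRC is Z_2-linear.
def crc32_raw(data: bytes) -> int:
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0xEDB88320 if crc & 1 else 0)
    return crc

a = b"\x13\x37\xca\xfe"
b = b"\xde\xad\xbe\xef"
xored = bytes(x ^ y for x, y in zip(a, b))

print(hex(crc32_raw(a) ^ crc32_raw(b)))
print(hex(crc32_raw(xored)))   # same value as the line above: linearity
```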
http://math.stackexchange.com/questions/242662/two-questions-on-homomorphisms/242670
# Two questions on homomorphisms Say I have a $\alpha: G \to H$. 1. What do I have to do to prove that $\alpha$ is a homomorphism? I thought it was just show $\alpha(g_1g_2) = \alpha(g_1)\alpha(g_2)$. But someone told me today that I also have to show $\alpha(g^{-1}) = \alpha(g)^{-1}$. Is this the case and if so why? 2. Why is $\alpha(g^{-1}) = \alpha(g)^{-1}$? - 3 Homomorphism...of what? Groups, rings, algebras...? – DonAntonio Nov 22 '12 at 15:57 You only need to show $\,\alpha(g_1\cdot g_2)=\alpha(g_1)*\alpha(g_2)\,$, if we're talking of groups $\,(G,\cdot)\,,\,(H,*)\,$ – DonAntonio Nov 22 '12 at 15:59 Set $g_2=e_G$ and you find $\alpha(e_G)=e_H$. Then set $g_2=g_1^{-1}$ and you have your implication. – Nick Kidman Nov 22 '12 at 16:02 ## 5 Answers If $G$ and $H$ are groups and you know that $\alpha(g_1)\alpha(g_2)=\alpha(g_1g_2)$ for all $g_1$ and $g_2$, then it will necessarily be true that $\alpha(g^{-1})=\alpha(g)^{-1}$. Proving this is an interesting exercise (hint: prove first that $\alpha(e_G)=e_H$). Because of this implication, it doesn't matter whether one requires of a homomorphism only that $\alpha(g_1)\alpha(g_2)=\alpha(g_1g_2)$, or additionally $\alpha(g^{-1})=\alpha(g)^{-1}$. Both definitions make "homomorphism" mean the same class of maps, and you can find both of them in different texts. The reason why one would ever want to include the "superfluous" $\alpha(g^{-1})=\alpha(g)^{-1}$ (or $\alpha(e_G)=e_H$) in a definition is that the longer definition can be constructed systematically as "everything you can do in a group must be preserved by the map" -- and there are three things you can do in a group, namely compose elements, take inverses, and find the identity element. This maps naturally to a three-part definition of "homomorphism", and generalizes smoothly to what "homomorphism" means for algebraic structures other than groups. - 1 What a nice answer. ${}{}$ – Matt N. Nov 22 '12 at 16:35 @Matt: Thank you. – Henning Makholm Nov 22 '12 at 17:19 I, personally, like the way you solved the problem. I mean your point of views here. +1 – Babak S. Nov 26 '12 at 15:48 I assume that you mean for $G$ and $H$ to be groups. You don't have to check $\alpha(g^{-1})=\alpha(g)^{-1}$ as it follows from the multiplicative property: \begin{gather*} \alpha(g^{-1})\alpha(g)=\alpha(g^{-1}g)=\alpha(e_G)=e_H\\ \alpha(g)\alpha(g^{-1})=\alpha(gg^{-1})=\alpha(e_G)=e_H\\ \end{gather*} Then $\alpha(g^{-1})=\alpha(g)^{-1}$ by uniqueness of inverses. (Note that $\alpha(e_G)=e_H$ follows by a similar argument, using the multiplicative property, and uniqueness of identity). - That someone told you wrong. To prove that $\alpha: G \to G^{\prime}$ is a homomorphism between groups $(G,\,\cdot)$ and ($G^{\prime},\,*)$, you need to prove $\alpha(g_1\cdot g_2) = \alpha(g_1)*\alpha(g_2)$. You do not also need to prove $\alpha(g^{-1}) = \alpha(g)^{-1}$. But it is true that for all homomorphisms $\alpha: G \to G^{\prime}$, • $\alpha(e) = e^{\prime}$, where $e$ is the identity of $G$ and $e^{\prime}$ is the identity of $G^{\prime}$, and with this, • it can also be proven that $\alpha(g^{-1}) = \alpha(g)^{-1}$ for $g\in G$. • Furthermore, if $H$ is a subgroup of $G$, then $\alpha[G]$ is a subgroup of $G^{\prime}$, and • if $K$ is a subgroup of $G^{\prime}$, then $\alpha^{-1}[K]$ is a subgroup of $G$. In other words, a homomorphism $\alpha: G \to G^{\prime}$ maps identity to identity, inverses to inverses, and subgroups to subgroups. 
But each of the above properties are necessarily implied by the property that defines a homomorphism: if you can show: $$\alpha(g_1\cdot g_2) = \alpha(g_1)*\alpha(g_2)$$ then you will have proven $\alpha: G \to G^{\prime}$ is a homomorphism between groups $(G,\,\cdot)$ and ($G^{\prime},\,*)$. The properties bulleted above then follow. - Set $g_2=e_G$ and you find $\alpha(e_G)=e_H$. Then set $g_2=g_1^{-1}$ and you have your implication. - By definition, $\alpha:G\to H$ is a group homomorphism if for all $x,y\in G$ we have $\alpha(xy)=\alpha(x)\alpha(y)$. Now let $1_G$ be the identity element of $G$. Then $\alpha(1_G)=1_H\cdot\alpha(1_G)=(\alpha(1_G)^{-1}\cdot\alpha(1_G))\cdot\alpha(1_G)=\alpha(1_G)^{-1}\cdot\alpha(1_G)\alpha(1_G)=\alpha(1_G)^{-1}\cdot\alpha(1_G\cdot1_G)=\alpha(1_G)^{-1}\cdot\alpha(1_G)=1_H.$ That is, $\alpha$ automatically maps the identity of $G$ to the identity of $H$. For any $x\in G$ we then have $\alpha(x)\alpha(x^{-1})=\alpha(x\cdot x^{-1})=\alpha(1_G)=1_H$ by the above; this shows that $\alpha(x^{-1})$ is inverse to $\alpha(x)$, and by uniqueness of inverses (I guess you have shown that / had that in the respective group axiom), $\alpha(x^{-1})=\alpha(x)^{-1}$. So 2. is a property that already follows from the definition of a group homomorphism, hence you don't have to show it separately. -
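As a quick numerical sanity check of the argument above, the sketch below uses the concrete and entirely arbitrary example $\alpha\colon(\mathbb{Z}_{12},+)\to(\mathbb{Z}_4,+)$, $\alpha(x)=x\bmod 4$: only the defining property is verified directly, and preservation of the identity and of inverses then holds automatically, exactly as the proofs show.

```python
n, m = 12, 4          # hypothetical example: alpha: (Z_12, +) -> (Z_4, +)

def alpha(x):
    return x % m

# the only property we need to check: alpha(g1 + g2) = alpha(g1) + alpha(g2)
assert all(alpha((g1 + g2) % n) == (alpha(g1) + alpha(g2)) % m
           for g1 in range(n) for g2 in range(n))

# ...and the identity and inverses are then preserved "for free"
assert alpha(0) == 0
assert all(alpha((-g) % n) == (-alpha(g)) % m for g in range(n))
print("alpha is a homomorphism; identity and inverses come for free")
```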
http://mathoverflow.net/questions/43513/eigenvalues-and-transpose/43663
## Eigenvalues and transpose ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Let $F:\mathbb{R}^{k}\to\mathbb{R}^{k}$ be a continuously differentiable mapping; $F_{n}(x)$ be $n$-th iteration of $F(x)$, i.e. $F_{1}(x)=F(x)$, $F_{n}(x)=F(F_{n-1}(x))$; $J_{n}(x)=(F_{n}(x))'$ be Jacobian matrix of $F_{n}(x)$; $\lambda_{n}^{(1)}(x), \lambda_{n}^{(2)}(x), \ldots, \lambda_{n}^{(k)}(x)$ be eigenvalues of $J_{n}(x)$; $\mu_{n}^{(1)}(x), \mu_{n}^{(2)}(x), \ldots, \mu_{n}^{(k)}(x)$ be eigenvalues of $(J_{n}(x))^{T}J_{n}(x)$. How to prove that for all $i=\overline{1,k}$ there exists $j=\overline{1,k}$ such that $\lim\limits_{n\to\infty}\big|\lambda_{n}^{(i)}(x)\big|^{\frac{1}{n}}=\lim\limits_{n\to\infty}\big(\mu_{n}^{(j)}(x)\big)^{\frac{1}{2n}}$? (if this statement is true) As I see, $\big|\lambda_{n}^{(i)}(x)\big|\neq\big(\mu_{n}^{(j)}(x)\big)^{\frac{1}{2}}$ in common case... - ## 2 Answers In general, this may not be true. EDIT (Atending OP´s objection) The important thing is that as stated, the problem is reduced to a linear algebra problem since it is possible to construct a diffeomorphism of $\mathbb{R}^n$ such that matrix $J_n(0)$ for the orbit of $0$ is the product of any sequence of invertible matrices. To construct this, consider a translation of $\mathbb{R}^n$ (say $F(x)= x+b$) and in a neighborhood of $nb$ modify the diffeomorphism so that the derivative in that point is the desired matrix $A_n$. In dimension $2$, a way of getting the desired counterexample is to consider the two times two upper triangular matrices $A_n$ with both eigenvalues $1$ and $K^n$ in the upper right corner (I am not being able to write matrices). We get that the eigenvalues of $J_n$ will be always $1$, but the norm of $J_n$ grows exponentially, so, it is not true that the limits coincide. I haven't thought on how to make a counterexample where the norms of $A_n$ are bounded but it should be not very difficult. However, when some recurrence is added into the game, some results in the direction of what you are looking for are available. A key word for searching is Oseledets Theorem (or Multiplicative ergodic theorem, notice that in some places it is named Oseledec, or with some variations, othe key word for searching is: Lyapunov exponents). In particular, given an invariant probability measure, what you look for is satisfied for almost every point. Playing with the proofs of this results, other more natural'' counterexamples can be constructed. - ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. But this is not a counterexample. You proposed $F(x)=Ax+b$ if $\|x\|\leq\delta$, $F(x)=x+b$ if $\|x\|>\delta$ where $\|b\|$ is sufficiently large. Here $A$ is a matrix with eigenvalues $\lambda^{(1)}, \lambda^{(2)}, \ldots\lambda^{(k)}$; $\mu^{(1)}, \mu^{(2)}, \ldots\mu^{(k)}$ are eigenvalues of $A^{T}A$; $A$ is such that $\big|\lambda^{(i)}\big|\neq\big(\mu^{(i)}\big)^{\frac{1}{2}}$. Then, as you write, $J_{n}(0)=A$ and we have $\lim\limits_{n\to\infty}\big|\lambda_{n}^{(i)}(0)\big|^{\frac{1}{n}}=\lim\limits_{n\to\infty}\big(\mu_{n}^{(i)}(0)\big)^{\frac{1}{2n}}=1$ since $\lambda_{n}^{(i)}(0)=\lambda^{(i)}$ do not depend on $n$. - Alexandra: generally this should be a comment on rpotrie's answer. Now, the person who posts the question generally can comment on all answers to her questions. 
In your case, since you provided two different log-ins, the software wasn't able to recognize you as the same person, and demands you to have 50 reputation before allowing the comment. For future reference, please log-in to MathOverflow using the same credentials every time. This way you can post comments such as above directly as a response comment to the answers given, and not as a separate answer. – Willie Wong Oct 26 2010 at 13:45 An additional benefit for posting comments instead of new answers is that the answerer, in this case rpotrie, will be notified of your comment. New answers don't have this benefit and so rpotrie may not notice your objection until much later. – Willie Wong Oct 26 2010 at 13:46 You are right, I wasn't carefull enough to construct the example. The idea is that with that construction (considering translations and including the matrix you want in the orbit of a chosen point, say $0$) you get that you can realize any product of matrices as the product of the derivatives along the orbit. – rpotrie Oct 26 2010 at 16:24
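The counterexample in the accepted answer is easy to see numerically. The sketch below is an illustration only, with the arbitrary choice $K=2$: it multiplies the unipotent upper-triangular matrices $A_n$ with $K^n$ in the corner, so the eigenvalues of $J_n$ stay equal to $1$ while the largest singular value grows like $K^n$, and the two $n$-th-root limits in the question disagree.

```python
import numpy as np

K = 2.0
J = np.eye(2)
for n in range(1, 21):
    A_n = np.array([[1.0, K ** n], [0.0, 1.0]])   # eigenvalues 1, 1; K^n in the corner
    J = A_n @ J                                    # J_n = A_n ... A_1
    spec = np.abs(np.linalg.eigvals(J)).max() ** (1.0 / n)
    sing = np.linalg.svd(J, compute_uv=False).max() ** (1.0 / n)
    if n in (5, 10, 20):
        print(f"n={n:2d}   |lambda|^(1/n) = {spec:.3f}   sigma_max^(1/n) = {sing:.3f}")
# the eigenvalue root stays at 1.000 while the singular-value root approaches K = 2
```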
http://cms.math.ca/10.4153/CJM-2003-040-x
# Some Convexity Results for the Cartan Decomposition

P. Graczyk and P. Sawyer, Canad. J. Math. 55 (2003), 1000-1018. Published: 2003-10-01 (printed Oct 2003). http://dx.doi.org/10.4153/CJM-2003-040-x

## Abstract

In this paper, we consider the set $\mathcal{S} = a(e^X K e^Y)$ where $a(g)$ is the abelian part in the Cartan decomposition of $g$. This is exactly the support of the measure intervening in the product formula for the spherical functions on symmetric spaces of noncompact type. We give a simple description of that support in the case of $\mathrm{SL}(3,\mathbf{F})$ where $\mathbf{F} = \mathbf{R}$, $\mathbf{C}$ or $\mathbf{H}$. In particular, we show that $\mathcal{S}$ is convex. We also give an application of our result to the description of singular values of a product of two arbitrary matrices with prescribed singular values.

Keywords: convexity theorems, Cartan decomposition, spherical functions, product formula, semisimple Lie groups, singular values

MSC Classifications: 43A90 - Spherical functions [See also 22E45, 22E46, 33C55]; 53C35 - Symmetric spaces [See also 32M15, 57T15]; 15A18 - Eigenvalues, singular values, and eigenvectors
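The application mentioned at the end of the abstract concerns singular values of products. As a purely illustrative numerical aside, not taken from the paper, one can build matrices with prescribed singular values and inspect the singular values of their product; two elementary constraints, submultiplicativity of the largest singular value and the determinant identity for the product of all of them, are immediately visible.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_orthogonal(n):
    q, _ = np.linalg.qr(rng.normal(size=(n, n)))
    return q

# prescribed singular values (arbitrary example values)
sa, sb = np.array([3.0, 2.0, 0.5]), np.array([4.0, 1.0, 0.25])
A = random_orthogonal(3) @ np.diag(sa) @ random_orthogonal(3)
B = random_orthogonal(3) @ np.diag(sb) @ random_orthogonal(3)

s_ab = np.linalg.svd(A @ B, compute_uv=False)
print("singular values of AB:", np.round(s_ab, 3))
print("sigma_max(AB) <= sigma_max(A)*sigma_max(B):", s_ab[0] <= sa[0] * sb[0] + 1e-9)
print("prod sigma(AB) == prod sigma(A)*prod sigma(B):",
      np.isclose(np.prod(s_ab), np.prod(sa) * np.prod(sb)))
```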
http://mathforum.org/mathimages/index.php?title=Buffon's_Needle&diff=12140&oldid=12134
# Buffon's Needle

### From Math Images

The Buffon's Needle problem is a mathematical method of approximating the value of pi $(\pi = 3.1415...)$ involving repeatedly dropping needles on a sheet of lined paper and observing how often the needle intersects a line.
Buffon's Needle
Field: Geometry
Created By: Wolfram MathWorld

# Basic Description

The method was first used to approximate π by Georges-Louis Leclerc, the Comte de Buffon, in 1777. Buffon posed the Buffon's Needle problem and offered the first experiment, in which he threw breadsticks over his shoulder and counted how often they crossed the lines on his tiled floor. Subsequent mathematicians have used the method with needles instead of breadsticks, or with computer simulations. In the case where the distance between the lines is equal to the length of the needle, we will show that an approximation of π can be calculated using the equation

$\pi \approx {2*\mbox{number of drops} \over \mbox{number of hits}}$

# A More Mathematical Explanation

#### Will the Needle Intersect a Line?

To prove that the Buffon's Needle experiment will give an approximation of π, we can consider which positions of the needle will cause an intersection. Since the needle drops are random, there is no reason why the needle should be more likely to intersect one line than another. As a result, we can simplify our proof by focusing on a particular strip of the paper bounded by two horizontal lines.

The variable θ is the acute angle made by the needle and an imaginary line parallel to the ones on the paper. The distance between the lines is 1 and the needle length is 1. Finally, d is the distance between the center of the needle and the nearest line. Also, there is no reason why the needle is more likely to fall at a certain angle or distance, so we can consider all values of θ and d equally probable.

We can extend line segments from the center and tip of the needle to meet at a right angle. A needle will cut a line if the green arrow, d, is shorter than the leg opposite θ. More precisely, it will intersect when

$d \leq \left( \frac{1}{2} \right) \sin(\theta). \$

See case 1, where the needle falls at a relatively small angle with respect to the lines. Because of the small angle, the center of the needle would have to fall very close to a line. In case 2, the needle intersects even though the center of the needle is far from both lines because the angle is so large.

#### The Probability of an Intersection

In order to show that the Buffon's Needle experiment gives an approximation for π, we need to show that there is a relationship between the probability of an intersection and the value of π. If we graph the possible values of θ along the X axis and d along the Y, we have the sample space for the trials. In the diagram below, the sample space is contained by the dashed lines. Each point on the graph represents some combination of an angle and a distance that a needle might occupy. There will be an intersection if $d \leq \left ( \frac{1}{2} \right ) \sin(\theta) \$, which is represented by the blue region. The area under this curve represents all the combinations of distances and angles that will cause the needle to intersect a line. The area under the blue curve, which is equal to 1/2 in this case, can be found by evaluating the integral

$\int_0^{\frac {\pi}{2}} \frac{1}{2} \sin(\theta) d\theta$

Then, the area of the sample space can be found by multiplying the length of the rectangle by the height.

$\frac {1}{2} * \frac {\pi}{2} = \frac {\pi}{4}$

The probability is equal to the ratio of the two areas in this case because each possible value of θ and d is equally probable.
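As a quick numerical check of the two areas just described, the short Python sketch below (an editorial illustration, not part of the original page) approximates the area under $\frac{1}{2}\sin(\theta)$ with a Riemann sum and compares the ratio of the two areas with $2/\pi$; the number of steps is an arbitrary choice.

```python
import math

# Midpoint Riemann sum for the area under (1/2)*sin(theta), theta in [0, pi/2]
steps = 100_000
dtheta = (math.pi / 2) / steps
blue_area = sum(0.5 * math.sin((k + 0.5) * dtheta) for k in range(steps)) * dtheta

sample_space_area = 0.5 * (math.pi / 2)  # rectangle of height 1/2 and width pi/2

print(blue_area)                      # approximately 0.5
print(sample_space_area)              # approximately 0.785398... = pi/4
print(blue_area / sample_space_area)  # approximately 0.636619... = 2/pi
print(2 / math.pi)
```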
The probability of an intersection is

$P_{hit} = \cfrac{ \frac{1}{2} }{\frac{\pi}{4}} = \frac {2}{\pi} = .6366197...$

#### Using Random Samples to Approximate Pi

The original goal of the Buffon's needle method, approximating π, can be achieved by using probability to solve for π. If a large number of trials is conducted, the proportion of times a needle intersects a line will be close to the probability of an intersection. That is, the number of line hits divided by the number of drops will approximately equal the probability of hitting the line.

$\frac {\mbox{number of hits}}{\mbox{number of drops}} \approx P_{hit} = \frac {2}{\pi}$

So

$\frac {\mbox{number of hits}}{\mbox{number of drops}} \approx \frac {2}{\pi}$

Therefore, we can solve for π:

$\pi \approx \frac {2 * {\mbox{number of drops}}}{\mbox{number of hits}}$

#### Watch a Simulation

http://mste.illinois.edu/reese/buffon/bufjava.html

# Why It's Interesting

#### Monte Carlo Methods

The Buffon's needle problem was the first recorded use of a Monte Carlo method. These methods employ repeated random sampling to approximate a probability, instead of computing the probability directly. Monte Carlo calculations are especially useful when the nature of the problem makes a direct calculation impossible or unfeasible, and they have become more common as the introduction of computers makes randomization and conducting a large number of trials less laborious.

π is an irrational number, which means that its value cannot be expressed exactly as a fraction a/b, where a and b are integers. As a result, π cannot be written as an exact decimal and mathematicians have been challenged with trying to determine increasingly accurate approximations. The timeline below shows the improvements in approximating pi throughout history. In the past 50 years especially, improvements in computer capability have allowed mathematicians to determine more decimal places. Nonetheless, better methods of approximation are still desired.

A recent study conducted the Buffon's Needle experiment to approximate π using computer software. The researchers administered 30 trials for each number of drops, and averaged their estimates for π. They noted the improvement in accuracy as more trials were conducted.

These results show that the Buffon's Needle approximation is relatively tedious. Even when a large number of needles are dropped, this experiment gave a value of pi that was inaccurate in the third decimal place. Compared to other computation techniques, Buffon's method is impractical because the estimates converge towards π rather slowly. Nonetheless, the intriguing relationship between the probability of a needle's intersection and the value of π has attracted mathematicians to study the Buffon's Needle method since its introduction in the 18th century.

#### Generalization of the problem

The Buffon's needle problem has been generalized so that the probability of an intersection can be calculated for a needle of any length and paper with any spacing. For a needle shorter than the distance between the lines, it can be shown by a similar argument to the case where d = 1 and l = 1 that the probability of an intersection is $\frac {2*l}{\pi*d}$. Note that this agrees with the normal case, where l = 1 and d = 1, so these variables disappear and the probability is $\frac {2}{\pi}$. The generalization of the problem is useful because it allows us to examine the relationship between length of the needle, distance between the lines, and probability of an intersection.
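To make that relationship concrete, here is a minimal Monte Carlo sketch of the generalized experiment in Python. It is an editorial illustration rather than the software used in the study mentioned above; the needle lengths, line spacings, random seed, and number of drops are arbitrary choices, and it assumes the short-needle case where the needle length does not exceed the line spacing.

```python
import math
import random

def buffon_estimate(drops, needle_len=1.0, spacing=1.0, seed=0):
    """Estimate the intersection probability by dropping `drops` random needles.

    Assumes needle_len <= spacing (the short-needle case discussed above).
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(drops):
        d = rng.uniform(0.0, spacing / 2.0)      # distance from needle center to nearest line
        theta = rng.uniform(0.0, math.pi / 2.0)  # acute angle between needle and the lines
        if d <= (needle_len / 2.0) * math.sin(theta):
            hits += 1
    return hits / drops

for l, d in [(1.0, 1.0), (0.5, 2.0)]:
    p_hat = buffon_estimate(1_000_000, needle_len=l, spacing=d)
    print(f"l={l}, d={d}: simulated {p_hat:.4f}, theoretical {2 * l / (math.pi * d):.4f}")

# With l = d = 1, pi is estimated as 2 * (number of drops) / (number of hits):
p_hat = buffon_estimate(1_000_000)
print("pi estimate:", 2 / p_hat)
```

As the article notes, the estimate converges slowly: with a million drops the standard error of the hit proportion is roughly 0.0005, so the resulting estimate of π is typically reliable only to about two or three decimal places.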
The variable for length is in the numerator, so a longer needle will have a greater probability of an intersection. The variable for distance is in the denominator, so greater space between lines will decrease the probability. To see how a longer needle will affect probability, follow this link: http://whistleralley.com/java/buffon_graph.htm

#### Needles in Nature

Applications of the Buffon's Needle method are even found in nature. The Centre for Mathematical Biology at the University of Bath found uses of the Buffon's Needle algorithm in a recent study of ant colonies. The researchers found that an ant can estimate the size of an anthill by visiting the hill twice and noting how often it recrosses its first path.

Ants generally nest in groups of about 50 or 100, and their preferred nest size is determined by the size of the colony. When a nest is destroyed, the colony must find a suitable replacement, so they send out scouts to find new potential homes. In the study, scout ants were provided with "nest cavities of different sizes, shapes, and configurations in order to examine preferences" [2]. From their observations, researchers were able to draw the conclusion that scout ants must have a method of measuring areas.

A scout initially begins exploration of a nest by walking around the site to leave tracks. Then, the ant will return later and walk a new path that repeatedly intersects the first tracks. The first track will be laced with a chemical that causes the ant to note each time it crosses the original path. The researchers believe that these scout ants can calculate an estimate for the nest's area using the number of intersections between its two visits. The ants can measure the size of their hill using a related and fairly intuitive method: if they are constantly intersecting their first path, the area must be small. If they rarely reintersect the first track, the area of the hill must be much larger, so there is plenty of space for a non-intersecting second path. "In effect, an ant scout applies a variant of Buffon's needle theorem: The estimated area of a flat surface is inversely proportional to the number of intersections between the set of lines randomly scattered across the surface." [7]

This idea can be related back to the generalization of the problem by imagining that the parallel lines were much further apart. A larger distance between the two lines would mean a much smaller probability of intersection. We can see in case 3 that when the distance between the lines is greater than the length of the needle, even a very large angle won't necessarily cause an intersection. This method of random motion allows the ants to gauge the size of their potential new hill regardless of its shape. Scout ants are even able to assess the area of a hill in complete darkness. The animals show that algorithms can be used to make decisions where an array of restrictions may prevent other methods from being effective.

# References

[1] http://www.maa.org/mathland/mathtrek_5_15_00.html
[2] http://mste.illinois.edu/reese/buffon/bufjava.html
[3] http://www.absoluteastronomy.com/topics/Monte_Carlo_method
[4] The Number Pi. Eymard, Lafon, and Wilson.
[5] Monte Carlo Methods Volume I: Basics. Kalos and Whitlock.
[6] Heart of Mathematics. Burger and Starbird.
[7] http://math.tntech.edu/techreports/TR_2001_4.pdf
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 12, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9318856596946716, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-algebra/202247-equality-summations.html
# Thread: An equality with summations

1. ## An equality with summations

I don't understand why the following equality holds.

$n\Sigma(X_{i}-\overline{X})^2 = n\Sigma X_{i}^2 - (\Sigma X_{i})^2$

$\overline{X}$ is the average of all $X_{i}$. So, $\overline{X} = \Sigma X_{i}/n$

Could someone walk me through this? I've been staring at it for a long while now. I'll try to show my confusion below.

Expanding the left side I get

$n\Sigma(X_{i}^2 - 2X_{i}\overline{X} + \overline{X}^2)$

Then substituting for $\overline{X}$, I get

$n\Sigma(X_{i}^2 - 2X_{i}\Sigma X_{i}/n + (\Sigma X_{i}/n)^2)$

Then multiplying it out, I get

$n\Sigma X_{i}^2 -\Sigma(2X_{i}\Sigma X_{i}) + \Sigma((\Sigma X_{i})^2/n)$

And I'll stop there, because I really don't know what taking the summation of a summation means??? But, I do think I'm not too far off track because I now see the first term from the right hand side of the initial equality.

2. ## Re: An equality with summations

Hi, datanewb. You're definitely on the right track, nice work! There is some confusion, because you've used the same index to represent both the original summation and the summation needed for $\overline{X}.$ Since the i index is used in the original summation, we would want to use a new dummy index, e.g. write $\overline{X}=\frac{1}{n}\sum_{j}X_{j}$. Then the double summation is a summation over the indices i and j. Does this help? I can write up more details if it's still confusing. Good luck!

3. ## Re: An equality with summations

Thank you, GJA, that definitely does help. Rewriting the last line,

$n \sum_{i}X_{i}^2 - \sum_{j}^n(2X_{j}\sum_{i}X_{i}) + \sum_{j}^n((\sum_{i}X_{i})^2/n^2)$

is equivalent to

$n \sum_{i}X_{i}^2 - \sum_{j}^n(2X_{j}\sum_{i}X_{i}) + ((\sum_{i}X_{i})^2/n)$

I think... okay, I need to think about this a little bit more. Hopefully I will solve it later tonight!

4. ## Re: An equality with summations

Glad it helped! I think expanding $\overline{X}$ a line or two later might simplify things. Good luck and nice job sticking with a tricky little problem like this.

5. ## Re: An equality with summations

Okay, I finally solved it! Thank you for the advice and for giving me the space and time to figure it out! I was simply too unfamiliar with algebraic manipulations of sigma notation to figure this out. I had to realize the following:

$\sum_{k=i}^{n-1}(X_{k} + C) = \sum_{k=i}^{n-1}X_{k} + (n-i)C$

Secondly,

$\sum_{i}\frac{1}{n}X_{i} = \frac{1}{n}\sum_{i}X_{i}$

Finally, I had to realize that

$\sum_{i}(X_{i}\sum_{i}X_{i}) = \sum_{i}X_{i}\sum_{i}X_{i} = (\sum_{i}X_{i})^2$

Please, if any of this is wrong, correct me. Using the above logic, I was able to see that

$n\sum_{i}X_{i}^2 - n\sum_{i}(2X_{i}\overline{X}) + n^2\overline{X}^2 =$

and substituting $\overline{X}$ with $\sum_{j}\frac{1}{n}X_{j}$,

$n\sum_{i}X_{i}^2 - n\sum_{i}(2X_{i}\sum_{j}\frac{1}{n}X_{j}) + n^2(\sum_{j}\frac{1}{n}X_{j})^2 =$

$n\sum_{i}X_{i}^2 - 2\sum_{i}(X_{i}\sum_{j}X_{j}) + (\sum_{j}X_{j})^2 =$

and since $i$ and $j$ run over the same index set, $\sum_{i}(X_{i}\sum_{j}X_{j}) = (\sum_{i}X_{i})^2$, so this equals

$n\sum_{i}X_{i}^2 - 2(\sum_{i}X_{i})^2 + (\sum_{i}X_{i})^2 = n\sum_{i}X_{i}^2 - (\sum_{i}X_{i})^2$

Which is exactly the right hand side of the equality I was trying to understand initially! Woohoo!
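A quick numerical check can also confirm the identity. The following is a small Python sketch (an editorial addition, not part of the original thread); the sample size and random values are arbitrary.

```python
import random

def check_identity(xs):
    """Compare n*sum((x - xbar)^2) with n*sum(x^2) - (sum(x))^2."""
    n = len(xs)
    xbar = sum(xs) / n
    lhs = n * sum((x - xbar) ** 2 for x in xs)
    rhs = n * sum(x * x for x in xs) - sum(xs) ** 2
    return lhs, rhs

xs = [random.uniform(-10, 10) for _ in range(7)]
lhs, rhs = check_identity(xs)
print(lhs, rhs, abs(lhs - rhs) < 1e-9)  # the two sides agree up to rounding error
```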
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 23, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9566380381584167, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/45595/what-topics-should-be-included-in-a-calculus-for-the-liberal-arts-course
## What topics should be included in a calculus-for-the-liberal-arts course?

I have in mind a course taken by liberal-arts students who will probably never take another math course. I would like such a course to convey some of the way mathematical thinking is done (i.e. not a cookbook course) without getting "rigorous" (since such students can be assumed to understand that and can learn rather little of it during a one-year calculus course). Apparently I am not the first to think of excluding the mean value theorem, since one of James Stewart's books does that.

I would also like to include some of the ways in which differential and integral calculus have played a role in the history of science.

I'm teaching a course in which I began with this and I use that idea repeatedly in exercises. It will of course be used in explaining the fundamental theorem. One of various places where I've used that proposition so far is #3 in this assignment, where I was told by multiple students that no one else who teaches math ever asks students to think through steps like this. They "know" very well that that's not at all how math is done. Hence, they say, it is quite confusing.

I do some topics that might normally be done only in a "rigorous" course, such as things like #1 in this, but as you see, I don't do it in the way in which rigorous arguments are written.

I'd like to see skills taught in such a course only to the extent to which they aid thinking, and I like to have students write carefully about that thinking. This contrasts with a practice that perhaps few if any mathematicians intend to do, but which is widespread, and that is that students in such courses are taught that mathematics consists entirely of skills. This leaves no place for things like one that I like to include: What is "natural" about the number $e$? (Here is how I begin the treatment of that question.)

It seems as if mathematical thinking is often reserved for advanced courses rather than freshman calculus or the like, despite what is probably overwhelming empirical evidence that it can be done even at the most elementary levels, e.g. teaching graph theory to 4th-graders.

The question here is: Which specific topics should be included in a course consistent with the ideas sketched above, and why? In particular, which topics that are now customarily not included should be there, and vice versa?

- You might want to mention that you have asked a similar (not duplicate) question before: mathoverflow.net/questions/28695/… . Also I suggest you make this community-wiki. – danseetea Nov 10 2010 at 21:21
2 I would teach all of the sections in the calculus book by Hughes-Hallett that do not involve the rules of differentiation and integration. Another book that I always enjoyed teaching with was a calculus book by Goldstein, Lay, and Schneider. – Deane Yang Nov 10 2010 at 21:28
To clarify danseetea's comment: that previous question was on what to teach liberal arts students who will take only one math course. This is specifically on calculus for liberal arts students, regardless of how many math courses they take. – Michael Hardy Nov 10 2010 at 23:04

## 1 Answer

Several years ago the liberal-arts-ish university where I was at the time was pushing to have more interaction between the sciences and the humanities.
In that spirit I volunteered to give an hour-long seminar entitled, "So You Think You're Educated, But You Don't Know Calculus: A Brief Introduction to One of Humanity's Greatest Inventions." It was aimed at the humanities faculty. My goal was to explain the big ideas behind calculus and place them in their historical and philosophical context for an audience of very smart people with weak math backgrounds. You might be able to use the historical and philosophical context part of the talk for the "ways in which differential and integral calculus have played a role in the history of science" aspect of your question. You are welcome to borrow freely from my presentation. In retrospect the title may have been a bit too audacious, but the talk went much better than I had expected. A few of the scientists showed up for fun, but most of the audience were folks from the humanities and social sciences. They were engaged, and they peppered me with questions for half an hour after the talk was over. After I left there were still people who stayed behind to discuss the seminar. Later I even got an email from the provost (a religion scholar) who wanted me to meet with him to discuss the ideas in the talk! It was, frankly, the most successful academic talk I've ever given - and much more so than the one I gave three days ago at a math conference that was attended by eight people in a room that could hold hundreds and yielded no questions. :( One caveat: When I discuss the philosophical implications of calculus, I'm doing so as I think they appeared to people at the time, not today. Clearly, humanity's consensus on these big questions has changed in the last 300 years. The other thing I would say is to second Deane Yang's recommendation to look at the Hughes-Hallet, et al, calculus texts. I know there are strong opinions on the calculus reform movement, and I don't want to wade into that. But what the Hughes-Hallet texts do well (in my opinion) is to emphasize ideas and mathematical thinking over rote computation. Since you're after the former, looking at what they've done may be helpful. - That talk sounds awesome: I love the boldness of the title and agree with the idea too. I think it's a shame that the way calculus is taught means that students can't see the forest of ideas for the tree of minor computations. – Thierry Zell Nov 10 2010 at 22:37 I read through the slides. I'd have emphasized instantaneous rates of change rather than tangents. I think the characterization of integral calculus on the second page falls short, since one might want to find an integral that one can't find by using the fundamental theorem. I hadn't realized Newton's book published in the 1680 avoided using the new discoveries. The part about God initially seemed off-topic, but apparently Newton's mathematics did unintentionally lead to Deism. – Michael Hardy Nov 12 2010 at 17:04 ..... Explaining "why nearly everything on earth and in space moved the way it did" seems like a stretch if you apply it to living organisms. Nonetheless calling Newton the greatest of all scientists may well be well justified. Definitely efforts like this to explain the importance of calculus to laypersons are needed. – Michael Hardy Nov 12 2010 at 17:04 @Michael Hardy: Thanks for the feedback. Those are fair criticisms of my characterization of integral calculus and of the comment about motion. 
I chose the tangent and area problems as prototypes because they would be easy for my audience to grasp and because I still think it is fascinating that they are (in some sense) inverse problems. I'm glad you got something helpful from the talk. – Mike Spivey Nov 12 2010 at 21:11 That the instantaneous-rate-of-change problem and the area problem are inverses is easier to see directly than that the tangent problem and the area problem are inverses: wnk.hamline.edu/~mjhardy/1170/handouts/… – Michael Hardy Nov 13 2010 at 4:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9742946028709412, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/7091/irreducible-polynomial-over-number-field-with-roots-in-every-completion
## Irreducible polynomial over number field with roots in every completion?

Let K/Q be a field extension, probably not a finite one. Is it possible for a polynomial to be irreducible over K but have a root in every completion of K? What about all but finitely many completions?

This question is related to the question "Can a non-surjective polynomial map from an infinite field to itself miss only finitely many points?", and should help prove that such a polynomial cannot exist for any subfield of the algebraic closure of the rationals. The idea is that we make the candidate polynomial monic and have algebraic integers for coefficients, then take any maximal ideal in the ring of integers of the candidate field and complete it using the ideal - since the polynomial must have a root in the residue field, it will have a root in the completion. I'm wondering if this forces the polynomial to have a root in the original field - hence the question.

The same question only for function fields is also interesting, in order to prove the above for subfields of the algebraic closure of $\mathbb{F}_p(t)$.

## 2 Answers

For a finite extension $K/\mathbf{Q}$, the answer is no. Suppose that $f$ is an irreducible polynomial with coefficients in $K$ and splitting field $L$. If $G$ is the Galois group of $L/K$, then the polynomial $f$ gives rise to a faithful transitive permutation representation $G \rightarrow S_d$, where $d$ is the degree of $f$. If $P$ is a prime in $O_K$ that is unramified in $L$, then $f$ has a root over $O_{K,P}$ if and only if the corresponding Frobenius element $\sigma_P \in S_d$ has a fixed point. On the other hand, a theorem of Jordan says that every transitive subgroup of $S_d$ contains an element with no fixed points. By the Cebotarev density theorem, it follows that $f$ fails to have a root modulo $P$ for a positive density of primes $P$.

For an infinite extension $K/\mathbf{Q}$, the answer is (often) yes. Let $K$ be the compositum of all cyclotomic extensions. Then $x^5 - x - 1$ is irreducible over $K$, because the Galois group of its splitting field over $\mathbf{Q}$ is $S_5$. On the other hand, it has a root in every completion (easy exercise).

Finally, your claim that "since the polynomial must have a root in the residue field, it will have a root in the completion" is false. Hensel's lemma comes with hypotheses.

- I had the cyclotomic thing in my mind a few hours before I posted, but then forgot it... – Dror Speiser Nov 29 2009 at 4:49
As for Hensel, sure, but if I'm not mistaken an irreducible integer polynomial will satisfy Hensel's requirements for all but finitely many places. – Dror Speiser Nov 29 2009 at 5:19
That's interesting, do you know where the argument breaks down if K/Q is infinite? – Rebecca Bellovin Nov 29 2009 at 8:05
Let k be a residue field of O_K. Because K/Q is infinite, it is possible that k is infinite. Then one can't characterize the elements of k as the fixed points of some Frobenius map. In my opinion, this is the largest break in the argument, although probably not the only one. – David Speyer Nov 30 2009 at 0:17
2 This has 16 upvotes? What's wrong with you people? – Lavender Honey Jan 22 2010 at 22:44
Edit: I see I missed the note that $K$ might not be finite over $\mathbb{Q}$. This answer is not correct for $K$ of infinite degree over $\mathbb{Q}$, as in FC's answer above.

No. This is a consequence of the Chebotarev density theorem. To see how it follows, look at exercise 6 at the end of Cassels and Frohlich's "Algebraic Number Theory". Briefly, the Chebotarev density theorem says that for a Galois extension of global fields $L/K$ and for a finite set $S$ of places of $K$, the proportion of primes of $K$ outside $S$ that split completely in $L$ is $1/[L:K]$. If $G=\text{Gal}(L/K)$ and $E$ is the fixed field of some $H\subset G$, it is possible to show that the proportion of places of $K$ with a split factor in $E$ is $|\bigcup_{\rho\in G}\rho H\rho^{-1}|/|G|$, and a lemma on finite groups says that this quotient is not $1$ unless $H=G$.

In your case, take $E=K[x]/(f)$ and $L$ to be a normal extension of $K$ containing $E$. Then "$v$ has a split factor" means "$f$ has a root in the completion $K_v$". If $f$ has a root in each completion (or even a set of completions with density $1$, which includes the case "all but finitely many"), we must have $H=G$ and $E=K$. So $f$ already had a root in $K$.

-
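As a concrete illustration of the density statement in these answers, here is a small Python sketch (an editorial addition, not part of the thread) that counts, for primes up to a bound, how often $x^5 - x - 1$ has no root modulo $p$. Assuming, as stated above, that the Galois group of this quintic over $\mathbf{Q}$ is $S_5$, Chebotarev predicts the proportion should approach the fraction of fixed-point-free permutations of $S_5$, namely $44/120 \approx 0.367$; the bound of 20,000 is an arbitrary choice.

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

def has_root_mod_p(p):
    """Does x^5 - x - 1 have a root modulo p?"""
    return any((pow(x, 5, p) - x - 1) % p == 0 for x in range(p))

primes = primes_up_to(20_000)
no_root = sum(1 for p in primes if not has_root_mod_p(p))
print(no_root / len(primes))  # roughly 44/120 = 0.3667 for large bounds
```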
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 57, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9377370476722717, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?p=4211403
Physics Forums (Page 3 of 6)

## Does My Wrist Watch Physically Beat Slower?

Quote by PeterDonis I don't understand where you are getting this from. LET draws exactly the same spacetime diagram as you have drawn, and predicts all of the same numbers. If you think "LET" says anything different from your diagram, then you mean something different by "LET" than the rest of us do.

My diagram shows perfectly what LET means. In my diagram the ETHER frame is very well indicated. In that ether frame the primed coordinates do not make sense, unless they are mathematical fictitious ad hoc numbers, just like Lorentz admitted himself. The only thing you can repeat is that the numbers are what they are. Of course. But apparently you cannot give me the context in which the numbers make sense. Only if red 3D spaces are added on that diagram do the coordinates make sense. I see that you do not understand this and there is not much more I can do about it. We had better stop arguing about this. It doesn't help either way.

Recognitions: Gold Member

Quote by bobc2 Quote by Kingfire Hello, Some physics books tend to say that "your wrist watch will be beating slower when you travel at or close to the speed of light." Does that mean literally? My own speculation: Although time does slow down when I travel at a speed close to the speed of light, my wrist watch will not beat any faster or slower because it is just a mechanical device that beats every earthly second. I am not sure though. Kingfire, there are at least two different competing interpretations of special relativity on this forum. 1) First, there is what is known as the Lorentz Ether Theory (LET). If you are basing the answer to your question on this interpretation, the answer to your question would be, yes. Yes, your watch physically beats slower. That's because, according to LET, there are time shifts in the transmittal of electrical forces between and within physical objects, resulting in actual changes in speeds of physical interactions, including clock mechanisms (affecting tick rates, etc.).

The answer under any interpretation or understanding of any form or version of LET, past or present, is not yes. Even though Lorentz believed in a literal ether defining an absolute rest state, the only frame in which light propagates at c, he and all other LET adherents never claimed that the earth was ever stationary in it. Therefore, since the earth must be traveling at some unknown speed and in some unknown direction through the ether, clocks on the earth are already beating slower than the presumed absolute time defined by the ether. So if you take off from the earth in the same direction that the earth is traveling through the ether, then your wristwatch will beat out seconds more slowly than earthly seconds. However, if you take off in the opposite direction, you could actually be stationary in the ether, in which case your wristwatch will beat out seconds faster than earthly seconds. So the correct answer according to LET is: "unknown". I already gave the correct answer under SR in my first post.
LET says that those assignments are not the "true" coordinates, but it still gives them a perfectly well-defined meaning.

Quote by Vandam But apparently you cannot give me the context in which the numbers make sense.

I already have, repeatedly. I just did it again, above. But you either can't understand or refuse to accept that LET is an *interpretation*, just as the "block universe" is an *interpretation*.

Quote by Vandam Only if red 3D spaces are added on that diagram do the coordinates make sense.

On your interpretation, perhaps. But there are other interpretations. I'll stop if you will.

Interesting discussions... so SR and LET are identical in mathematical formulation. PeterDonis, going to this example: suppose you had a missile launched from earth travelling at 0.99c aimed at a target in Tau Ceti, and in its frame only 2 seconds would elapse travelling to it. Suppose after 30 seconds, you have an order from the President to abort it. You know you can't reach the missile using any radio wave because it can't go beyond light speed. Suppose tachyons could travel in the aether frame only and instantaneously (and normal light and matter can't). When you send out the tachyon abort signal at 30 seconds... it should reach the missile at its 30-second mark too, right? But then by this time, the target in Tau Ceti is already destroyed at 2 seconds in the missile frame. Is this example right? Or can you reach the missile at 1.8 seconds even after you sent out the tachyons at your 30 seconds, using tachyons that use the aether frame? I don't believe in tachyons, but I just want to understand the concept and limitations.

Blog Entries: 9 Recognitions: Gold Member Science Advisor

Quote by Tomahoc PeterDonis, going to this example: suppose you had a missile launched from earth travelling at 0.99c aimed at a target in Tau Ceti, and in its frame only 2 seconds would elapse travelling to it.

2 seconds in the missile's frame. It would still take 12/.99 years (Tau Ceti is approximately 12 light years away, we'll assume it's exactly 12 light years here) in the Earth frame.

Quote by Tomahoc Suppose after 30 seconds, you have an order from the President to abort it.

Meaning, 30 seconds after launch in the Earth frame.

Quote by Tomahoc You know you can't reach the missile using any radio wave because it can't go beyond light speed.

No, you don't know that. The missile will take 12/.99 years, or 12.12 years, in the Earth frame to reach Tau Ceti. A radio pulse traveling at the speed of light will take 12 years flat. But 0.12 years is a lot more than 30 seconds, so a radio pulse sent out 30 seconds after the missile leaves, in the Earth frame, will catch up with the missile before it reaches Tau Ceti. I just derived that result in the Earth frame, but since it's a result about an invariant--the crossing of two worldlines--it must hold in any frame, including the missile's frame. (This means, of course, that in the missile's frame, the time between launch and the President issuing the order is *much* less than 30 seconds; in fact it's 30 seconds divided by the time dilation factor, which is something like 10^8, so it's on the order of a hundred nanoseconds. In that time, the missile has gotten closer to Tau Ceti--or, rather, Tau Ceti has gotten closer to the missile--by only a very small fraction of the total distance; so in the missile's frame, the radio pulse simply has a shorter distance to travel than Tau Ceti does, so it reaches the missile first.)
Quote by Tomahoc Suppose tachyons could travel in the aether frame only and instantaneously (and normal light and matter can't). When you send out the tachyon abort signal at 30 seconds... it should reach the missile at its 30-second mark too, right?

No. As I said in the other thread where you asked about tachyons, we don't have a theory of tachyons, so we don't know what the rule would be that determines which spacelike worldline a tachyon travels on. But if we assume that the Earth's rest frame is the "aether frame", then a tachyon pulse sent out at Earth time t = 30 seconds after launch would arrive at the missile at Earth time t = 30 seconds after launch; which, as I noted above, would be missile time t' = 100 nanoseconds or so after launch, so it would be way before the missile reached Tau Ceti. Of course, this depends on the Earth's rest frame being the "aether frame". However, we can make a much more general statement, because we've already proven (I just did it above) that a light pulse emitted at Earth time t = 30 seconds after launch will reach the missile before it hits Tau Ceti. But *any* tachyon pulse, regardless of how it travels, must reach the missile before a light pulse emitted from Earth at the same time, because any tachyon must, by definition, travel faster than light. So if a light pulse can reach the missile in time, then so can any tachyon pulse, regardless of the exact laws governing tachyons.

Blog Entries: 27 Recognitions: Gold Member Homework Help Science Advisor

Hello Kingfire! Welcome to PF! (are you still there? )

Quote by Kingfire Some physics books tend to say that "your wrist watch will be beating slower when you travel at or close to the speed of light."

not if you're still wearing it … time dilation is only relevant between two clocks (or a clock and an observer) if they have different velocities
Quote by PeterDonis 2 seconds in the missile's frame. It would still take 12/.99 years (Tau Ceti is approximately 12 light years away, we'll assume it's exactly 12 light years here) in the Earth frame. [the rest of the post is quoted in full above]

I should have added more 9s in the 0.99c. This is a case where rounding off doesn't work. Suppose the aether frame is not the earth's rest frame but somewhere out there... is it not always the case that when the aether frame is used, 30 seconds on earth is synchronized to 30 seconds on the missile? You mean it varies depending on the location of the aether frame even when the tachyon speed is instantaneous?? How do you find the location of the aether frame if you want the earth and the missile to both be synchronized at the 30-second worldline?

Blog Entries: 9 Recognitions: Gold Member Science Advisor

Quote by Tomahoc I should have added more 9s in the 0.99c. This is a case where rounding off doesn't work.

Well, what exact numbers do you want to use? I'm using the numbers you wrote down; if you want to use different ones, feel free to give them.

Quote by Tomahoc Suppose the aether frame is not the earth's rest frame but somewhere out there... is it not always the case that when the aether frame is used, 30 seconds on earth is synchronized to 30 seconds on the missile?

No; which frame is the ether frame has nothing to do with that question. The answer to it is always "no", because the Earth and the missile are in relative motion.

Quote by Tomahoc You mean it varies depending on the location of the aether frame even when the tachyon speed is instantaneous??

What varies? I don't understand what you're asking. If you mean, does the fact that tachyons travel faster than light vary, no, it doesn't; the *definition* of a tachyon is that it travels faster than light, and if it travels faster than light in any frame, it travels faster than light in every frame.

Quote by Tomahoc How do you find the location of the aether frame if you want the earth and the missile to both be synchronized at the 30-second worldline?

You can't; the Earth and the missile are in relative motion, so their clocks can't be synchronized. See above.

Blog Entries: 9 Recognitions: Gold Member Science Advisor

Tomahoc, one other thought regarding the Tau Ceti scenario; I suggest that you consider carefully this statement I made a few posts ago:

Quote by PeterDonis I just derived that result in the Earth frame, but since it's a result about an invariant--the crossing of two worldlines--it must hold in any frame, including the missile's frame.

Do you see what this means? It means that the question you are asking--can the radio pulse catch up to the missile before it reaches Tau Ceti--can be answered without having to use any frame except the Earth frame.
You have a distance D from Earth to Tau Ceti; a speed v for the missile; and a time t after launch that the radio pulse goes out. Those three facts, all by themselves, are enough to answer the question: if we take D and v as given, you can calculate exactly the latest time t at which the radio pulse can go out and still reach the missile before it hits Tau Ceti. I suggest that you work that answer out first, before you even start thinking about tachyons in this scenario.

Quote by PeterDonis Well, what exact numbers do you want to use? I'm using the numbers you wrote down; if you want to use different ones, feel free to give them. [the rest of the post is quoted in full above]

In my query, there is the assumption that the tachyon velocity is not frame dependent, meaning not fixed relative to earth but fixed relative to the aether, which can be anywhere. In this example, if we send the aborting signal after 30 seconds, should it arrive at the missile at 30 seconds? Also ignore the actual distance to Tau Ceti; imagine it is so far off that light speed is not enough to reach it. I thought Tau Ceti is hundreds of light years away, and I'm assuming 0.99999999999c (or put in as many 9s as needed).

Blog Entries: 9 Recognitions: Gold Member Science Advisor

Quote by Tomahoc There is the assumption that the tachyon velocity is not frame dependent, meaning not fixed relative to earth but fixed relative to the aether, which can be anywhere.

In other words, you don't know what the tachyon's velocity is in any frame, because you don't know which frame is the aether frame.

Quote by Tomahoc In this example, if we send the aborting signal after 30 seconds, should it arrive at the missile at 30 seconds?

Since you don't know the tachyon's velocity in any frame, you can't predict when it will reach the missile. However, you can still draw some conclusions just by working the problem in the Earth frame. See below.

Quote by Tomahoc Also ignore the actual distance to Tau Ceti; imagine it is so far off that light speed is not enough to reach it. I thought Tau Ceti is hundreds of light years away, and I'm assuming 0.99999999999c (or put in as many 9s as needed).

In other words, you want a scenario where the President's order goes out too late for a light pulse to reach the missile before it hits Tau Ceti, correct? I'll assume that's your intent in what follows.

In my last post, I said we can figure out everything in the Earth frame; I was hoping you would pick up on that, but I'll go ahead and do it now. All quantities are relative to the Earth frame in what follows. We have a distance D to Tau Ceti, a speed v < 1 for the missile (I'm using units in which c = 1), and a time t after the missile launch when the President's order goes out. We want t to be large enough that the radio pulse emitted then from Earth can't reach the missile before it hits Tau Ceti. We assume that the missile is launched at time $t_0 = 0$.
The time the missile reaches Tau Ceti is:

$$t_m = \frac{D}{v}$$

The time the radio pulse reaches Tau Ceti is (the pulse is sent at time t and travels at speed 1):

$$t_r = t + D$$

We want $t_r > t_m$, which gives

$$t + D > \frac{D}{v}$$

or, rearranging terms,

$$t > D \frac{1 - v}{v}$$

Now suppose we have a tachyon pulse that travels at speed w > 1 in the Earth frame (we don't know w's exact value, but we can still work with it as an unknown variable). We can run the same type of analysis as above to find the time $t_y$ that a tachyon pulse emitted at t will reach Tau Ceti:

$$t_y = t + \frac{D}{w}$$

If we want the tachyon pulse to catch the missile before it reaches Tau Ceti, we must have $t_y < t_m$, which gives

$$t + \frac{D}{w} < \frac{D}{v}$$

or, rearranging terms,

$$t < D \frac{w - v}{w v}$$

So if the time t lies between the two limits given above, i.e., if we have:

$$D \frac{1 - v}{v} < t < D \frac{w - v}{w v}$$

then the tachyon pulse will be able to catch the missile before it hits Tau Ceti, but a radio pulse will not. I'll stop here to let you digest the above; it should give you an idea of how to calculate when each pulse will reach the missile, as well as when it will reach Tau Ceti.
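To make the algebra concrete, here is a small Python sketch (an editorial addition, not part of the thread) that plugs illustrative numbers into the formulas above. The values of D, v, w, and t are assumptions, chosen so that the order goes out too late for a radio pulse but not for the assumed tachyon pulse.

```python
# Illustrative numbers (assumptions, not from the thread); units: years, light-years, c = 1
D = 12.0   # distance to Tau Ceti in the Earth frame
v = 0.99   # missile speed
w = 10.0   # assumed tachyon speed in the Earth frame
t = 0.5    # time after launch when the abort order goes out

t_m = D / v       # when the missile reaches Tau Ceti
t_r = t + D       # when the radio pulse reaches Tau Ceti
t_y = t + D / w   # when the tachyon pulse reaches Tau Ceti

print(f"missile arrives at t = {t_m:.3f} yr")
print(f"radio pulse arrives at t = {t_r:.3f} yr   -> catches missile: {t_r < t_m}")
print(f"tachyon pulse arrives at t = {t_y:.3f} yr -> catches missile: {t_y < t_m}")

# The two limits derived above:
print(f"radio catches the missile only if   t < D*(1 - v)/v     = {D * (1 - v) / v:.4f} yr")
print(f"tachyon catches the missile only if t < D*(w - v)/(w*v) = {D * (w - v) / (w * v):.4f} yr")
```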
Quote by PeterDonis In other words, you don't know what the tachyon's velocity is in any frame, because you don't know which frame is the aether frame. [the rest of the post, including the derivation, is quoted in full above]

Many thanks for the details. I digested it, but what I'm asking, or the scenario I'm interested in, is not exactly it (although I'll put it in my notebook for detailed study). The scenario I'm interested in is the following. If instantaneous tachyons can reach the missile, and the missile sends back another signal, it can reach the earth before earth sends it. This is what happens if the tachyons are frame dependent. But if the tachyon velocity, which can be any speed up to instantaneous, is always fixed relative to the aether frame, then no backward time loop is possible. In this case, does the tachyon signal sent out 30 secs from earth reach the missile also at 30 seconds? Because if it's earlier, it can produce a situation where earth can receive it before it sends out the signal.

Blog Entries: 9 Recognitions: Gold Member Science Advisor

Quote by Tomahoc what I'm asking, or the scenario I'm interested in, is not exactly it

For future reference, it helps to ask the question you're really interested in up front.

Quote by Tomahoc If instantaneous tachyons can reach the missile, and the missile sends back another signal, it can reach the earth before earth sends it. This is what happens if the tachyons are frame dependent.

By "frame dependent" you mean, I assume, "the tachyon always has the same speed relative to the emitter". In that case, yes, you're correct, you can have a round-trip tachyon signal arrive before it was sent.

Quote by Tomahoc But if the tachyon velocity, which can be any speed up to instantaneous, is always fixed relative to the aether frame, then no backward time loop is possible.

Yes, that's correct; if the tachyon's speed is always fixed relative to the *same* frame (which we can call the "aether frame") regardless of the emitter's state of motion, then a round-trip tachyon signal can never arrive before it was sent; the quickest it can arrive is at the same instant it was sent (if the return signal is emitted at the same instant the outgoing signal arrives).

Quote by Tomahoc In this case, does the tachyon signal sent out 30 secs from earth reach the missile also at 30 seconds?

If you mean 30 seconds according to the Earth frame, then yes, *if* the Earth frame is the aether frame. If not, no, the signal will arrive at the missile at some other time, which could be earlier or later than 30 seconds, depending on how the Earth is moving relative to the aether frame. However, even if the signal arrives at the missile earlier than t = 30 seconds in the Earth frame, the return signal still won't arrive before it was sent, *if* tachyons always travel at the same speed relative to the aether frame. Remember that the return signal is traveling in the opposite direction to the outbound signal; that means the effect of the Earth's velocity relative to the aether frame is exactly the opposite on the return signal from what it was on the outbound signal. For example, suppose the outbound signal travels "backwards in time" by 1 second, so it arrives at the missile at t = 29 seconds. Then the return signal will travel "forwards in time" by the same amount, because it's traveling in the opposite direction; so it will arrive back at t = 30 seconds (assuming it is emitted at the same instant the outbound signal is received).

Quote by tiny-tim Hello Kingfire! Welcome to PF! (are you still there? ) not if you're still wearing it … time dilation is only relevant between two clocks (or a clock and an observer) if they have different velocities

Good comment, tiny-tim.
That is exactly the situation with Einstein-Minkowski special relativity. However, in the context of the Lorentz Ether Theory (LET) the situation is physically different. Lorentz specifically based his derivations on the consideration of a fixed ether and the results of transmittal times between objects and within objects--all processes occurring in one time-evolving 3-D world. So, all observers are living in the same 3-D world. Thus, the watch the moving guy is wearing (he's moving relative to the ether) is physically ticking more slowly than it would if the guy were at rest relative to the ether. However, due to Lorentzian processes affecting this guy (length contractions and time dilations) as well as affecting the guy's wrist watch, he does not notice the fact that his clock is ticking slower, etc.

Again, it should be emphasized that the basis of Lorentz's (and Poincaré's, et al.) derivations makes LET significantly different from the Einstein-Minkowski theory of special relativity, notwithstanding the common mathematical feature, i.e., Lorentz transformations.

It should be noted that hardly any physicists doing special relativity do it in the context of the fixed ether concept. Virtually all physicists doing relativity operate with derivations based on the Einstein-Minkowski concept. I recently reviewed several of my old text books and reference books on special relativity and found all of them following the Einstein-Minkowski formalism (Bergman, Rindler, Weyl, Naber, Baruk "Classical Field Theory", etc.). Even all of the popularizations follow Einstein-Minkowski, with only an occasional brief mention of LET. That's why I kind of feel like LET is more of a red herring to be put on the table any time someone begins to infer that the 4-dimensional spacetime somehow relates to physical reality.

p.s. I notice that those on this forum who present LET as though it were on a par with Einstein-Minkowski never use the Lorentz ether concept with the implied force transmittal delays, etc., as a basis for explaining the phenomena associated with relativistic speeds. They either couch explanations in the context of Einstein-Minkowski spacetime or else just do Lorentz transformation numerical calculations, avoiding any reference to underlying foundational concepts of special relativity. Not even a comparison of alternative physical concepts is considered relevant.

Quote by PeterDonis If you mean 30 seconds according to the Earth frame, then yes. [quoted from the post above]

No, I mean 30 seconds in the missile frame. Because if it reaches the missile at say 1 sec or 25 seconds (let's say it travels continuously and there is no target), it can produce a scenario where earth can receive it before sending it out. Now does it mean 30 seconds on earth and 30 seconds on the missile are simultaneous to the aether frame?
how do you make them simultaneous in the aether frame when they are in relative motion? This is what I was trying to understand.

Blog Entries: 9 Recognitions: Gold Member Science Advisor

Quote by bobc2: However, due to Lorentzian processes affecting this guy (length contractions and time dilations) as well as affecting the guy's wrist watch, he does not notice the fact that his clock is ticking slower, etc.

It's more than that; the moving guy also thinks that the clock of the guy at rest relative to the ether is ticking slower than his. "Time dilation" in this sense is still symmetric. It's just that LET gives a privileged status to the guy at rest relative to the ether; his perception is the "true" one, and the perception of the moving guy, who thinks the guy at rest's clock is ticking slower, is an "illusion".

Quote by bobc2: It should be noted that hardly any physicists doing special relativity do it in the context of the fixed ether concept. Virtually all physicists doing relativity operate with derivations based on the Einstein-Minkowski concept. I recently reviewed several of my old text books and reference books on special relativity and found all of them following the Einstein-Minkowski formalism (Bergman, Rindler, Weyl, Naber, Baruk "Classical Field Theory", etc.).

The formalism is the same for LET as it is for what you are calling "Einstein-Minkowski". The only difference is the interpretation. It would be more correct to say that virtually all physicists doing relativity operate on the Einstein-Minkowski *interpretation*; they view spacetime as a 4-D object, not as a 3-D object that "changes with time". (I'm not sure "virtually all" is correct here either; the ADM formalism in GR does not take this view, and a considerable number of relativists have worked on that.)

Quote by bobc2: That's why I kind of feel like LET is more of a red herring to be put on the table any time someone begins to infer that the 4-dimensional spacetime somehow relates to physical reality.

I would agree that LET is not a popular interpretation. I would also agree that it is a less parsimonious interpretation, since it postulates that one inertial frame has a special status, but gives no way of telling which one it is, so the special status doesn't have any experimental consequences. However, the "block universe" interpretation, at least the strong version that has been argued here (and is also argued by certain physicists in popular books), is subject to similar criticisms, because the strong "block universe" interpretation is more than the simple claim that "4-dimensional spacetime somehow relates to physical reality". It is the claim that 4-dimensional spacetime *is* physical reality, period. That's a very strong claim, which also goes beyond the experimental evidence we have, not to mention that all of our current candidates for a theory of quantum gravity say it's false--they all view 4-dimensional spacetime as an emergent, approximate phenomenon, not as fundamental. (There are also issues involving determinism, which I've talked about before.)

Blog Entries: 9 Recognitions: Gold Member Science Advisor

Quote by Tomahoc: No, I mean 30 seconds in the missile frame.

That's not possible with any of the numbers you've given; a curve going from t = 30 seconds on the Earth's worldline to t' = 30 seconds on the missile's worldline would be timelike, not spacelike.
In fact it will be timelike for a missile traveling at any speed fairly close to that of light (off the top of my head I think all that's required is a gamma factor of 2, which requires a missile speed of 0.866c).

Quote by Tomahoc: Because if it reaches the missile at, say, 1 second or 25 seconds (let's say it travels continuously and has no target), it can produce a scenario where Earth can receive it before sending it out.

Not if the tachyon always travels at the same speed in the ether frame. It's easy to show this: just work the problem in the ether frame. There are two possible cases in that frame: Earth and missile both moving in the same direction, and Earth and missile moving in opposite directions. It's straightforward to show for each case that if the tachyon travels at a fixed speed w relative to the ether frame, the Earth can't receive it before it sends it. And since both events occur on the Earth's worldline, their time ordering is invariant; if the signal is received after it's sent in the ether frame, it's received after it's sent in any frame. Work it out.

Quote by Tomahoc: Now, does it mean 30 seconds on Earth and 30 seconds on the missile are simultaneous in the aether frame?

They can't possibly be if the missile is traveling at any significant fraction of the speed of light, because the two events will be timelike separated, not spacelike separated. Only spacelike separated events can be simultaneous in any frame.

Quote by Tomahoc: how do you make them simultaneous in the aether frame when they are in relative motion? This is what I was trying to understand.

I think you're going at it the wrong way around. Try what I suggested above: work the problem in the ether frame, treating the tachyon speed w as an unknown, but fixed in that frame. Work it out and you will find that the tachyon signal can't be received on Earth before it is sent for *any* tachyon speed w greater than 1, including speed w = infinity (i.e., the tachyon travels instantaneously in the ether frame).
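Since the suggestion above is to "work it out" in the ether frame, here is a rough numerical sketch of that exercise (my own illustration, not from the thread): it sets up Earth and missile worldlines with made-up speeds and positions, sends a tachyon out and back at a speed w that is fixed in the ether frame, and checks that the reply never arrives before the original signal was sent.

```python
# Rough numerical sketch (units with c = 1, all coordinates in the ether frame).
# The worldlines, speeds and starting separation below are made-up parameters.
import random

def round_trip_times(v_earth, v_missile, w, t_emit=1.0, head_start=10.0):
    """Tachyon leaves Earth at t_emit, travels at +w (ether frame) to the missile,
    and is immediately re-emitted at -w back toward Earth."""
    x_emit = v_earth * t_emit
    # outbound leg: x = x_emit + w*(t - t_emit) meets the missile x = head_start + v_missile*t
    t_hit = (head_start - x_emit + w * t_emit) / (w - v_missile)
    x_hit = head_start + v_missile * t_hit
    if t_hit <= t_emit or x_hit <= v_earth * t_hit:
        return None  # signal never catches the missile, or the missile is no longer ahead of Earth
    # return leg: x = x_hit - w*(t - t_hit) meets Earth's worldline x = v_earth*t
    t_back = (x_hit + w * t_hit) / (w + v_earth)
    return t_emit, t_hit, t_back

random.seed(0)
for _ in range(100000):
    v_e = random.uniform(-0.9, 0.9)      # Earth's speed relative to the ether
    v_m = random.uniform(-0.9, 0.9)      # missile's speed relative to the ether
    w = random.uniform(1.01, 1000.0)     # tachyon speed, fixed in the ether frame
    result = round_trip_times(v_e, v_m, w)
    if result is not None:
        t1, t2, t3 = result
        assert t3 >= t1, "reply arrived before the original signal was sent"
print("No round trip ever arrived before it was sent.")
```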
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 16, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9439360499382019, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?t=554180
Physics Forums

## Calculating Tension Force Problem

1. The problem statement, all variables and given/known data

A girl swings a 2.7 kg ball attached to a 72.0-cm string in a horizontal circle above her head, which makes one revolution in 0.98 s. What is the tension force, $$F_{T}$$, exerted on the string by the ball?

2. Relevant equations

Alright, I have never worked with tension force in my class, but we have worked with basic physics formulas, which I applied to this problem. Since mass and time are given, I assumed to use displacement and time (assuming 72 cm [or 0.72 m] is displacement instead of radius, since I'm unsure if it's radius or not...), which is: $$x = x_0 + v_0 t + (1/2) a t^2$$ For the second part, I applied Newton's second law to find the tension force. $${F} = {m}{a}$$

3. The attempt at a solution

$$x = x_0 + v_0 t + (1/2) a t^2$$ or (assuming $$v_0$$ is 0): 0.72 = (1/2) (a) (0.98)^2 And for acceleration I got $$1.49 \ m/s^2$$. Then, applying Newton's second law: $${F} = {m}{a}$$ or F = (2.7)(1.49) = 4.04 N. However, my only available answers are 3.8 N, 3*(10^3) N, 8*(10^1) N, and 92 N. I would have rounded 4.04 N to 3.8 N and moved on, but I have a strong feeling I did this entire problem wrong. If I did, could someone explain what I did wrong and guide me in the right direction? Thanks!

It would help if you drew a picture or diagram of this problem and labeled all the forces acting on the ball. I'm curious as to where you got the equation $x = x_0 + v_0 t + (1/2) a t^2$ from.

Quote by rubenhero: It would help if you drew a picture or diagram of this problem and labeled all the forces acting on the ball.

This is what I drew on my paper. My diagram is what originally confused me (it's a bad one). I assumed the force of gravity (mg) is acting on the ball, but I'm unsure if any other forces are acting upon it. I do know there is tension on the string by the ball, because that is what I'm solving for. I was unsure if there is any normal force, friction force, or applied force $$(F_N, F_f, F_p)$$, so I resorted to the formula for uniformly accelerated motion.

Quote by rubenhero: I'm curious as to where you got the equation $x = x_0 + v_0 t + (1/2) a t^2$ from.

I resorted to that formula for uniformly accelerated motion, since I have distance and time, and I'm trying to find acceleration so I can find the tension force. I could use centripetal acceleration to find the acceleration, but I don't have a velocity, and I'm instead given other variables (mass, time, distance [or radius, still not sure]). I'm just completely lost and confused at this point.

## Calculating Tension Force Problem

I agree the diagram you drew is confusing. How I read the problem was that the horizontal circle's radius is the length of the string. You can get the velocity since you have the period.

Mentor Blog Entries: 1

This is a circular motion / centripetal acceleration problem. Hints: Realize that the string must make an angle with the horizontal. Apply Newton's 2nd law.

hey Doc Al, are you referring to a tether ball diagram? because I got confused after your response.

Mentor Blog Entries: 1

Quote by rubenhero: hey Doc Al, are you referring to a tether ball diagram? because I got confused after your response.

Yes, it would look something like a tether ball diagram. Same basic idea.

Quote by rubenhero: I agree the diagram you drew is confusing.
How I read the problem was that the horizontal circle's radius is the length of the string. You can get the velocity since you have the period.

I'm assuming I use $$V = {x}/{t}$$ with x = 0.72 m?

V = x/t is for linear motion; to find the speed here you have to use the angular velocity, because it is circular/rotational motion.

Mentor Blog Entries: 1

Quote by Declension: I'm assuming I use $$V = {x}/{t}$$ with x = 0.72 m?

Two problems here: (1) The distance traveled in one revolution will equal the circumference of the circle. (2) 0.72 m is the length of the string, not the radius of the circular path. This is quite an involved problem. As I pointed out earlier, while the circle is horizontal, the string is not. You'll need a bit of trig to relate the length of the string to the radius, in addition to what I mentioned earlier. Start by drawing an accurate diagram.
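For what it's worth, here is a sketch of the calculation the hints above point toward, treating the setup as a conical pendulum (the ball circles a little below the hand, so the string makes some angle with the horizontal) and taking g ≈ 9.8 m/s². This is my own illustration, not the official solution, and the cancellation it relies on is worth checking by hand.

```python
import math

m = 2.7        # kg, mass of the ball
L = 0.72       # m, length of the string
period = 0.98  # s, time for one revolution
g = 9.8        # m/s^2

omega = 2 * math.pi / period            # angular speed, rad/s
# Let phi be the angle of the string below the horizontal.
# Vertical:    T*sin(phi) = m*g
# Horizontal:  T*cos(phi) = m*omega^2*r   with   r = L*cos(phi)
# The cos(phi) cancels in the horizontal equation, leaving T = m*omega^2*L.
T = m * omega ** 2 * L
phi = math.degrees(math.asin(g / (omega ** 2 * L)))   # requires g/(omega^2*L) <= 1
r = L * math.cos(math.radians(phi))

print(f"omega  = {omega:.2f} rad/s")
print(f"phi    = {phi:.1f} degrees below horizontal")
print(f"radius = {r:.2f} m")
print(f"T      = {T:.0f} N")     # roughly 8 x 10^1 N
```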
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 10, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9277032017707825, "perplexity_flag": "middle"}
http://scicomp.stackexchange.com/questions/2595/exploring-feasible-points-in-a-linearly-defined-space
# Exploring feasible points in a linearly defined space

What is the quickest way to find a point inside a linear feasible space? (Defined by the intersection of several hyperplanes and halfspaces.)

I want to be able to choose an initial point in the original convex space, discover a certain neighborhood around it (not convex, but it can be written as the union of some convex spaces defined by the intersection of several halfspaces) using a procedure that depends on the point I have, and then I need to choose another point in the original space but not in the neighborhood already explored. I need to keep doing that until the space is exhausted (it should be exhausted eventually). Basically, I have a convex space $S$, and I need to loop until $S=\emptyset$ while doing the following: choose $x\in S$, find $N$ around $x$, and then $S \leftarrow S - N$.

Any help is appreciated. -

## 2 Answers

Finding a single feasible point is traditionally done by phase 1 of the simplex algorithm. http://en.wikipedia.org/wiki/Simplex_algorithm#Finding_an_initial_canonical_tableau This means that you can do it by calling any routine for linear programming, just by setting the objective function to zero.

Covering the feasible domain by balls of fixed radius $r$ whose midpoints are feasible is much harder, though, as excluding a ball creates a nonconvex domain. There are algorithms for enumerating all vertices of a bounded polyhedron given by equations and inequalities (their number typically grows exponentially with the dimension, though). After having all vertices, the feasible set consists just of their convex combinations, so this can be used to sample inside. But unless the polyhedron is a simplex, different convex combinations may give the same point, so one would need to add a reject facility when generating a minimal covering. -

Thank you very much for your reply. As for the first step, this seems to work fine and my LP solver (MOSEK) is working fast with a zero objective function. For the second part, it is still confusing for me. The thing is, after choosing the first point, I discover a neighborhood around the point which can be described by a set of $n$ linear inequalities, say $Ax \leq b$. As you said, the rest of the space is not convex anymore. One way would be to solve $n$ linear programs, each of which has the initial space as a constraint and one of the inequalities of $Ax\leq b$ inverted. Is there a faster way? – Fawaz Jun 25 '12 at 15:54

@Fawaz: Your way for the second part is wrong. If you reverse one of your inequalities, you get a completely different problem. (Look at the simple set of inequalities $x\ge -1$ and $x\le 1$ to see that.) You'd need to add for each point $x_i$ discovered an inequality $\|x-x_i\|_2^2\ge \delta$. MOSEK will probably cope with the first few, but after a while it will be more and more difficult - and slower - to fill the gaps, and MOSEK will probably return infeasible although there are still feasible points not covered. The only reliable method I know for covering is the one I described. – Arnold Neumaier Jun 25 '12 at 16:06

There is a software package called PORTA that can enumerate all the points that are feasible in a set of linear inequalities and equalities. http://www.iwr.uni-heidelberg.de/groups/comopt/software/PORTA/ enjoy it! -

Welcome to SciComp! Could you elaborate a bit on this? From the linked webpage, it looks like PORTA only enumerates the integral feasible points, of which there might be none even if the feasible set is nonempty. – Christian Clason Apr 8 at 16:51
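To make the phase-1 suggestion in the first answer concrete, here is a minimal sketch using SciPy's LP solver with a zero objective; the constraint matrices below are placeholders I made up, not data from the question.

```python
# Minimal sketch: find *some* feasible point of {x : A_ub x <= b_ub, A_eq x == b_eq}
# by solving an LP with a zero objective (placeholder constraint data).
import numpy as np
from scipy.optimize import linprog

A_ub = np.array([[1.0, 1.0], [-1.0, 2.0]])
b_ub = np.array([4.0, 2.0])
A_eq = np.array([[1.0, -1.0]])
b_eq = np.array([0.5])

c = np.zeros(A_ub.shape[1])          # zero objective: we only care about feasibility
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] * A_ub.shape[1], method="highs")

if res.success:
    print("feasible point:", res.x)
else:
    print("the region defined by the constraints is empty")
```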
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.936953604221344, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/23125/trying-to-derive-an-inverse-trigonometric-function
# Trying to derive an inverse trigonometric function I'd like to know how to derive these functions (I know the answers, I want to know how to get there) \begin{align*} f(x) &= \arcsin\left(\frac{x}{3}\right)\\ f(x) &= \arccos(2x+1)\\ f(x) &= \arctan(x^2)\\ f(x) &= \mathrm{arcsec}(x^7)\\ \end{align*} etc. - You mean differentiate them? Use the chain rule. – Qiaochu Yuan Feb 21 '11 at 19:38 1 Are you asking how to get, for example, the derivative of the arcsine function in general? Or are you asking how to get the derivative of, for example, $f(x) = \arcsin\left(\frac{x}{3}\right)$ already knowing the derivative of the arcsine function in general? – Isaac Feb 21 '11 at 20:01 @Isaac I'm trying to get the general way of doing it, and then applying it to specific cases, like these mentioned. Thanks. – Pacane Feb 21 '11 at 20:08 1 The general way is to know the derivatives of the inverse trigonometric functions, and then apply the Chain Rule. You could try to derive the derivative of, say $f(x)=\mathrm{arcsec}(x^7)$ by finding the derivative of $\sec(f(x))$, applying the Chain Rule, and then solving for $f'(x)$, but in the long run it is probably going to be less work to remember the derivatives of the inverse trig functions than to derive them from scratch every time you need them. – Arturo Magidin Feb 21 '11 at 20:16 ## 4 Answers The formulas you need are the derivatives of $\arcsin(u)$, $\arccos(u)$, $\arctan(u)$, $\mathrm{arcsec}(u)$, and presumably $\mathrm{arccot}(u)$ and $\mathrm{arccsc}(u)$. Once you know these, you can apply the Chain Rule. And how do you find these derivatives? Well, the Inverse Function Theorem is your first friend. If $y = g(x)$ has an inverse, is differentiable at $x=a$, $g(a)=b$, and $g'(a)\neq 0$, then $(g^{-1})'(b) = \frac{1}{g'(a)}$. So, consider $y=\sin(\theta)$. Since the derivative of $\sin(\theta)$ is $\cos(\theta)$, you have that $$\frac{d}{du}\arcsin(u) = \frac{1}{\cos(\arcsin(u))}.$$ But... what is $\cos(\arcsin(u))$? Suppose $\arcsin(u)=\theta$. That means that $\sin(\theta) = u$, and since $\sin^2(\theta)+\cos^2(\theta)=1$, then $\cos^2(\theta) = 1 - \sin^2(\theta) = 1-u^2$. Therefore, $|\cos(\theta)|=\sqrt{\cos^2\theta} = \sqrt{1-u^2}$; and because in order to talk about the inverse of $\sin \theta$ we must have $-\frac{\pi}{2}\leq \theta\leq \frac{\pi}{2}$, then $\cos\theta\geq 0$, so $|\cos\theta|=\cos\theta$. That is, $\cos\theta = \sqrt{1-u^2}$. So, plugging into the formula for the derivative of $\arcsin(u)$, we have: $$\frac{d}{du}\arcsin(u) = \frac{1}{\cos(\arcsin u)} = \frac{1}{\sqrt{1-u^2}}.$$ Performing the same kind of analysis for $\arccos(u)$, we get $$\frac{d}{du}\arccos(u) = \frac{1}{-\sin(\arccos u)} = -\frac{1}{\sqrt{1-u^2}}.$$ For $\arctan u$, using the fact that $(\tan\theta)' = \sec^2\theta$, we have $$\frac{d}{du}\arctan u = \frac{1}{\sec^2(\arctan u)}.$$ Now, if $\arctan u = \theta$, then $\tan(\theta) = u$. 
Using the fact that $\tan^2\theta + 1 = \sec^2\theta$, we get that $\sec^2(\arctan u) = \sec^2(\theta) = 1 + \tan^2(\theta) = 1+u^2$, so $$\frac{d}{du}\arctan u = \frac{1}{\sec^2(\arctan u)} = \frac{1}{1+u^2}.$$

For $\mathrm{arccot}(u)$, the same analysis works, provided you remember that $(\cot\theta)' = -\csc^2\theta$ and that $1 + \cot^2\theta = \csc^2\theta$, so $$\frac{d}{du}\mathrm{arccot}(u) = \frac{1}{-\csc^2(\mathrm{arccot}(u))} = -\frac{1}{1+u^2}.$$

With $\mathrm{arcsec}(u)$, we have $(\sec\theta)' = \sec\theta\tan\theta$, so $$\frac{d}{du}\mathrm{arcsec}(u) = \frac{1}{\sec(\mathrm{arcsec}(u))\tan(\mathrm{arcsec}(u))}.$$ Here, $\sec(\mathrm{arcsec}(u)) = u$; if $\mathrm{arcsec}(u)=\theta$, then $\sec\theta = u$, and from $\tan^2\theta + 1 = \sec^2\theta$, we get $|\tan\theta| = \sqrt{u^2 - 1}$. You get: $$\frac{d}{du}\mathrm{arcsec}(u) = \frac{1}{\sec(\mathrm{arcsec}(u))\tan(\mathrm{arcsec}(u))} = \frac{1}{u\sqrt{u^2-1}}.$$

And finally, using the fact that $(\csc\theta)' = -\csc\theta\cot\theta$, you get $$\frac{d}{du}\mathrm{arccsc}(u) = \frac{1}{-\csc(\mathrm{arccsc}(u))\cot(\mathrm{arccsc}(u))} = -\frac{1}{u\sqrt{u^2-1}}.$$

Once you have these formulas, the Chain Rule takes care of the rest. So you have: \begin{align*} \frac{d}{du}\arcsin(u) &= \frac{1}{\sqrt{1-u^2}}, &\qquad \frac{d}{du}\arccos u &= -\frac{1}{\sqrt{1-u^2}},\\ \frac{d}{du}\arctan(u) &=\frac{1}{1+u^2}, &\frac{d}{du}\mathrm{arccot}(u) &= -\frac{1}{1+u^2},\\ \frac{d}{du}\mathrm{arcsec}(u) &=\frac{1}{u\sqrt{u^2-1}}, &\frac{d}{du}\mathrm{arccsc}(u) &= - \frac{1}{u\sqrt{u^2-1}}. \end{align*} -

Thanks for your very descriptive answer. One last thing though, I think I am not using the chain rule correctly. Let's say I take the example of $f(x) = \arcsin\frac{x}{3}$; now I know that $\frac{d}{dx} \arcsin(x) = \frac{1}{\sqrt{1-x^2}}$ and I know that $\frac{d}{dx} \frac{x}{3} = \frac{1}{3}$. I was going to multiply both, but apparently it doesn't work like that. Got any tip for me? – Pacane Feb 21 '11 at 20:38

@Pacane: Looks like your real problem is that you don't know how to apply the Chain Rule. The Chain Rule says: $$\frac{d}{dx}f(g(x)) = f'(g(x))g'(x).$$ So you need to evaluate the derivative of $\arcsin(u)$ at $u=\frac{x}{3}$, then multiply by the derivative of $\frac{x}{3}$. That is: $$\frac{d}{dx}\arcsin\frac{x}{3} = \left(\frac{1}{\sqrt{1 -\left(\frac{x}{3}\right)^2}}\right)\left(\frac{x}{3}\right)' = \left(\frac{1}{\sqrt{1 - \frac{x^2}{9}}}\right)\frac{1}{3}.$$ – Arturo Magidin Feb 21 '11 at 20:46

Thanks for your help. Really appreciated. – Pacane Feb 21 '11 at 20:54

I'll walk you through a derivation (how to get there) of the derivative of the arcsine function. The same idea can be applied to the other inverse trigonometric functions. If $y=\arcsin x$, then $\sin y=\sin(\arcsin x)=x$. $$\sin y=x$$ Take the derivative of both sides with respect to $x$ (remember the chain rule!). $$\cos y\cdot\frac{dy}{dx}=1$$ Isolate $\frac{dy}{dx}$, which is what we're trying to find. $$\frac{dy}{dx}=\frac{1}{\cos y}$$ Since we want $\frac{dy}{dx}$ in terms of $x$, substitute for $y$. $$\frac{dy}{dx}=\frac{1}{\cos (\arcsin x)}$$ $\cos(\arcsin x)=\sqrt{1-x^2}$ (see this answer of mine for a technique for simplifying a trig function of an inverse trig function), so $$\frac{dy}{dx}=\frac{1}{\sqrt{1-x^2}}.$$ -

When you say "derive these functions" do you mean "take the derivative of these functions?" If so, the chain rule is your friend.
So for $f(x)=\arctan(x^2)$, $f'(x)=\frac{1}{1+(x^2)^2}\cdot 2x$. -

How did you get the $\frac{1}{1+(x^2)^2}$ part? – Pacane Feb 21 '11 at 19:43

@Pacane: From $\frac{d}{dx}\arctan(x) = \frac{1}{1+x^2}$. – Arturo Magidin Feb 21 '11 at 19:45

If $y=\arcsin(x)$ then $\sin(y)=x$, and differentiating with respect to $x$ we get $\cos(y)y'=1$. So, $y'=\frac{1}{\cos(\arcsin(x))}$. Draw a triangle (with sides 1, $x$, $\sqrt{1-x^2}$) to see that $\cos(\arcsin(x))=\sqrt{1-x^2}$. Hence $\frac{d}{dx}\arcsin(x)=\frac{1}{\sqrt{1-x^2}}$. This kind of reasoning works for various inverse functions (using implicit differentiation). See any calc textbook under implicit differentiation. -
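If you want to double-check the chain-rule results above, a quick symbolic computation does it; this SymPy snippet is just a verification aid I added, not part of the original answers.

```python
# Symbolic check of the derivatives of the four functions from the question.
import sympy as sp

x = sp.symbols('x')
for f in (sp.asin(x / 3), sp.acos(2 * x + 1), sp.atan(x ** 2), sp.asec(x ** 7)):
    print(f, "->", sp.simplify(sp.diff(f, x)))

# Spot check of the worked example d/dx arcsin(x/3) at x = 1:
print(float(sp.diff(sp.asin(x / 3), x).subs(x, 1)))   # ~0.3536
print(1 / (9 - 1) ** 0.5)                             # 1/sqrt(9 - x^2) at x = 1, same value
```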
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 73, "mathjax_display_tex": 15, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8767760396003723, "perplexity_flag": "head"}
http://en.wikipedia.org/wiki/Median_absolute_deviation
# Median absolute deviation In statistics, the median absolute deviation (MAD) is a robust measure of the variability of a univariate sample of quantitative data. It can also refer to the population parameter that is estimated by the MAD calculated from a sample. For a univariate data set X1, X2, ..., Xn, the MAD is defined as the median of the absolute deviations from the data's median: $\operatorname{MAD} = \operatorname{median}_{i}\left(\ \left| X_{i} - \operatorname{median}_{j} (X_{j}) \right|\ \right), \,$ that is, starting with the residuals (deviations) from the data's median, the MAD is the median of their absolute values. ## Example Consider the data (1, 1, 2, 2, 4, 6, 9). It has a median value of 2. The absolute deviations about 2 are (1, 1, 0, 0, 2, 4, 7) which in turn have a median value of 1 (because the sorted absolute deviations are 0, 0, 1, 1, 2, 4, 7). So the median absolute deviation for this data is 1. ## Uses The median absolute deviation is a measure of statistical dispersion. Moreover, the MAD is a robust statistic, being more resilient to outliers in a data set than the standard deviation. In the standard deviation, the distances from the mean are squared, so large deviations are weighted more heavily, and thus outliers can heavily influence it. In the MAD, the deviations of a small number of outliers are irrelevant. Because the MAD is a more robust estimator of scale than the sample variance or standard deviation, it works better with distributions without a mean or variance, such as the Cauchy distribution. ## Relation to standard deviation In order to use the MAD as a consistent estimator for the estimation of the standard deviation σ, one takes $\hat{\sigma}=K\cdot \operatorname{MAD}, \,$ where K is a constant scale factor, which depends on the distribution. For normally distributed data K is taken to be 1/Φ−1(3/4) $\approx$ 1.4826, where Φ−1 is the inverse of the cumulative distribution function for the standard normal distribution, i.e., the quantile function. This is because the MAD is given by: $\frac 12 =P(|X-\mu|\le \operatorname{MAD})=P\left(\left|\frac{X-\mu}{\sigma}\right|\le \frac {\operatorname{MAD}}\sigma\right)=P\left(|Z|\le \frac {\operatorname{MAD}}\sigma\right).$ Therefore we must have that Φ(MAD/σ) − Φ(−MAD/σ) = 1/2. Since Φ(−MAD/σ) = 1 − Φ(MAD/σ) we have that MAD/σ = Φ−1(3/4) from which we obtain the scale factor K = 1/Φ−1(3/4). Hence $\sigma \approx 1.4826\ \operatorname{MAD}. \,$ In other words, the expectation of 1.4826 times the MAD for large samples of normally distributed Xi is approximately equal to the population standard deviation. The factor $1.4826\ \approx 1/\left(\Phi^{-1}(3/4)\right)$ results from the reciprocal of the normal inverse cumulative distribution function, $\Phi^{-1}(P)$, evaluated at probability $P=3/4$.[1] ## The population MAD The population MAD is defined analogously to the sample MAD, but is based on the complete distribution rather than on a sample. For a symmetric distribution with zero mean, the population MAD is the 75th percentile of the distribution. Unlike the variance, which may be infinite or undefined, the population MAD is always a finite number. For example, the standard Cauchy distribution has undefined variance, but its MAD is 1. The earliest known mention of the concept of the MAD occurred in 1816, in a paper by Carl Friedrich Gauss on the determination of the accuracy of numerical observations.[2][3] ## Notes 1. Gauss, Carl Friedrich (1816). "Bestimmung der Genauigkeit der Beobachtungen". 
Zeitschrift für Astronomie und verwandte Wissenschaften 1: 187–197. 2. Walker, Helen (1931). Studies in the History of the Statistical Method. Baltimore, MD: Williams & Wilkins Co. pp. 24–25. ## References • Hoaglin, David C.; Frederick Mosteller and John W. Tukey (1983). Understanding Robust and Exploratory Data Analysis. John Wiley & Sons. pp. 404–414. ISBN 0-471-09777-2. • Russell, Roberta S.; Bernard W. Taylor III. (2006). Operations Management. John Wiley & Sons. pp. 497–498. ISBN 0-471-69209-3. • Venables, W.N.; B.D. Ripley (1999). Modern Applied Statistics with S-PLUS. Springer. p. 128. ISBN 0-387-98825-4.
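As a small numerical companion to the definitions in the article above (my addition, using NumPy), the snippet below reproduces the worked example and illustrates the 1.4826 scale factor on simulated normal data.

```python
import numpy as np

# The data set from the Example section: median 2, MAD 1.
x = np.array([1, 1, 2, 2, 4, 6, 9])
mad = np.median(np.abs(x - np.median(x)))
print("MAD of the example data:", mad)           # 1.0

# For (approximately) normal data, 1.4826 * MAD estimates the standard deviation.
rng = np.random.default_rng(0)
sample = rng.normal(loc=0.0, scale=2.0, size=100_000)
mad_n = np.median(np.abs(sample - np.median(sample)))
print("1.4826 * MAD :", 1.4826 * mad_n)          # close to the true sigma = 2
print("sample std   :", sample.std())
```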
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 8, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8068788647651672, "perplexity_flag": "middle"}
http://amathew.wordpress.com/2009/11/27/proof-of-the-fixed-point-theorem/
# Climbing Mount Bourbaki

Thoughts on mathematics

November 27, 2009

## Proof of the fixed point theorem

Posted by Akhil Mathew under differential geometry, MaBloWriMo | Tags: Cartan fixed point theorem, compact Lie groups, cosine inequality, fixed points, negative curvature |

Today we will prove the fixed point theorem, which I restate here for convenience:

Theorem 1 (Elie Cartan) Let ${K}$ be a compact Lie group acting by isometries on a simply connected, complete Riemannian manifold ${M}$ of negative curvature. Then there is a common fixed point of all ${k \in K}$.

There is a Haar measure on ${K}$. In fact, we could even construct this by picking a nonzero alternating ${n}$-tensor (where ${n=\dim K}$) at ${T_e(K)}$, and choosing the corresponding ${K}$-invariant ${n}$-form on ${K}$. This yields a functional ${C(K) \rightarrow \mathbb{R}}$, which we can assume positive by choosing the orientation appropriately. This yields the Haar measure ${d \mu}$ by the Riesz representation theorem.

Now define ${J(q) := \int_K d^2(q,kp) d \mu(k).}$ This is a continuous function ${M \rightarrow \mathbb{R}}$ which attains a minimum, because ${J(q)>J(p)}$ for ${q}$ outside some compact set containing ${p}$. Let the minimum occur at ${q_0}$. I claim that the minimum is unique, which will imply that it is a fixed point of ${K}$.

It can be checked that ${J}$ is continuously differentiable; indeed, let ${q_t}$ be a curve. Then ${d^2(q_t,kp)}$ can be computed as in yesterday's post when ${kp \neq q_t}$; when they are equal, it is still differentiable with zero derivative because of the ${d^2}$. (I am sketching things here because I don't currently want to dive into the technical details; see Helgason's book for them.)

So now take ${q_t}$ to be a geodesic joining the minimal point ${q_0}$ to some other point ${q_1}$. Now

$\displaystyle \frac{d}{dt} J(q_t)|_{t=0} = \int_K \frac{d}{dt} d^2(q_t, kp) |_{t=0} d\mu(k) = 0.$

Then we get

$\displaystyle \int_K d(q_0,kp) \cos \alpha d\mu(k) = 0$

where ${\alpha}$ is an appropriate angle as in yesterday's post. When ${q_0=kp}$, this ${\alpha}$ is not well-defined, but ${d(q_0,kp)=0}$, so it is ok. Now

$\displaystyle \int_K d^2(q_1, kp) d\mu \geq \int_K \left( d^2(q_0, kp) + d^2(q_0,q_1) - 2 d(q_0, kp) d(q_0,q_1) \cos \alpha \right) d \mu(k) .$

This is because of the cosine inequality. But the cosine term integrates to zero, so the right-hand side equals ${J(q_0) + d^2(q_0,q_1)}$, which is strictly greater than ${J(q_0)}$ since ${q_1 \neq q_0}$. In particular, since ${q_1}$ was arbitrary, ${q_0}$ is the unique global minimum of ${J}$—and it is thus a fixed point.
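(Added note, not part of the original post.) One step left implicit is why uniqueness of the minimum forces ${q_0}$ to be fixed: since each ${k' \in K}$ acts by isometries and the Haar measure is invariant,

$\displaystyle J(k'q_0) = \int_K d^2(k'q_0, kp)\, d\mu(k) = \int_K d^2(q_0, k'^{-1}kp)\, d\mu(k) = \int_K d^2(q_0, kp)\, d\mu(k) = J(q_0),$

so ${k'q_0}$ is also a minimum point of ${J}$, and by uniqueness ${k'q_0 = q_0}$ for every ${k' \in K}$.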
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 38, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.933969259262085, "perplexity_flag": "head"}
http://gmatclub.com/wiki/Probability
# Probability

### From GMATClub

This page is a crash-course to get you ready to crack probability questions on GMAT.

## Introduction

How probable is it to get probability questions on GMAT? Probability questions are becoming increasingly common. They tend to be bundled among the difficult questions, so high scorers will commonly encounter 1, 2, or 3 of them. If you are a low scorer and are pressed for time, consider skipping this material, as you are not likely to encounter difficult questions on the adaptive test.

Do I have to be a genius to solve probability questions? Absolutely not. Neither this brief course nor GMAT require any math knowledge beyond what you learned in your high school. Just put in some effort and you will crack it. Do not be influenced by the rumors that say probability is difficult - those come from a conspiracy among the instructors who want to secure their wages.

What is probability? Probability is a measure of how likely an event is to happen. It is measured in fractions from 0 to 1 (0 is impossible, 1 is unavoidable or certain). Sometimes it is denoted in percentages, again from 0% to 100%.

What is an event and an outcome? An event is anything that happens. In probability theory we speak of events having outcomes or results. A coin flip (an event) has two possible outcomes - heads and tails. A die toss has six possible outcomes. When a coin is flipped (an event is tested), one of the outcomes is obtained. Either heads or tails.

How is probability used? The probability of event A is commonly denoted $p(A)$. So, if Heads is H and Tails is T, $p(H) = 0.5$ means that you have 1 chance in 2 to get heads in a coin flip. This also means that if you flip the coin 100 times, you will get heads about $100*0.5=50$ times. Not exactly 50. You may get 49, or 63, or even no heads. But 50 is the most likely number. This works because we assume that the coin is fair, that coin flips are independent, and that we have accounted for all possible outcomes, heads and tails. This is always assumed on GMAT unless otherwise stated. But be sure to check these assumptions when using probability theory in business.

## Simple Probability: The F/T Rule

The probability of an event is the number of favorable outcomes divided by the total number of possible outcomes, when the outcomes are equally likely. This is known as the F/T Rule, and 90% of the problems are solved using just this tool. No kidding. Here are some examples to see how it works:

{{#x:box| What is the probability that a card drawn at random from a deck of cards will be an ace? Solution. In this case there are four favorable outcomes: aces of spades, hearts, diamonds and clubs. Since each of the 52 cards in the deck represents a possible outcome, there are 52 possible outcomes. Therefore, the probability is $\frac{4}{52}=\frac{1}{13}$. }}

The same principle can be applied to the problem of determining the probability of obtaining different totals from a pair of dice.

{{#x:box| Two fair six-sided dice are rolled; what is the probability of having 5 as the sum of the numbers? Solution. There are 36 possible outcomes when a pair of dice is thrown (six outcomes for the first die times six outcomes for the second one). Since four of the outcomes have a total of 5, {(1,4), (4,1), (2,3), (3,2)}, the probability of the two numbers adding up to 5 is $\frac{4}{36} = \frac{1}{9}$. }}

## Probability of Multiple Events

For questions involving single events, the F/T rule is sufficient. In fact, it is often sufficient for all other cases too.
But, for questions involving multiple events, some other tools may be more appropriate. Even when the problem can be solved with F/T, these tools still may provide a more elegant solution. ### NOT If you know that the probability of an event (or one of the outcomes) is $p$, the probability of this event NOT happening (or the probability of it NOT having this given outcome), is $1-p$. ```p(not A) + p(A) = 1 ``` ### AND If two (or more) independent events are occurring, and you know the probability of each, the probability of BOTH (or ALL) of them occurring together (event A and event B and event C etc) is a multiplication of their probabilities. ```p(A and B) = p(A) * p(B) p(A and B and C ... and Z) = p(A) * p(B) * p(C) * ... * p(Z) ``` Suppose Mark will only be happy today if he gets an email and wins the lottery. He has a 90% chance to get an email and 0.1% chance to win the lottery. What are Mark's chances for happiness? Since email and lottery are independent (getting an email doesn't change my lottery chances, and vice versa), we can use the AND tool: ```p(email AND lottery) = p(email) * p(lottery) = 90% * 0.1% = 0.09% ``` So Mark has 9 chances in 10,000. Not bad... ### OR If two (or more) incompatible events are occurring, the probability of EITHER of them occurring (event A or event B or event C etc.) is a sum of their probabilities. ```p(A or B) = p(A) + p(B) p(A or B or C ... or Z) = p(A) + p(B) + ... + p(Z) ``` Incompatible means that they can't happen together, i.e. p(A and B) = 0. In case of two compatible events, the OR tool looks a bit more complicated: ```p(A or B) = p(A) + p(B) - p(A and B) ``` If we know that A and B are independent, we can apply AND tool to rewrite: ```p(A or B) = p(A) + p(B) - p(A) * p(B) ``` Suppose Mark will now be happy in both cases - either getting an email or winning the lottery. What are his chances to happiness now? ```p(email OR lottery) = p(email) + p(lottery) - p(email) * p(lottery) ``` $90% + 0.1% - 0.09% = 90.01%$ Mark's chances are 9,001 in 10,000 now. ### Expressions/Brackets When you're being asked for something complex, try reducing it to events and outcomes, and writing a formula. Use brackets to denote complex events, such as (A and B), or (A and (B or C)), etc. It is common to use AND as if it is multiplication and OR as if it is addition in the order preference, i.e. (A and B or C) = ((A and B) or C), but (A and (B or C)) <> (A and B or C). When you figure out the formula, it'll be easy to reduce it to simple arithmetic operations by using NOT, AND, and OR tools. ### Elimination tricks Given that $0 \le p(A) \le 1$, you get the following rules: ```p(A and B) $\le$ p(A) p(A or B) $\ge$ p(A) p(A and B) $\le$ p(A or B) ``` Thinking of these rules is often an excellent strategy for eliminating certain answer choices. {{#x:box| ```If a fair coin is tossed twice, what is the probability that on the first toss the coin lands heads and on the second toss the coin lands tails? ``` 1. $\frac{1}{6}$ 2. $\frac{1}{3}$ 3. $\frac{1}{4}$ 4. $\frac{1}{2}$ 5. 1 Solution. Suppose first toss is A, second is B. We know that p(A_heads) = 50% and that p(B_tails) = 50%. Also, A and B are independent. So, p(A_heads and B_tails) = p(A_heads) * p(B_tails) = 50% * 50% = 25% = $\frac{1}{4}$. Answer is C. }} {{#x:box| If a fair coin is tossed twice what is the probability that it will land either heads both times or tails both times? 1. $\frac{1}{8}$ 2. $\frac{1}{6}$ 3. $\frac{1}{4}$ 4. $\frac{1}{2}$ 5. 1 Solution. Let first toss be A, second B. 
p(Ah) = p(At) = p(Bh) = p(Bt) = $\frac{1}{2}$

p(Ah and Bh) = p(Ah) * p(Bh) = $\frac{1}{4}$

p(At and Bt) = p(At) * p(Bt) = $\frac{1}{4}$

p((Ah and Bh) or (At and Bt)) = p(Ah and Bh) + p(At and Bt) = $\frac{1}{4} + \frac{1}{4} = \frac{1}{2}$ }}

Note that the AND rule works because A and B are independent, and the OR rule works because (Ah and Bh) and (At and Bt) are incompatible. Alternatively, you may use the F/T rule to solve this. Enumerate the outcomes as (HH, HT, TH, TT). Favorable are HH and TT. So, p = $\frac{2}{4} = \frac{1}{2}$. Although in this case the F/T rule works more gracefully, the AND/OR approach is still helpful - you can learn it on easy examples such as this one to prepare for the more difficult ones.

{{#x:box| A bowman hits his target in $\frac{1}{2}$ of his shots. What is the probability of him missing the target at least once in three shots? Solution. An optimal way to solve this is to note that (missing the target at least once) = 1 - (hitting it every time). So, p(hitting it every time) = p(shot1_hit and shot2_hit and shot3_hit) = p(shot1_hit) * p(shot2_hit) * p(shot3_hit) = $\frac{1}{2} * \frac{1}{2} * \frac{1}{2} = \frac{1}{8}$; p(missing at least once) = 1 - p(hitting it every time) = $1 - \frac{1}{8} = \frac{7}{8}$. }}

Alternatively, use the F/T rule. The T are HHH, HHM, HMH, HMM, MHH, MHM, MMH, MMM. T = 8. The F are HHM, HMH, HMM, MHH, MHM, MMH, MMM. F = 7. In cases like this it is evident that the F/T rule soon becomes too hard to apply.

## Event Types and Sets Analogy

### Compatible vs. Incompatible (Mutually exclusive) Events

Sometimes you have to distinguish compatible and mutually exclusive events. Mutually exclusive events are those that can't happen together. Heads and tails are mutually exclusive events. Formally, two events are mutually exclusive if p(A and B) = 0. Otherwise, they are compatible. Note that mutually exclusive events (with nonzero probabilities) are never independent: if one of them happens, the other cannot.

### Dependent vs. Independent Events

Most of the events that we have discussed so far are independent events. By independent we mean that the first event does not affect the probability of the second event. Coin tosses are independent. They cannot affect each other's probabilities; the probability of each toss is independent of a previous toss and will always be $\frac{1}{2}$. Separate drawings from a deck of cards are independent events if you put the cards back.

An example of a dependent event, one in which the probability of the second event is affected by the first, is drawing a card from a deck but not returning it. By not returning the card, you've decreased the number of cards in the deck by 1, and you've decreased the number of whatever kind of card you drew. If you draw the ace of spades, there is one fewer ace and one fewer spade. This fact affects the F in the F/T rule.

What to do if you encounter dependent events? If possible, try to apply the F/T rule to the composite event of the two. In the cards example, you may consider counting all 2-card combinations you may draw (T), and then counting those that fit (F). This will be discussed in detail later. But sometimes the events can't be reduced to outcomes that can be counted. In these cases, use the sets analogy.

### Sets Analogy

Remember the familiar problem type about students attending three language classes, say, French, German, and Chinese? There you had to calculate the number of students attending one of the classes, or the number of students attending both French and German, but not Chinese, etc.?
The greatest way to solve such problems is to draw intersecting circles representing the three sets of students, and then write their numbers in the regions and try to find the answer. What does this have to do with probability, one might wonder. But this is precisely the way to solve probability problems with dependent events. The charts you may have drawn for simple sets problems are called Venn diagrams in probability theory. Perhaps to scare you away.

The logic is simple: each event is a language class, and each chance is a student in that class. And the probability of the event is the number of students (chances) attending it divided by the total number of students. Where the classes intersect is where two events happen at once. Mutually exclusive events do not intersect. Finally, independent events intersect in such an interesting way that, supposing French and German classes represent two independent events, the proportion of French students in the German class is the same as the proportion of French students in the school as a whole (100 students, 40 study German, 50 study French, and 20 study both: $\frac{20}{40} = \frac{50}{100}$).

### Conditional Probability

Conditional probability is a simple way to denote the proportions you understand with the sets analogy. Simply put, p(A/B) is the probability of event A happening given that event B has already happened, or the number of students attending both A and B classes divided by the number of students attending class B. So, for any two events, including dependent events, this statement holds:

```p(A and B) = p(A) * p(B/A) = p(B) * p(A/B)
```

This statement, however scary, is self-evident. Look at it. It says that to find the number of students studying French and German you have to either multiply the number of those who study French by the proportion of German scholars in the French class (p(B/A)), or multiply the number of German students by the proportion of French students in the German class (p(A/B)). But that's self-evident, isn't it? So it is with events. Independent events may, therefore, be defined as those for which p(B/A) = p(B) and p(A/B) = p(A).

{{#x:box| What is the probability that a card selected from a deck will be either an ace or a spade? 1. $\frac{2}{52}$ 2. $\frac{2}{13}$ 3. $\frac{7}{26}$ 4. $\frac{4}{13}$ 5. $\frac{17}{52}$ Solution. Let A stand for a card being an ace, and S for it being a spade. We have to find p(A or S). Are A and S mutually exclusive? No. Are they independent? Why, yes, because spades have as many aces as any other suit. Then, p(A or S) = p(A) + p(S) - p(A) * p(S) With simple F/T we get: p(A) = 4/52 = 1/13 p(S) = 13/52 = 1/4 So, p(A or S) = 1/13 + 1/4 - 1/52 = 16/52 = 4/13 }}

The sets analogy can help you visualize the formula. Draw two intersecting circles - one for aces, the other for spades. To get the area (probability) of the figure formed by these two circles together (all chances that are either aces or spades), you add the areas of aces and spades and subtract the intersecting area, in order not to count it twice. What we subtract is the ace of spades that was counted twice.

Another way to think about the question is to just count aces and spades; that is, use the F/T rule. There are 13 spades in a deck and 3 aces other than the ace of spades already included in the 13 spades. Therefore, there are 16 desired outcomes out of a total of 52 possible outcomes, or $\frac{16}{52} = \frac{4}{13}$.
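As a quick sanity check of the ace-or-spade example (this enumeration is an addition to the guide, not GMAT material), you can count the favorable cards directly:

```python
# Brute-force the ace-or-spade probability by enumerating a 52-card deck.
from fractions import Fraction
from itertools import product

ranks = ['A', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K']
suits = ['spades', 'hearts', 'diamonds', 'clubs']
deck = list(product(ranks, suits))

favorable = [card for card in deck if card[0] == 'A' or card[1] == 'spades']
print(Fraction(len(favorable), len(deck)))   # 4/13, matching the F/T count of 16/52
```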
{{#x:box| If someone draws a card at random from a deck and then, without replacing the first card, draws a second card, what is the probability that both cards will be aces? Solution. Event A is that the first card is an ace. Since 4 of the 52 cards are aces, P(A) = $\frac{4}{52} = \frac{1}{13}$. Given that the first card is an ace, what is the probability that the second card will be an ace as well? Of the 51 remaining cards, 3 are aces. Therefore, p(B/A) = $\frac{3}{51} = \frac{1}{17}$, and therefore: p(A and B) = p(A) * p(B/A) = $\frac{1}{13} * \frac{1}{17} = \frac{1}{221}$ }}

{{#x:box| If there are 30 red and blue marbles in a jar, and the ratio of red to blue marbles is 2:3, what is the probability that, drawing twice, you will select two red marbles if you return the marbles after each draw? Solution. So, there are 12 red and 18 blue marbles. We are asked to draw twice and return the marble after each draw. Therefore, the first draw does not affect the probability of the second draw. We return the marble after the draw, and therefore, we return the situation to the initial conditions before the second draw. Nothing is altered in between draws; therefore, the events are independent. p(drawing a red marble) would be $\frac{12}{30} = \frac{2}{5}$. The same is true for the second draw. Then p(First_Red and Second_Red) = p(First_Red) * p(Second_Red) = $\frac{2}{5} * \frac{2}{5} = \frac{4}{25}$. }}

{{#x:box| Now consider the same question with the condition that you do not return the marbles after each draw. Solution. The probability of drawing a red marble on the first draw remains the same, $\frac{12}{30} = \frac{2}{5}$. The second draw, however, is different. The initial conditions have been altered by the first draw. We now have only 29 marbles in the jar and only 11 red. So, p(Second_Red/First_Red) = $\frac{11}{29}$. Using the dependent event formula, p(First_Red and Second_Red) = p(First_Red) * p(Second_Red/First_Red) = $\frac{2}{5} * \frac{11}{29} = \frac{22}{145}$ }}

To summarize, if you return every marble you select, the probability of drawing another marble is unaffected; the events are INDEPENDENT. If you do not return the marbles, the number of marbles is affected and the events are therefore DEPENDENT.

## Learning the Advanced Tools

Detailed discussion of advanced solution tools is out of the scope of this section, but here are some considerations to get you started:

### Combinations

Good understanding of CT formulas (n!, nAk, nCk) is essential to solving complex F/T problems, where both F and T are so large you can't enumerate them manually, but only with a formula. See our Combinations Lesson.

### Expectations

Some probability problems deal with money, gains, and bets. Often you have to calculate which bet will be better, or how much it will be worth. The tool that deals with this is Expectation. E = G * p, where G is gain, and p is probability. So, a 10% chance to get \$100 is worth (has E) of \$100 * 10% = \$10. Therefore, it is better than getting \$8 guaranteed, but worse than a 5% chance to get \$300 (E = \$300 * 5% = \$15). Complex expectation works similarly: E1 = E * p, i.e. a 10% chance to get a 25% chance to get \$100 is worth 10% * (25% * \$100) = \$2.50. This is how Expectations work.

### Distributions

The three types of distributions are the Binomial, Hypergeometric, and Poisson distributions.
These are just handy formulas for solving 3 very specific kinds of problems, like these:

• If the coin is tossed 5 times, what is the probability that at least 3 out of 5 times it will show heads? (Binomial Distribution)

• There are 2 green, 3 red, and 2 blue balls in a box. 4 are drawn at random without replacement. What is the probability that of the 4 drawn balls two are red, 1 is green, and 1 is blue? (Hypergeometric Distribution)

• Each hour an average of ten cars arrive at the parking lot. The lot can handle at most fifteen cars per hour. What is the probability that at a given hour cars will not be accepted? (Poisson Distribution)

As you may have noticed, Poisson and Binomial Distribution problems are alike. In fact, these distributions are two methods of solving the same kind of problems. The difference is that BD provides an accurate but costly (many calculations) method, and PD provides an elegant approximation, and is therefore used only on large numbers. While BD and HD are quite likely to appear on GMAT, PD is not. For GMAT Club's members it is an open question whether one can in fact encounter PD on GMAT. In any case, there won't be two questions on PD.

{{#x:box|Feel free to make necessary corrections to this guide --Dzyubam 13:56, 3 April 2008 (UTC) }}

## Other GMAT Areas of Interest

GMAT Study Guide - a prep wikibook
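To make the first two distribution examples above concrete, here is a small added check that uses nothing beyond the counting formulas (Python's math.comb); it is an illustration, not an officially endorsed GMAT method.

```python
# Check the Binomial and Hypergeometric examples from the Distributions section.
from fractions import Fraction
from math import comb

# Binomial: P(at least 3 heads in 5 fair tosses)
p_binom = sum(comb(5, k) for k in (3, 4, 5)) * Fraction(1, 2 ** 5)
print(p_binom)    # 1/2

# Hypergeometric: 2 green, 3 red, 2 blue; draw 4 without replacement;
# P(2 red, 1 green, 1 blue)
p_hyper = Fraction(comb(3, 2) * comb(2, 1) * comb(2, 1), comb(7, 4))
print(p_hyper)    # 12/35
```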
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 45}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9508005380630493, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/53582/what-is-the-relationship-between-the-radius-of-a-ring-in-a-rotating-space-statio/53585
# What is the relationship between the radius of a ring in a rotating space station and the strength of the artificial gravity generated?

Suppose engineers built a rotating space station similar to Space Station V from the film 2001: A Space Odyssey, but with multiple concentric rings where astronauts can live.

````
   ____________
  / ____c___ \
 / / __b_ \ \
/ / /  a  \ \ \
/ / /      \ \ \
\ \ \      / / /
 \ \ \____/ / /
  \ \________/ /
   \____________/
````

Ring `a` is 10 meters from the center, ring `b` is 20 meters from the center, and ring `c` is 30 meters from the center.

• If the station is set to rotate at a speed where the gravity within ring `b` is identical to Earth's gravity and the rings rotated together, would the gravity in rings `a` and `c` be significantly stronger or weaker?

• If the concentric rings could rotate freely, would rings `a` and `c` need to rotate faster or slower than ring `b` to ensure that the gravity at each ring had identical strength?

-

## 3 Answers

The acceleration into the "ground" that is experienced at a radial distance $r$ from the axis of rotation (the center of the space station) is given as: $$a= \omega^2 r$$ where $\omega$ is the angular velocity of the space station (i.e., how fast it rotates, in radians per second). This is called a "centrifugal force" (not to be confused with "centripetal force", though they are intimately related). You can see that the acceleration is linear with respect to how far you are from the axis of rotation.

A good way to demonstrate that this equation makes sense intuitively is to tie a ball to the end of a rope. If you hold the rope very close to the ball and begin to swing it around in a circular path, it is relatively easy to hold on to. If you increase the length of the rope, you can feel that it becomes more difficult to hang on.

Back to your space station example. If we set the angular velocity of the space station so that at $r=20$ meters the acceleration is equivalent to the acceleration due to gravity on Earth, it is easy to solve for the accelerations at the other two locations: Since ring A is $10$ meters $= \frac{1}{2}(20)$ meters away from the axis, the acceleration there will be half of what it is at B, or $4.9$ meters/second/second. Ring C is $30$ meters $= \frac{3}{2}(20)$ meters away from the axis, so the acceleration there will be $1.5$ times what it is at B, or $14.7$ meters/second/second.

If the concentric rings were allowed to rotate independently (ignoring the fact that this would create problems when trying to get from one ring to another), then from our equation above we can see that ring A would need to rotate $\sqrt{2} \approx 1.414$ times as fast as ring B, and ring C would need to rotate only $\sqrt{\frac{2}{3}} \approx 0.816$ times as fast as ring B, for them all to have the same centrifugal acceleration.

-

Let's assume that a particular ring of radius $r$ is spinning at angular speed $\omega$; then a person of mass $m$ in this ring will experience a centrifugal force $F_c$ given in magnitude by $$F_c = m r\omega^2$$ and pointing radially outward.

Important Aside. The centrifugal force, which points radially outward, is what's termed a "fictitious force" in classical mechanics because it is not the result of interactions with another object. Instead, it is merely an apparent force that results from making observations from a non-inertial frame of reference.
However, if a person were actually walking along the outside of one of the rings, then he would experience the apparent gravity by virtue of the contact force between his feet and the ring. If this person were to apply Newton's second law in this rotating frame and treat this apparent outward force as he would a gravitational force on the surface of the Earth, where the force is given in magnitude by $mg$ with $g$ the acceleration due to gravity, then he would find that $$mg = m r \omega^2$$ and therefore, he would experience an effective gravitational acceleration which is a function of $r$ and is given by $$\boxed{g_\mathrm{eff} = r\omega^2}$$

So we see that the effective acceleration due to gravity in a given ring increases linearly with the radius of the ring and quadratically with its angular speed.

I just saw that someone else also wrote an answer that takes it from here, so I hope this helps! Cheers!

-

The illusory force of gravity is created by the centripetal force. See Centripetal Force. Stated as follows: $F_c=\frac{mv^2}{r}$

Be careful with this though - because the constant of motion is the angular velocity $\omega$. Consider the expression after substituting $v=\omega r$: $F_c=m\omega^2r$

Now you can see what happens to the force with increasing radius, and thus you should be able to deduce at what angular velocities your independent rings should rotate.

-
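Putting numbers to the formulas in the answers above (my own sketch, taking g = 9.81 m/s² and the radii from the question):

```python
import math

g = 9.81
radii = {'a': 10.0, 'b': 20.0, 'c': 30.0}

omega_b = math.sqrt(g / radii['b'])        # spin rate that gives 1 g at ring b
print(f"common spin rate: {omega_b:.3f} rad/s")

for name, r in radii.items():
    a = omega_b ** 2 * r                   # apparent gravity if all rings rotate together
    omega_r = math.sqrt(g / r)             # spin rate this ring would need on its own for 1 g
    print(f"ring {name}: a = {a:5.2f} m/s^2, "
          f"independent spin = {omega_r:.3f} rad/s ({omega_r / omega_b:.3f} x ring b)")
```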
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9546887278556824, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/10309/conservation-law-of-energy-and-big-bang/10312
Conservation law of energy and Big Bang? Did the law of conservation of energy apply to the earliest moments of the Big Bang? If so, what theoretical physics supports this? I hear that Einstein's theory of relativity disputes the law of conservation of energy- so does that mean the law is false, or only some aspect of it? - The second part of the question (v1) seems related to physics.stackexchange.com/q/2597/2451 and physics.stackexchange.com/q/1327/2451 – Qmechanic♦ May 24 '11 at 8:44 Dear @Dan Brumleve: The 'special-relativity' tag in v3 does not fit well, since the question implies gravity, cosmology and acceleration. – Qmechanic♦ May 25 '11 at 10:23 Qmechanic, I reverted the edit. – Dan Brumleve May 25 '11 at 10:27 5 Answers Yes, the energy conservation law fails not only right after the Big Bang but in any cosmological evolution. See e.g. http://motls.blogspot.com/2010/08/why-and-how-energy-is-not-conserved-in.html The time-translational invariance is broken, so via Noether's theorem, one doesn't expect a conserved quantity. Also, if one defines the "total" stress energy tensor as a variation of the action with respect to the metric tensor, it vanishes in GR because the metric tensor is dynamical and the variation has to vanish because it's an equation of motion (Einstein's equations). If the space is asymptotically flat or AdS or similarly simple, a conservation law - for the ADM energy - may be revived. - 1 I think this is a language game. What is the definition of "energy" if not "that which is conserved"? – Dan Brumleve May 24 '11 at 8:28 1 @Lubos, about this non-conservation of energy in expanding universes, i have just one question: where it is calculated the energetic (initial and ongoing) cost of setting a universe in expansion at a given rate $\kappa$? in your article you mention the energy of matter and cosmological constant but you don't include this term in your balance – lurscher May 24 '11 at 11:54 2 Dear @Dan, it is not a language game and you can only stand by your position if you're either blinded or unable to follow elementary arguments. On the other hand, you're right that the most sensible definition of energy is that it is a quantity that is conserved as a result of the time-translational invariance. However, what I am explaining to you is that there is no (conserved) energy in GR - obviously, the symmetry isn't there either. It's like the only sensible definition of a cold fusion reactor is a fusion reactor that is cold - but this doesn't imply that it actually exists. ;-) – Luboš Motl May 25 '11 at 9:07 3 It is not silly. There is no nontrivial concept of energy in GR. Standard definitions of "really total" or "conserved" energy density in GR are identically equal to zero (or, more generally, they're independent of the degrees of freedom, or dependent on the gauge). $H=0$ is really a constraint or an equation of motion in GR because $H$ generates translations in time and all diffeomorphisms, including translations in time, are gauge symmetries that must keep physical states invariant (using quantum mechanics language, and assuming that there is no asymptotic region). – Luboš Motl May 25 '11 at 9:10 2 Well, when I say that I think that someone is an idiot, it is also just a word, and that's what I will choose to say right now. Why are you writing on this server if you admit that your texts are just sequences of words that don't mean anything? 
The question asked about a well-defined thing using pretty much well-defined words, my answer is right, your answer is wrong - right and wrong are also words, by the way - so what's the purpose of hiding and diluting the wrongness of your answer by trying to pretend that words don't mean anything? They do, otherwise we wouldn't use them. – Luboš Motl May 25 '11 at 9:20 show 11 more comments

Conservation laws are established in general relativity if there is a Killing vector $K_a$, where for some values of the index $a$ there may be zero entries, so that for a momentum vector $p^a$ $=~(p_t,~{\vec p})$ the inner product $p^aK_a~=~constant$. The Killing vector is then an isometry such that a vector along a parallel translation defines a conserved quantity relative to the Killing vector. Those conserved quantities are variables conjugate to the components of the Killing vector. How to find Killing vectors is somewhat involved, but as a rule, if a metric coefficient $g_{ab}~=~g_{ab}(t)$ then there is no Killing vector with a component along that coordinate direction. The general line element for a cosmology involves a scale factor $a~=~a(t)$, $$ds^2~=~-dt^2~+~a^2(t)(dr^2~+~r^2d\Omega^2)$$ which is a pretty good clue that this spacetime does not have fundamental conservation of energy. There is no timelike directed part of a Killing vector, therefore conservation of the energy conjugate variable can’t be established fundamentally.

So is energy absolutely not conserved in our universe? The answer to this depends upon some other conditions; for it does turn out that our universe may have a unique condition which recovers energy conservation. The FLRW equation for the scale parameter $a~=~a(t)$ is $$\Big(\frac{\dot a}{a}\Big)^2~=~8\pi G\rho/3~-~\frac{k}{a^2}$$ where ${\dot a}/a~=~H$, the Hubble parameter, and flatness means $k~=~0$, spherical geometry is $k~=~1$ and hyperbolic geometry is $k~=~-1$. There is an equation of state for the mass-energy and pressure in the spacetime $$\frac{d(\rho a^3)}{dt}~+~p\frac{da^3}{dt}~=~0$$ I will consider an approximate de Sitter spacetime, which has $\rho~=~constant$, and is identified in ways not entirely understood with the quantum field vacuum. The FLRW equation for $k~=~0$ is then $$\frac{da}{dt}~=~\sqrt{8\pi G\rho/3}~a$$ which has the solution $a~=~\sqrt{3/(8\pi G\rho)}~\exp(\sqrt{8\pi G\rho /3}\,t)$. Using the Einstein field equation we then have that the stress energy is $T^{00}~=~8\pi G\rho~=~\Lambda$, which is the cosmological constant. Returning to the first equation, the FLRW equation, we then see that $H^2~=~\Lambda/3$. The dynamical equation for the dS spacetime with $\rho~=~const$ gives $\rho\,\frac{d(a^3)}{dt}~+~p\,\frac{da^3}{dt}~=~0$, or $p~=~-\rho$. This is the equation of state for $p~=~w\rho$, with $w~=~-1$. This corresponds to a case where the total energy is zero, and the first law of thermodynamics $dF~=~dE~-~pdV~=~0$ means that the energy that is increased in a unit volume of the universe under expansion is compensated for by a negative pressure which removes work from the system. Further, $pdV~=~d(NkT)$, with a constant thermal energy for the vacuum and $Nk~=~S$ the entropy of the universe. For this particular special case we do have an equation of state which gives a conservation of energy. Another way of seeing this is that this spacetime has a time dependent conformal factor $a(t)$, and this metric is conformally equivalent to a flat spacetime, where one can define an ADM mass that is conserved.
Of course the question might be raised whether this pertains to our physical universe. The inflationary period had a huge exponential acceleration, or equivalently a scale factor which grew extremely rapidly. This period should then have had conservation of energy. As for the time period before then, who knows? After the reheating period the universe became radiation dominated and energy conservation is not immediately apparent. Energy conservation may only then be established in our universe in the very distant future as it approaches an empty de Sitter vacuum state. - 1 great overview. My question is; how much energy contributes/costs inflation itself? Somehow there has to be a mechanism that switches on and off inflationary expansion, so, instead of writing ill-defined things like $\dot \Lambda$ lets assume some quintaessence powering down dynamically the values for $\dotdot a$, this is not derived from GR or thermodynamics alone since it relies on details of the quintaessence physics. How do we compute how much potential energy is stored when the inflation is turned off (present era)? – lurscher May 26 '11 at 1:14 if we are saying something about energy conservation in a expanding universe, i'm sure we should have something to say too about what energy costs are associated to setting up the dynamical fields that create a universal expansion in the first place – lurscher May 26 '11 at 1:17 btw @Lawrence, i always appreciate that you are one of the few posters that consistently belong to the "shut up and calculate" crowd; avoiding to place complex discursive arguments in their answers morally over the equations that are more capable of explain it with less ambiguity. thanks! – lurscher May 26 '11 at 1:32 There is this issue of reheating, where after about 63 efolds the inflaton potential went from a constant minus some slow linear dependent term of the field to a quadratic term. This change in the potential is rather odd, and it may be some sort of phase transition. The reheating is similar to a latent heat. There is also the very early phase of this which is a bit mysterious as well. – Lawrence B. Crowell May 26 '11 at 1:50 I tend to work somewhat in string and branes, but frankly I think the fundamental things to consider are degrees of freedom and quantum information. It is my sense that string theory and M-theory represents structures which are most appropriate to physics, but there are things maybe missing. The whole business has turned into the vast unwieldy arena that is become impossible to track. Trying to tie down cosmology with this seems maybe premature. This gets into my sense about quintessence and brane-brane interaction ideas, in that they are premature. – Lawrence B. Crowell May 26 '11 at 1:51 Energy is indeed conserved perfectly well in General Relativity, even in the extreme conditions of the early big bang. It is not merely true by definition, it is true as a result of the dynamics of the field equations. This question has been answered before e.g. at Energy conservation in General Relativity and should probably be closed as repetition, so I'll just add a link to a long discussion on the subject at http://blog.vixra.org/category/energy-conservation/ - – Philip Gibbs May 6 at 12:15 It is my, perhaps inadequate, understanding that there is indeed something, and it could be called energy, which is conserved by GR. Whether I am right or not, prof. 
Motl is certainly wrong to say that the Big Bang breaks time-invariance: this is a misunderstanding of time-invariance symmetry. GR is indeed time-invariant. The kind of time-invariance required to apply Noether's theorem is simply the physical fact that if we perform our experiment at time $t=t_o$ and wait $\Delta t$ seconds (or you could use units of age of the Universe) and measure the results, the answer will be the same as if we performed the experiment at time $t=t_1$ and waited $\Delta t$ seconds, thus measuring the results at time $t=t_1+\Delta t$. Physically, this means you have to arrange that the Big Bang also happened $t_1-t_o$ seconds later than it did (unless you have physical reasons for knowing that the time elapsed from the Big Bang is not relevant to the quantities you are going to measure). Mathematically, this is guaranteed as long as the Lagrangian does not depend explicitly on time. (Effective Lagrangians often do vary with time, this means the system is not a closed system but open, and of course energy is not necessarily conserved in an open system.) Therefore there is a corresponding quantity which is conserved. Whether it should be called energy might be worth discussion, but I always assumed it was... For a useful, balanced, discussion of some of the many issues relevant to the OP, although it is from a mathematics department, see http://math.ucr.edu/home/baez/physics/Relativity/GR/energy_gr.html even though I am not sure I agree with them. - Certainly yes, and Einstein's theories support this. Conservation of energy is a fundamental principle and it is true essentially by definition. Wikipedia's article on the history of energy includes this quote from Feynman: "There is a fact, or if you wish, a law, governing natural phenomena that are known to date. There is no known exception to this law—it is exact so far we know. The law is called conservation of energy; it states that there is a certain quantity, which we call energy that does not change in manifold changes which nature undergoes. That is a most abstract idea, because it is a mathematical principle; it says that there is a numerical quantity, which does not change when something happens. It is not a description of a mechanism, or anything concrete; it is just a strange fact that we can calculate some number, and when we finish watching nature go through her tricks and calculate the number again, it is the same." The Wikipedia article on physical cosmology has this to say: There is no unambiguous way to define the total energy of the universe in the current best theory of gravity, general relativity. As a result it remains controversial whether one can meaningfully say that total energy is conserved in an expanding universe. For instance, each photon that travels through intergalactic space loses energy due to the redshift effect. This energy is not obviously transferred to any other system, so seems to be permanently lost. Nevertheless some cosmologists insist that energy is conserved in some sense. I interpret this to mean that while in some cases it is not meaningful to say that energy is conserved, in these same cases it is no more meaningful to say that it isn't. The issue is the well-definedness of energy, not whether or not it is conserved. - Not sure why the downvote. Did Einstein make any argument that energy is not conserved? Luboš seems to be using tools that would not have been available to Einstein. 
– Dan Brumleve May 24 '11 at 8:37 – Dan Brumleve May 24 '11 at 9:04 5 You got the downvote because what you say is just plain wrong. Wether Einstein had the tools or not is irrelevant by the way. We are talking about GR and it's known that for large classes of solutions, among which Friedman-Walker metrics, that there is no energy conservation. – Raskolnikov May 24 '11 at 11:51 2 @Dan Relax, not conserving energy at the Big Bang is not the end of the world. – dbrane May 24 '11 at 14:25 1 – Luboš Motl May 24 '11 at 15:03 show 7 more comments
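As a small aside on the second answer's de Sitter discussion: the quoted scale factor can be checked symbolically. The sketch below assumes MATLAB's Symbolic Math Toolbox (the same tool used in the blog posts later in this document) and only verifies that $a(t) = C\,\exp(\sqrt{8\pi G\rho/3}\,t)$ satisfies the flat ($k=0$) FLRW equation $\dot a = \sqrt{8\pi G\rho/3}\,a$:

```
% Sketch: verify that a(t) = C*exp(kH*t), with kH = sqrt(8*pi*G*rho/3),
% solves da/dt = sqrt(8*pi*G*rho/3) * a (the flat FLRW case quoted above).
syms t C G rho positive
kH = sqrt(8*sym(pi)*G*rho/3);
a  = C*exp(kH*t);
simplify(diff(a, t) - kH*a)   % returns 0, confirming the quoted solution
```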
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 36, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9492191672325134, "perplexity_flag": "head"}
http://nrich.maths.org/2636/solution
# Tracking Points

##### Stage: 4 Challenge Level:

Andre (age 15), from Romania, tackled this problem. Here's his solution, which uses vectors:

In the Cartesian system of axes, the vector $\overrightarrow{AB}$, with the origin at (0,0,0) and end point at (1,1,1) could be written as: $\overrightarrow{AB}=\mathbf{i}+\mathbf{j}+\mathbf{k}$ where $\mathbf{i}$, $\mathbf{j}$ and $\mathbf{k}$ are the unit vectors of the axes Ox, Oy and Oz respectively. Now, linking end to end 10 vectors $\overrightarrow{AB}$ amounts to multiplying the vector $\overrightarrow{AB}$ by 10, so that one obtains a vector with the origin at (0,0,0) and end point at (10,10,10). For the vector $\overrightarrow{CD}$: $\overrightarrow{CD}=\mathbf{i}+3\mathbf{j}+2\mathbf{k}$. Adding up 2 such vectors, one easily sees that the resulting vector has the origin at (0,0,0) and the end point at (2,6,4). Adding up 10 such vectors, the origin remains the same and the end point is (10, 30, 20). For $n$ vectors, the end point is $(n, 3n, 2n)$. If the starting point is (1,0,-1) and the vector to be drawn is $\mathbf{v}=n\mathbf{i}+n\mathbf{j}+n\mathbf{k}$, then the end point will be at $(1+n, 0+n, -1+n)$, i.e. at $(n+1, n, n-1)$. If the origin is at $(a,b,c)$, the end point is at $(n+a, n+b, n+c)$. If the vector is $\mathbf{u}=n\mathbf{i}+3n\mathbf{j}+2n\mathbf{k}$, and its origin is at (-3,0,2), its end point will be at $(n-3, 3n, 2n+2)$. If for the same vector the origin is at $(a,b,c)$, its end point will be at $(n+a, 3n+b, 2n+c)$. If the vector of interest is now $\mathbf{u}+\mathbf{v}=2n\mathbf{i}+ 4n\mathbf{j}+3n\mathbf{k}$, in the general case of the origin at $(a,b,c)$, the end point will be at $(2n+a,4n+b, 3n+c)$.

Stephen (from Framwellgate School, Durham) solved the problem and went on to think about what happens if you alternate segments. Here is what he sent us: With AB followed by CD (if we count AB as segment 1 and CD as segment 2), we can see that 2 segments make (1,1,1)+(1,3,2)=(2,4,3); therefore the endpoint after n segments, when n is even, is (n, 2n, 3n/2), because (2,4,3) corresponds to 2 segments. For n segments when n is odd, n-1 is even, so take the endpoint after n-1 segments, (n-1, 2(n-1), 3(n-1)/2), and add (1,1,1), because the first segment type (AB) will also be the last segment when the number of segments n is odd, to get: (n, 2n-1, (3n-1)/2). If starting from (a,b,c), then we just need to add this coordinate to the two solutions.
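As a quick numeric check of the alternating-segment formulas in Stephen's solution, here is a short sketch in MATLAB (the variable names are ours):

```
% Endpoint after n alternating segments AB, CD, AB, CD, ... starting at the origin
AB = [1 1 1];  CD = [1 3 2];
for n = [6 7]                               % one even and one odd case
    segs = repmat([AB; CD], ceil(n/2), 1);  % AB, CD, AB, CD, ...
    endpoint = sum(segs(1:n,:), 1);
    disp([n, endpoint])   % even n -> [n 2n 3n/2], odd n -> [n 2n-1 (3n-1)/2]
end
```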
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9322322607040405, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/181540/equivalent-definition-for-a-collection-of-simplices-to-be-a-simplicial-complex/181544
# Equivalent definition for a collection of simplices to be a simplicial complex

I am reading the following lemma from Munkres' Elements of Algebraic Topology:

Lemma 2.1 A collection $K$ of simplices is a simplicial complex if and only if the following hold: 1. Every face of a simplex of $K$ is in $K$. 2. Every pair of distinct simplices of $K$ have disjoint interiors.

Now I am trying to follow the proof that if 1 and 2 above hold then $K$ is a simplicial complex. Although Munkres does not define what he means by "distinct simplices", I am told by Mixedmath that this means two simplices that don't have a common vertex. The proof according to Munkres goes as follows: Let $\sigma$ and $\tau$ be two distinct simplices of $K$ such that they have disjoint interiors. Let $\sigma'$ be the face of $\sigma$ that is spanned by those vertices $b_0,\ldots,b_m$ of $\sigma$ that lie in $\tau$. The claim now is that $\sigma \cap \tau$ is equal to $\sigma'$. I understand one direction; the one I don't is when he shows that $$\sigma \cap \tau \subseteq \sigma'.$$ The line I don't understand is this: Let $x \in \sigma \cap \tau$. Then $x \in \textrm{Int}\, s \cap \textrm{Int} \hspace{2mm} t$ for some faces $s$ of $\sigma$ and $t$ of $\tau$. How does this follow from the assumption of (2) above? -

1 Hey - that's not fair - I asked you how he defined it! It seems that distinct simplices means that they have disjoint interiors! – mixedmath♦ Aug 12 '12 at 4:19 @mixedmath I don't get how (2) means that distinct simplices have disjoint interiors. That's just an assumption in the proof. – BenjaLim Aug 12 '12 at 4:24

## 2 Answers

"Distinct simplices" means that they are distinct (that is, are not identical). The line you don't understand doesn't use (2); it uses the fact that if $x \in \sigma$ then $x \in \text{int}(s)$ for a face $s$ of $\sigma$ (which is actually unique). $s$ is precisely the minimal face of $\sigma$ (under inclusion) containing $x$ (if $x$ is not contained in the interior of $s$ then it is in the boundary of $s$ which is contained in a strict face of $s$). -

Thanks for your answer. I am trying to understand what you wrote above in the case that $\sigma$ is a triangle. In this context, the faces of $\sigma$ are the three edges yes? How would this automatically mean that if $x \in \sigma$ then it lies on any one of the edges? – BenjaLim Aug 12 '12 at 4:42 1 @BenjaLim: a triangle has $8$ faces: the whole thing, the three edges, the three vertices, and the empty face. – Qiaochu Yuan Aug 12 '12 at 4:44 In that case, wouldn't it be immediate that if $x \in \sigma$ then $x \in s$ for some $s$? Why the need to mention the interior? – BenjaLim Aug 12 '12 at 4:46 @BenjaLim: because... the interior of $s$ is strictly contained in $s$? I don't understand the question. $x \in s$ is a strictly weaker statement than $x \in \text{int}(s)$. – Qiaochu Yuan Aug 12 '12 at 4:47 I guess you are right. If my point $s$ is not on the inside of the triangle, then it will be on one of the three edges. If it is a vertex, then $x$ is in the interior of that vertex which is the vertex itself. If it is on an edge and not a vertex, then it is in the interior of the edge. Is this right? – BenjaLim Aug 12 '12 at 4:49 show 2 more comments

I don't have the proof or the book in front of me, but I bet it looks like this: First, let's suppose $K$ is a simplicial complex. Then $K$ contains the faces of its simplices, and we want to show that every point in $K$ belongs to the interior of a unique simplex of $K$.
We know that if $x$ is in $K$, then it belongs to the interior of a face $\sigma$ of some simplex in $K$, as every point in a simplex belongs to the interior of some face. And $\sigma$ is a face in $K$, and thus $x$ belongs to the interior of at least one simplex of $K$. Suppose that $x$ was in the interior of two distinct simplices $\sigma$ and $\tau$. Then $x$ belongs to the intersection of $\sigma$ and $\tau$, which is a face, as the intersection of two simplices in a simplicial complex is always a face. Thus $x$ is in some common face $\sigma \cap \tau$ of $\sigma$ and $\tau$. This is a problem, as then this common face is a proper face of one or the other of the simplices $\sigma$ and $\tau$. It can't be in both (because $\sigma \neq \tau$, and $x$ is in the interior of both $\sigma$ and $\tau$). Thus the simplex $\sigma$ of $K$ that contains $x$ is unique. In the other direction, showing that it's a simplicial complex, is very simple. $K$ contains all the faces of its simplices, so it only remains to check that if $\sigma$ and $\tau$ are any two simplices with non-empty intersection, then $\sigma \cap \tau$ is a common face of $\sigma$ and $\tau$. So let $x \in \sigma \cap \tau$. We now know that $x$ is in the interior of a unique simplex $\omega$ of $K$. And any point of $\sigma$ or $\tau$ belongs to the interior of a unique face of that simplex, and all faces of $\sigma$ and $\tau$ belong to $K$. Thus $\omega$ is a common face of $\sigma$ and $\tau$, and it is in fact the face we want. Its uniqueness comes from the uniqueness of $\omega$. -
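A remark that may make the step the original question asks about more concrete (this is standard material on simplices, not taken from the thread above): if $\sigma = [v_0, \ldots, v_n]$ and $x \in \sigma$ is written in barycentric coordinates, $$x = \sum_{i=0}^{n} t_i v_i, \qquad t_i \ge 0, \qquad \sum_{i=0}^{n} t_i = 1,$$ then the unique face of $\sigma$ whose interior contains $x$ is the face spanned by exactly those vertices $v_i$ with $t_i > 0$. This is the face $s$ in the sentence from Munkres, and no appeal to condition (2) is needed to produce it; applying the same observation to $\tau$ gives the face $t$.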
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 95, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9617680907249451, "perplexity_flag": "head"}
http://blogs.mathworks.com/loren/?=p=557
# Loren on the Art of MATLAB

## Recent Question about Speed with Subarray Calculations

Recently someone asked me to explain the difference in speed between doing a calculation with array indexing inside a loop and extracting the subarray first.

### Contents

#### Example

Suppose I have a function of two inputs, the first input being the column (of a square array), the second, a scalar, and the output, a vector.

```myfun = @(x,z) x'*x+z; ```

And even though this may be calculated in a fully vectorized manner, let's explore what happens when we work on subarrays from the array input. I am now creating the input array x and the results output arrays for doing the calculation two ways, with an additional intermediate step in one of the methods.

```n = 500; x = randn(n,n); result1 = zeros(n,1); result2 = zeros(n,1); ```

#### First Method

Here we see and time the first method. In this one, we create a temporary array for x(:,k) n times (once per inner-loop iteration) for each pass of the outer loop.

```tic
for k = 1:n
    for z = 1:n
        result1(z) = myfun(x(:,k), z);
    end
    result1 = result1+x(:,k);
end
runtime(1) = toc;
```

#### Second Method

In this method, we extract the column of interest first in the outer loop, and reuse that temporary array each time through the inner loop. Again we see and time the results.

```tic
for k = 1:n
    xt = x(:,k);
    for z = 1:n
        result2(z) = myfun(xt, z);
    end
    result2 = result2+x(:,k);
end
runtime(2) = toc;
```

#### Same Results?

First, let's make sure we get the same answer both ways. You can see that we do.

```theSame = isequal(result1,result2) ```

```theSame = 1 ```

#### Compare Runtime

Next, let's compare the times. I want to remind you that doing timing from a script generally has more overhead than when the same code is run inside a function. We just want to see the relative behavior, so we should get some insight from this exercise.

```disp(['Run times are: ',num2str(runtime)]) ```

```Run times are: 2.3936 1.9558 ```

#### What's Happening?

Here's what's going on. In the first method, we create a temporary variable n times within each pass of the outer loop, even though that array is constant for a fixed column. In the second method, we extract the relevant column once, and reuse it n times through the inner loop. Be thoughtful if you do play around with this. Depending on the details of your function, if the calculations you do each time are large compared to the time to extract a column vector, you may not see much difference between the two methods. However, if the calculations are sufficiently short in duration, then the repeated creation of the temporary variable could add a tremendous amount of overhead to the calculation. In general, you will not be worse off if you extract the temporary array as few times as possible.

#### Your Results?

Have you noticed similar timing "puzzles" when analyzing one of your algorithms? I'd love to hear more here.

Get the MATLAB code Published with MATLAB® R2013a

By Loren Shure 09:47 UTC | Posted in Best Practice, Efficiency, Puzzles | Permalink | 8 Comments »

## Using Symbolic Math Toolbox to Compute Area Moments of Inertia

Once more I am pleased to introduce guest blogger Kai Gehrs. Kai has been a Software Engineer at MathWorks for the past five years working on the Symbolic Math Toolbox. He has a background in mathematics and computer science. He already contributed to my BLOG in the past writing about Using Symbolic Equations And Symbolic Functions In MATLAB as well as on approaches for Simplifying Symbolic Results.

### Contents

#### In a Nutshell: What Is This Article About?
If you are interested in using MATLAB and the Symbolic Math Toolbox in teaching some basics in mechanical engineering, this might be of interest to you. Computing area moments of inertia is an important task in mechanics. For example, area moments of inertia play a critical role in stress, dynamic, and stability analysis of structures. In this article, we use capabilities of the Symbolic Math Toolbox to compute area moments for cross sections of elliptical tubes. We start with a basic case involving only numeric parameters, and then make the computations more general by introducing symbolic parameters. All plots used in this article have been created in the MuPAD Notebook app. You can find the source code for these plots at the end of the article. #### Basic Example: Cross Section of an Elliptical Tube The following picture shows the cross section of an elliptical tube. The following ellipses define outer and inner contours of the section: • $y = \pm 2 \, \sqrt{1 - \frac{x^2}{9}}$ describe the outer contour line for $x \in [-3,3]$. • $y = \pm \sqrt{1 - \frac{x^2}{4}}$ describe the inner contour line for $x \in [-2,2]$. We will compute the area moment of inertia of this section with respect to the $x$-axis. #### Area Moment Of Inertia The moment of inertia of an area $A$ with respect to the $x$-axis is defined in terms of the double integral $$I_x = {\int \int}_A y^2\, \mathrm{d}A.$$ You can find this definition on Wikipedia. #### Math Behind This Example In our example, the area $A$ is the hatched area shown in the previous plot. Taking advantage of the symmetry of the section with respect to both the $x$- and $y$-axes, we can restrict our computations to the first quadrant of the $x-y$-plane. To compute the necessary double integral over the hatched area, we divide this area into two separate areas $A1$ and $A2$. To compute the final area moment of inertia about the x-axis, we multiply the sum of the double integrals over $A_1$ and $A_2$ by four. $$I_x = {\int\int}_A y^2 \, \mathrm dA = 4 \cdot \Big({\int\int}_{A_1} y^2 \, \mathrm dA_1 + {\int\int}_{A_2} y^2 \, \mathrm dA_2\Big).$$ Now, we only need to compute these two double integrals. The mathematical theory behind multi-dimensional integration tells us that we can rewrite each of these double integrals in terms of two integrals: $$I_1 = {\int\int}_{A_1} y^2 \, \mathrm dA_1 = \int_0^2 \int_{\sqrt{1 - \frac{x^2}{4}}}^{2 \, \sqrt{1 - \frac{x^2}{9}}} y^2 \, \mathrm dy \, \mathrm dx,$$ $$I_2 = {\int\int}_{A_2} y^2 \, \mathrm dA_2 = \int_2^3 \int_{0}^{2 \, \sqrt{1 - \frac{x^2}{9}}} y^2\, \mathrm dy\, \mathrm dx.$$ The int function from the Symbolic Math Toolbox can compute these integrals. #### Symbolic Integration We define $x$ and $y$ as symbolic variables: ```syms x y; ``` We start computing the inner integral $\int_{\sqrt{1 - \frac{x^2}{4}}}^{2 \, \sqrt{1 - \frac{x^2}{9}}} y^2 \, \mathrm dy$ of $I_1$: ```innerI1 = int(y^2, y, sqrt(1-x^2/4), 2*sqrt(1-x^2/9)) ``` ```innerI1 = (8*(9 - x^2)^(3/2))/81 - (4 - x^2)^(3/2)/24 ``` Next, we integrate innerI1 with respect to $x$ from $0$ to $2$. This provides $I_1$: ```I1 = int(innerI1, x, 0, 2) ``` ```I1 = 3*asin(2/3) - pi/8 + (74*5^(1/2))/81 ``` We use the same strategy to compute $I_2$. First, we compute the inner integral $\int_{0}^{2 \, \sqrt{1 - \frac{x^2}{9}}} y^2\, \mathrm dy$. 
Then, we integrate the resulting expression with respect to $x$ from $2$ to $3$: ```I2 = int(int(y^2, y, 0, 2*sqrt(1-x^2/9)), x, 2, 3); ``` Hence, the area moment of inertia of the elliptical tube with respect to the $x$-axis is ```Ix = 4 * (I1 + I2) ``` ```Ix = (11*pi)/2 ``` We can approximate the result numerically by using the vpa function. For example, we approximate the result using 5 significant (nonzero) digits: ```vpa(Ix, 5) ``` ```ans = 17.279 ``` #### Advanced Example: Cross Section of an Elliptical Tube Defined by Symbolic Parameters We can use a numerical integrator, such as MATLAB's integral2, to compute the area moment of inertia in the previous example. A numerical integrator might return slightly less accurate results, but other than that there is not much benefit from using symbolic integration there. But what if we consider the cross section of a more general elliptical tube whose shape is defined by arbitrary symbolic parameters? For example, we want to compute the area moment of inertia of this elliptical tube. Its contour lines are still ellipses, but they are defined by the symbolic parameters $r_1$, $r_2$, $R_1$ and $R_2$: • $y = \pm R_2 \, \sqrt{1 - \frac{x^2}{R_1^2}}$ describe the outer contour line for $x \in [-R_1,R_1]$. • $y = \pm r_2 \, \sqrt{1 - \frac{x^2}{r_1^2}}$ describe the inner contour line for $x \in [-r_1,r_1]$. Now we need to compute these integrals: $$I_1 = {\int\int}_{A_1} y^2 \, \mathrm dA_1 = \int_0^{r_1} \int_{r_2 \, \sqrt{1 - \frac{x^2}{r_1^2}}}^{R_2 \, \sqrt{1 - \frac{x^2}{R_1^2}}} y^2 \, \mathrm dy \, \mathrm dx,$$ $$I_2 = {\int\int}_{A_2} y^2 \, \mathrm dA_2 = \int_{r_1}^{R_1} \int_{0}^{R_2 \, \sqrt{1 - \frac{x^2}{R_1^2}}} y^2\, \mathrm dy\, \mathrm dx.$$ Although the situation is more complicated now, we can still use the same strategy as above, using symbolic variables instead of numbers. Nevertheless, we need to add one more step: specify relationships between symbolic parameters. This is because the variables $r_1$, $r_2$, $R_1$, and $R_2$ can be any complex number, unless we explicitly restrict their values. For example, the system does not know if these variables are positive or negative, if $r_1 < R_1$ or otherwise. In this example, we want to specify that $r_1>0$, $r_2>0$, $R_1>r_1$, and $R_2>r_2$. In the Symbolic Math Toolbox, such restrictions are called assumptions on variables. They can be set by using the assume and assumeAlso functions: ```syms x y r1 r2 R1 R2; assume(r1 > 0); assume(r2 > 0); assumeAlso(r1 < R1); assumeAlso(r2 < R2); I1 = int(int(y^2, y, r2*sqrt(1-x^2/r1^2), R2*sqrt(1-x^2/R1^2)), x, 0, r1); I2 = int(int(y^2, y, 0, R2*sqrt(1-x^2/R1^2)), x, r1, R1); Ixgeneral = 4 * (I1 + I2); pretty(Ixgeneral) ``` ``` / 4 / r1 \ \ / 4 / r1 \ \ | 3 R1 asin| -- | | | 4 3 R1 asin| -- | | 3 | \ R1 / | 3 | 3 pi R1 \ R1 / | 4 R2 | #1 + ---------------- | 4 R2 | #1 - -------- + ---------------- | \ 8 / \ 16 8 / ------------------------------- - ------------------------------------------ 3 3 3 R1 3 R1 3 pi r1 r2 - --------- 4 where / 2 3 \ | 5 R1 r1 r1 | 2 2 1/2 #1 == | -------- - --- | (R1 - r1 ) \ 8 4 / ``` Now, let's substitute our symbolic parameters with the same values that we had in the first example. Do we get the same result? Let's double-check: when substituting $r_1=2,r_2,=1,R_1=3,R_2=2$ in Ixgeneral, do we get the same result for Ix as obtained for the section initially considered? 
As expected, the general solution Ixgeneral reduces to the specific solution Ix when we substitute our original dimensions: ```vpa(subs(Ixgeneral, [r1, R1, r2, R2],[2, 3, 1, 2]), 5) ``` ```ans = 17.279 ``` Note that it is essential to set the assumptions on $r_1$, $r_2$, $R_1$ and $R_2$. By default, the Symbolic Math Toolbox assumes that all variables including symbolic parameters are arbitrary complex numbers. This makes computing integrals much more complicated and time-consuming. For example, try computing Ixgeneral without setting the assumptions on $r_1$, $r_2$, $R_1$ and $R_2$. In general, it can be impossible to even get the result without any additional assumptions. #### How to Create MuPAD Graphics Shown in This Article To generate the graphics shown in this article, use MuPAD Notebook app. Note that the code in this section does not run in the MATLAB Command Window. To open the MuPAD Notebook app: 1. On the MATLAB Toolstrip, click the Apps tab. 2. On the Apps tab, click the down arrow at the end of the Apps section. 3. Under Math, Statistics and Optimization, click the MuPAD Notebook button. Alternatively, type mupad in the MATLAB Command Window. Paste the following commands to a MuPAD notebook to obtain the four graphics used above: `Ellipse:= (x,a,b) -> sqrt(b^2*(1-x^2/a^2)):` ```f1 := plot::Function2d(Ellipse(x,2,1), x = -3..3): f2 := plot::Function2d(Ellipse(x,3,2), x = -3..3): f3 := plot::Function2d(Ellipse(x,3,2), x = -3..-2): f4 := plot::Function2d(Ellipse(x,3,2), x = 2..3):``` ```f5 := plot::Function2d(-Ellipse(x,2,1), x = -3..3): f6 := plot::Function2d(-Ellipse(x,3,2), x = -3..3): f7 := plot::Function2d(-Ellipse(x,3,2), x = -3..-2): f8 := plot::Function2d(-Ellipse(x,3,2), x = 2..3):``` ```f9 := plot::Function2d(Ellipse(x,2,1), x = 0..2, VerticalAsymptotesVisible = FALSE): f10 := plot::Function2d(Ellipse(x,3,2), x = 0..3, VerticalAsymptotesVisible = FALSE): f11 := plot::Function2d(Ellipse(x,3,2), x = 2..3, VerticalAsymptotesVisible = FALSE):``` ```plot(plot::Hatch(f1,f2),plot::Hatch(f3),plot::Hatch(f4),f1,f2, plot::Hatch(f5,f6),plot::Hatch(f7),plot::Hatch(f8),f5,f6, Scaling = Constrained, GridVisible = TRUE, VerticalAsymptotesVisible = FALSE, Height = 120, Width = 200, Header = "Cross section of elliptical tube (x-y-plane)"):``` ```plot(plot::Hatch(f9,f10),plot::Hatch(f11),f9,f10, Scaling = Constrained, GridVisible = TRUE, Height = 120, Width = 200, Header = "Restriction to first quadrant (x-y-plane)"):``` ```plot(plot::Hatch(f9,f10,Color=RGB::Magenta),plot::Hatch(f11,Color = RGB::Green),f9,f10, Scaling=Constrained, GridVisible = TRUE, plot::Text2d("A1",[1.0,1.25]), plot::Text2d("A2",[2.5,0.25]), Height = 120, Width = 200, Header = "Integration areas"):``` ```plot(plot::Hatch(f1,f2),plot::Hatch(f3),plot::Hatch(f4),f1,f2, plot::Hatch(f5,f6),plot::Hatch(f7),plot::Hatch(f8),f5,f6, plot::Text2d("r1",[1.8,0.05]), plot::Text2d("r2",[0.05,0.85]), plot::Text2d("R1",[2.74,0.05]), plot::Text2d("R2",[0.05,1.84]), Scaling = Constrained, GridVisible = TRUE, VerticalAsymptotesVisible = FALSE, XTicksLabelsVisible = FALSE, YTicksLabelsVisible = FALSE, ViewingBox = [-3.5..3.5,-2.5..2.5], Height = 120, Width = 200, Header = "Cross section of elliptical tube (more general situation)"):``` #### Have You Tried Symbolic Math Toolbox? Have you used the Symbolic Math Toolbox in computations related to mechanics? Let me know here. 
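As a quick cross-check of the symbolic result above, here is a small numerical sketch using integral2, the numerical integrator mentioned earlier in the post (the variable names are ours):

```
% Numerically integrate y^2 over the first-quadrant areas A1 and A2 of the
% concrete tube (outer semi-axes 3 and 2, inner semi-axes 2 and 1), then
% multiply by 4 and compare with the symbolic answer Ix = 11*pi/2.
f   = @(x,y) y.^2;
I1n = integral2(f, 0, 2, @(x) sqrt(1 - x.^2/4), @(x) 2*sqrt(1 - x.^2/9));
I2n = integral2(f, 2, 3, 0,                     @(x) 2*sqrt(1 - x.^2/9));
Ixn = 4*(I1n + I2n)   % approximately 17.279, matching vpa(Ix, 5)
```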
Get the MATLAB code Published with MATLAB® R2013a By Loren Shure 06:14 UTC | Posted in Symbolic | Permalink | No Comments » ## MATLAB to FPGA using HDL Coder(TM) It's my pleasure to introduce guest blogger Kiran Kintali. Kiran is the product development lead for HDL Coder at MathWorks. In this post, Kiran introduces a new capability in HDL Coder™ that generates synthesizable VHDL/Verilog code directly from MATLAB and highlights some of the key features of this new MATLAB based workflow. ### Contents #### Introduction to HDL Code Generation from MATLAB If you are using MATLAB to model digital signal processing (DSP) or video and image processing algorithms that eventually end up in FPGAs or ASICs, read on... FPGAs provide a good compromise between general purpose processors (GPPs) and application specific integrated circuits (ASICs). GPPs are fully programmable but are less efficient in terms of power and performance; ASICs implement dedicated functionality and show the best power and performance characteristics, but require extremely expensive design validation and implementation cycles. FPGAs are also used for prototyping in ASIC workflows for hardware verification and early software development. Due to the order of magnitude performance improvement when running high-throughput, high-performance applications, algorithm designers are increasingly using FPGAs to prototype and validate their innovations instead of using traditional processors. However, many of the algorithms are implemented in MATLAB due to the simple-to-use programming model and rich analysis and visualization capabilities. When targeting FPGAs or ASICs these MATLAB algorithms have to be manually translated to HDL. For many algorithm developers who are well-versed with software programming paradigms, mastering the FPGA design workflow is a challenge. Unlike software algorithm development, hardware development requires them to think parallel. Other obstacles include: learning the VHDL or Verilog language, mastering IDEs from FPGA vendors, and understanding esoteric terms like "multi-cycle path" and "delay balancing". In this post, I describe an easier path from MATLAB to FPGAs. I will show how you can automatically generate HDL code from your MATLAB algorithm, implement the HDL code on an FPGA, and use MATLAB to verify your HDL code. #### MATLAB to Hardware Workflow The process of translating MATLAB designs to hardware consists of the following steps: 1. Model your algorithm in MATLAB - use MATLAB to simulate, debug, and iteratively test and optimize the design. 2. Generate HDL code - automatically create HDL code for FPGA prototyping. 3. Verify HDL code - reuse your MATLAB test bench to verify the generated HDL code. 4. Create and verify FPGA prototype - implement and verify your design on FPGAs. There are some unique challenges in translating MATLAB to hardware. MATLAB code is procedural and can be highly abstract; it can use floating-point data and has no notion of time. Complex loops can be inferred from matrix operations and toolbox functions. Implementing MATLAB code in hardware involves: • Converting floating-point MATLAB code to fixed-point MATLAB code with optimized bit widths suitable for efficient hardware generation. • Identifying and mapping procedural constructs to concurrent area- and speed-optimized hardware operations. • Introducing the concept of time by adding clocks and clock rates to schedule the operations in hardware. 
• Creating resource-shared architectures to implement expensive operators like multipliers and for-loop bodies. • Mapping large persistent arrays to block RAM in hardware HDL Coder™ simplifies the above tasks though workflow automation. #### Example MATLAB Algorithm Let’s take a MATLAB function implementing histogram equalization and go through this workflow. This algorithm, implemented in MATLAB, enhances image contrast by transforming the values in an intensity image so that the histogram of the output image is approximately flat. type mlhdlc_heq.m ```% Histogram Equalization Algorithm function [pixel_out] = mlhdlc_heq(x_in, y_in, pixel_in, width, height) ``` ```persistent histogram persistent transferFunc persistent histInd persistent cumSum ``` ```if isempty(histogram) histogram = zeros(1, 2^8); transferFunc = zeros(1, 2^8); histInd = 0; cumSum = 0; end ``` ```% Figure out indices based on where we are in the frame if y_in < height && x_in < width % valid pixel data histInd = pixel_in + 1; elseif y_in == height && x_in == 0 % first column of height+1 histInd = 1; elseif y_in >= height % vertical blanking period histInd = min(histInd + 1, 2^8); elseif y_in < height % horizontal blanking - do nothing histInd = 1; end ``` ```%Read histogram histValRead = histogram(histInd); ``` ```%Read transfer function transValRead = transferFunc(histInd); ``` ```%If valid part of frame add one to pixel bin and keep transfer func val if y_in < height && x_in < width histValWrite = histValRead + 1; %Add pixel to bin transValWrite = transValRead; %Write back same value cumSum = 0; elseif y_in >= height %In blanking time index through all bins and reset to zero histValWrite = 0; transValWrite = cumSum + histValRead; cumSum = transValWrite; else histValWrite = histValRead; transValWrite = transValRead; end ``` ```%Write histogram histogram(histInd) = histValWrite; ``` ```%Write transfer function transferFunc(histInd) = transValWrite; ``` ```pixel_out = transValRead; ``` #### Example MATLAB Test Bench Here is the test bench that verifies that the algorithm works with an example image. (Note that this testbench uses Image Processing Toolbox functions for reading the original image and plotting the transformed image after equalization.) 
type mlhdlc_heq_tb.m ```%% Test bench for Histogram Equalization Algorithm clear mlhdlc_heq; testFile = 'office.png'; RGB = imread(testFile); ``` ```% Get intensity part of color image YCBCR = rgb2ycbcr(RGB); imgOrig = YCBCR(:,:,1); ``` ```[height, width] = size(imgOrig); imgOut = zeros(height,width); hBlank = 20; % make sure we have enough vertical blanking to filter the histogram vBlank = ceil(2^14/(width+hBlank)); ``` ```for frame = 1:2 disp(['working on frame: ', num2str(frame)]); for y_in = 0:height+vBlank-1 %disp(['frame: ', num2str(frame), ' of 2, row: ', num2str(y_in)]); for x_in = 0:width+hBlank-1 if x_in < width && y_in < height pixel_in = double(imgOrig(y_in+1, x_in+1)); else pixel_in = 0; end ``` ` [pixel_out] = mlhdlc_heq(x_in, y_in, pixel_in, width, height);` ``` if x_in < width && y_in < height imgOut(y_in+1,x_in+1) = pixel_out; end end end end``` ```% Make color image from equalized intensity image % Rescale image imgOut = double(imgOut); imgOut(:) = imgOut/max(imgOut(:)); imgOut = uint8(imgOut*255); ``` ```YCBCR(:,:,1) = imgOut; RGBOut = ycbcr2rgb(YCBCR); ``` ```figure(1) subplot(2,2,1); imshow(RGB, []); title('Original Image'); subplot(2,2,2); imshow(RGBOut, []); title('Equalized Image'); subplot(2,2,3); hist(double(imgOrig(:)),2^14-1); title('Histogram of original Image'); subplot(2,2,4); hist(double(imgOut(:)),2^14-1); title('Histogram of equalized Image'); ``` Let's simulate this algorithm to see the results. ```mlhdlc_heq_tb ``` ```working on frame: 1 working on frame: 2 ``` #### HDL Workflow Advisor The HDL Workflow Advisor (see the snapshot below) helps automate the steps and provides a guided path from MATLAB to hardware. You can see the following key steps of the workflow in the left pane of the workflow advisor: 1. Fixed-Point Conversion 2. HDL Code Generation 3. HDL Verification 4. HDL Synthesis and Analysis Let's look at each workflow step in detail. Fixed-Point Conversion Signal processing applications are typically implemented using floating-point operations in MATLAB. However, for power, cost, and performance reasons, these algorithms need to be converted to use fixed-point operations when targeting hardware. Fixed-point conversion can be very challenging and time-consuming, typically demanding 25 to 50 percent of the total design and implementation time. The automatic floating-point to fixed-point conversion workflow in HDL Coder™ can greatly simplify and accelerate this conversion process. The floating-point to fixed-point conversion workflow consists of the following steps: 1. Verify that the floating-point design is compatible with code generation. 2. Propose fixed-point types based on computed ranges, either through the simulation of the testbench or through static analysis that propagates design ranges to compute derived ranges for all the variables. 3. Generate fixed-point MATLAB code by applying proposed fixed-point types. 4. Verify the generated fixed-point code and compare the numerical accuracy of the generated fixed-point code with the original floating point code. Note that this step is optional. You can skip this step if your MATLAB design is already implemented in fixed-point. HDL Code Generation The HDL Code Generation step generates HDL code from the fixed-point MATLAB code. You can generate either VHDL or Verilog code that implements your MATLAB design. 
In addition to generating synthesizable HDL code, HDL Coder™ also generates various reports, including a traceability report that helps you navigate between your MATLAB code and the generated HDL code, and a resource utilization report that shows you, at the algorithm level, approximately what hardware resources are needed to implement the design, in terms of adders, multipliers, and RAMs. During code generation, you can specify various optimization options to explore the design space without having to modify your algorithm. In the Design Space Exploration and Optimization Options section below, you can see how you can modify code generation options and optimize your design for speed or area. HDL Verification Standalone HDL test bench generation: HDL Coder™ generates VHDL and Verilog test benches from your MATLAB scripts for rapid verification of generated HDL code. You can customize an HDL test bench using a variety of options that apply stimuli to the HDL code. You can also generate script files to automate the process of compiling and simulating your code in HDL simulators. These steps help to ensure the results of MATLAB simulation match the results of HDL simulation. HDL Coder™ also works with HDL Verifier to automatically generate two types of cosimulation testbenches: • HDL cosimulation-based verification works with Mentor Graphics® ModelSim® and QuestaSim®, where MATLAB and HDL simulation happen in lockstep. • FPGA-in-the-Loop simulation allows you to run a MATLAB simulation with an FPGA board in strict synchronization. You can use MATLAB to feed real world data into your design on the FPGA, and ensure that the algorithm will behave as expected when implemented in hardware. HDL Synthesis Apart from the language-related challenges, programming for FPGAs requires the use of complex EDA tools. Generating a bitstream from the HDL design and programming the FPGA can be daunting tasks. HDL Coder™ provides automation here, by creating project files for Xilinx® and Altera® that are configured with the generated HDL code. You can use the workflow steps to synthesize the HDL code within the MATLAB environment, see the results of synthesis, and iterate on the MATLAB design to improve synthesis results. #### Design Space Exploration and Optimization Options HDL Coder™ provides the following optimizations to help you explore the design space trade-offs between area and speed. You can use these options to explore various architectures and trade-offs without having to manually rewrite your algorithm. Speed Optimizations • Pipelining : To improve the design’s clock frequency, HDL Coder enables you to insert pipeline registers in various locations within your design. For example, you can insert registers at the design inputs and outputs, and also at the output of a given MATLAB variable in your algorithm. • Distributed Pipelining : HDL Coder also provides an optimization based on retiming to automatically move pipeline registers you have inserted to maximize clock frequency, by minimizing the delay through combinational paths in your design. Area Optimizations • RAM mapping: HDL Coder™ maps matrices to wires or registers in hardware. If persistent matrix variables are mapped to registers, they can take up a large amount of FPGA area. HDL Coder™ automatically maps persistent matrices to block RAM to improve area efficiency. The challenge in mapping MATLAB matrices to block RAM is that block RAM in hardware typically has a limited set of read and write ports. 
HDL Coder™ solves this problem by automatically partitioning and scheduling the matrix reads and writes to honor the block RAM’s port constraints, while still honoring the other control- and data-dependencies in the design. • Resource sharing: This optimization identifies functionally equivalent multiplier operations in MATLAB code and shares them. You can control the amount of multiplier sharing in the design. • Loop streaming: A MATLAB for-loop creates a FOR_GENERATE loop in VHDL. The body of the loop is replicated as many times in hardware as the number of loop iterations. This results in an inefficient use of area. The loop streaming optimization creates a single hardware instance of the loop body that is time-multiplexed across loop iterations. • Constant multiplier optimization: This design level optimization converts constant multipliers into shift and add operations using canonical signed digit (CSD) techniques. #### Best Practices Now, let's look at few best practices related to writing MATLAB code when targeting FPGAs. When writing a MATLAB design: • Use the code generation subset of MATLAB supported for HDL code generation. • Keep the top-level interface as simple as possible. The top-level function size, types, and complexity determine the interface of the chip implemented in hardware. • Do not pass in a big chunk of parallel data into the design. Parallel data requires a large number of IO pins on the chip, and would probably not be synthesizable. In a typical image processing design, you should serialize the pixels as inputs and buffer them internally in the algorithm. When writing a MATLAB test bench: • Call the design from the testbench function. • Exercise the design thoroughly. This is particularly important for floating-point to fixed-point conversion, where HDL Coder™ determines the ranges of the variables in the algorithm based on the values the testbench assigns to the variables. You can reuse this testbench to generate an HDL testbench for testing the generated hardware. • Simulate the design with the testbench prior to code generation to make sure there are no simulation errors, and to make sure all the required files are on the path. #### Conclusion HDL Coder™ provides a seamless workflow when you want to implement your algorithm in an FPGA. In this post, I have shown you how to take an image processing algorithm written in MATLAB, convert it to fixed-point, generate HDL code, verify the generated HDL code using the test bench, and finally, synthesize the design and implement it in hardware. We hope this brief introduction to the HDL Coder™ and MATLAB-to-HDL code generation, verification framework has shown how you can quickly get started on implementing your MATLAB designs and target FPGAs. Please let us know in the comments for this post how you might use this new functionality. Or, if you've already tried using HDL Coder™, let us know about your experiences here. Get the MATLAB code Published with MATLAB® R2013a By Loren Shure 07:22 UTC | Posted in News | Permalink | 14 Comments » ## New Datatype under Development for Possible MATLAB Release There is a new datatype we are playing around with that we hope to make available in an upcoming release and we would like your input beforehand. ### Contents #### New Datatype in Action Let me show you the new datatype in action so you can first get a feel for it. 
```inputData = magic(3) ``` ```inputData = 8 1 6 3 5 7 4 9 2 ``` ```outputValues = dis(inputData); ``` #### Let's Examine the Output ```outputValues ``` ```Why are you asking? 4 2 3 1 9 6 8 7 5 ``` Well, that's a bit strange, isn't it? I wonder what the relationship between inputData and outputValues is. What can we learn about outputValues? ```whos outputValues ``` ``` Name Size Bytes Class Attributes outputValues 1x1 248 dis ``` Well, it's a dis array. Let's look at it again. ```outputValues ``` ```What's it matter to you? 5 6 9 4 3 2 7 8 1 ``` Say what? Let's check it a few more times. ```outputValues outputValues outputValues ``` ```Who wants to know? 6 3 9 5 8 7 4 2 1 Who wants to know? 5 2 9 1 4 8 7 6 3 Who are you to ask me that? 5 3 7 9 8 2 1 4 6 ``` Hoping you get the double meaning here - the dis array not only mixes up the values of the input for display purposes, but also tries to gently *dis*respect you along the way. Even though this is a silly class, I'll show you the code so you can see how simple it is to make such a class. ```type dis ``` ```classdef dis %dis dis is a class. % In fact, it's a declasse class. properties Data end properties (Access=protected) Original end properties (Constant) Answers = {'Why are you asking?' ,... 'What''s it matter to you?',... 'Who are you to ask me that?',... 'Who wants to know?',... 'What''s the big deal?'} end methods function display(obj) disp(dis.Answers{randi(length(dis.Answers),1)}) obj.Data(:) = obj.Data(randperm(numel(obj.Data))); disp(obj.Data) end function obj = dis(in) obj.Original = in; obj.Data = reshape(in(randperm(numel(in))),size(in)); end end end ``` #### Should We Invest More Resources? Of course, I could also add some numeric functions like plus to dis, but I didn't take the time, in case you didn't find this possible new MATLAB addition useful. So please share your thoughts with us here. Get the MATLAB code Published with MATLAB® R2013a By Loren Shure 08:14 UTC | Posted in Fun | Permalink | 15 Comments » ## Multiple Y Axes We were musing here about how common it is to want more than two Y axes on a plot. Checking out the File Exchange, there seem to be several candidates, indicating that this is something at least some people find useful. ### Contents #### Sample Plot Here's a sample plot using plotyy that comes with MATLAB. ```x = 0:0.01:20; y1 = 200*exp(-0.05*x).*sin(x); y2 = 0.8*exp(-0.5*x).*sin(10*x); plotyy(x,y1,x,y2,'plot'); ``` #### List of Some Possibilities In addition to plotyy in MATLAB, here's a list of some of the candidates from the File Exchange. #### What are You Plotting with More Y Axes? I am curious to know what kind of data or results you are plotting so that having multiple y-axes makes a compelling presentation. Let us know here. Get the MATLAB code Published with MATLAB® R2013a By Loren Shure 14:42 UTC | Posted in Graphics | Permalink | 15 Comments » ## Using the MATLAB Unit Testing Infrastructure for Grading Assignments Steven Lord, Andy Campbell, and David Hruska are members of the Quality Engineering group at MathWorks who are guest blogging today to introduce a new feature in R2013a, the MATLAB unit testing infrastructure. There are several submissions on the MATLAB Central File Exchange related to unit testing of MATLAB code. Blogger Steve Eddins wrote one highly rated example back in 2009. In release R2013a, MathWorks included in MATLAB itself a MATLAB implementation of the industry-standard xUnit testing framework. 
If you're not a software developer, you may be wondering if this feature will be of any use to you. In this post, we will describe one way someone who may not consider themselves a software developer may be able to take advantage of this framework using the example of a professor grading students' homework submissions. That's not to say that the developers in the audience should move on to the next post; you can use these tools to test your own code just like a professor can use them to test code written by his or her students. There is a great deal of functionality in this feature that we will not show here. For more information we refer you to the MATLAB Unit Testing Framework documentation. ### Contents #### Background In order to use this feature, you should be aware of how to define simple MATLAB classes in classdef files, how to define a class that inherits from another, and how to specify attributes for methods and properties of those classes. The object-oriented programming documentation describes these capabilities. #### Problem Statement As a professor in an introductory programming class, you want your students to write a program to compute Fibonacci numbers. The exact problem statement you give the students is: ```Create a function "fib" that accepts a nonnegative integer n and returns the nth Fibonacci number. The Fibonacci numbers are generated by this relationship:``` ```F(0) = 1 F(1) = 1 F(n) = F(n-1) + F(n-2) for integer n > 1``` `Your function should throw an error if n is not a nonnegative integer.` #### Basic Unit Test The most basic MATLAB unit test is a MATLAB classdef class file that inherits from the matlab.unittest.TestCase class. Throughout the rest of this post we will add additional pieces to this basic framework to increase the capability of this test and will change its name to reflect its increased functionality. ```dbtype basicTest.m ``` ```1 classdef basicTest < matlab.unittest.TestCase 2 3 end ``` ```test = basicTest ``` ```test = basicTest with no properties. ``` #### Running a Test To run the test, we can simply pass test to the run function. There are more advanced ways that make it easier to run a group of tests, but for our purposes (checking one student's answer at a time) this will be sufficient. When you move to checking multiple students' answers at a time, you can use run inside a for loop. Since basicTest doesn't actually validate the output from the student's function, it doesn't take very long to execute. ```results = run(test) ``` ```results = 0x0 TestResult array with properties: Name Passed Failed Incomplete Duration Totals: 0 Passed, 0 Failed, 0 Incomplete. 0 seconds testing time. ``` Let's say that a student named Thomas submitted a function fib.m as his solution to this assignment. Thomas's code is stored in a sub-folder named thomas. To set up our test to check Thomas's answer, we add the folder holding his code to the path. ```addpath('thomas'); dbtype fib.m ``` ```1 function y = fib(n) 2 if n <= 1 3 y = 1; 4 else 5 y = fib(n-1)+fib(n-2); 6 end ``` #### Test that F(0) Equals 1 The basicTest is a valid test class, and we can run it, but it doesn't actually perform any validation of the student's test file. The methods that will perform that validation need to be written in a methods block that has the attribute Test specified. The matlab.unittest.TestCase class includes qualification methods that you can use to test various qualities of the results returned by the student files. 
The qualification method that you will likely use most frequently is the verifyEqual method, which passes if the two values you pass into it are equal and reports a test failure if they are not. The documentation for the matlab.unittest.TestCase class lists many other qualification methods that you can use to perform other types of validation, including testing the data type and size of the results; matching a string result to an expected string; testing that a given section of code throws a specific errors or issues a specific warning; and many more. This simple test builds upon generalTest by adding a test method that checks that the student's function returns the value 1 when called with the input 0. ```dbtype simpleTest.m ``` ```1 classdef simpleTest < matlab.unittest.TestCase 2 methods(Test) 3 function fibonacciOfZeroShouldBeOne(testCase) 4 % Evaluate the student's function for n = 0 5 result = fib(0); 6 testCase.verifyEqual(result, 1); 7 end 8 end 9 end ``` Thomas's solution to the assignment satisfies this basic check. We can use the results returned from run to display the percentage of the tests that pass. ```results = run(simpleTest) percentPassed = 100 * nnz([results.Passed]) / numel(results); disp([num2str(percentPassed), '% Passed.']); ``` ```Running simpleTest . Done simpleTest __________ results = TestResult with properties: Name: 'simpleTest/fibonacciOfZeroShouldBeOne' Passed: 1 Failed: 0 Incomplete: 0 Duration: 0.0112 Totals: 1 Passed, 0 Failed, 0 Incomplete. 0.011168 seconds testing time. 100% Passed. ``` #### Test that F(pi) Throws an Error Now that we have a basic positive test in place we can add in a test that checks the behavior of the student's function when passed a non-integer value (like n = pi) as input. The assignment stated that when called with a non-integer value, the student's function should error. Since the assignment doesn't require a specific error to be thrown, the test passes as long as fib(pi) throws any exception. ```dbtype errorCaseTest.m ``` ```1 classdef errorCaseTest < matlab.unittest.TestCase 2 methods(Test) 3 function fibonacciOfZeroShouldBeOne(testCase) 4 % Evaluate the student's function for n = 0 5 result = fib(0); 6 testCase.verifyEqual(result, 1); 7 end 8 function fibonacciOfNonintegerShouldError(testCase) 9 testCase.verifyError(@()fib(pi), ?MException); 10 end 11 end 12 end ``` Thomas forgot to include a check for a non-integer valued input in his function, so our test should indicate that by reporting a failure. ```results = run(errorCaseTest) percentPassed = 100 * nnz([results.Passed]) / numel(results); disp([num2str(percentPassed), '% Passed.']); ``` ```Running errorCaseTest . ================================================================================ Verification failed in errorCaseTest/fibonacciOfNonintegerShouldError. --------------------- Framework Diagnostic: --------------------- verifyError failed. --> The function did not throw any exception. Expected Exception Type: MException Evaluated Function: @()fib(pi) ------------------ Stack Information: ------------------ In C:\Program Files\MATLAB\R2013a\toolbox\matlab\testframework\+matlab\+unittest\+qualifications\Verifiable.m (Verifiable.verifyError) at 637 In H:\Documents\LOREN\MyJob\Art of MATLAB\errorCaseTest.m (errorCaseTest.fibonacciOfNonintegerShouldError) at 9 ================================================================================ . 
Done errorCaseTest __________ Failure Summary: Name Failed Incomplete Reason(s) ============================================================================================= errorCaseTest/fibonacciOfNonintegerShouldError X Failed by verification. results = 1x2 TestResult array with properties: Name Passed Failed Incomplete Duration Totals: 1 Passed, 1 Failed, 0 Incomplete. 0.026224 seconds testing time. 50% Passed. ``` Another student, Benjamin, checked for a non-integer value in his code as you can see on line 2. ```rmpath('thomas'); addpath('benjamin'); dbtype fib.m ``` ```1 function y = fib(n) 2 if (n ~= round(n)) || n < 0 3 error('N is not an integer!'); 4 elseif n == 0 || n == 1 5 y = 1; 6 else 7 y = fib(n-1)+fib(n-2); 8 end ``` Benjamin's code passed both the test implemented in the fibonacciOfZeroShouldBeOne method (which we copied into errorCaseTest from simpleTest) and the new test case implemented in the fibonacciOfNonintegerShouldError method. ```results = run(errorCaseTest) percentPassed = 100 * nnz([results.Passed]) / numel(results); disp([num2str(percentPassed), '% Passed.']); ``` ```Running errorCaseTest .. Done errorCaseTest __________ results = 1x2 TestResult array with properties: Name Passed Failed Incomplete Duration Totals: 2 Passed, 0 Failed, 0 Incomplete. 0.010132 seconds testing time. 100% Passed. ``` #### Basic Test for Students, Advanced Tests for Instructor The problem statement given earlier in this post is a plain text description of the homework assignment we assigned to the students. We can also state the problem for the students in code (if they're using release R2013a or later) by giving them a test file they can run just like simpleTest or errorCaseTest. They can directly use this "requirement test" to ensure their functions satisfy the requirements of the assignment. ```dbtype studentTest.m ``` ```1 classdef studentTest < matlab.unittest.TestCase 2 methods(Test) 3 function fibonacciOfZeroShouldBeOne(testCase) 4 % Evaluate the student's function for n = 0 5 result = fib(0); 6 testCase.verifyEqual(result, 1); 7 end 8 function fibonacciOfNonintegerShouldError(testCase) 9 testCase.verifyError(@()fib(pi), ?MException); 10 end 11 end 12 end ``` In order for the student's code to pass the assignment, it will need to pass the test cases given in the studentTest unit test. However, we don't want to use studentTest as the only check of the student's code. If we did, the student could write their function to cover only the test cases in the student test file. We could solve this problem by having two separate test files, one containing the student test cases and one containing additional test cases the instructor uses in the grading process. Can we avoid having to run both test files manually or duplicating the code from the student test cases in the instructor test? Yes! To do so, we write an instructor test file to incorporate, through inheritance, the student test file. We can then add additional test cases to the instructor test file. When we run this test it should run three test cases; two inherited from studentTest, fibonacciOfZeroShouldBeOne and fibonacciOfNonintegerShouldError, and one from instructorTest itself, fibonacciOf5. ```dbtype instructorTest.m ``` ```1 classdef instructorTest < studentTest 2 % Because the student test file is a matlab.unittest.TestCase and 3 % instructorTest inherits from it, instructorTest is also a 4 % matlab.unittest.TestCase. 
5 6 methods(Test) 7 function fibonacciOf5(testCase) 8 % Evaluate the student's function for n = 5 9 result = fib(5); 10 testCase.verifyEqual(result, 8, 'Fibonacci(5) should be 8'); 11 end 12 end 13 end ``` Let's look at Eric's test file that passes the studentTestFile test, but in which he completely forgot to implement the F(n) = F(n-1)+F(n-2) recursion step. ```rmpath('benjamin'); addpath('eric'); dbtype fib.m ``` ```1 function y = fib(n) 2 if (n ~= round(n)) || n < 0 3 error('N is not an integer!'); 4 end 5 y = 1; ``` It should pass the student unit test. ```results = run(studentTest); percentPassed = 100 * nnz([results.Passed]) / numel(results); disp([num2str(percentPassed), '% Passed.']); ``` ```Running studentTest .. Done studentTest __________ 100% Passed. ``` It does NOT pass the instructor unit test because it fails one of the test cases. ```results = run(instructorTest) percentPassed = 100 * nnz([results.Passed]) / numel(results); disp([num2str(percentPassed), '% Passed.']); ``` ```Running instructorTest .. ================================================================================ Verification failed in instructorTest/fibonacciOf5. ---------------- Test Diagnostic: ---------------- Fibonacci(5) should be 8 --------------------- Framework Diagnostic: --------------------- verifyEqual failed. --> NumericComparator failed. --> The values are not equal using "isequaln". Actual Value: 1 Expected Value: 8 ------------------ Stack Information: ------------------ In C:\Program Files\MATLAB\R2013a\toolbox\matlab\testframework\+matlab\+unittest\+qualifications\Verifiable.m (Verifiable.verifyEqual) at 411 In H:\Documents\LOREN\MyJob\Art of MATLAB\instructorTest.m (instructorTest.fibonacciOf5) at 10 ================================================================================ . Done instructorTest __________ Failure Summary: Name Failed Incomplete Reason(s) ========================================================================== instructorTest/fibonacciOf5 X Failed by verification. results = 1x3 TestResult array with properties: Name Passed Failed Incomplete Duration Totals: 2 Passed, 1 Failed, 0 Incomplete. 0.028906 seconds testing time. 66.6667% Passed. ``` Benjamin, whose code we tested above, wrote a correct solution to the homework problem. ```rmpath('eric'); addpath('benjamin'); results = run(instructorTest) percentPassed = 100 * nnz([results.Passed]) / numel(results); disp([num2str(percentPassed), '% Passed.']); rmpath('benjamin'); ``` ```Running instructorTest ... Done instructorTest __________ results = 1x3 TestResult array with properties: Name Passed Failed Incomplete Duration Totals: 3 Passed, 0 Failed, 0 Incomplete. 0.015946 seconds testing time. 100% Passed. ``` #### Conclusion In this post, we showed you the basics of using the new MATLAB unit testing infrastructure using homework grading as a use case. We checked that the student's code worked (by returning the correct answer) for one valid value and worked (by throwing an error) for one invalid value. We also showed how you can use this infrastructure to provide an aid/check for the students that you can also use as part of your grading. We hope this brief introduction to the unit testing framework has shown you how you can make use of this feature even if you don't consider yourself a software developer. Let us know in the comments for this post how you might use this new functionality. Or, if you've already tried using matlab.unittest, let us know about your experiences here. 
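The post notes above that run can be called inside a for loop when grading several students at once. As a minimal sketch of what that might look like (not shown in the original workflow), assuming each student's fib.m sits in a folder named after the student — the folder names below are placeholders:

```students = {'thomas', 'benjamin', 'eric'};   % placeholder folder names, one per student
grades = zeros(size(students));
for s = 1:numel(students)
    addpath(students{s});                    % put this student's fib.m on the path
    results = run(instructorTest);           % run all instructor test cases
    grades(s) = 100 * nnz([results.Passed]) / numel(results);
    rmpath(students{s});                     % clean up before the next student
    fprintf('%s: %.1f%% passed\n', students{s}, grades(s));
end ```

Each iteration swaps one student's folder onto the path, runs the full instructor test suite, and records the percentage of passing test cases for that student.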
Get the MATLAB code Published with MATLAB® R2013a By Loren Shure 10:18 UTC | Posted in How To, New Feature | Permalink | 8 Comments » ## Major New Release at Chebfun – Two Dimensional Capabilities Perhaps you've read about Chebfun before here or on Cleve''s blog. Chebfun allows you to do numerical computing with functions and not simply numbers. If not, you might want to investigate. If so, you may be very interested in new capabilities available. ### Contents #### Major New Capability Just recently, the Chebfun team released a major new capability on their site. While Chebfun is extremely useful, it was restricted to solving problems in one dimension. Now you may solve problems with two independent variables using Chebfun2! As before, their website includes code you can freely download and many interesting examples. The examples are well documented and easy to learn from. I will soon have time to explore Chebfun2 because of an irrational amount of airline time. I also love how easy it is to install - you can do it from MATLAB. They kindly post the code right on the website so you can paste it into your running session. #### Many Examples They have many examples included in the software, and posted on their web site, using the publish command in MATLAB. The high-level examples are organized into these categories: 1. Approximation 2. Complex Analysis 3. Geometry 4. Optimization 5. Statistics 6. Rootfinding 7. Vector Calculus 8. Fun ! I hope with this breadth of examples, you are able to find ones interesting or relevant (or both) to you. Once you've gotten a chance to try out Chebfun2, I'd love to hear about your experiences. Post them here to share with us. Get the MATLAB code Published with MATLAB® R2012b By Loren Shure 09:25 UTC | Posted in Community, Fun, New Feature, News | Permalink | 2 Comments » ## Logical Indexing – Multiple Conditions I've talked about logical indexing before in some of the linked posts, but recent work makes me want to show it off again. One of the nice things about logical indexing is that it is very easy and natural to combine the results of different conditions to select items based on multiple criteria. ### Contents #### What is Logical Indexing? Suppose I have an array of integers, not sorted, and want to find the ones that are less than a certain number. Here's how I can do it using the function find. ```X = randperm(20) target = 5; ``` ```X = Columns 1 through 13 3 7 1 16 20 15 13 9 14 8 11 19 18 Columns 14 through 20 4 10 5 6 17 12 2 ``` ```ind = find(X < target) ``` ```ind = 1 3 14 20 ``` You can see that find returns the indices into the array X that have values less than the target. And we can use these to extract the values. ```Xtarget = X(ind) ``` ```Xtarget = 3 1 4 2 ``` Another way to accomplish the same outcome is to use the logical expression to directly perform the indexing operation. Here's what I mean. ```logInd = X < target ``` ```logInd = Columns 1 through 13 1 0 1 0 0 0 0 0 0 0 0 0 0 Columns 14 through 20 1 0 0 0 0 0 1 ``` MATLAB returns an array that matches the elements of the array X, element-by-element holding 1s where the matching values in X are the desired values, and 0s otherwise. The array logInd is not an array of double numbers, but have the class logical. ```whos logInd ``` ``` Name Size Bytes Class Attributes logInd 1x20 20 logical ``` I can now use this array to extract the desired values from X. ```XtargetLogical = X(logInd) ``` ```XtargetLogical = 3 1 4 2 ``` Both methods return the results. 
```isequal(Xtarget, XtargetLogical) ``` ```ans = 1 ``` #### Compound conditions. Let me create an anonymous function that returns true (logical(1)) for values that are even integers. ```iseven = @(x) ~logical(rem(x,2)) ``` ```iseven = @(x)~logical(rem(x,2)) ``` Test iseven. ```iseven(1:5) ``` ```ans = 0 1 0 1 0 ``` #### Find Values Meeting More Than One Condition Now I would like to find the values in X that are less than target and are even. This is very natural to do with logical indexing. We have the pieces of code we need already. ```compoundCondInd = (X < target) & iseven(X) ``` ```compoundCondInd = Columns 1 through 13 0 0 0 0 0 0 0 0 0 0 0 0 0 Columns 14 through 20 1 0 0 0 0 0 1 ``` We can see we found suitable values at locations 3 and 19. And we can extract those values next. ```X(compoundCondInd) ``` ```ans = 4 2 ``` #### Did You Notice? Did you see how easy it is to combine multiple conditions? I simply look for each of condition, getting back logical arrays, and then compute a logical array where the two input arrays are both true (via &). I could, of course, calculate a compound condition where only either one or the other condition needs to be true using logical or (via |). #### A Recent Application I recently used this in the context of finding suspect data values. I had 2 arrays, hourly temperature and speed. The problem is that when the temperature gets near or below freezing, the speed sensor might freeze. But I didn't want to delete ALL the values below freezing. So I looked for data where the temperature was sufficiently low AND the speed was very low (which could potentially mean the sensor was frozen). That way, I did not need to discard all data at low temperatures. #### Have You Used Compound Indexing? Did you do it like I did, using logical expressions? Or did you use some other techniques? What were you trying to achieve with your compound indexing? Let me know here. Get the MATLAB code Published with MATLAB® R2012b By Loren Shure 08:48 UTC | Posted in Indexing | Permalink | 22 Comments » ## Introduction to Functional Programming with Anonymous Functions, Part 3 Tucker McClure is an Application Engineer with The MathWorks. He spends his time helping our customers accelerate their work with the right tools and problem-solving techniques. Today, he'll be discussing how "functional programming" can help create brief and powerful MATLAB code. ### Contents #### Recap When we left off, we had implemented conditional statements, recursion, and multi-line statements in anonymous functions, so today we'll tackle loops. Before we get started, let's implement the functions that we'll need again. ```iif = @(varargin) varargin{2*find([varargin{1:2:end}], 1, 'first')}(); recur = @(f, varargin) f(f, varargin{:}); curly = @(x, varargin) x{varargin{:}}; ``` #### Loops Note that the recursive sequences we created in the last part could also have been implemented with for loops. For instance, here's factorial of n: ``` factorial = 1; for k = 1:n factorial = k * factorial; end``` Many times, recursive functions can be written iteratively in loops. However, we can't use for or while in an anonymous function, so instead of asking how we can unwrap recursive functions into iterative loops, let's ask the reverse: how can we implement loops with recursive functions? 
#### Loops via Recursion To loop properly, one must know: • What to do each iteration • If the process should continue to the next iteration • What's available when the loop begins Allowing the "what to do" to be a function (fcn) of some state (x), the "if it should continue" to be another function (cont) of the state, and "what's available when the loop begins" to be the initial state (x0), we can write a loop function. This is a big step, so bear with me for some explanation! On each step, the loop function will call the cont function, passing in all elements of the state, x, as in cont(x{:}). If that returns false (meaning we shouldn't continue), the current state, x, is returned. Otherwise, if we should continue, it calls fcn with all elements of the current state, as in fcn(x{:}), and passes the output from that to the next iteration. Letting this single iteration be denoted as f, we can build the anonymous function loop using our recur function. ```loop = @(x0, cont, fcn) ... % Header recur(@(f, x) iif(~cont(x{:}), x, ... % Continue? true, @() f(f, fcn(x{:}))), ... % Iterate x0); % from x0. ``` For this trivial example, the state is simply the iteration count. We'll increase the count every iteration until the count >= n and return the final count. All this does therefore is count from 0 to the input n. Not very interesting, but it demonstrates the loop. ```count = @(n) loop({0}, ... % Initialize state, k, to 0 @(k) k < n, ... % While k < n @(k) {k + 1}); % k = k + 1 (returned as cell array) arrayfun(count, 1:10) ``` ```ans = [1] [2] [3] [4] [5] [6] [7] [8] [9] [10] ``` I suppose that worked, but why are we using cell arrays to store the state, such as {0} and {k+1}? There are two reasons. First, if x is a cell array, then when we dump all elements of x into fcn, they become multiple arguments! That is, fcn(x{:}) is the same as fcn(x{1}, x{2}, ...). So instead of our function taking a big cell array for an input, it can take named arguments, which we'll use below. Second, we do this because it allows a function to return multiple elements that will be used by the next iteration, so if a function needed to return y and z, which would be arguments to the next iteration, it can simply return one cell array, {y, z}. It makes it easy to use. Here's a factorial example demonstrating this. The state is two different things: the iteration count, k, and factorial of the previous number, x. Note that both values of the state, k and x, are inputs to all of the functions. Note here how we're using @(k, x) for our functions. By allowing x to be a cell array, each element of the array becomes an argument such as k or x! ```factorial = @(n) loop({1, 1}, ... % Start with k = 1 and x = 1 @(k, x) k <= n, ... % While k <= n @(k, x) {k + 1, ... % k = k + 1; k * x}); % x = k * x; ``` Call it: ```factorial(5) ``` ```ans = [6] [120] ``` Wait, we wanted 120 (the fifth number of the factorial sequence), so what's that 6 doing there? #### A Better Loop Remember how we return the full state? That's not very useful for this factorial example, as we get both k and the number we want as outputs in that cell array. Because the whole state isn't generally useful, let's add a cleanup function to our loop. We'll execute this when the loop is done (right after ~cont(...) returns false). Our cleanup function will take the full state and return only the important parts. ```loop = @(x0, cont, fcn, cleanup) ... % Header recur(@(f, x) iif(~cont(x{:}), cleanup(x{:}), ... % Continue? true, @() f(f, fcn(x{:}))), ... 
% Iterate x0); % from x0. ``` Now here's factorial, with clean output. ```factorial = @(n) loop({1, 1}, ... % Start with k = 1 and x = 1 @(k,x) k <= n, ... % While k <= n @(k,x) {k + 1, ... % k = k + 1; k * x}, ... % x = k * x; @(k,x) x); % End, returning x ``` The result: ```factorial(5) ``` ```ans = 120 ``` First seven numbers of factorial: ```arrayfun(factorial, 1:7) ``` ```ans = Columns 1 through 6 1 2 6 24 120 720 Column 7 5040 ``` That's better. I'll be the first to admit that the loop is a bit longer and much more rigid than a normal MATLAB loop. On the other hand, it can be used in anonymous functions, and its syntax has a certain cleanliness to it in that it doesn't modify any variables that live outside the loop; it has its own scope. This is one nice feature of loop being a function that takes code (functions) as arguments. #### Doing More in a Loop Let's say we want to do something else in the loop, but don't want its output passed to the next iteration, like printing something out. Remember the do_three_things example from last time? We executed numerous statements by putting them in a cell array and used curly to access the output we cared about. We can do that here, in a loop. For example, let's write out a function to print n digits of the factorial sequence. We'll use an array to store two things. The first will be the number that fprintf returns, which we don't care about. The second will be the cell array we want to return, k and x. We'll access that cell array with curly, as in curly({..., {k, x}}, 2), which just returns {k, x}. ```say_it = @(k, x) fprintf('Factorial(%d): %d\n', k, x); print_factorial = @(n) loop({1, 1}, ... % Start with k=1, x=1 @(k,x) k <= n, ... % While k <= n @(k,x) curly({say_it(k,k*x),... % Print, discard {k + 1, ... % k = k + 1; k * x}}, ... % x = k * x; 2), ... % Return {k+1,k*x}. @(k,x) x); % End, returning x ``` ```print_factorial(7); ``` ```Factorial(1): 1 Factorial(2): 2 Factorial(3): 6 Factorial(4): 24 Factorial(5): 120 Factorial(6): 720 Factorial(7): 5040 ``` Now we're executing multiple things and only returning what we want while inside a loop built built on recursion and anonymous conditionals! We've come a long way since Part 1. As a practical note, recall that because these loops use recursion, there's a limit to the number of times they can loop (MATLAB has a recursion limit, which is a setting in Preferences). Also, a recursive implementation of a loop isn't the most efficient. For this reason, it's best to implement loop itself in a file that can then be used in the same way. If it's in a file, it can also be kept on the MATLAB path so that it can be used anywhere. ``` function x = loop(x, cont, f, cleanup) while cont(x{:}) x = f(x{:}); end if nargin == 4 x = cleanup(x{:}); end end``` #### Final Example This brings us to our final example. Below, we'll simulate a simple harmonic oscillator over time, using a structure to store dissimilar states, including a complete time history of the oscillator. This might simulate, for example, the sway of a lamp that's hanging from the ceiling after an earthquake. ```% First, calculate a state transition matrix that represents a harmonic % oscillator with damping. Multiplying this by |x| produces |x| at a % slightly later time. The math here isn't important to the example. Phi = expm(0.5*[0 1; -1 -0.2]); % Now create the loop. x = loop({[1; 0], 1}, ... % Initial state, x = [1; 0] @(x,k) k <= 100, ... % While k <= 100 @(x,k) {[x, Phi * x(:, end)], ... % Update x k + 1}, ... 
% Update k @(x,k) x); % End, return x % Create a plot function. plot_it = @(n, x, y, t) {subplot(2, 1, n), ... % Select subplot. plot(x(n, :)), ... % Plot the data. iif(nargin==4, @() title(t), ... % If there's a true, []), ... % title, add it. ylabel(y), ... % Label y xlabel('Time (s)')}; % and x axes. % Plot the result. plot_it(1, x, 'Position (m)', 'Harmonic Oscillator'); plot_it(2, x, 'Velocity (m/s)'); ``` #### Summary That's it for loops via recursion! Let's look back at what we did over these three parts. First, we started with a simple map utility function to demonstrate the function-of-functions idea. Then we created our ubiquitous inline if, which further enabled recursion (a conditional is necessary to make recursion stop!). We also showed using multiple statements by storing their outputs in a cell array. Finally, we created a loop construct on top of our recursion functions. At this point, we've done more than just scratch the surface of functional programming. We've used MATLAB's interesting constructs, such as function handles, cell arrays, and varargin to implement a functional programming framework, allowing a new syntax within MATLAB, where code can be arguments to flow control functions. Here's a roundup of what we created. ```map = @(val, fcns) cellfun(@(f) f(val{:}), fcns); mapc = @(val, fcns) cellfun(@(f) f(val{:}), fcns, 'UniformOutput', 0); iif = @(varargin) varargin{2*find([varargin{1:2:end}], 1, 'first')}(); recur = @(f, varargin) f(f, varargin{:}); paren = @(x, varargin) x(varargin{:}); curly = @(x, varargin) x{varargin{:}}; loop = @(x0,c,f,r)recur(@(g,x)iif(c(x{:}),@()g(g,f(x{:})),1,r(x{:})),x0); ``` These have also been programmed as "normal" MATLAB functions so that they can be kept on the path and used whenever they're needed. These can be found under "Functional Programming Constructs" in File Exchange, here. Thanks for reading. I hope this has both enabled a new level of detail in anonymous functions in MATLAB and helped demonstrate the wide range of possibilities available within the MATLAB language. Do you have other functional programming patterns you use in your code? For instance, a do-while loop is just like our loop above except that it always runs at least one iteration. Any ideas how to program this or other interesting constructs in anonymous functions? Please let us know here! Get the MATLAB code Published with MATLAB® R2012b By Loren Shure 07:18 UTC | Posted in Function Handles, functional programming | Permalink | 5 Comments » ## Introduction to Functional Programming with Anonymous Functions, Part 2 Tucker McClure is an Application Engineer with The MathWorks. He spends his time helping our customers accelerate their work with the right tools and problem-solving techniques. Today, he'll be discussing how "functional programming" can help create brief and powerful MATLAB code. ### Contents #### Recap Last time, we said that functional programming was marked by storing functions as variables (function handles) and working with functions that act on other functions. We put these ideas together to implement our own version of a map function for handling multiple inputs and outputs from multiple functions simultaneously, and we created iif, an "inline if", to allow the use of conditional statements inside of anonymous functions. So how might we work with recursive functions -- functions of themselves? 
We'll see how a functional programming style allows us to implement recursive functionality inside anonymous functions, and this will pave the way for the final part, in which we'll implement loops, without ever using for or while (which we can't use in anonymous functions). Before we get started, let's implement iif again; we're going to need it frequently. ```iif = @(varargin) varargin{2*find([varargin{1:2:end}], 1, 'first')}(); ``` #### Anonymous Function Recursion Recall that a recursive function is a function that calls itself. It therefore needs some way to refer to itself. When we write an anonymous function, it isn't "named" (hence, "anonymous"), so it can't call itself by name. How can we get around this? Let's start with a Fibonacci sequence example. Recall that the nth number of the Fibonacci sequence is the sum of the previous two numbers, starting with 1 and 1, yielding 1, 1, 2, 3, 5, 8, 13, 21, etc. This is easy to implement recursively. ``` fib = @(n) iif(n <= 2, 1, ... % First two numbers true, @() fib(n-1) + fib(n-2)); % All later numbers``` But hey, that can't work! We haven't defined fib yet, so how could this anonymous function call it? In fact, the anonymous function will never "know" we're referring to it as fib, so this won't work at all. Therefore, instead of trying to call fib directly, let's provide another input: the handle of a function to call, f. ```fib = @(f, n) iif(n <= 2, 1, ... % First two numbers true, @() f(f, n-1) + f(f, n-2)); % All later numbers ``` Getting closer. Now, if we pass fib to fib along with the number we want, it will call fib, passing in fib as the first argument, recursively until we get our answer. ```fib(fib, 6) ``` ```ans = 8 ``` Ok, that's right. The sixth number of the sequence is 8. On the other hand, the syntax we've created is terrible. We have to provide the function to itself? I'd rather not. Instead, let's just write a new function that hands fib to fib along with the input n. ```fib2 = @(n) fib(fib, n); fib2(4) fib2(5) fib2(6) ``` ```ans = 3 ans = 5 ans = 8 ``` That's a lot closer to what we want, but there's one more step. Let's write a function called recur to hand a function handle to itself, along with any other arguments. This makes recursion less cumbersome. ```recur = @(f, varargin) f(f, varargin{:}); ``` That was simple, so now let's re-write fib. The first argument to recur is the function, which we'll define inline. The second is n. That's all there is to it. It now reads as "Recursively call a function that, if k <= 2, returns one, and otherwise returns the recursive function of k-1 plus that of k-2, starting with the user's input n." (If it doesn't read quite this clearly at first, that's ok. It takes some getting used to. Comment liberally if necessary!) ```fib = @(n) recur(@(f, k) iif(k <= 2, 1, ... true, @() f(f, k-1) + f(f, k-2)), ... n); ``` And we can find the first ten numbers of the sequence via arrayfun. ```arrayfun(fib, 1:10) ``` ```ans = 1 1 2 3 5 8 13 21 34 55 ``` Factorial (f(n) = 1 * 2 * 3 * ... n) is another easy operation to represent recursively. ```factorial = @(n) recur(@(f, k) iif(k == 0, 1, ... true, @() k * f(f, k-1)), n); arrayfun(factorial, 1:7) ``` ```ans = Columns 1 through 6 1 2 6 24 120 720 Column 7 5040 ``` A number to an integer power has a nearly identical form. Here's 4.^(0:5). ```pow = @(x, n) recur(@(f, k) iif(k == 0, 1, ... 
true, @() x * f(f, k-1)), n); arrayfun(@(n) pow(4, n), 0:5) ``` ```ans = 1 4 16 64 256 1024 ``` That was a big step for anonymous functions, using both recursion and an inline conditional together with ease. Like map and iif, recur, looks strange at first, but once it's been seen, it's hard to forget how it works (just make one of the inputs a function handle and pass it to itself). And recursion doesn't have to stop at interesting mathematical sequences of numbers. For instance, in the next part, we'll use this to implement loops in, but first, we'll need a some helper functions and a good way to execute multiple statements in an anonymous function. #### Helpers These little functions are useful in many circumstances, and we're going to need curly frequently. ```paren = @(x, varargin) x(varargin{:}); curly = @(x, varargin) x{varargin{:}}; ``` They allow us to write x(3, 4) as paren(x, 3, 4) and similarly for curly braces. That is, now we can think of parentheses and curly braces as functions! At first this might not seem useful. However, imagine writing a function to return the width and height of the screen. The data we need is available from this call: ```get(0, 'ScreenSize') ``` ```ans = 1 1 1920 1200 ``` However, we don't need those preceeding ones. We could save the output to a variable, say x, and then access x(3:4), but if we need this in an anonymous function, we can't save to a variable. How do we access just elements 3 and 4? There are numerous ways, but paren and curly are similar to constructs found in other languages and are easy to use, so we'll use those here. Now we can write our screen_size function to return just the data we want. ```screen_size = @() paren(get(0, 'ScreenSize'), 3:4); screen_size() ``` ```ans = 1920 1200 ``` While on the subject, note that we can actually use any number of indices or even ':'. ```magic(3) paren(magic(3), 1:2, 2:3) paren(magic(3), 1:2, :) ``` ```ans = 8 1 6 3 5 7 4 9 2 ans = 1 6 5 7 ans = 8 1 6 3 5 7 ``` We do the same with the curly braces. Here, the regular expression pattern will match both 'rain' and 'Spain', but we'll only select the second match. ```spain = curly(regexp('The rain in Spain....', '\s(\S+ain)', 'tokens'), 2) ``` ```spain = 'Spain' ``` (Click for Regexp help.) It also works with ':' (note that the single quotes are required). ```[a, b] = curly({'the_letter_a', 'the_letter_b'}, ':') ``` ```a = the_letter_a b = the_letter_b ``` #### Executing Multiple Statements With curly in place, let's examine something a little different. Consider the following: ```do_three_things = @() {fprintf('This is the first thing.\n'), ... fprintf('This is the second thing.\n'), ... max(eig(magic(3)))}; do_three_things() ``` ```This is the first thing. This is the second thing. ans = [25] [26] [15] ``` We've executed three statements on a single line. All of the outputs are stored in the cell array, so we have three elements in the cell array. The first two outputs are actually garbage as far as we're concerned (they're just the outputs from fprintf, which is the number of bytes written, which we don't care about at all). The last output is from max(eig(magic(3))); That is, the biggest eigenvalue of magic(3) is exactly 15. Let's say we just wanted that final value, the eigenvalue. It's the third element of the cell array, so we can grab it with curly. ```do_three_things = @() curly({fprintf('This is the first thing.\n'), ... fprintf('This is the second thing.\n'), ... max(eig(magic(3)))}, 3); do_three_things() ``` ```This is the first thing. 
This is the second thing. ans = 15 ``` For a more complex example, let's say we want to write a function to: 1. Create a small figure in the middle of the screen 2. Plot some random points 3. Return the handles of the figure and the plot Then by storing all of the outputs in a cell array and using curly to access the outputs we care about, we can make a multi-line function with multiple outputs, all in a simple anonymous function. ```dots = @() curly({... figure('Position', [0.5*screen_size() - [100 50], 200, 100], ... 'MenuBar', 'none'), ... % Position the figure plot(randn(1, 100), randn(1, 100), '.')}, ... % Plot random points ':'); % Return everything [h_figure, h_dots] = dots() ``` ```h_figure = 3 h_dots = 187 ``` (As a quick aside, note that if a statement doesn't return anything, we can't put it in a cell array, and so we can't use it this way. There are ways around this, discussed here.) #### To Be Continued Today, we've come a long way, from a simple condition through recursion and executing multiple statements. Here's a roundup of the functions so far. ```map = @(val, fcns) cellfun(@(f) f(val{:}), fcns); mapc = @(val, fcns) cellfun(@(f) f(val{:}), fcns, 'UniformOutput', 0); iif = @(varargin) varargin{2*find([varargin{1:2:end}], 1, 'first')}(); recur = @(f, varargin) f(f, varargin{:}); paren = @(x, varargin) x(varargin{:}); curly = @(x, varargin) x{varargin{:}}; ``` These can also be found here, implemented as regular MATLAB functions that can be kept on the path. Next time, we'll look at loops. Until then, have you worked with functions such as paren or curly? How else are people implementing these or similar operations? Let us know here. Get the MATLAB code Published with MATLAB® R2012b By Loren Shure 11:00 UTC | Posted in Function Handles, functional programming | Permalink | 14 Comments » ### About Loren Shure works on design of the MATLAB language at MathWorks. She writes here about once a week on MATLAB programming and related topics. ### Blog Admin These postings are the author's and don't necessarily represent the opinions of MathWorks.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 55, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.892881453037262, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-algebra/116545-eigenvectors.html
# Thread:

1. ## eigenvectors

Hey, I'm really struggling today. We just started doing eigenvectors, and I'm supposed to solve the following problem without using determinants. Prove that the eigenvectors of the matrix $\begin{bmatrix}2 & 0 \\ 0 & 2\end{bmatrix}$ generate a 2-dimensional space and give a basis for it. Also, state the eigenvalues. I just don't think this stuff is harder than some of the other stuff I've done, but I'm really pathetic right now.

2. The eigenvalues are in your given matrix: ${\lambda}=2$.

3. Originally Posted by grandunification (question quoted above)

The definition of eigenvalue and eigenvector says that you must have $\begin{bmatrix}2 & 0 \\ 0 & 2\end{bmatrix}\begin{bmatrix}x \\ y\end{bmatrix}= \begin{bmatrix} \lambda x \\ \lambda y\end{bmatrix}$. Do the multiplication on the left and compare the vectors.
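For reference, carrying out the suggested multiplication gives

$$\begin{bmatrix}2 & 0 \\ 0 & 2\end{bmatrix}\begin{bmatrix}x \\ y\end{bmatrix}=\begin{bmatrix}2x \\ 2y\end{bmatrix}=2\begin{bmatrix}x \\ y\end{bmatrix},$$

so every nonzero vector of $\mathbb{R}^2$ is an eigenvector with eigenvalue $\lambda = 2$. The eigenvectors therefore span a 2-dimensional space, with the standard basis $\begin{bmatrix}1 \\ 0\end{bmatrix}, \begin{bmatrix}0 \\ 1\end{bmatrix}$ as a basis of eigenvectors, and the only eigenvalue is $\lambda = 2$.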
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9495344161987305, "perplexity_flag": "middle"}
http://mathhelpforum.com/algebra/25179-equation-set-integers.html
# Thread:

1. ## equation on the set of integers:

Solve the following equation on the set of integers: $\sin(\frac{\pi}{3}(x-\sqrt{x^2-3x-12}))=0$

2. Originally Posted by perash (question quoted above)

Which is asking you to find all the solutions of: $x-\sqrt{x^2-3x-12}\equiv 0 \mod 3$

RonL

3. Hello, perash! I haven't solved it yet . . . It's quite involved . . .

Solve the following equation on the set of integers: $\sin\left[\frac{\pi}{3}\left(x-\sqrt{x^2-3x-12}\right)\right]\:=\:0$

The Captain is absolutely correct . . .

We have: $\sin\left[\pi\left(\frac{x - \sqrt{x^2-3x-12}}{3}\right)\right] \:=\:0$

We know that $\sin(\pi k) \:=\:0$ for any integer $k$. Hence $\frac{x-\sqrt{x^2-3x-12}}{3}$ must be an integer, so we can let $\frac{x-\sqrt{x^2-3x-12}}{3} \:=\:k$.

There are more restrictions: $x^2-3x-12$ must be nonnegative, so $x^2-3x-12 \:\geq\:0\quad\Rightarrow\quad x \leq -3,\;x \geq 6$ (for integer $x$), and, of course, $x^2-3x-12$ must be a square.

4. So far I’ve found three possible values for x: −13, 7, 16.

5. Originally Posted by CaptainBlack: $x-\sqrt{x^2-3x-12}\equiv 0 \mod 3$

Okay, my stab at this is confusing me.

$x - \sqrt{x^2 - 3x - 12} \equiv 0~\text{ mod 3}$

$x - \sqrt{x^2} \equiv 0~\text{ mod 3}$

$x - |x| \equiv 0~\text{ mod 3}$

Isn't this an identity in mod 3? But the original equation is not true for all x within the domain.

-Dan

6. You can’t remove multiples of 3 within the square root: $\sqrt{4-3\times1}\not\equiv\sqrt{4}\pmod{3}$.

7. Originally Posted by JaneBennet: You can’t remove multiples of 3 within the square root: $\sqrt{4-3\times1}\not\equiv\sqrt{4}\pmod{3}$.

Hmmm... I had never noticed that before. Thank you. (Upon reflection I should have known that. I studied that section of Algebra at one point.) By the way, if you like, you can do the "not equivalent" in LaTeX: $\not \equiv$ by typing \not \equiv

-Dan

8. Originally Posted by JaneBennet: So far I’ve found three possible values for x: −13, 7, 16.

16 doesn't work. I have "reduced" the problem to solving the Diophantine equation $n^2 - 4m^2 = 57$. (I can explain how I got this, but I'd like the original poster to do some work on their own.) So $(n + 2m)(n - 2m) = 57$. There are going to be some obvious limits on m and n here, because n + 2m has to be one of 1, 3, 19, 57. So we are looking for the solution to one of two systems of equations:

$n + 2m = 57$ and $n - 2m = 1$

or

$n + 2m = 19$ and $n - 2m = 3$

The first has a solution of m = 14 and n = 29, giving an x value of -13. The second has a solution of m = 4 and n = 11, giving an x value of 7. So the only two solutions are x = -13 and x = 7.

-Dan
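For reference, a quick check of these values against the original equation: for $x=7$, $x^2-3x-12=49-21-12=16$, so $\sqrt{16}=4$ and $\frac{7-4}{3}=1$ is an integer; for $x=-13$, $169+39-12=196$, so $\sqrt{196}=14$ and $\frac{-13-14}{3}=-9$ is an integer; for $x=16$ (the value ruled out in the final reply), $256-48-12=196$ and $\sqrt{196}=14$, but $\frac{16-14}{3}=\frac{2}{3}$ is not an integer, so $x=16$ indeed fails.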
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 27, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9293985962867737, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/homework?page=23&sort=newest&pagesize=50
# Tagged Questions Applies to questions of primarily educational value - not only questions that arise from actual homework assignments, but any question where it is preferable to guide the asker to the answer rather than giving it away outright. 1answer 70 views ### Photonics: Slab As a Lens [closed] The question can be found here: http://gyazo.com/fc4d26cd35e6ce368ad2a8ed504f1dcc The refractive index it references can be found here: http://gyazo.com/94fd2f3b5ea7da9226c3acd56b0024c1 I'm not ... 1answer 233 views ### Kinematics - Find theta with Coefficient of Friction? I recently found a problem that looked like this: A box sits on a horizontal wooden ramp. The coefficient of static friction between the box and the ramp is ... 3answers 183 views ### Classical mechanics. One-dimensional motion Here is one task below. How to solve equation $$m\ddot {x} + ax = F(t), x(0) = \dot x (0) = 0$$ in quadratures by using two methods? I tried to create a system of equations \begin{matrix} \dot ... 2answers 109 views ### How do you calculate the time to emission of an electron from a metal given the incident radiation? Here's the question: A monochromatic point source of light radiates 25 W at a wavelength of 5000 angstroms. A plate of metal is placed 100 cm from the source. Atoms in the metal have a radius of 1 ... 2answers 745 views ### How to add two perpendicular 2D vectors [closed] vector D = 4 cm North vector J = 4.5 cm West what is D+J? In a more general sense, how can two 2D vectors that are perpendicular to each other be added? 1answer 262 views ### Lagrangian density for a Piano String So I'm trying to do this problem where I'm given the Lagrangian density for a piano string which can vibrate both transversely and longitudinally. $\eta(x,t)$ is the transverse displacement and ... 0answers 159 views ### include the stretch of the spring own weight in potential energy for spring pendulum? we are given a problem with spring with its own mass $m$. I am confused how to set up the PE term in the Lagrangian. Assume the spring has length of $L_{0}$ when it is laying on a table horizontally. ... 1answer 203 views ### Solving by using Gauss law Task: find the vector $\mathbf E$ in the center of the sphere with radius $R$, which has charge volume distribution \$\rho , \rho = (\mathbf a \cdot \mathbf r ), \mathbf a = \operatorname{const}, ... 2answers 215 views ### Calculating two particle position and velocity with each other? I've calculated the displacements and average velocities of both particles. $A$ average velocity $= 1.1180m/s$ displacement is $11.1803m$. $B$ average velocity $= 0m/s$ displacement is $0m$. ... 0answers 477 views ### Create general equation of racing cars with head start [closed] Cars A and B are racing each other along the same straight road in the following manner: Car A has a head start and is a distance $d_A$ beyond the starting line at $t=0$. The starting line is at ... 1answer 251 views ### Modulus of Elasticity Consider a cube of unit dimensions. Let $\alpha$ and $\beta$ be the lateral and longitudinal strains. The expressions for moduli of elasticity on applying unit tension - 1) At one edge: Young's ... 1answer 156 views ### Symbol for dashpot/damper (in a harmonic oscillator) In diagrams that contain the dashpot symbol, sometimes the mass is attached to the "interior" end of the dashpot, other times the mass is attached to the "base" end. For example, consider the ... 2answers 276 views ### Tensor product decomposition of SU(2) I have a rather trivial question. 
http://mathhelpforum.com/calculus/185291-monte-carlos-method-particular-equation-style-required.html
Thread: 1. Monte Carlos Method: Particular Equation Style Required I require a numerical program to generate a random equation for me with the requirements listed below. This curve is for a velocity profile of an object, must have positive velocity at all times and has limits on the maximum acceleration and deceleration. I desire to create a slightly different curve each time I generate one, still meeting all the requirements. The requirements for the function, v(t) for 0 < t < t_max, are: • global max < b • global min = 0 • v(0) = v(t_max) = 0 • v ' (t)_max <= a • v ' (t)_min >= c • integral ( v(t) ) = d The following two requirements, as seen on the image, add no new information to the problem: • v ' (0) <= a • v ' (t_max) >= c An image with this information is shown below: Any help on this problem would be appreciated. If it is very lengthy to explain how to do it, kindly point me in the right direction as I have some maths textbooks. For the numerical program I plan to use either MATLAB or Mathematica. Thanks again! 2. Re: Monte Carlos Method: Particular Equation Style Required An interesting problem. I have a few comments: 1. $\int_{0}^{t_{\text{max}}}v(t)\,dt = d$ is superfluous if $\min_{0\le t\le t_{\text{max}}}v(t)=0.$ Unless, as I suspect is the case, $d$ is a number you know ahead-of-time. Certainly, if $v(t)\ge 0$ on the interval in question, then $\int_{0}^{t_{\text{max}}}v(t)\,dt$ is the distance traveled, whatever that is. 2. Depending on the maximum slopes of the velocity, a and c, and the required distance traveled, d, you may not be able to produce a function satisfying all those requirements. In particular, if the required distance traveled, d, is large compared with a and c, you will not be able to exhibit the required function. 3. My solution: From the origin, start rising at the maximum slope a. Backwards from the endpoint, descend at the maximum slope c. Then fiddle with what's in-between those two lines to make the area equal to d. You could try to make the area under the extreme lines equal to half of d, and account for the other half of d by various shenanigans that allow you to change things and still keep the same area. 3. Re: Monte Carlos Method: Particular Equation Style Required Notice that if you have n functions which satisfy the constraints, then any normalised linear combination of them will also match the constraints, i.e., if you have a collection of n functions $f_i(t)$ which satisfy the constraints, then the following does as well: $g(t) = \sum_{i=1}^{n} w_i f_i(t)$ with the restrictions $0 \leq w_i \leq 1$ and $\sum w_i = 1$ So, you can generate your fluctuations by randomising the weights $w_i$. My suggested algorithm for you would be: (1) Manually find a few (say, 10) functions which satisfy your constraints using any method (e.g., trial and error). This should be feasible enough, and you'll only have to do it once. (2) use a random number generator to assign a weight to each function you found in (1) (3) then plot the function g(t) corresponding to those randomly chosen weights. To create more realisations, you can generate a new set of random numbers for step (2).
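A minimal numerical sketch of this weighting idea (my own illustration, not from the thread; it assumes NumPy and that the candidate profiles $f_i$ are supplied as ordinary Python callables that already satisfy the constraints):

```python
import numpy as np

def random_convex_combination(profiles, rng=None):
    """Return g(t) = sum_i w_i * f_i(t) with random weights w_i >= 0, sum(w_i) = 1,
    given a list of callables f_i that each satisfy the constraints."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.random(len(profiles))   # X_i >= 0
    w = x / x.sum()                 # w_i = X_i / sum_j X_j
    return lambda t: sum(wi * f(t) for wi, f in zip(w, profiles))

# Usage (f1, f2 are placeholders for hand-built admissible profiles):
# g = random_convex_combination([f1, f2])
```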
Notes • To generate random weights you can generate $n$ random numbers $X_i \geq 0$ by any method (all maths programs should be able to do this), then use $w_i=\frac{X_i}{\sum_{i=1}^{n} X_i}$ • a particularly simple implementation is with n=2, since you could then use a single random number generator to get $w_1$ between 0 and 1, then use $w_2 = 1- w_1$ • Obviously the n functions you find will need to be significantly different if you want to see significant fluctuations • the types of paths you observe will be heavily dependent on the shapes of the functions you choose, which may be unrealistic for your purpose. I would try to choose functions that look different to each other. PS: Although I started this post by saying "notice that", you should check the result in case I made a mistake. 4. Re: Monte Carlos Method: Particular Equation Style Required Thank you Ackbeet and SpringFan25 for your prompt replies. Ackbeet: Unless, as I suspect is the case, $d$ is a number you know ahead-of-time. Yes, d is known and the values a, b, c are also known. Yes, since this is a velocity profile the derivative is acceleration and the integration is the displacement of the object. For example, let us choose the values of: • distance (integral) = d = 74m • max accn = a = 2m/s^2 • max dccn = c = -4m/s^2 • t_max = 20 seconds. One function which achieves this is a max acceleration for 2 seconds, holding 4m/s for 17 seconds, and max deceleration for 1 second. This generates a function in a trapezium shape with no negative values, meeting all the requirements. I wish, however, to find a function (continuous) which meets these conditions (notice I have removed the constraint for the global maximum (b) as this is taken care of by the maximum rates of change). SpringFan25: Notice that if you have n functions which satisfy the constraints, then any normalised linear combinations of them will also match the constraints Yes, your idea of normalised linear combinations is a good idea. My first question would be: how do I generate the continuous functions in the first place? I have the following ideas so far: • A numerical solver to solve y = a.x^(n) + b.x^(n-1) + ... &c. with the various constraints put into the numerical program. • Analytical method to solve for the equation. • Simple constant acceleration equations for trapezium shaped functions (not continuous but meet the conditions) • Others? 5. Re: Monte Carlos Method: Particular Equation Style Required Simple constant acceleration equations for trapezium shaped functions (not continuous but meet the conditions) Minor point: I think that trapezium functions are continuous but not differentiable. I agree that they meet the conditions as written in post #1. Major point: if you are going to allow non-differentiable functions then the problem is dramatically simplified. You can just start/finish your graph with two short lines with slope a and c respectively. For simplicity, choose the length of those lines so that the area underneath them is 1. Now all you need to do is choose some random points in the middle and connect them with straight lines. Scale up/down these middle points by a constant factor so that the area under them is d-1. This will bring the total area under the graph to $d$ as required. If re-scaling causes the global maximum constraint to be breached, discard your middle points and randomly choose some more. Alternatively reduce the highest point and then repeat the scaling factor calculation.
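A rough Python sketch of that accept/reject recipe (my own simplification: all interior knots are rescaled together rather than treating the two end ramps separately, and profiles that break the slope or ceiling constraints are simply rejected; assumes NumPy):

```python
import numpy as np

def random_piecewise_linear(d, a, c, b, t_max, n_knots=8, rng=None, max_tries=10000):
    """Random piecewise-linear v(t) with v(0) = v(t_max) = 0, rescaled so the
    trapezoid-rule area equals d, rejected unless c <= v' <= a and v < b."""
    rng = np.random.default_rng() if rng is None else rng
    t = np.linspace(0.0, t_max, n_knots)
    for _ in range(max_tries):
        v = np.concatenate(([0.0], rng.random(n_knots - 2), [0.0]))
        area = np.sum((v[1:] + v[:-1]) * np.diff(t)) / 2.0
        if area <= 0.0:
            continue
        v *= d / area                            # force the area to be exactly d
        slopes = np.diff(v) / np.diff(t)
        if slopes.max() <= a and slopes.min() >= c and v.max() < b:
            return t, v                          # knot times and values of an admissible profile
    raise RuntimeError("no admissible profile found; relax the constraints or add knots")
```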
Edit: I think Ackbeet already suggested the above in post #2. Many combinations will be discarded but a computer could make thousands/millions of attempts in a very short time. Possible smooth functions Notice that there is no constraint on v' except at t_max and t_min. Trigonometric functions (specifically, combinations of sine waves) look promising if you run out of simpler options (polynomials etc): • they can be combined to make many different shapes. • Because they are periodic they can be easily parameterized to meet the slope restrictions at t_min and t_max. • An upper bound on the global maximum is also easy to compute (the global maximum can't be more than the sum of the amplitudes of your waves). • The integrals should also be tractable as long as you confine yourself to simple functions, e.g. combinations of sine waves with different periods/phases/amplitudes. The integral calculations will be slightly easier if you choose half-periods that fit into the period (t_max - t_min) a whole number of times. • One minor challenge will be ensuring that the global minimum is not breached, but this can be overcome by trial and error. PS: I expect the above is tractable but I haven't actually tried it myself! I won't look at it further unless you actually need smooth functions. 6. Re: Monte Carlos Method: Particular Equation Style Required It was too much fun to ignore, so I've attached an Excel implementation of the smooth trig functions approach I suggested above. The main changes from my above post are: • t_min is assumed 0 • I was wrong to say that there was no restriction on v'() except at t_min and t_max. I had misread your first post. • All sine waves complete an odd number of half periods in the time interval. This makes all graphs symmetric, so v(0) = v(tmax) = 0. We also have v'(0) = -v'(t_max). • To simplify calculations all sine waves have a phase shift of 0 or $\pi$ • I've used a stronger gradient constraint than you need. I used |v'(t)| < min(|a|,|c|). This is useful because an upper bound on |v'(t)| can be easily calculated from the amplitude and period of the sine waves (derivation below) • There is a big "calculate me" button to generate a new random graph. It requires macros enabled to run. I don't accept responsibility if this breaks your computer/life etc. Upper bound on |v'(t)| I'll derive this. Suppose v(t) is the sum of $n$ sine waves with amplitude $A_i$, period $B_i$, and phase-shift $C_i$ as follows: $f_i(t) = A_i \sin(\frac{2 \pi}{B_i} t + C_i)$ $v(t) = \sum f_i(t)$ Differentiate the first equation. This gives: $f'_i(t) = \frac{2 \pi A_i}{B_i} \cos\left(\frac{2 \pi}{B_i} t + C_i\right)$ Since $|\cos(x)| \leq 1$ it follows that $|f'_i(t)| \leq \frac{2 \pi A_i}{B_i}$ And we can sum as many sine waves as we want, adding up the upper bounds for each. This is handy since v(t) is the sum of the sine waves: $|v'(t)| \leq \sum |f'_i(t)| \leq 2 \pi \sum \frac{A_i}{B_i}$ (the left hand inequality is valid. Think "triangle inequality" with lots of variables.) How does this help? Note that $|v'(t)| \leq |a| \implies v'(t) \leq a ~~ \text{ since } a > 0$ and, $|v'(t)| \leq |c| \implies v'(t) \geq c ~~ \text{ since } c < 0$ Combine the above pair of results: $|v'(t)| \leq |a| \text{ and } |v'(t)| \leq |c| \implies c \leq v'(t) \leq a$ It follows that $|v'(t)| \leq \min(|a|,|c|) \implies c \leq v'(t) \leq a$ So testing that $|v'(t)| \leq \min(|a|,|c|)$ will ensure that the acceleration constraints are met.
Instead of evaluating |v'(t)| everywhere, It's sufficient to test the upper bound of |v'(t)| that was derived in the previous part of this post. ie the test is: $2 \pi \sum \frac{A_i}{B_i} \leq \min{(|a|,|c|)}$ Meeting all the constraints for the rest of this post assume that v(t) is constructed as follows: $v(t) = \sum f_i(t)$ $f_i(t) = A_i \sin(\frac{2 \pi}{B_i} t + C_i) ~~~ \text{ with }A_i,B_i \geq 0$ v(0) = v(tmax) = 0 This is the easiest one. Consider time 0: $f_i(0) = A_i \sin(\frac{2 \pi}{B_i} \cdot 0 + C_i)$ $f_i(0) = A_i \sin(C_i)$ so a sufficient condition for $f_i(0)$ is $\sin(C_i)=0$ so the phase shift should be $0, \pi, 2\pi, 3\pi$. Since the period is $2 \pi$ anyway, so only unique phase shifts are 0 and $\pi$. What about $f_i (t_{max})$? The sin function returns to its starting point every half-period. So its suffiient to ensure that the interval $[0,t_{max}]$ has a whole number of half periods in it. Then we can use $f_i(t_{max}) = f_i(0) = 0$. (in fact, we will want an odd number of half-periods later on.). c < v'(t) < a I already showed that a sufficient condition is $2 \pi \sum \frac{A_i}{B_i} \leq \min{(|a|,|c|)}$ Total displacement = d This is the tricky one. It's very convenient to restrict the period of each sine wave so that an odd number of half periods fit in the interval $[0,t_{max}]$. This means that the number of completed oscilations will always be 1.5, 2.5, 3.5, 4.5 etc. This is important because the integral of a full oscillation is always zero. So we only need to compute the integral of the remaining half oscillation of each wave. We already established the the phase shift $C_i$ is 0 or $\pi$. For the case where the phase shift is 0, This area is equal to: $A_i \int_0^{\pi} \sin(B_i t + 0) dt = A_i \left[ \frac{-\cos(B_i t)}{B_i}\right]_0^{\pi}$ For the case where the phase shift is $\pi$ the half-period area is equal to $A_i \int_{0}^{\pi} \sin(B_i t + \pi) dt$ But remember that the sin function has this property: $sin(x + \pi) = -sin(x)$ So for the phase shifted case the area is just -1 times the area we just calculated: $A_i \int_{0}^{\pi} \sin(B_i t + \pi) = -A_i \int_{0}^{pi} \sin(B_i t) dt = A_i \left[ \frac{\cos(B_i t)}{B_i}\right]_0^{\pi}$ Finally, a single formula covering both cases: $\int_{0}^{\pi} f_i dt = A_i \left[ \frac{-\cos(B_i t)}{B_i}\right]_0^{\pi} \cdot (-1)^{\frac{C_i}{\pi}}$ (if you dont udnerstand what the -1 factor is doing at the end, it evaluates to 0 if there is no phase shift, and -1 if there is a phase shift) Hence the area under the graph $\Omega$ is: $\Omega = \int_0^{t_{max}} v(t) dt = \sum_i \left( \int_0^{t_{max}} f_i(t) dt \right)= \sum_i \left( A_i \left[ \frac{-\cos(B_i t)}{B_i}\right]_0^{\pi} \cdot (-1)^{\frac{C_i}{\pi}} \right)$ We require the area under the graph to be exactly d. This is easy (compared to all that integration!). Simply set new coefficients: $\tilde{A}_i = A_i \frac{d}{\Omega}$ This scales up the entire function so that the total area will be d. Obviously, this scale up/down may brake the acceleration constraints, since the slopes will also be scaled. The attached workbook checks this, and tries a new set of $A_i, B_i$ if there is a breach. Algorithm 1) Randomly choose $A_i, B_i.C_i$ subject to the rules derived above, ie: • $C_i$ = 0 or $\pi$ • $B_i = \frac{t_{max}}{k+0.5} ~~~ : ~ k \in \mathbb{N}$ 2) Calcualte adjusted $\tilde{A}_i = A_i \frac{d}{\Omega}$ 3) Test the adjusted coefficients meet the accelleration constraints. If they dont, discard and return to step 1. 
The test is $2 \pi \sum \frac{A_i}{B_i} \leq \min{(|a|,|c|)}$ The attached workbook does the above. I've chosen ranges on $A_i,B_i$ that produce reasonably varied output. Here is some sample output. Attached Files • Random velocities generator v7.xls (406.0 KB, 6 views) 7. Re: Monte Carlos Method: Particular Equation Style Required SpringFan25, Thank you very much for your detailed reply. I will need some time to go through it and understand it. One thing to point out: it seems that all the functions generated are even and thus symmetrical. That constraint is not required - is there a way to get around that? 8. Re: Monte Carlos Method: Particular Equation Style Required The symmetry is convenient because it makes sure that the velocity graph returns to its starting value, i.e. v(0) = v(tmax) = 0. You can get rid of the symmetry by adding a further asymmetric function to v(t) which also satisfies v(0) = v(tmax) = 0. One way is to add another pair of sine waves which are 180 degrees out of phase, and with one having double the frequency of the other; examples of 2 sine waves and their sum are linked below to illustrate: plot y=sin(x), y=0.5sin(2x-pi) on 0,pi - Wolfram|Alpha plot y=sin(x) + 0.5sin(2x-pi) on 0,pi - Wolfram|Alpha However these functions are not ideal, as the low gradient at the start means that small fluctuations from any other waves you put on top can easily make the overall value of v(t) negative at low values of t.
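A rough Python re-implementation of the recipe from post #6 (my own sketch, not the attached workbook: the amplitude and half-period ranges are arbitrary illustrative choices, the area is computed numerically rather than from the closed form above, and nonnegativity and the ceiling b are checked on a grid; assumes NumPy):

```python
import numpy as np

def random_sine_profile(d, a, c, t_max, b=float("inf"), n_waves=3, rng=None, max_tries=50000):
    """v(t) = sum_i A_i*sin(2*pi*t/B_i + C_i), with B_i = t_max/(k_i + 0.5) so each
    wave completes an odd number of half-periods (hence v(0) = v(t_max) = 0) and
    C_i in {0, pi}.  Amplitudes are rescaled so the area equals d; a candidate is
    rejected unless 2*pi*sum(|A_i|/B_i) <= min(|a|,|c|), v >= 0 and v < b."""
    rng = np.random.default_rng() if rng is None else rng
    t = np.linspace(0.0, t_max, 2001)
    for _ in range(max_tries):
        k = rng.integers(0, 4, size=n_waves)                 # 2k+1 half-periods (range is illustrative)
        B = t_max / (k + 0.5)
        C = rng.choice([0.0, np.pi], size=n_waves)
        A = rng.uniform(0.1, 1.0, size=n_waves)              # illustrative amplitude range
        v = (A[:, None] * np.sin(2 * np.pi * t / B[:, None] + C[:, None])).sum(axis=0)
        area = np.sum((v[1:] + v[:-1]) * np.diff(t)) / 2.0
        if abs(area) < 1e-9:
            continue
        A, v = A * (d / area), v * (d / area)                # rescale so the area is d
        if (2 * np.pi * np.sum(np.abs(A) / B) <= min(abs(a), abs(c))
                and v.min() >= -1e-9 and v.max() < b):
            return t, v
    raise RuntimeError("no admissible profile found; adjust the A_i/B_i ranges or n_waves")
```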
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 67, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9027208685874939, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-applied-math/37992-electromagnetic.html
# Thread: 1. ## electromagnetic Hello, I was doing my work and am unsure about some of these questions. I am hoping someone here can help me out. 1. Because Earth has an electric field of 150 N/C, water droplets can remain stationary in the atmosphere if they are charged. If the average water droplet is 0.030 mm in diameter, how many electrons does it need to remain stationary? 2. It asks to draw the magnetic field lines and trajectories of the negatively charged particles. I know how the field lines go, but won't the electrons go in the opposite direction? 2. Originally Posted by MarkC The electric force is balancing the gravitational force. So find the weight of the droplet and then the size of the electric force. From that you can calculate the number of charges. For the second problem, yes, negative charges move in the opposite direction to that in which positively charged particles move. -Dan 3. I'm still having a bit of difficulty with the rain drop question. I found the mass and then set Eq = Fg, Fq/q = Gm1m2/r^2, and then solved for q after simplifying. Is that right? Because I got 3.9E12 electrons, which seems like a bit too much. 4. Originally Posted by MarkC It is a bit too much. The radius of the water drop is 0.00150 cm. Given a density of water as 1 g/cm^3 I get a weight of $1.4137 \times 10^{-10}~N$ for the water droplet. Thus I get a charge on the water droplet of $q = \frac{mg}{E} = 9.2363 \times 10^{-13}~C$ to balance the water droplet in the E field and finally $n = \frac{q}{e} = 5772700$ electrons. -Dan
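A quick numerical check of Dan's figures (a sketch; it assumes SI units, g = 9.8 m/s^2 and e = 1.60e-19 C, which reproduces his final charge and electron count to the quoted precision — his quoted weight corresponds to g = 10):

```python
import math

# Droplet of diameter 0.030 mm held up by E = 150 N/C: qE = mg.
r = 0.030e-3 / 2.0                         # radius in metres
m = 1000.0 * (4.0 / 3.0) * math.pi * r**3  # mass, water density 1000 kg/m^3
E, g, e = 150.0, 9.8, 1.60e-19
q = m * g / E                              # charge needed to balance the weight
print(m * g, q, q / e)                     # ~1.39e-10 N, ~9.24e-13 C, ~5.77e6 electrons
```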
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9428727626800537, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/67074-calc-word-problem.html
# Thread: 1. ## calc word problem In 1950, a research team digging near Folsom, New Mexico, found charred bison bones along with some leaf-shaped projectile points that had been made by a Paleo-Indian hunting culture. It was clear from the evidence that the bison had been cooked and eaten by the makers of the points, so that carbon-14 dating of the bones made it possible for the researchers to determine when the hunters roamed North America. Tests showed that the bones contained between 27% and 30% of their original carbon-14. Use this information to show that the hunters lived roughly between 9000 BC and 8000 BC. I know to use the equation y(t) = C*e^(kt), where y represents the amount at a given time, C represents the initial amount, k represents the rate with respect to time, and t represents time. I also know that 1950, 27% and 30%, and 9000 and 8000 BC are important, but I don't know what to do with them...especially since they are ranges and not exact numbers. 2. The decay constant is $k=\frac{-1}{T}\ln(2)$ Since the half life of Carbon 14 is about T=5750 years, then we have: $k=\frac{-1}{5750}\ln(2)=-.00012$ Now, let A be the original amount. 27% remaining: $.27A=Ae^{-.00012t}$ $t\approx 10911$ years. 30% remaining: $.30A=Ae^{-.00012t}$ $t\approx 10033$ years. 1950-10911=8961 BC 1950-10033=8083 BC 3. Thanks Galactus. The only part I didn't follow was how did you get k = (-1/5750) ln(2). I know that 5750 is the half life, but where did ln 2 come from? 4. That is derived from DE's. Just use it as an identity, so to speak. That is how to find k if you know the half life. No need to derive it. We can do it, but I would just as soon not right now. You can probably find its derivation in a calc book or some place. 5. Originally Posted by mathh18 The equation for decay is $y=y_0 e^{-kt}$. You want to find the time it takes for half of the substance to decay, or when $y=\tfrac{1}{2}y_0$. Substituting this into the equation, we have $\tfrac{1}{2}y_0=y_0e^{-kt}\implies\tfrac{1}{2}=e^{-kt}$ Taking the natural log of both sides, we end up with $\ln\left(\tfrac{1}{2}\right)=-kt\implies t=-\frac{\ln\left(\tfrac{1}{2}\right)}{k}=\frac{\ln\left(2\right)}{k}$. This value $t$ is the half life of the substance. It is usually denoted by $T_{\frac{1}{2}}$ or $\lambda$. As a result, the decay constant is $k=\frac{\ln(2)}{T_{\frac{1}{2}}}$. Does this make sense?
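A quick check of those numbers (a sketch using the same rounded decay constant k = 0.00012 per year that galactus used; the exact value ln(2)/5750 ≈ 0.00012055 shifts the dates by roughly 50 years):

```python
import math

k = 0.00012                                  # rounded decay constant, per year
for frac in (0.27, 0.30):
    t = -math.log(frac) / k                  # years elapsed when `frac` of the C-14 remains
    print(frac, round(t), round(t) - 1950)   # 0.27 -> 10911 yr, 8961 BC; 0.30 -> 10033 yr, 8083 BC
```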
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9695855379104614, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/24361?sort=newest
## Are two sheaves that are locally isomorphic globally isomorphic ? Let $X$ be a topological space and let $\mathcal{F}$ and $\mathcal{G}$ be two sheaves over $X$. Of course, if one has a morphism $f : \mathcal{F} \to \mathcal{G}$ such that for all $x\in X$, $f_x : \mathcal{F}_x \to \mathcal{G}_x$ is an isomorphism, then it is known that $f$ itself is an isomorphism. My question is the following: if we don't have such a morphism $f$, but if we know that for all $x\in X$, $\mathcal{F}_x$ and $\mathcal{G}_x$ are isomorphic, is it true that $\mathcal{F}$ and $\mathcal{G}$ are isomorphic ? - 3 No: think of non-isomorphic complex vector bundles over $X$. – Qfwfq May 12 2010 at 18:20 ## 7 Answers Definitely not. If $X$ is a ringed space, then an $\mathcal{O}_X$-module $F$ is called locally free of rank $1$ if $X$ is covered by open subsets $U_i$ such that $F|_{U_i}$ is free of rank $1$ over $\mathcal{O}_{U_i}$. These correspond to line bundles on $X$. Line bundles form a group, called the Picard group of $X$ and denoted by $\text{Pic}(X)$. This group does not have to vanish: if $X$ is a CW complex and we take the sheaf of continuous functions, then $\text{Pic}(X)$ is isomorphic to $H^1(X,\mathbb{Z}/2)$. For $X=S^1$, the Möbius strip is the nontrivial element here. The corresponding example in algebraic geometry is $\text{Pic}(\mathbb{P}^n_k)=\mathbb{Z}$, the generator given by the Serre twist $\mathcal{O}(1)$. Of course, there are more easy counterexamples, but I wanted to indicate that there is a rich theory coming from the observation that two locally isomorphic sheaves need not be isomorphic. - No: think of non-isomorphic vector bundles over $X$. They are stalkwise isomorphic even as modules (over $\mathcal{O}_{X,x}$ at various $x \in X$, where $\mathcal{O}_X$ is the sheaf of continuous functions on $X$), hence as abelian groups. - How about a positive answer for a twist:-))? Pick an open cover $X=\cup_i U_i$ and try to reglue $F|_{U_i}$ into a brand new sheaf. To do that you need to pick automorphisms $\sigma_{i,j}\in Aut(F|_{U_i \cap U_j})$ that agree on triple intersections. This is some Čech cocycle in $Z^1_{U_i} (X, Aut (F))$. Two Čech cocycles will give you the same sheaf if they differ by a coboundary. Hence if $H^1_{U_i} (X, Aut (F))=1$ then any regluing will give the same sheaf. Now you have to take care of all possible covers by going to the limit. Here is your positive answer then: true if and only if $H^1 (X, Aut (F))=1$. - Do you mean dim H¹=1? – Ketil Tveiten May 12 2010 at 13:45 No, I do not. $H^1$ with coefficients in a sheaf of groups (not necessarily abelian) is a group. I mean that this group is trivial. – Bugs Bunny May 12 2010 at 17:28 1 @BugsBunny: H^1(X,F) is NOT a group for a non-abelian sheaf of groups F, it's just a pointed set. – Qfwfq May 12 2010 at 18:25 Thanks, unknown google! You are right, it is just a set if $Aut(F)$ is non-abelian. But it still parametrizes isomorphism classes of sheaves locally isomorphic to $F$. – Bugs Bunny May 12 2010 at 22:08 Actually, the examples given in the answers so far are even counter-examples to the weaker statement that two sheaves $F,G$ for which there exists a covering $U_i$ such that $F|U_i\cong G|U_i$ be isomorphic.
For the original question regarding stalks there is a simpler example: Let $X=\{\eta,s\}$ be the topological space having $\{\eta\}$ as the only non-trivial open set. To give an abelian sheaf $F$ on $X$ is equivalent to giving two groups $F(X)=F_s$ and $F(\{\eta\})=F_\eta$ and a restriction homomorphism $F_s\to F_\eta$. Taking $F_s=F_\eta=A$ for some abelian group $A\ne0$ and choosing either $\mathrm{id}_A$ or $0$ as the restriction defines two non-isomorphic sheaves. - A very elegant minimalist example – Georges Elencwajg Nov 19 2010 at 12:02 A conceptual explanation (since the other answers have given counterexamples) for why this is untrue is as follows: A sheaf consists of local data and global data specifying how those local data fit together. Even if all of the local data of two sheaves are isomorphic, there is no reason to believe that those isomorphisms can be fit together in a compatible way. This is why we require that the isomorphisms on stalks arise from a map that is already a morphism of sheaves, since this exactly says that the data fit together in the proper way. We even encounter the same problems when we work with presheaves, for instance. Since presheaves are functors, a morphism of presheaves must be a natural transformation of functors. However, simply having isomorphisms "pointwise", as it were, is not enough. The isomorphisms must also commute with the restriction maps. - No, indeed there exist sheaves which are locally isomorphic to constant sheaves, without being constant. These are usually called local coefficient systems. It is not hard to see that, for a nice space $X$, e.g. a manifold, to give a sheaf locally isomorphic to the constant sheaf $\underline{G}$ associated to the group $G$ is the same as to give a homomorphism $\pi_1(X) \to \mathop{Aut}(G)$. In particular, whenever there is a nontrivial homomorphism, there exist nontrivial local coefficient systems. - Since there are locally free sheaves that are not globally free, the answer is clearly no. Indeed the existence of the term "locally free" is a meta-proof of this. For a concrete example, consider the circle embedded as the central circle in a smooth Möbius strip. Its normal bundle is locally isomorphic to the constant sheaf with fibres $\mathbb{R}$ but it is not a constant sheaf, as it has no global sections which are everywhere nonzero. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 57, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9347373247146606, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?p=3947800
## Raymer Neutral Point Formula I am wondering if anybody understands the Raymer neutral point formula? I have attached the formula for reference. Even just an example would be good enough. Thanks, Nate Attached Files Raymer Neutral Point Formula.pdf (187.3 KB, 30 views) His neutral point formula comes from the DATCOM method. Are you familiar with that method at all? Also it would help if you could tell me which terms you do not understand rather than explaining them all. I am not familiar with the DATCOM method. I have begun to read about it here: http://en.wikipedia.org/wiki/USAF_St...Control_DATCOM I am sorry this is long, and maybe this is more for myself, to clarify what I do and do not know, but here is my understanding of the Raymer neutral point formula. What I know, or at least what I think I know. - X_np The neutral point of the wing in reference to every other point. This is calculated from the Raymer formula. - C_L_alpha The coefficient of lift with respect to angle of attack of the wing. This is calculated from PLLT. - C_m_alpha_fuselage The coefficient of moment with respect to angle of attack of the fuselage. The only way I can think of to calculate this is to run a CFD program like Fluent. - eta_h The efficiency factor of the horizontal stabilizer. This is equal to the dynamic pressure at the horizontal stabilizer over the free stream dynamic pressure. Normally this is just one. I do not know how to calculate this value without the use of CFD. - S_h The planform area of the horizontal stabilizer. This is a simple projected area formula, not sure if that is the right term, but you get what I mean. - S_w The planform area of the wing. Again, this is a simple projected area formula. - C_L_alpha_h The coefficient of lift with respect to angle of attack of the horizontal stabilizer. I assume this is also calculated from PLLT. - q The dynamic pressure, using the standard formula q = 0.5*rho*v_inf^2 What I kind of know. - Xbar_acw The aerodynamic center of the wing as a fraction of the wing mean chord. I think this is calculated by this formula: (2/3)*(W_cr+W_ct-W_cr*W_ct/(W_cr+W_ct)) where, W_cr = Wing chord root W_ct = Wing chord tip - (del alpha_h)/(del alpha) This is the change in angle of attack of the horizontal stabilizer with respect to the change in angle of attack of the wing. I think to do this you have to find the change in the aerodynamic center of the entire aircraft, then geometrically calculate how much a change in angle of attack changes the angle of attack of the horizontal stabilizer. - Xbar_ach The aerodynamic center of the horizontal stabilizer as a fraction of the horizontal stabilizer mean chord. I think this is calculated by this formula: (2/3)*(H_cr+H_ct-H_cr*H_ct/(H_cr+H_ct)) where, H_cr = Horizontal stabilizer chord root H_ct = Horizontal stabilizer chord tip - (del alpha_p)/(del alpha) This is the change in angle of attack of pressure on the wing with respect to the change in angle of attack of the wing. I don't have the formula on me, but I think I could find it. - Xbar_p The center of pressure on the wing as a fraction of the wing mean chord. Again I do not have the formula on me, but I think I'll be able to find it somewhere. What I definitely do not know.
- F_p_alpha I think this is the force of pressure with respect to alpha on the wing, but I have no idea how to calculate it. ## Raymer Neutral Point Formula Hey, sorry I haven't gotten back to you. I've been bogged down the last week with work. This weekend I'll take out my old static stability notes and fill you in. Awesome, thank you. Hey, I checked my old stability notes and the neutral point formula we had was slightly different, meaning it was meant for a different geometry. However, for all the definitions you posted, you were spot on except $$F_{p_\alpha}$$. I have no idea what that denotes. In my mind this leaves one of two possibilities: 1) I know that Raymer gets his information from many sources and combines certain things. So you'd have to find the source where he used this equation. 2) I know that DATCOM uses a component buildup method. So I am thinking this aircraft has a certain configuration and utilizes a component which was denoted with subscript 'p'. I'll have to do some snooping around to see what's going on. FPα in this formula signifies the propeller or inlet force. I know this is an old thread, but for anyone looking for some clarification, check out the image I attached; it goes over the basics from the text Aircraft Design: A Conceptual Approach by Raymer. Attached Thumbnails
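For readers without the attached PDF, a rough sketch of how the terms discussed above are usually combined in the power-off form of the neutral-point estimate. This is my paraphrase of the standard wing-plus-tail-plus-fuselage buildup, not a transcription of Raymer's full equation (the propeller/inlet term F_p_alpha is omitted), and the commented numbers are placeholders:

```python
def neutral_point_power_off(CLa_w, Cma_fus, eta_h, Sh, Sw, CLa_h, dah_da, Xac_w, Xac_h):
    """Power-off neutral point, as a lift-curve-slope-weighted average of the wing
    and tail aerodynamic centres shifted by the fuselage pitching-moment slope.
    X values are fractions of the wing mean chord from a common reference;
    dah_da is d(alpha_h)/d(alpha).  Raymer's F_p_alpha (propeller/inlet) terms
    are deliberately left out of this sketch."""
    tail = eta_h * (Sh / Sw) * CLa_h * dah_da
    return (CLa_w * Xac_w - Cma_fus + tail * Xac_h) / (CLa_w + tail)

# Placeholder numbers, only to show the shape of the calculation:
# neutral_point_power_off(CLa_w=5.0, Cma_fus=0.8, eta_h=1.0, Sh=2.0, Sw=10.0,
#                         CLa_h=4.0, dah_da=0.6, Xac_w=0.25, Xac_h=3.0)
```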
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9384042620658875, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/118834?sort=oldest
## When does the cotangent complex vanish? The question is already in the title. Less succinctly, let's call a map $f:X \to Y$ of schemes $L$-trivial if its cotangent complex is quasi-isomorphic to $0$. Such maps have striking deformation-theoretic consequences; for example, any deformation of $Y$ can be followed uniquely by a deformation of $X$. My primary (and probably naive) question is: Is there a classification of $L$-trivial maps? I am sure this question has been asked before, but I did not find any literature that deals with it. The three examples of $L$-trivial maps I am familiar with are: • Etale morphisms (and these are the only examples under finiteness constraints). • Any map between perfect $\mathbb{F}_p$-schemes. • The inclusion of the closed point in the spectrum of a valuation ring with divisible value group, or similar "divisible" constructions. For example, $\mathrm{Spec}(\mathbb{C}) \hookrightarrow \mathrm{Spec}(\mathbb{C}[ t^{\mathbb{Q}_{\geq 0}}])$ is $L$-trivial. [ Edit: I learnt the last one in conversation after posting the first version of this question. ] More examples can be obtained by taking filtered colimits of the above examples, but those are only slightly different. Hence, a second question is: are there other fundamentally different examples of $L$-trivial maps? Perhaps a classification is unreasonable to expect, so I am also happy to learn more about $L$-trivial maps in other geometric categories, like algebraic stacks, or derived/spectral schemes/stacks, or (complex/rigid) analytic spaces, etc. In particular, I am especially curious to know if $L$-trivial maps can be better understood using derived algebraic geometry. - 2 Isn't this notion the same as "formally etale"? In particular, if $f$ is of finite presentation, isn't this the same as just being etale? – Piotr Achinger Jan 13 at 18:11 2 Yes for the second question about finitely presented maps (as I indicated parenthetically in the question), but I am interested in the general case. I do not think $L$-trivial maps are the same as formally etale maps; the latter only corresponds to the vanishing of the first couple of cohomology sheaves of the cotangent complex, and I see no reason why this implies vanishing of the full complex without additional strong finiteness assumptions (like Quillen's conjecture proven by Avramov), but maybe I am missing something simpler? – X-curious Jan 13 at 18:36 ## 1 Answer Edit: Here is a possible characterization. As mentioned in the comments above, the vanishing of the 1-truncated cotangent complex $\tau_{\leq 1}L_{B/A}$ of a map of rings $f \colon A \to B$ is equivalent to a lifting property with respect to square-zero extensions $T' \to T$. This follows from the fact that the space $Map(L_{T/A},M[1])$ is equivalent to the groupoid of square-zero extensions of $T$ over $A$ with kernel $M$. Here $M$ is a $T$-module. Rephrasing this, $\tau_{\leq 1} L_{B/A}$ vanishes if and only if $f$ has a lifting property with respect to morphisms of 0-truncated simplicial algebras such that the kernel is concentrated in degree 0 and squares to 0. Let's call these 0-concentrated. Then we can go on to look at morphisms of 1-truncated simplicial algebras with kernel $K$ concentrated in degree 1. The squaring-to-zero property is vacuous here, because a product of two elements in $\pi_1(K)$ will be in $\pi_2(K)$, which is zero by assumption.
Let's call these 1-concentrated. Then we find that $\tau_{\leq 2} L_{B/A}$ vanishes if and only if $f$ has a lifting property with respect to all 0- and 1-concentrated maps. This again holds because the cotangent complex classifies 0- and 1-concentrated maps. I think now it's clear how to go on: $\tau_{\leq n+1}L_{B/A}$ vanishes if and only if $f$ has a lifting property with respect to all $m$-concentrated maps with $m \leq n$. And the full cotangent complex vanishes if and only if $f$ has the lifting property with respect to $n$-concentrated maps for all $n$. The directions one probes by checking against $n$-concentrated maps for $n \geq 1$ are sometimes called the derived directions. So Avramov's theorem might be rephrased as saying that under strong finiteness assumptions, unobstructedness in the classical directions implies unobstructedness in all derived directions. This is an answer to your last paragraph about L-trivial maps in other geometric categories. If you are only interested in schemes it doesn't tell you anything interesting. One thing that the cotangent complex is good at is measuring connectivity of a morphism of simplicial rings. This also holds without any finiteness assumptions on the ring. Recall that a morphism $f \colon A \to B$ of simplicial rings is called n-connective if it induces isomorphisms $\pi_i (A) \to \pi_i (B)$ in degrees $< n$ and a surjection $\pi_n (A) \to \pi_n (B)$ in degree $n$. There then is a result that states that if $f$ is $n$-connective, then the homology of the relative cotangent complex $L_{B/A}$ vanishes in degrees $\leq n$. (I hope I got all the indices right.) So in particular, any equivalence of simplicial rings is L-trivial. One way of interpreting your question is to ask when the converse holds. What do I know if a morphism of simplicial rings is L-trivial? There is a partial converse to the statement above. Namely, if a morphism $f \colon A \to B$ induces an isomorphism $\pi_0(A) \to \pi_0(B)$ and is L-trivial, then it is an equivalence! I find this pretty surprising, as L is only a linear piece of data, but still manages to detect equivalences. - 1 Ah, thanks. This observation strongly supports the (well-advertised) point of view that a simplicial commutative ring is just a ring with additional "nilpotent" data. It also raises the following question: if $A \to B$ is an $L$-trivial map of ordinary rings that is additionally an isomorphism modulo nilpotents, then is $A \simeq B$? – X-curious Jan 14 at 11:19 1 I think this is true, at least with finiteness assumptions. By the way, I'd be surprised if your question has a complete answer in terms of only ordinary rings. As evidence, one can define something like L-finite maps as those having a perfect relative cotangent complex. Then this notion is completely unrelated to being of finite presentation. On the contrary, if you look at maps that are homotopically of finite presentation, then you can again detect these on the cotangent complex (if you add $\pi_0(B)$ finitely presented over $\pi_0(A)$). In short, I think it's hard to say something about the cotangent complex using ordinary rings. – Timo Schürg Jan 14 at 14:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 56, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9266983866691589, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/72660/1-cdot-3-2-cdot-4-3-cdot-5-cdots-nn2-nn12n7-6-by-mathemati
# $1\cdot 3 + 2\cdot 4 + 3\cdot 5 + \cdots + n(n+2) = n(n+1)(2n+7)/6$ by mathematical induction I am doing mathematical induction. I am stuck with the question below. The left hand side is not coming out equal to the right hand side. Please guide me how to do it further. $1\cdot 3 + 2\cdot 4 + 3\cdot 5 + \cdots + n(n+2) = \frac{1}{6}n(n+1)(2n+7)$. Sol: $P(n):\ 1\cdot 3 + 2\cdot 4 + 3\cdot 5 + \cdots + n(n+2) = \frac{1}{6}n(n+1)(2n+7)$. $P(1):\ \frac{1}{6}(2)(9) = \frac{1}{2}(2)(3)$. $P(1): 3$. Hence it is true for $n=n_0 = 1$. Let it be true for $n=k$. $P(k):\ 1\cdot 3 + 2\cdot 4 + \cdots + k(k+2) = \frac{1}{6}k(k+1)(2k+7)$. We have to prove $P(k+1):\ 1\cdot 3 + 2\cdot 4 + \cdots + (k+1)(k+3)= \frac{1}{6}(k+1)(k+2)(2k+9)$. Taking LHS: $$\begin{align*} 1\cdot 3 + 2\cdot 4+\cdots + (k+1)(k+2) &= 1\cdot 3 + 2\cdot 4 + \cdots + k(k+2) + (k+1)(k+3)\\ &= \frac{1}{6}(k+1)(k+2)(2k+9) + (k+1)(k+3)\\ &= \frac{1}{6}(k+1)\left[(k+2)(2k+9) + 6k+18\right]\\ &= \frac{1}{6}(k+1)\left[2k^2 + 13k + 18 + 6k + 18\right]\\ &= \frac{1}{6}(k+1)\left[2k^2 + 19k + 36\right]. \end{align*}$$ - could you please type it as a text? – Ilya Oct 14 '11 at 18:31 1 @Gortaur: Sorry, I don't know LaTeX. Pardon me. – Fahad Uddin Oct 14 '11 at 18:41 2 @Akito: If you right-click on the LaTeX that I've been putting into your questions, and then click on "show source", you can see how to write the equations in LaTeX. While some of the fancier things (like `align*`) may be somewhat beyond you, it should be fairly easy for you to learn how to do at least some LaTeX-ing of text, instead of relying on cumbersome images or on other people typing them out. While your image occupies more than one screenful on my end, the typed-out formulas are easy to see at a glance (less than half the screen), making it easier to see what you are doing. – Arturo Magidin Oct 14 '11 at 18:52 ## 3 Answers As I understand it you are trying to show that for $n\geq 1$: $$1\cdot 3+2\cdot 4+\cdots +n(n+2)=\frac{1}{6}n(n+1)(2n+7).$$ You first showed it for $n=1$: $$3=1\cdot 3=\frac{1}{6}(1)(2)(9)=3$$ Now we assume that the formula holds for some $n\geq 1$ and we consider the sum for $n+1$, $$1\cdot 3+\cdots +n(n+2)+(n+1)(n+3).$$ By the induction hypothesis the sum of the first $n$ terms is $\frac{1}{6}n(n+1)(2n+7)$. So $$1\cdot 3+\cdots +n(n+2)+(n+1)(n+3)=\frac{1}{6}n(n+1)(2n+7)+(n+1)(n+3).$$ We may factor out an $n+1$ to get $$\frac{1}{6}n(n+1)(2n+7)+(n+1)(n+3)=(n+1)\left(\frac{1}{6}n(2n+7)+n+3\right).$$ Then factor out a $\frac{1}{6}$ to get $$\frac{1}{6}(n+1)(n(2n+7)+6n+18)=\frac{1}{6}(n+1)(2n^2+7n+6n+18).$$ Finally, $$\frac{1}{6}(n+1)(2n^2+7n+6n+18)=\frac{1}{6}(n+1)(n+2)(2(n+1)+7).$$ - Just out of curiosity, does your username refer to the NBA All-Star player, and his ridiculous \$126 million dollar contract? – Ragib Zaman Oct 15 '11 at 12:31 1 @Ragib: No. When I went to get my first e-mail account way back when, I tried all sorts of names that were taken. So, I simply typed in 'joe'. It suggested I take the name 'joe126', presumably because there were 125 others named Joe. I liked it and started using it for other things. When I registered here I wanted to use 'Joe Johnson', but that was in use. – Joe Johnson 126 Oct 15 '11 at 12:38 O wow, I really didn't expect that. Quite a big coincidence then! Sorry to bother you. – Ragib Zaman Oct 15 '11 at 12:45 @JoeJohnson126: I think joe126 had become a popular keyword, so they suggested it to you. – Fahad Uddin Oct 22 '11 at 10:24 @Ragib Zaman: No bother.
– Joe Johnson 126 Nov 11 '11 at 17:55 HINT $\:$ First trivially inductively prove the Fundamental Theorem of Difference Calculus $$\rm\ F(n)\ =\ \sum_{i\: =\: 1}^n\:\ f(i)\ \ \iff\ \ \ F(n) - F(n-1)\ =\ f(n),\quad\ F(0) = 0$$ Your special case now follows immediately by noting that $$\rm\ F(n)\ =\ \dfrac{n\ (n+1)\ (2\:n+7)}{6}\ \ \Rightarrow\ \ F(n)-F(n-1)\ =\: n\ (n+2)\:.\$$ Note that by employing the Fundamental Theorem we have reduced the proof to the trivial mechanical verification of a polynomial equation. Absolutely no ingenuity is required. Note that the proof of the Fundamental Theorem is much more obvious than that for your special case because the telescopic cancellation is obvious at this level of generality, whereas it is usually obfuscated in most specific instances. Namely, the proof of the Fundamental Theorem is just a rigorous inductive proof of the following telescopic cancellation $$\rm - F(0)\!+\!F(1) -F(1)\!+\!F(2) - F(2)\!+\!F(3)-\:\cdots - F(n-1)\!+\!F(n)\ =\:\: -F(0) + F(n)$$ where all but the end terms cancel out. For further discussion see my many posts on telescopy. - Why do you need the telescopic cancellation? Shouldn't it be enough to note that $\sum_{i=1}^n f(i) = \sum_{i=1}^{n-1}f(i) + f(n)$? From that, $F(n)-F(n-1)$ is just a normal cancellation. – celtschk Jul 5 '12 at 7:44 You used the wrong formula in your induction hypothesis. You are assuming that $$1\cdot 3 + 2\cdot 4 + \cdots + k(k+2) = \frac{1}{6}k(k+1)(2k+7)$$ but in your inductive argument, you wrote $$1\cdot 3 + 2\cdot 4 + \cdots + k(k+2) + (k+1)(k+3) = \frac{1}{6}(k+1)(k+2)(2k+9) + (k+1)(k+3).$$ That is, you substituted $1\cdot 3 + 2\cdot 4 + \cdots + k(k+2)$ with the formula for $k+1$, not the formula for $k$. -
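As a quick independent sanity check of the closed form (not a substitute for the induction argument), assuming SymPy is available:

```python
import sympy as sp

n, i = sp.symbols('n i', positive=True, integer=True)
lhs = sp.summation(i * (i + 2), (i, 1, n))   # 1*3 + 2*4 + ... + n(n+2)
rhs = n * (n + 1) * (2 * n + 7) / 6
print(sp.simplify(lhs - rhs))                # prints 0, so the formula checks out
```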
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 13, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9423536658287048, "perplexity_flag": "middle"}
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9329999685287476, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/74875-integral-thought-process.html
# Thread: 1. ## integral thought process? what should my thought process be when evaluating this integral, where I have a fairly large power of x in front of a third root? Where do I start? $\int x^{11}\sqrt[3]{2x^4+5}\,dx$ 2. Think a bit: $x^{11}\sqrt[3]{2x^{4}+5}=x^{8}\cdot \sqrt[3]{2x^{4}+5}\cdot x^{3},$ what is the proper substitution? 3. did the $x^8$ come from taking $x^{11}$ and subtracting the 3 that sits outside the root? next, can I get rid of the third root by multiplying (1/3) by x^3 4. Tell me what's the proper substitution first. 5. well 2x^4+5 will be the u and 8x^3 dx will be du 6. That's not actually the proper substitution, but it's a good start. What I call a "proper substitution" is one such that you can get rid of the cube root. How to do that? Easy, put $u^3=2x^4+5$ and $3u^2\,du=8x^3\,dx$ and $x^4=\frac{u^3-5}2$ so from there you can get the $x^8.$ Finally $\sqrt[3]{2x^4+5}$ becomes $u.$ 7. The only part I do not understand is where this came from $x^4=\frac{u^3-5}2$ 8. Because of $u^3=2x^4+5$!! 9. and how does this give us x^8?? 10. $x^{8}=\left( x^{4} \right)^{2}=\left( \frac{u^{3}-5}{2} \right)^{2}.$ 11. sorry for all the questions, but why are you squaring those terms?? 12. I want you to please tell me how the integral looks before putting $u^3=2x^4+5.$ 13. it was the third root? 14. Just in case a picture helps... [diagram omitted: a "balloon" chain-rule picture in which straight continuous lines differentiate down / integrate up with respect to the explicit variable, and the straight dashed line with respect to the dashed balloon expression, so that the triangular network satisfies the chain rule; integrating up the dashed line amounts to finding F(u).] This still contains many of the algebra connections that were mystifying you, but you may find it helpful to get this kind of overview. (Krizalid - respect to the policy of dealing with the cube root in the substitution - but does it really help in this case? The OP's sub of $u = 2x^4 + 5$ seems to work fine. I did a similar picture for $u = (2x^4 + 5)^{\frac{1}{3}}$ and it's a lot messier. My written version, too, though maybe not yours. Anyway, hope you don't mind me giving the pot another stir.) Don't integrate - balloontegrate! Balloon Calculus: worked examples from past papers 15. Originally Posted by gammaman what should my thought process be when evaluating this integral, where I have a fairly large power of x in front of a third root? Where do I start? $\int x^{11}\sqrt[3]{2x^4+5}\,dx$ My "thought process" is "I sure would like to get rid of that $2x^4+ 5$ inside the cube root. What would happen if I tried $u= 2x^4+ 5$". Then I would find that $du= 8x^3 dx$ or $(1/8)du= x^3dx$. The "1/8" is a constant so I can always put in $8(1/8)$ but where am I going to get that $x^3$ from? That's when I look at the $x^{11}$ and write it as $(x^8)(x^3)= (x^4)^2 (x^3)$. Now I recognize that since $u= 2x^4+ 5$, $x^4= \frac{u- 5}{2}$ and so $(x^4)^2= \left(\frac{u-5}{2}\right)^2$. Putting that all together, $\int x^{11}\sqrt[3]{2x^4+ 5}\,dx= \int (x^4)^2\sqrt[3]{2x^4+ 5}\,(x^3\,dx)= \int\frac{(u-5)^2}{4}\sqrt[3]{u}\,\left(\frac{1}{8}\,du\right)$ Now write that $\sqrt[3]{u}$ as $u^{1/3}$ and you have $\frac{1}{32}\int (u^2- 10u+ 25)u^{1/3}\, du= \frac{1}{32}\int (u^{7/3}- 10u^{4/3}+ 25u^{1/3})\,du$ which should be easy.
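A quick way to check the resulting antiderivative is to differentiate it symbolically; the short sketch below is not part of the thread (the variable names are mine) and just confirms that the expression obtained from the $u = 2x^4+5$ substitution differentiates back to the integrand.

```python
import sympy as sp

x = sp.symbols('x', positive=True)
integrand = x**11 * sp.cbrt(2*x**4 + 5)

# Antiderivative from post 15: (1/32) * (3/10*u^(10/3) - 30/7*u^(7/3) + 75/4*u^(4/3)), with u = 2x^4 + 5
u = 2*x**4 + 5
F = sp.Rational(1, 32) * (sp.Rational(3, 10) * u**sp.Rational(10, 3)
                          - sp.Rational(30, 7) * u**sp.Rational(7, 3)
                          + sp.Rational(75, 4) * u**sp.Rational(4, 3))

# Differentiating F should give back the original integrand.
print(sp.simplify(sp.diff(F, x) - integrand))  # expected output: 0
```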
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 29, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9596972465515137, "perplexity_flag": "middle"}
http://quant.stackexchange.com/questions/4474/why-does-hat-epsilon-hat-epsilon-of-a-factor-model-measure-risk?answertab=oldest
# Why does $\hat{\epsilon}'\hat{\epsilon}$ of a factor model measure risk? $\hat{\epsilon}'\hat{\epsilon}$ from the market model: $R_{it} - \hat{\alpha} - \hat{\beta}R_{mt} = \hat{\epsilon}$, or from a factor model such as the Fama-French 3 factor model, is often used in the literature to capture the idiosyncratic risk of stock $i$. What risk is this measuring? Who cares if the disturbances are high variance? All this means is that you've excluded relevant factors from your specification of the returns generating process, right? Or does its interpretation as "risk" come from an a priori assumption that the returns generating process has been correctly specified? - In the multivariate case, that term is proportional to the covariance matrix of the error. It follows that in the univariate case it is proportional to the variance of the error. – John Nov 4 '12 at 4:20 That's a nice insight, but I'm still scratching my head as to why this is considered a legitimate view of the risk of a security from any perspective? Who cares if the trace of the variance covariance matrix is large? – Jase Nov 4 '12 at 4:22 It is simply the variance that cannot be explained by the market or whatever factors you happen to be looking at (hence, idiosyncratic risk). I think there are a lot of reasons to care about it, but recently there's been a lot of focus since people have found that stocks with high idiosyncratic variance tend to underperform those with low idiosyncratic variance. Not sure where you're going about the trace. – John Nov 4 '12 at 14:02 @John Since this statistic is measuring something so wildly different to the historical standard deviation of returns, why does some literature treat the two statistics as almost the same thing? E.G. `http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1881503` does a review of the literature and doesn't even separate the papers that deal with idiosyncratic volatility from those that use the standard deviation of past returns as the volatility estimate. The authors just bunch them together and use the word "volatility". – Jase Nov 4 '12 at 14:13 1 I do not think that is the case generally. I often cannot explain the motivations of writers, but that paper specifically says they do not see much difference in the returns of strategies investing based on idiosyncratic vs. total. – John Nov 4 '12 at 14:54 ## 1 Answer I think one should look at the problem from two different angles to get an answer to this. Firstly, you can look (as you said you did) at $\hat{\epsilon}$ in terms of a disturbance, meaning the returns $R_{it}$ depend linearly on the $R_{mt}$ - the market or factor returns. Then you can figure there is some regression involved, and the theory of linear regression assumes the model as you stated it above, where $\hat{\epsilon}$ is some disturbance with a normal distribution with mean $0$. So in order to find your true parameters $\hat{\alpha}$ and $\hat{\beta}$ you take a look at the disturbed data and fit a line through it so that the vector of the remaining disturbances (residuals) is minimized with respect to its sum of squares ($\ell^2$-norm). So the more your returns $R_i$ resemble your market returns $R_m$, the smaller the disturbances are according to your model. Secondly, you can look at the problem from a more practical viewpoint. We say that the asset returns $R_{it}$ are the returns of one of the market's assets. Take a stock which is a constituent of a stock index with the stock index returns being $R_{mt}$.
Now one wants to know which part of the variance corresponds to the market risk and which part of the variance corresponds to the stock's individual properties (idiosyncratic risk caused by earnings quality, debt ratios or whatever else you can think of - just not your other factors ;-) ). Since one often assumes market risk and idiosyncratic risk to be uncorrelated, you can decompose the stock's variance: $$\sigma_{i}^2 = \hat{\beta}^2\sigma_{m}^2 + \sigma_{id}^2$$ where $\sigma_{id}^2$ is estimated by $\hat{\epsilon}^\prime\hat{\epsilon}$ (up to scaling by the number of observations). The more the $R_i$'s resemble your market ($R_m$'s), the smaller the idiosyncratic risk will be. In a broad portfolio one speaks of the idiosyncratic risk as being diversified away. -
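To make the decomposition concrete, here is a small simulated sketch; it is not from the answer, all numbers and variable names are made up for illustration. It fits the market model by OLS and compares the total variance with the systematic-plus-idiosyncratic split.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic market model: R_i = alpha + beta * R_m + eps (all values are illustrative)
T = 1000
r_m = rng.normal(0.0, 0.01, T)            # market returns
eps = rng.normal(0.0, 0.02, T)            # idiosyncratic shocks
r_i = 0.0005 + 1.3 * r_m + eps            # stock returns

# OLS fit of the market model
X = np.column_stack([np.ones(T), r_m])
coef, *_ = np.linalg.lstsq(X, r_i, rcond=None)
resid = r_i - X @ coef

total_var = r_i.var(ddof=1)
systematic_var = coef[1]**2 * r_m.var(ddof=1)
idio_var = resid @ resid / (T - 2)        # epsilon'epsilon scaled by degrees of freedom

print(coef[1])                            # beta estimate, should be close to 1.3
print(total_var, systematic_var + idio_var)  # the two numbers should be close
```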
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9566164016723633, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?t=567806
Physics Forums ## (another)interesting number theory problem a and b are real numbers such that the sequence{c}n=1--->{infinity} defined by c_n=a^n-b^n contains only integers. Prove that a and b are integers. Sincerely, Mathguy PhysOrg.com science news on PhysOrg.com >> Front-row seats to climate change>> Attacking MRSA with metals from antibacterial clays>> New formula invented for microscope viewing, substitutes for federally controlled drug Quote by Mathguy15 a and b are real numbers such that the sequence{c}n=1--->{infinity} defined by c_n=a^n-b^n contains only integers. Prove that a and b are integers. Sincerely, Mathguy $c_n \ = \ a^n - b^n$ What about any real numbers a and b, such that a = b, so that $c_n = 0 ?$ Here, and b don't have to be integers. Do I have your problem understood, and/or are there more restrictions on a and b? I assume you mean a≠b. Since a-b and a2-b2=(a-b)(a+b) are both integers, a+b is rational, and we get a and b are rational. We can write b=m/t and a=(m+kt)/t with (m,t)=1. Assume t≠1, then there is an integer s such that k is divisible by ts but not by ts+1. Let p be a prime larger than t and 2s+2. cp=ap-bp=(pktmp-1+k2t2(...))/tp Both the second term and the denominator are divisible by t2s+2, while the first term is not, so the fraction is not an integer. It follows that t=1 and we are done. ## (another)interesting number theory problem Quote by Norwegian a+b is rational, and we get a and b are rational. Sorry, Norwegian, but why? For example, sqrt(2) and 3-sqrt(2) are both irrational, and they add up to 3. Blog Entries: 8 Recognitions: Gold Member Science Advisor Staff Emeritus Quote by Dodo Sorry, Norwegian, but why? For example, sqrt(2) and 3-sqrt(2) are both irrational, and they add up to 3. Both a+b and a-b are rational. So (a+b)+(a-b)=2a is rational. Ahhh, thanks, Micromass. By the way, this is a beautiful proof, and I'm still trying to figure out how did you come to it, Norwegian. I presume you started from both ends. At the finishing end, you needed a^n-b^n to be a rational but not an integer. At the starting end, the way you expressed a=b+k suggests the use of the binomial theorem to evaluate powers of b+k (or powers of the numerator of it). If a and b are rational, then a^n and b^n (with a=b+k) were going to end up having a common denominator, so you concentrated in making the numerator of a^n-b^n a non-integer. Then divisibility / factorization issues enter; though I still don't see in which order did (1) finding the largest power of t dividing k, (2) coprimality conditions, and (3) finding a prime p that does not divide most of the things around, in which order these three came to be, and what suggested them. I always find instructive to see the genesis of proofs; it adds to the inventory of ways of constructing new ones. Quote by checkitagain $c_n \ = \ a^n - b^n$ What about any real numbers a and b, such that a = b, so that $c_n = 0 ?$ Here, and b don't have to be integers. Do I have your problem understood, and/or are there more restrictions on a and b? Oh sorry! Yes, a and b had to be distinct. Quote by Norwegian I assume you mean a≠b. Since a-b and a2-b2=(a-b)(a+b) are both integers, a+b is rational, and we get a and b are rational. We can write b=m/t and a=(m+kt)/t with (m,t)=1. Assume t≠1, then there is an integer s such that k is divisible by ts but not by ts+1. Let p be a prime larger than t and 2s+2. 
$c_p=a^p-b^p=\dfrac{pktm^{p-1}+k^2t^2(\dots)}{t^p}$ Both the second term and the denominator are divisible by $t^{2s+2}$, while the first term is not, so the fraction is not an integer. It follows that $t=1$ and we are done. That is an interesting proof, and I will take the time to digest it later! Thanks
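The statement is easy to illustrate (though of course not prove) with exact rational arithmetic; the snippet below is not part of the thread and simply picks a non-integer rational pair for which the first two $c_n$ are integers, showing that the sequence soon leaves the integers, consistent with the result.

```python
from fractions import Fraction

# Illustrative pair (my choice): a and b are rational but not integers,
# yet c_1 = a - b and c_2 = a^2 - b^2 are both integers.
a, b = Fraction(3, 2), Fraction(1, 2)

for n in range(1, 6):
    c_n = a**n - b**n
    print(n, c_n, c_n.denominator == 1)

# c_1 = 1 and c_2 = 2 are integers, but c_3 = 13/4 is not, consistent with
# the theorem: if every c_n were an integer, a and b would have to be integers.
```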
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9303436279296875, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?t=391527
Physics Forums ## Faraday's Law - Spatial Varying Magnetic Field Problem 1. The problem statement, all variables and given/known data The figure attached shows a rod of length L = 10 cm that is forced to move at a constant speed of v = 5.00 m/sec along horizontal rails. The rod, rails, and connecting strip at the right form a conducting loop. The rod has a resistance of 0.4 $$\Omega$$ and the rest of the loop has negligible resistance. A current of 100 A through the long straight wire at a distance a = 10.0 mm from the loop sets up a nonuniform magnetic field through the loop. Find the emf and current induced in the loop. Note: sorry about the shoddy diagram... paint is a cruel mistress... the x's are the B field going into the page and the *'s are coming out of the page.... the point is that the field is NOT uniform. 2. Relevant equations 1. $$\Phi_B = \int{\vec{B} \bullet d\vec{A}}$$ 2. $$\epsilon = B l v$$ 3. $$B = \frac{\mu_0}{2\pi} \frac{I}{r}$$ 4. $$\epsilon = -\frac{d \Phi_B}{dt}$$ 5. $$I = \frac{\epsilon}{R}$$ 3. The attempt at a solution I feel like finding the current is pretty simple once I've managed to calculate the emf by using eqn # 5. The emf is where I am really struggling. I know that I can calculate the flux (and be able to deduce the emf) using eqn #3 as a function for B. But that requires I know the area of the loop and I'm pretty sure I have insufficient information for that.... Equation 2 gives me the B field at a point (again, subbing eqn 3 for B) but that doesn't really help me... I've been looking through my book and the internet for hours.. but I am missing something.. A push in the right direction would be much appreciated.... Quote by scoldham The emf is where I am really struggling. I know that I can calculate the flux (and be able to deduce the emf) using eqn #3 as a function for B. Can you show us how to calculate the flux? But that requires I know the area of the loop and I'm pretty sure I have insufficient information for that.... Equation 2 gives me the B field at a point (again, subbing eqn 3 for B) but that doesn't really help me... Maybe you need the area, maybe not. What you really need is an expression for dΦ/dt, the rate of change of flux with respect to time, which brings me back to the original question, how will you calculate the flux? It clearly is not "magnetic field times area" because, as you very correctly pointed out, the field is not uniform over the area of the loop. The only direct equation in the book is #1 from my first post involving both B and dA which I only have one of... Trying to use some imagination here.. but what about this: $$\int \epsilon dt = \Phi_B$$ $$\int IR dt = \Phi_B$$ using I from the long wire... and R from the loop (?) but it really doesn't seem to make sense to integrate wrt time... right direction? ## Faraday's Law - Spatial Varying Magnetic Field Problem The first equation that you posted is the one to use. Note that the field is variable over the area of the rectangle. Consider a strip at distance r from the wire. It has width dr and length y (y is the distance from the sliding rod to the end of the loop segment that is parallel to it).
Can you find the magnetic flux element dΦ through this strip? If yes, then you need to add all such contributions (i.e. integrate) to find the total flux through the rectangle. I appreciate your help so far... If I'm understanding correctly, the B field at distance r from the wire is equal to $$\frac{\mu_0 I}{2\pi r}$$ The flux for a horizontal strip in the loop is $$d \Phi = B y \, dr = \frac{\mu_0 I}{2\pi r} y \, dr$$. If I integrate over the height of the loop, then I am given the flux, with which I can calculate the emf.... by taking the derivative wrt time? Am I getting closer? You are understanding correctly and you are getting closer. Do the integral to find the flux, see what you get and if you can't figure out the next step, post what you have done and you will get another hint. So when I integrate (using limits from a to L), I get $$\Phi = \frac{\mu_0 I y}{2 \pi} \left[ \ln(L) - \ln(a) \right]$$ I still do not know what y is really. Can I substitute $$y = \vec{v} = \frac{dy}{dt}$$? If I do that... how do I take the dt back out? It kind of seems to throw off what I'm working towards - calculating the flux. First off the flux should be $$\Phi = \frac{\mu_0 I y}{2 \pi} \left[ \ln(L+a) - \ln(a) \right]$$ because the far side is at distance L + a if the rod has length L. You can rewrite this as $$\Phi = \frac{\mu_0 I y}{2 \pi} \ln\!\left(\frac{L+a}{a}\right)$$ You have calculated the flux, but that is not what the problem is asking. The problem is asking you to find the induced emf which is the negative time derivative of the flux. What do you get when you take the time derivative of the above expression? It all makes sense now. When I take d/dt of the equation, y becomes dy/dt which is velocity, and adding a minus sign to both sides of the equation gives me the emf. I can get the current using $$I = \frac{\epsilon}{R}$$ Thank you SO much! However, I do still have a question. There were several other parts to this question which I've got worked out for the most part. But one is throwing me off. "What is the magnitude of the force which must be applied to the rod to make it move at a constant speed?" v doesn't appear to vary in the equation... I first thought F = qVB = ItVB, then setting F = 0 for no acceleration... but if that tells me anything (IF), all I get is the time when V gets to whatever I assign it to be... Consider a vector element of length dr in the same direction as the current. It will experience a magnetic force dF = i dr x B where B is the local magnetic field vector generated by the wire. You already know the current, so you can integrate to find the net magnetic force. If the rod is to move at constant velocity, then the net force on the rod must be zero. So with what external force must the rod be pulled to get a net force of zero? Thanks... I believe this gives me just what I needed! The assignment is due shortly so I will work out what I think and hopefully I won't hit any more roadblocks. Thank you for all of your help.
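For a concrete sanity check of the final formulas, the short sketch below (not from the thread; the variable names are mine) plugs the given numbers into the derived expressions for the emf and the induced current.

```python
import math

mu0 = 4e-7 * math.pi   # T*m/A, permeability of free space
I_wire = 100.0         # A, current in the long straight wire
a = 0.010              # m, distance from the wire to the near side of the loop
L = 0.10               # m, rod length
v = 5.0                # m/s, rod speed
R = 0.4                # ohm, loop resistance

# emf = dPhi/dt with Phi = (mu0 * I * y / 2pi) * ln((L+a)/a) and dy/dt = v
emf = mu0 * I_wire * v / (2 * math.pi) * math.log((L + a) / a)
current = emf / R

print(f"emf ≈ {emf:.3e} V")          # roughly 2.4e-4 V
print(f"current ≈ {current:.3e} A")  # roughly 6.0e-4 A
```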
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 15, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9426590204238892, "perplexity_flag": "middle"}
http://mathhelpforum.com/statistics/145571-normal-curve.html
# Thread: 1. ## the normal curve I got halfway through this problem but am stumped on D and E. The amount of active ingredient in a certain type of cold remedy tablet has mean 400.0 milligrams and standard deviation 4.5 milligrams. If a sample of 72 tablets is randomly selected, what is the probability that the mean amount of active ingredient in this sample is more than 401 milligrams? A) Let X denote the amount of active ingredient in the cold remedy tablets. What's known about the shape of the distribution of X? It is a bell curve B) Let X-bar denote the mean amount of active ingredient in a sample of 72 of these cold remedy tablets. What is known about the shape of X-bar and why? i. It is a normal distribution because the sample size, n, is greater than 30. C) What is the mean of X-bar? Mean of x-bar equals 400.0 because M of x-bar equals the M of x D) What is the standard deviation of X-bar? E) Solve the problem 2. D) If the population has standard deviation $\sigma$, then the standard deviation of the mean of a sample of size n is $\sigma / \sqrt{n}$. 3. Thanks, awkward. I calculated the standard deviation as standard deviation of x-bar = standard deviation of x divided by the square root of n, or 4.5 divided by the square root of 72, and got .530 I also calculated the probability by converting the 401 milligrams to a z-score (1.89) and then calculating the normalcdf on my calculator (1.89, 1000) to come up with .0293, or a probability of about 3%. Does this sound accurate? 4. Correct!
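The same calculation is easy to reproduce without a graphing calculator; the sketch below is not part of the thread and only uses the standard library, expressing the tail probability through the complementary error function.

```python
import math

mu, sigma, n = 400.0, 4.5, 72
se = sigma / math.sqrt(n)        # standard deviation of the sample mean
z = (401 - mu) / se              # z-score for a sample mean of 401 mg

# P(Z > z) for a standard normal Z, via the complementary error function
p = 0.5 * math.erfc(z / math.sqrt(2))

print(round(se, 3), round(z, 2), round(p, 4))
# roughly 0.53, 1.89, 0.03 — about 3%, consistent with the 0.0293 obtained above
```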
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9046441912651062, "perplexity_flag": "middle"}
http://mathhelpforum.com/differential-geometry/190601-prove-equality-real-numbers-print.html
# Prove equality (real numbers) • October 17th 2011, 10:55 AM Fabio010 Prove equality (real numbers) a is a real number. Demonstrate the following equality. $\sqrt{a^2} = |a|$ I tried it this way but I think it is wrong. $\sqrt{a^2} = a~~~(a\geq0)$ $a\geq0~~so~~a = |a|$ Is it correct? :) • October 17th 2011, 01:45 PM emakarov Re: Prove equality (real numbers) You considered only the case $a\ge 0$. Also, since the problem is so basic, the proof has to be really detailed, and I am not sure you sufficiently explained why $\sqrt{a^2}=a$ when $a\ge 0$. So, what happens when $a < 0$? Recall that by definition $\sqrt{x}$ is a nonnegative number $y$ such that $y^2=x$. • October 18th 2011, 10:58 AM Fabio010 Re: Prove equality (real numbers) Quote: Originally Posted by emakarov You considered only the case $a\ge 0$. Also, since the problem is so basic, the proof has to be really detailed, and I am not sure you sufficiently explained why $\sqrt{a^2}=a$ when $a\ge 0$. So, what happens when $a < 0$? Recall that by definition $\sqrt{x}$ is a nonnegative number $y$ such that $y^2=x$. That's the point: the exercise is so basic, but hard to do. • October 18th 2011, 11:07 AM Plato Re: Prove equality (real numbers) Quote: Originally Posted by Fabio010 That's the point: the exercise is so basic, but hard to do. Any proof of this depends on what definitions you have and what other properties of real numbers you know. It could be this simple: 1) if $a\ge 0$ then $\sqrt{a^2}=a=|a|$ 2) if $a< 0$ then $\sqrt{a^2}=-a=|a|$. • October 18th 2011, 11:42 AM Fabio010 Re: Prove equality (real numbers) Quote: Originally Posted by Plato Any proof of this depends on what definitions you have and what other properties of real numbers you know. It could be this simple: 1) if $a\ge 0$ then $\sqrt{a^2}=a=|a|$ 2) if $a< 0$ then $\sqrt{a^2}=-a=|a|$. I found some properties on the internet, and I tried: $\sqrt{a^2}=b~~b\geq0~~so~~b^2=a^2~so~(b-a)(b+a)=0$ $b = a,~in~this~case~a\geq0$ $b = -a,~in~this~case~a\leq0$ so $\sqrt{a^2}=b=|a|$
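The two-case argument can also be spot-checked numerically; the tiny sketch below is mine, not part of the thread, and only confirms the identity on a handful of sample values.

```python
import math

# Spot-check sqrt(a^2) == |a| for a mix of negative, zero, and positive reals.
for a in [-7.3, -1.0, -0.5, 0.0, 0.5, 1.0, 7.3]:
    assert math.isclose(math.sqrt(a * a), abs(a)), a

print("sqrt(a^2) == |a| for all sampled a")
```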
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 29, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9247053861618042, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/245118/set-boundary-preserved-by-an-infinite-union?answertab=votes
# Set boundary preserved by an infinite union Suppose I have a subset $U\subset\mathbb R^2$ and a real number $r>1$ with the following properties: 1. $U$ is compact; 2. $U\subset rU$ (self-similarity); 3. $0\in U$; 4. there exists an open set $H\subset \mathbb R^2$ such that $0\in\partial H$ (boundary) and $H\cap U=\emptyset$. Let $V:=\bigcup_{n\geq0} r^n U$. Question: Does there exist an open $H_1$ such that $0\in\partial H_1$ and $H_1\cap V=\emptyset$? Of course, if the answer is "yes", I would like to see how to prove it (it needn't be a complete proof, just some crucial hint). - Of course, $H$ (if it exists) can always be taken to be $\mathbb R^2\setminus U$. – Hagen von Eitzen Nov 26 '12 at 18:39 Of course, and the question is: is it still true that $0\in\partial(H\setminus V)$ and that $H\setminus V$ is open? – tohecz Nov 26 '12 at 18:40 ## 1 Answer No. Consider $U=\{(x,y)\mid x^2+(y-1)^2\le 1\lor x^2+(y+1)^2\le 1\}$ (union of two touching closed disks) and $r=2$. Then $V$ is dense in $\mathbb R^2$. - damn, a good one! Now I have to find what other good properties my set $U$ has. – tohecz Nov 26 '12 at 18:39
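For the counterexample in the answer the density is easy to see numerically: a point $(x,y)$ lies in $2^n U$ exactly when $x^2+y^2 \le 2^{n+1}|y|$, so every point with $y\neq 0$ is already in $V$. The snippet below is my own illustration of this, not part of the answer.

```python
import random

def in_V(x, y, n_max=200):
    """Is (x, y) in some 2^n * U, where U is the union of the closed unit disks
    centered at (0, 1) and (0, -1)?  Membership in 2^n * U is equivalent to
    x^2 + (y -/+ 2^n)^2 <= 4^n, i.e. x^2 + y^2 <= 2^(n+1) * |y|."""
    return any(x*x + y*y <= 2**(n + 1) * abs(y) for n in range(n_max))

random.seed(0)
pts = [(random.uniform(-100, 100), random.uniform(-100, 100)) for _ in range(10_000)]

# Every sampled point with y != 0 already lies in V, which is why V is dense in R^2.
print(all(in_V(x, y) for (x, y) in pts if y != 0))   # True
```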
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9255321621894836, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/240640/a-closed-subspace-of-a-banach-space-is-a-banach-space/240650
# A closed subspace of a Banach space is a Banach space How to prove that a closed subspace of a Banach space is a Banach space? A subspace is closed if it contains all of its limit points. But in the proof of the above question, how can I use this idea to get a Cauchy sequence and show that it is convergent in the subspace? - You don't need to get a Cauchy sequence. You just assume a sequence is Cauchy and prove it converges and the limit point is in the subspace. – Hui Yu Nov 19 '12 at 16:15 ## 3 Answers Note: you don't need to "get a Cauchy sequence" -- you assume you have one and then you need to show that its limit is in the subspace. Let $C$ be a closed subspace of a Banach space $B$, let $x_n$ be a Cauchy sequence in $C$ (with respect to the norm on $B$). Then the limit $x$ of $x_n$ exists in $B$ since $B$ is complete. Also, if it is not already in $C$, it must be a limit point of $C$. But $C$ is closed and hence contains all its limit points. Hence $C$ is complete, too. - In general, a closed subset of a complete metric space is also a complete metric space. In this case, the metric is given by the prescribed norm on the given Banach space. Hence, a closed subspace of a Banach space is a normed vector space that is complete with respect to the metric induced by the norm. By definition, this makes it a Banach space. - Let $Y$ be a closed subspace of a Banach space $X$. Now consider a Cauchy sequence in $Y$. As $X$ is Banach, the sequence converges in $X$. But $Y$ contains all of its limit points, so the limit lies in $Y$. Hence every Cauchy sequence in $Y$ converges in $Y$, and so $Y$ is a Banach space. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.954044759273529, "perplexity_flag": "head"}
http://understandinguncertainty.org/node/222
# What was the probability that Barack Obama would win the US election? Submitted by david on Mon, 12/01/2008 - 12:20 On the face of it this seems an odd question. After all, he won. But before the election it was uncertain whether Obama would win, and probability is the way that uncertainty is quantified, so maybe it is reasonable to ask what that probability was. We know that there were betting odds – a betting exchange such as Intrade allows people both to make and accept bets and so converges, at any point in time, to a certain set of odds at which people are willing to be either the bettor or the bookmaker. This prediction market provides a ‘probability’ on Obama winning that kept changing for the year before the election – this is shown in Figure 1 ('Probabilities' of Obama and McCain winning the 2008 US Presidential Election) with some of the main events of the year marked in. We can see how the betting odds changed in response to events. Before winning the nomination of the Democratic party, Obama had to compete with other candidates, primarily Hillary Clinton, in each state. In the early stages of the campaign Clinton was the firm favourite, although the probability of Obama winning briefly touched 40% after winning important states. His odds went above 50:50 in February after more successes against Clinton, and rose again when Clinton admitted defeat in May. His main competitor now was McCain, and Obama stayed the favourite until the financial crisis became very dramatic in September: from when the major mortgage organisations were taken over by the government on 5th September, through to the failure of Lehman Bros bank and the AIG insurance company on the 15th September, Obama's odds fell steadily until McCain was briefly the favourite. However from 15th September the odds on Obama steadily increased until election day of November 4th. A ‘probability’ of 20% means that people were willing to take and make bets at ‘4 to 1 against’, which means that if someone put £1 on Obama winning, they would receive back £4 winnings plus their stake to make £5 in total. [I don’t usually gamble, but in January 2008 I did place an online bet of £1 on Obama at 4 to 1 odds, in the middle of a lecture to a post-graduate class in Cambridge in order to illustrate how odds transform to probabilities: this choice was treated with some derision at the time. Gambling during a lecture probably also broke some medieval statute of the University!]. But do these betting odds really represent the probability? How does this fit with how probability is taught in school? In fact what does probability mean? Does it even exist? These are reasonable questions to ask, and we quickly get into some tricky philosophical issues. #### Assessing probability in a non-repeatable situation Let’s look at the different ways we might think about probability, and whether each way might tell us what the probability of Obama winning might have been. 1. The Classical idea of probability is based on equally likely outcomes: if there are $N$ possible outcomes the probability of each one is $1/N$. This is what is generally taught in schools, and works fine for coins, dice and other physical objects where it may be reasonable to assume some symmetry between the outcomes. In the election there were two main candidates, so would the probability of Obama winning be 50%? Just because there are two alternatives does not mean they are equally likely.
And in fact there were up to 14 other candidates including names that could be written in, depending on the state, but a probability of 1/15 seems even more unreasonable. So the classical view does not work here. 2. The frequency interpretation is based on what proportion of events happen in the long run: for example, we might say the probability of a dropped piece of toast landing butter-side down is 80% if that is the proportion of times it happens when we keep dropping toast (in controlled conditions) millions of times. But this interpretation is difficult to apply to specific events such as Obama becoming President since they are essentially unique. We would need to place the event in a class of repeatable opportunities that stretch into the future, such as ‘black men becoming president’. Looking into the past we note that there have been 43 Presidents of the USA of whom precisely 0 have been black, so the current observed proportion is 0/43. Does this mean the best estimate of the probability of Obama becoming President was 0%? Clearly this would be ridiculous. Even if we were misguided enough to place Obama within this class of events, there is a better way of working out a probability which will be explained below. 3. Another possibility that has been suggested is that there is some true underlying ‘degree of belief’ in the statement ‘Obama will be the next President’ that, given the knowledge that we have, it would be logical to hold. This proposal leaves open the question of how to estimate this quantity and does not seem helpful in this situation. 4. Rather more attractive is the idea that there is some true underlying propensity for an event to happen – this is an objective state of the world but needs to be estimated from what we know. One mental picture for this is to consider all the possible ways in which things might turn out, and then think of what proportion of these possible futures end up with Obama being President. This approach seems a bit shaky from a philosophical point of view (can we really think of the set of ‘possible futures’?), but means that we can think of probability as a frequency without having to think of some class into which to embed the event we are interested in. 5. The final way of thinking of this probability comes back to the betting: the probability is essentially the odds you are willing to accept in a bet based on your own subjective judgement. The betting exchange probabilities plotted above provide a kind of group assessment. Such probabilities could be interpreted as your best current estimate of the ‘true’ probability (which is not directly measurable). Alternatively, and a view that I prefer, we can say that the probability is not an estimate of any actual quantity in the outside world, but simply an assessment of the odds that You are willing to take. You don't have to actually place any bets: Your probability that Obama will be President is, say, 50% if you are indifferent between the following two options: either (a) obtaining a reward if Obama becomes President, and (b) obtaining the same reward if a flipped coin comes up heads. The attractive thing about this interpretation of probability is that it does not matter whether the event is truly unpredictable, or whether it is pre-ordained and you just don’t happen to know how things will turn out. For example, before I flip a coin, you may say your odds are 50:50 on heads.
If I flip the coin but cover up the result, your odds should not change, even though the uncertainty is now due to our ignorance rather than any essential unpredictability. If I then look at the coin but don't show you, then Your probability should still be 50%, even though mine is either 0 or 100%. So this view says that probability does not exist, but is simply a numerical expression of Your personal uncertainty, given the current information. Rather strangely, it means that probabilities can be quantified but not measured, rather like the value of anything, whether a painting or a loaf of bread, does not objectively exist but depends on what people are willing to pay. #### Assessing probabilities in repeatable situations Things are made easier if we see the current event as part of a sequence stretching back into the past and forward into the future, and we have no reason to think that any member of the sequence is systematically different from any other. We call the events ‘exchangeable’, and if we are willing to assume this (which we would not for Obama), there is a neat way of assessing the probability of the next event. An Italian actuary called de Finetti showed that if we are willing to assume exchangeability, then it is as if there is some true underlying chance for the event to happen; we just don’t know what it is. In the long run the proportion of events that occur will tend to this underlying chance. Suppose we have observed $n$ possibilities for an event to occur, and it has actually occurred in $r$ of them. What is the chance it will occur at the next opportunity? This is a classic problem, dealt with by the clergyman Thomas Bayes in an article published in 1763 and also by Laplace in 1814. Laplace provided the simple rule: If before we observed any events we thought all values of $p$ were equally likely, then after observing $r$ events out of $n$ opportunities, we would assess the chance of the next event as $\hat{p} = (r+1)/(n+2)$ (where the little 'hat' on $p$ indicates it is an estimate). This is a major statement, and we should look at some examples. Before we start making any observations, $r =0$ and $n=0$, and so the rule says that $\hat{p} =1/2$ - when we claim entire ignorance, then our odds are 50:50. Suppose we then observed the event at the first opportunity: then $r=1$, $n=1$, and $\hat{p} = 2/3$. After the second event in succession, $r=2$, $n=2$ and $\hat{p}=3/4,$ and so on, and so after 1,000,000 events in 1,000,000 opportunities then $\hat{p} = 1,000,001/1,000,002$. We note it never gets to 1, so we can never be completely sure that the event will happen next time. This is an example of Laplace’s Law of succession, which he applied to the question of whether the sun will rise tomorrow. By working out how many days $n$ it had risen, we can estimate the chance of it rising tomorrow as $(n+1)/(n+2)$ - it seems that Laplace was rather mischievous in using this as an example, as he also pointed out: But this number [the odds of the sun coming up tomorrow] is far greater for him who, seeing in the totality of phenomena the principle regulating the days and seasons, realizes that nothing at the present moment can arrest the course of it. showing that applying his formula in an unthinking manner can be absurd. So suppose you are told that a bag contains a mixture of black and white balls, but you are not told the proportion of each. You draw out 10 balls, putting each back after you have drawn it, and 3 of them are black. What is the chance that the next is black?
Well, assuming that the balls are well-mixed, and that you thought before you started that all proportions were equally likely, then the chance is 4/12 = 1/3. Laplace’s law of succession can be obtained with some basic integral calculus: have a look at an explanation of Laplace's analysis. So now you know how to use past evidence to assess the chances of future events, but only if you think that the future is going to carry on just like the past, and that can be a very dangerous assumption: if you want an example, think of the free-range turkey on Christmas Eve, happily looking forward to the next day of food and shelter, just as he has always known in the past.
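Laplace's rule is simple enough to tabulate directly; the short sketch below is not part of the article and just evaluates $(r+1)/(n+2)$ for the cases discussed above, including the urn example with 3 black balls in 10 draws.

```python
from fractions import Fraction

def laplace_estimate(r, n):
    """Laplace's rule of succession: estimated chance the event occurs at the next opportunity."""
    return Fraction(r + 1, n + 2)

print(laplace_estimate(0, 0))    # 1/2  (no data yet: complete ignorance)
print(laplace_estimate(2, 2))    # 3/4  (two events in two opportunities)
print(laplace_estimate(3, 10))   # 1/3  (the urn example: 3 black balls in 10 draws)
```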
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9719837307929993, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2007/09/17/properties-of-ab-categories/?like=1&_wpnonce=274c59a080
# The Unapologetic Mathematician ## Properties of Ab-Categories There are a number of things we can say right off about the $\mathbf{Ab}$-categories we defined last time. As is common practice, we’ll blur the distinction between an abelian group and its underlying set. First of all, any $\mathbf{Ab}$-category $\mathcal{C}$ has zero morphisms. That is, there’s a special morphism between any two objects that when composed with any other morphism gives the special morphism in the appropriate hom-set. In fact, since each hom-set is an abelian group it has an additive identity $0\in\hom_\mathcal{C}(A,B)$. Then for any $f\in\hom_\mathcal{C}(B,C)$ we have $f\otimes0=0\in\hom_\mathcal{C}(B,C)\otimes\hom_\mathcal{C}(A,B)$, which composition must send to $0\in\hom_\mathcal{C}(A,C)$. The zero morphisms are exactly the zero morphisms! Given any object $C\in\mathcal{C}$ the hom-set $\hom_\mathcal{C}(C,C)$ is already an abelian group. But the composition puts the structure of a monoid onto this set as well, and the linearity condition says these two are compatible, making the endomorphism monoid into an endomorphism ring. In fact, every ring is an endomorphism ring. Way back when we first defined categories we noted that a category with one object was the exact same thing as a monoid. And a ring is just an abelian group with a compatible monoid structure on it. So an $\mathbf{Ab}$-category with a single object is the exact same thing as a ring! In fact, a lot of the study of $\mathbf{Ab}$-categories can be seen as extending ring theory from that special case to the more general one. Incidentally, you should see right off that when we consider rings $R$ and $S$ as categories like this, a ring homomorphism from $R$ to $S$ is the same thing as an $\mathbf{Ab}$-functor between the categories. Remember when we talked about direct sums of modules over a given ring? Well the same thing happens here. We define the “biproduct” $\bigoplus\limits_{i=1}^n A_i$ of the finite collection of objects $A_i$ to be an object along with two families of arrows: • $\pi_i:\bigoplus\limits_{i=1}^n A_i\rightarrow A_i$ • $\iota_i:A_i\rightarrow\bigoplus\limits_{i=1}^n A_i$ satisfying the relations • $\pi_i\circ\iota_j=0:A_j\rightarrow A_i$ if $i\neq j$ • $\pi_i\circ\iota_i=1_{A_i}:A_i\rightarrow A_i$ • $\sum\limits_{i=1}^n\iota_i\circ\pi_i=1_{\bigoplus\limits_{i=1}^n A_i}$ From the same arguments as in our coverage of direct sums we see that a biproduct satisfies the universal properties of both a categorical product and coproduct, and conversely that a categorical product or coproduct implies the existence of the biproduct arrows. Note that we’re making no statement whatsoever that such a biproduct actually exists in our category, but when it does it’s both a product and a coproduct. As a special case, we can consider the biproduct of an empty collection of objects. This will be both a product and a coproduct of an empty collection of objects, if it exists, and will thus be a zero object. Of course, it may or may not exist. Even if there is no zero object in our category, we still have the above zero morphisms, and so we can still talk about kernels and cokernels. The kernel $\mathrm{Ker}(f)$ of a morphism $f:A\rightarrow B$ is the equalizer $\mathrm{Equ}(f,0)$, and its cokernel $\mathrm{Cok}(f)$ is the coequalizer $\mathrm{Coequ}(f,0)$. In fact, life is even better now that we’re enriched over $\mathbf{Ab}$: every equalizer is a kernel and every coequalizer is a cokernel. 
Indeed, $\mathrm{Equ}(f,g)=\mathrm{Ker}(f-g)$ and similarly for coequalizers. Again, we’re saying nothing about whether such kernels or cokernels actually exist. Together, these facts say a lot about the behavior of limits in $\mathbf{Ab}$-categories. Biproducts tell us about finite products and coproducts, while kernels of morphisms tell us about all different equalizers. And then The Existence Theorem for Limits tells us that every finite limit can be constructed from finite products and equalizers, while every finite colimit can be constructed from finite coproducts and coequalizers. So if our $\mathbf{Ab}$-category has all biproducts, all kernels, and all cokernels, then it has all finite limits whatsoever! Let’s add one more little property that will simplify our life. We know that kernels are monomorphisms, and that cokernels are epimorphisms. If we assume on top of having all biproducts, kernels, and cokernels that every monomorphism is actually the kernel of some arrow in our category, and that every epimorphism is actually the cokernel of some arrow, then we will call our $\mathbf{Ab}$-category an abelian category. You should verify that given any ring $R$ the category $R\mathbf{-mod}$ of all left $R$-modules satisfies all these properties, and thus is an abelian category. These are the abelian categories that started the whole theory of homological algebra, which is to a large extent the study of general abelian categories. Posted by John Armstrong | Category theory
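The biproduct relations are easy to see concretely in the abelian category of finite-dimensional vector spaces (a special case of $R\mathbf{-mod}$); the following small sketch, with matrices standing in for the structure maps, is an illustration of mine rather than part of the post.

```python
import numpy as np

# Direct sum of a 2-dimensional and a 3-dimensional space, realized inside a 5-dimensional one.
dims = [2, 3]
offsets = np.cumsum([0] + dims)
total = offsets[-1]

# iota_i : A_i -> A_1 (+) A_2 (inclusion) and pi_i : A_1 (+) A_2 -> A_i (projection)
iota = [np.eye(total)[:, offsets[i]:offsets[i + 1]] for i in range(2)]
pi = [np.eye(total)[offsets[i]:offsets[i + 1], :] for i in range(2)]

# pi_i . iota_j = 0 for i != j and the identity for i == j
for i in range(2):
    for j in range(2):
        expected = np.eye(dims[i]) if i == j else np.zeros((dims[i], dims[j]))
        assert np.array_equal(pi[i] @ iota[j], expected)

# sum_i iota_i . pi_i = identity on the biproduct
assert np.array_equal(sum(io @ p for io, p in zip(iota, pi)), np.eye(total))
print("biproduct relations verified")
```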
http://www.mathclique.com/2011/11/age-distribution-for-motor-vehicle.html
### Age Distribution for Motor Vehicle Department

Data from the Motor Vehicle Department indicate that 80% of all licensed drivers are older than age 25.

1. In a sample of n = 60 people who recently received speeding tickets, 38 were older than 25 years and the other 22 were age 25 or younger. Is the age distribution for this sample significantly different from the distribution for the population of licensed drivers? Use α = 0.05.

2. In a sample of n = 60 people who recently received parking tickets, 43 were older than 25 years and the other 17 were age 25 or younger. Is the age distribution for this sample significantly different from the distribution for the population of licensed drivers? Use α = 0.05.

SOLUTIONS AND ANSWERS

(1) SOLUTION:

$$df = k-1 = 2-1 = 1$$

The expected frequencies come from the population proportions (80% and 20% of n = 60):

$$f_e = 48, 12 \qquad f_o = 38, 22$$

$$\chi^2 = \sum\frac{(f_o-f_e)^2}{f_e} = \frac{(38-48)^2}{48}+\frac{(22-12)^2}{12} = 2.08+8.33 = 10.42$$

The critical $$\chi^2$$ with df = 1 degree of freedom at the 0.05 significance level is 3.84.

DECISION: Since the calculated $$\chi^2$$ (10.42) is greater than the critical $$\chi^2$$ (3.84), we conclude that the age distribution for this sample is significantly different from the distribution for the population of licensed drivers.

(2) SOLUTION:

$$df = k-1 = 2-1 = 1$$

$$f_e = 48, 12 \qquad f_o = 43, 17$$

$$\chi^2 = \frac{(43-48)^2}{48}+\frac{(17-12)^2}{12} = 0.52+2.08 = 2.60$$

The critical $$\chi^2$$ with df = 1 degree of freedom at the 0.05 significance level is again 3.84.

DECISION: Since the calculated $$\chi^2$$ (2.60) is less than the critical $$\chi^2$$ (3.84), we cannot conclude that the age distribution for this sample is significantly different from the distribution for the population of licensed drivers.
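If you want to double-check these numbers, here is a small script (an illustrative sketch added here, not part of the original post) using SciPy's standard goodness-of-fit routine:

```python
from scipy.stats import chisquare, chi2

expected = [0.8 * 60, 0.2 * 60]   # 48 expected older than 25, 12 expected 25 or younger

stat1, p1 = chisquare(f_obs=[38, 22], f_exp=expected)   # speeding tickets
stat2, p2 = chisquare(f_obs=[43, 17], f_exp=expected)   # parking tickets
critical = chi2.ppf(0.95, df=1)                         # about 3.84

print(stat1, p1)     # roughly 10.42, p ~ 0.001  -> significant at alpha = 0.05
print(stat2, p2)     # roughly 2.60,  p ~ 0.11   -> not significant at alpha = 0.05
print(critical)
```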
http://physics.stackexchange.com/questions/33542/why-are-radians-more-natural-than-any-other-angle-unit/33567
# Why are radians more natural than any other angle unit? I'm convinced that radians are, at the very least, the most convenient unit for angles in mathematics and physics. In addition to this I suspect that they are the most fundamentally natural unit for angles. What I want to know is why this is so (or why not). I understand that using radians is useful in calculus involving trigonometric functions because there are no messy factors like $\pi/180$. I also understand that this is because $\sin(x) / x \rightarrow 1$ as $x \rightarrow 0$ when $x$ is in radians. But why does this mean radians are fundamentally more natural? What is mathematically wrong with these messy factors? So maybe it's nice and clean to pick a unit which makes $\frac{d}{dx} \sin x = \cos x$. But why not choose to swap it around, by putting the 'nice and clean' bit at the unit of angle measurement itself? Why not define 1 Angle as a full turn, then measure angles as a fraction of this full turn (in a similar way to measuring velocities as a fraction of the speed of light $c = 1$). Sure, you would have messy factors of $2 \pi$ in calculus but what's wrong with this mathematically? I think part of what I'm looking for is an explanation why the radius is the most important part of a circle. Could you not define another angle unit in a similar way to the radian, but with using the diameter instead of the radius? Also, if radians are the fundamentally natural unit, does this mean that not only $\pi \,\textrm{rad} = 180 ^\circ$, but also $\pi = 180 ^\circ$, that is $1\,\textrm{rad}=1$? - – dmckee♦ Aug 6 '12 at 14:52 – Philip Oakley May 11 at 20:26 ## 7 Answers Angles are defined as the ratio of arc-length to radius multiplied by some constant $k$ which equals one in the case of radians, $360/2\pi$ for degrees. What you're effectively asking is what's natural about setting $k$ = 1? Again it's tidyness as pointed out in dmckee's alternative answer. - Consider the Taylor series for the trigonometric function. For instance sine $$\sin \alpha = \alpha - \frac{\alpha^3}{3!} + \dots = \sum_{n=0}^\infty (-1)^{n}\frac{\alpha^{2n+1}}{(2n+1)!},$$ or cosine $$\cos \alpha = 1 - \frac{\alpha^2}{2!} + \dots =\sum_{n=0}^\infty (-1)^n \frac{\alpha^{2n}}{(2n)!}.$$ If you were to choose some other unit or angle these very tidy series would pick up some additional factors in every term. That kind of thing is "unnatural" to mathematicians. - 2 Or, if Taylor series feel too esoteric for you, just consider the approximation $\sin\alpha\approx\alpha$ for small angles $\alpha$, which only holds if $\alpha$ is measured in radians. (Formally, of course, that approximation simply arises from truncating the Taylor series after the first-order term, so in a sense it's the same thing.) – Ilmari Karonen Aug 6 '12 at 19:00 Nice........... – Mike Dunlavey Aug 8 '12 at 17:39 The formula is 'wrong' in that it already presumes that alpha is in radians, while we all know sin 90 is 1.0 ;-) The trigonometric relationships (e.g. sin(A+B)=s(A).c(B)+s(B).c(A) ) stand no matter what unit of measure is used. The choice of radians is an [in]convenience. The underlying problem is that SI Length is a norm in a 3d space (no name). Here endeth the proof that 1=3. – Philip Oakley May 11 at 20:36 People call things "natural" when they simplify formulas. Example, if there is a spinning wheel, the velocity $v$ of a point on the periphery is intuitively proportional to rotational speed $\omega$ and radius $r$. 
If the rotational speed is measured in radians per second, then the exact formula and the intuitive one are identical: $$v = r \omega$$ rather than something ugly like $r\omega(\pi/180)$. - Most importantly $$e^{i x} = \cos x + i \sin x$$ only holds (in this form) in radians. So now you might ask why $e$ is more natural than any other number ;-) - I think part of what I'm looking for is an explanation why the radius is the most important part of a circle. The most important part of a circle is the locus of points that comprise it. Without that, you don't have a circle. Radius is important in the definition of "circle" but the definition of "circle" is not identical with any circle. The radian is defined as "the ratio between the length of an arc and its radius". $\theta = s/r$ It is more "natural" than other angular measures for this reason: the angle in radians is the normalized arc length, i.e., the radian measure of angle is the arc length for unit radius. EDIT: to address the numerous comments Zendmailer has made to other answers. What I'm asking now is, if they are indeed natural , how does the claim that 1 radian = 1 fit in? For any angular measure $\alpha$, we have the almost trivial result: 1 $\alpha = 1$ So, the fact that 1 radian = 1 has nothing to do with the question of naturalness. As I explained in a comment to another answer, the justification for the naturalness of the radian as an angular measure is geometric. One can construct a circle with a length of string fixed at one end, the center of the circle, and a pencil. Holding the string taunt, the pencil traces out the locus of points that comprise the circle. The radius of the circle is the length of the string. Having done that, what is the most natural way to measure length along the circle? Lay the string along the circumference. The arc length is precisely 1 radius. The angle subtended by that arc length is a natural measure of angle, the radian. The angle is the arc length divided by the radius so the radian measure of angle directly gives the arc length as a multiple of the radius. - – Zendmailer Aug 8 '12 at 17:49 1 @Zendmailer, 1 radian = 1 is true always for the same reason that $\frac{1km}{1km} = 1$ is true always, regardless of social convention, regardless of what system one is in. It's true by inspection. – Alfred Centauri Aug 8 '12 at 18:28 – Zendmailer Aug 8 '12 at 18:56 @Zendmailer, if we decide to subdivide the circumference of a circle into $n$ units, the arc length of each unit, normalized to the radius, is $\frac{2 \pi}{n}$. Note that this is a dimensionless number. The angle associated with this arc length is 1 angular unit and there are n angular units in a circle . Note that this is true regardless of our choice for $n$. Our choice for $n$ affects the normalized arc length associated with our angular unit but 1 angular unit = 1 always. – Alfred Centauri Aug 8 '12 at 21:19 When you say 1 angular unit = 1 always, this means 1 degree = 1 always ($n=360$), right? And when $n=2\pi$ we have 1 radian = 1 always. Why isn't it valid to compare them and say 1 radian = 1 = 1 degree, if we have used the word 'always'? – Zendmailer Aug 9 '12 at 8:41 show 1 more comment The reason radian was adopted was that it was easy to relate with the circumference of a circle as 2*Pi if the radius was one unit. There is no such thing as 360 degree(it was a misconception in early times that one year is made up of 360 days so they took it 360). 
From the present day statistics it shall be 365 1/4 but it doesn't change calculations and results gets adjusted automatically on calculation. Calculations were easy to manipulate with Pi rather than Degree,minutes,second and they are both interchangeable. So, a comfort became a tradition. - I seem to remember when it was considered to divide the circle into 400 "degrees", to make navigation a little less error-prone. As it is, pilots have to just "know" that 7 and 25 are opposite ends of a runway, and be really really careful not to mix up 13 and 31. – Mike Dunlavey Aug 8 '12 at 17:48 The Babalonyons started the 360 things because they didn't have a completely general system for dealing with non-integers and liked bases that divided neatly a lot of different ways. They also had a fairly advanced astronomy and certainly knew that the year was not 360 days. – dmckee♦ Aug 8 '12 at 18:07 @dmckee: thanks for correcting me with this link, but I am sharing something that I read in a differential calculus book once some time back. Book is "Introduction to Differential Calculus: Systematic Studies with Engineering By Ulrich L. Rohde, G. C. Jain, Ajay K. Poddar, A. K. Ghosh" page no-99 at the last footnote !! – rafiki Aug 10 '12 at 12:29 Let me state some background facts which might be related to your questions and I hope they will help you understand the answers posted by others. 1-There is a difference between units and dimensions. Every quantity that carries dimensions must carry units. 2-The opposite to the previous statement is not always true, for example angles have no dimensions at all because by definition they are length/length, but they have units. The unit in this case are used to identify the quantity as an angle. 3-Angles can be measured in degrees and can be measured in radians, just in the same way that distances can be measured in centimeters or in inches. Consequently there must be a conversion factor between the 2 units. 4-Using $\pi$ radians = 180 degrees, you can see that $1 ~rad= 180^\circ/\pi=180^\circ/3.14\simeq 57.3^\circ$. That is to say 1 rad = 57.3 degrees (to put it in a form similar to something like 1 inch = 2.54 cm). 5-By definition $\displaystyle \theta=\frac{s}{r}$ rad, where $s$ is the length of the arc subtended the angle and $r$ is the radius of the circle. Note that the previous expression for the angle gives you the angle in radians. If you want it in degrees then it will look like this, $\displaystyle \theta = \frac{s}{r}\frac{180^\circ}{\pi}$. As you see the expression in radian is much simpler hence natural as pointed out by Mike Dunlavey. 6-If you have a particle that is rotating around a circle of constant radius $r$, then from the equation $\displaystyle \theta=\frac{s}{r}$ rad you can see that we can get $\displaystyle \omega=\frac{v}{r}$ rad per unit time (where, by definition, $\omega = \frac{d\theta}{dt}$ and $v=\frac{ds}{dt}$). Again, as pointed out by Mike, the equation for the angular velocity will have an extra factor of $180^\circ/\pi$ had we wanted the angular velocity be expressed in degrees per unit time instead of rad per unit time. 7-When an angle, expressed in radians or degrees, multiplies a unit of distance say, the surviving unit is that of the distance. For example: given $\omega = 2 rad/s$ and $r=1 cm$, hence $r\omega = 2 cm/s$. This is why in this case you can say 1 rad =1. - The difficulty in point 2 is that the two lengths are in independent dimensions (as in 3d space). 
One has just cancelled Lx/Ly and lost information for one's dimensional analysis (this is a Physics question;-). If one did the same with Charge/Temperature it would be a gross error, but we tolerate it for length. Dimensional analysis is newer than the cubit, so the old inconsistency remains. – Philip Oakley May 11 at 20:52 @PhilipOakley If we are dividing 2 lengths nobody care about whether they are along the same direction or not, units are not attached to directions – Revo May 14 at 0:04 Anybody working in optics definitely cares. There are many measurements that have Angle(radians) as an integral part of their value, and it is a very common error, not spotted by dimension checking, for the angle part to be omitted, double counted, or wrongly applied. – Philip Oakley May 14 at 7:30
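A quick numerical illustration of the point made in several of the answers above: the small-angle approximation and the clean derivative only hold in radians, while degrees pick up a factor of $\pi/180$. This script is my own sketch, not part of the thread:

```python
import math

# sin(x)/x -> 1 only when x is measured in radians.
x = 0.01
print(math.sin(x) / x)                    # ~0.99998  (x interpreted as radians)
print(math.sin(math.radians(x)) / x)      # ~0.01745 = pi/180  (x interpreted as degrees)

# Numerical derivative of sin: cos(x) in radians, (pi/180)*cos(x) in degrees.
h = 1e-6
print((math.sin(1.0 + h) - math.sin(1.0)) / h, math.cos(1.0))
print((math.sin(math.radians(60 + h)) - math.sin(math.radians(60))) / h,
      math.pi / 180 * math.cos(math.radians(60)))
```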
http://physics.stackexchange.com/questions/14323/3d-quantum-harmonic-oscillator
3D Quantum harmonic oscillator

Consider an isotropic 3D QHO in a potential $V(x,y,z)={1\over 2}m\omega^2(x^2+y^2+z^2)$. I can see by independence of the potential in the $x,y,z$ coordinates that the solution to the Schrödinger equation would be of the form $\psi(x,y,z)=f(x)g(y)h(z)$. Explicitly, what would it be? Is it $\psi(x,y,z) = cH_{n_x}H_{n_y}H_{n_z}e^{-{m\omega\over2\hbar}(x^2+y^2+z^2)}$, where $H_{n_i}$ is the $n_i$-th Hermite polynomial (evaluated at the appropriately rescaled coordinate)? (A side query: surely, since the potential is radial, there is a polar coordinate form of the solution which might be better? But this is not asked for in the question. Also, does isotropic just mean that the potential is spherically symmetric?)

How many linearly independent states have energy $E=\hbar\omega({3\over2}+N)$? Am I supposed to be counting the number of combinations of $n_x,n_y,n_z$ s.t. $n_x+n_y+n_z = N$? I vaguely remember some notion $(n,l)$ mentioned once, but I can't remember what it is nor find the bit of notes on this. Thanks in advance. -

@J.M.: If you see it as a question about the Ornstein-Uhlenbeck operator, it fits here ;-). – Jonas Teuwen Sep 3 '11 at 12:22 1 – joriki Sep 3 '11 at 12:36 @joriki: Thanks for the link! – Walter W. Sep 3 '11 at 14:02 1 – Tomáš Brauner Sep 4 '11 at 8:50 Thanks, Tomas. There is still something I don't quite understand. Does the ground state of the system correspond to $N=1$ in $E=\hbar\omega\left({3\over2}+N\right)=\hbar\omega\left({3\over2}+n_x+n_y+n_z\right)$? But then I think the $n_i$'s must be $\geq1$? And I don't quite get what linearly independent states mean in this context. What do I have to check to show that they are L.I.? – walter w Sep 4 '11 at 11:18 show 2 more comments

1 Answer

1. Your solution is correct (a product of 1D QHO solutions).
2. Since the potential is radially symmetric, it commutes with the angular momentum operators ($L^2$ and $L_z$, for instance). Hence you may build solutions of the form $|nlm\rangle$, where $n$ stands for the radial part of the state and $l,m$ for the angular part. Is it better? That depends on the problem; it is just another basis in which you may represent the solution.
3. Isotropic probably means what you suggest: the potential is spherically symmetric. It depends on the context.
4. Yes, you have to count the number of combinations with $n_x+n_y+n_z=N$. -
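A short note on the counting question (added here; it is not in the answer above): by stars and bars,

$$\#\{(n_x,n_y,n_z)\in\mathbb{Z}_{\ge 0}^3 \mid n_x+n_y+n_z=N\}=\binom{N+2}{2}=\frac{(N+1)(N+2)}{2},$$

so the level $E=\hbar\omega\left({3\over2}+N\right)$ has degeneracy $\frac{(N+1)(N+2)}{2}$, that is $1, 3, 6, 10, \dots$ for $N=0,1,2,3,\dots$. In particular the ground state is $N=0$, i.e. $n_x=n_y=n_z=0$, and the product states $\psi_{n_x}(x)\psi_{n_y}(y)\psi_{n_z}(z)$ with a fixed sum are linearly independent because the 1D oscillator eigenfunctions are.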
http://mathoverflow.net/revisions/80384/list
## Return to Answer

3 added 11 characters in body

Following the idea of Felipe Voloch, I try to give a simple proof based on Puiseux series expansion. Let $C$ be a real algebraic curve at the origin. Look at the Puiseux series expansion (say in terms of $x$) of $C$ near $O$. By assumption one of the branches (over $\mathbb{C}$), call it $C_1$, has the form $$y = a_1x^{r_1} + a_2x^{r_2} + \cdots \quad\quad\quad (1)$$ for $a_i \in \mathbb{R}$. Let $q$ be the least common multiple of the denominators of the $r_i$'s. If $q$ is odd, then the branch expands to both sides of the origin and therefore $C_1$ does not end abruptly. So assume $q$ is even. Let $\zeta := e^{2\pi i/q}$. For each $j$, $1 \leq j \leq q$, the complex curve corresponding to $C_1$ has a Puiseux expansion of the form $y = \sum_i a_i \zeta^{jp_i}x^{r_i}$, where $p_i = qr_i$. In particular, taking $j = q/2$ (so that $\zeta^j = -1$), we see that the complex curve corresponding to $C_1$ has an expansion of the form $$y = \sum_i a_i (-1)^{p_i}x^{r_i}. \quad\quad\quad (2)$$ It follows from the minimality of $q$ that there is $i$ such that $a_i\neq 0$ and $p_i$ is odd; consequently, $(1)$ and $(2)$ give different real curves, and it follows that $C_1$ does not end abruptly.

PS: The above arguments only show that $C_1$ has at least two end points on the boundary of a small enough disk centered at $O$. But it cannot have more than two, because for all $j \notin \lbrace q/2, q\rbrace$, $\zeta^j$ is non-real, so the corresponding parametrization does not give any real points.

2 added 300 characters in body

Following the idea of Felipe Voloch, I try to give a simple proof based on Puiseux series expansion. Let $C$ be a real algebraic curve at the origin. Look at the Puiseux series expansion (say in terms of $x$) of $C$ near $O$. By assumption one of the branches (over $\mathbb{C}$), call it $C_1$, has the form $$y = a_1x^{r_1} + a_2x^{r_2} + \cdots \quad\quad\quad (1)$$ for $a_i \in \mathbb{R}$. Let $q$ be the least common multiple of the denominators of the $r_i$'s. If $q$ is odd, then the branch expands to both sides of the origin and therefore $C_1$ does not end abruptly. So assume $q$ is even. Let $\zeta := e^{2\pi i/q}$. For each $j$, $1 \leq j \leq q$, the complex curve corresponding to $C_1$ has a Puiseux expansion of the form $y = \sum_i a_i \zeta^{jp_i}x^{r_i}$, where $p_i = qr_i$. In particular, taking $j = q/2$ (so that $\zeta^j = -1$), we see that the complex curve corresponding to $C_1$ has an expansion of the form $$y = \sum_i a_i (-1)^{p_i}x^{r_i}. \quad\quad\quad (2)$$ It follows from the minimality of $q$ that there is $i$ such that $a_i\neq 0$ and $p_i$ is odd; consequently, $(1)$ and $(2)$ give different real curves, and it follows that $C_1$ does not end abruptly.

PS: The above arguments only show that $C_1$ has at least two end points on the boundary of a small enough disk centered at $O$. But it cannot have more than two, because for all $j \notin \lbrace q/2, q\rbrace$, $\zeta^j$ is non-real, so the corresponding parametrization does not give any real points.

1

Following the idea of Felipe Voloch, I try to give a simple proof based on Puiseux series expansion. Let $C$ be a real algebraic curve at the origin. Look at the Puiseux series expansion (say in terms of $x$) of $C$ near $O$. By assumption one of the branches (over $\mathbb{C}$), call it $C_1$, has the form $$y = a_1x^{r_1} + a_2x^{r_2} + \cdots \quad\quad\quad (1)$$ for $a_i \in \mathbb{R}$.
Let $q$ be the least common multiple of the denominators of the $r_i$'s. If $q$ is odd, then the branch expands to both sides of the origin and therefore $C$ does not end abruptly. So assume $q$ is even. Let $\zeta := e^{2\pi i/q}$. For each $j$, $1 \leq j \leq q$, the complex curve corresponding to $C_1$ has a Puiseux expansion of the form $y = \sum_i a_i \zeta^{jp_i}x^{r_i}$, where $p_i = qr_i$. In particular, taking $j = q/2$ (so that $\zeta^j = -1$), we see that the complex curve corresponding to $C_1$ has an expansion of the form $$y = \sum_i a_i (-1)^{p_i}x^{r_i}. \quad\quad\quad (2)$$ It follows from the minimality of $q$ that there is $i$ such that $a_i\neq 0$ and $p_i$ is odd; consequently, $(1)$ and $(2)$ give different real curves, and it follows that $C_1$ does not end abruptly.
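A concrete illustration of the argument (an added example, not part of the revisions above): take the cusp $y^2 = x^3$. Its Puiseux branches at the origin are $y = \pm x^{3/2}$, so for the branch $(1)$ we have $r_1 = 3/2$, hence $q = 2$, $p_1 = 3$ is odd, and $\zeta = e^{2\pi i/2} = -1$. The expansion $(1)$ is $y = x^{3/2}$ while the expansion $(2)$ is $y = -x^{3/2}$; these are distinct real half-branches (defined for $x \ge 0$), so the real curve has two end points on a small circle around $O$ and does not end abruptly, exactly as the proof predicts.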
http://thecodeabode.com/2012/07/05/average-exponents-and-the-divisor-function/
Average Exponents and the Divisor Function

Here's something I've been thinking about lately: patterns in the prime factorizations of natural numbers. Every natural number greater than 1 can be written in exactly one way as a product of powers of primes:

$\displaystyle n = p_1^{a_1}p_2^{a_2}\dotsm p_k^{a_k}$

This factorization is also linked to the number of divisors of n. If we want to know how many divisors n has, we are really asking how many numbers can be built from the same primes as n, with no larger exponents. If some d can be written as:

$\displaystyle d = p_1^{b_1}p_2^{b_2}\dotsm p_k^{b_k}, \hspace{10 mm} 0 \le b_i \le a_i$

then d clearly divides n, and every divisor of n can be written this way. So how many such d can there be? The only things to change are the exponents $\displaystyle b_i$, and these can each be any number from $\displaystyle \{0, 1, 2, \ldots , a_i\}$. This gives $\displaystyle a_i + 1$ choices for each exponent, so the total number of divisors of n is

$\displaystyle \prod_{i=1}^k \left(a_i + 1 \right)$

Before I go on, I should warn you that the following are just some observations on the behavior of factorizations and numbers of divisors. Perhaps someone reading this will have some insight, or has already investigated this. I have not managed to say anything meaningful about the divisor function, but I do find it linked to an expression that I think is interesting. The reason is that there are patterns in the values that these exponents tend to take. For instance, if I pick a number at random and ask you to guess the exponent of some prime in its factorization, what should you guess? It turns out that there is an elegant form for the expected value of these exponents.

Start with 2, and let's say we're picking from all natural numbers. Let $\displaystyle \mu(p)$ be the expected value of the exponent of p. Even numbers are divisible by 2, so the exponent of 2 for half of the numbers is at least 1. That gives $\displaystyle \mu(2) \ge 1/2$. One fourth of numbers are divisible by 4, so they each contribute a 2 to the average. Think of them as contributing 1 along with the general even numbers, then an extra 1. Then one eighth of all numbers contribute again, and one sixteenth contribute a fourth time, etc. In the end we have

$\displaystyle \mu(2) = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \dotsb = 1$

This can be done just the same with the other primes:

$\displaystyle \mu(p) = \sum_{k=1}^\infty \frac{1}{p^k}$

This can be simplified in a cute way if you'll trust me that it's convergent. Multiply the sum by p to get

$\displaystyle p\mu(p) = 1 + \frac{1}{p} + \frac{1}{p^2} + \dotsb = 1 + \mu(p)$

$\displaystyle (p - 1)\mu(p) = 1$

$\displaystyle \mu(p) = \frac{1}{p-1}$

So the average exponent for each prime p is $\displaystyle \frac{1}{p-1}$. I would end that sentence with an exclamation point, but I have gotten in trouble before with the whole excitement vs. factorial issue.

Well, what can you do with these exponents? Back to the divisor function. Since we're picking over all numbers, we can treat the exponents of different primes like independent variables (if we were picking under, say, 100, an exponent of 2 like 6 would certainly affect the possibilities for 3). For independent variables, expected values are preserved over addition and multiplication. So we can plug our expected exponents into the divisor formula to get an expected value for the number of divisors:

$\displaystyle \prod_{p \in \mathbb{P}} \left(1 + \frac{1}{p-1} \right)$

Wouldn't it be a kick if it were convergent? Well, it's divergent.
In fact, the expanded sum contains every term of the form $\displaystyle \frac{1}{p-1}$, and the sum of these is certainly at least the sum of all the terms $\displaystyle \frac{1}{p}$, which by no proof of my own is divergent. So this is most definitely divergent. So what does that get us? The average value of the divisor function grows with n, very slowly but without limit. And it is related to a very pretty product over primes. If you have some insight on the topic, leave a reply!
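A quick numerical sanity check of both claims (this script is my own sketch, not part of the post): the average exponent of $p$ among $1,\dots,N$ should be close to $1/(p-1)$, and the average number of divisors should keep growing, roughly like $\log N$.

```python
N = 10**6

def avg_exponent(p, N):
    # Average exponent of the prime p over 1..N:
    # the sum of floor(N / p^k) over k >= 1 counts the total multiplicity of p.
    total, q = 0, p
    while q <= N:
        total += N // q
        q *= p
    return total / N

for p in (2, 3, 5, 7):
    print(p, round(avg_exponent(p, N), 4), round(1 / (p - 1), 4))

# Average number of divisors of 1..N, using sum_{n<=N} d(n) = sum_{d<=N} floor(N/d).
avg_divisors = sum(N // d for d in range(1, N + 1)) / N
print(avg_divisors)   # about log(N) + 2*gamma - 1, so it grows without bound
```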
http://mathoverflow.net/questions/20882/most-unintuitive-application-of-the-axiom-of-choice/70387
## Most ‘unintuitive’ application of the Axiom of Choice? ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) It is well-known that the axiom of choice is equivalent to many other assumptions, such as the well-ordering principle, Tychonoff's theorem, and the fact that every vector space has a basis. Even though all these formulations are equivalent, I have heard many people say that they 'believe' the axiom of choice, but they don't 'believe' the well-ordering principle. So, my question is what do you consider to be the most unintuitive application of choice? Here is the sort of answer that I have in mind. An infinite number of people are about to play the following game. In a moment, they will go into a room and each put on a different hat. On each hat there will be a real number. Each player will be able to see the real numbers on all the hats (except their own). After all the hats are placed on, the players have to simultaneously shout out what real number they think is on their own hat. The players win if only a finite number of them guess incorrectly. Otherwise, they are all executed. They are not allowed to communicate once they enter the room, but beforehand they are allowed to talk and come up with a strategy (with infinite resources). The very unintuitive fact is that the players have a strategy whereby they can always win. Indeed, it is hard to come up with a strategy where at least one player is guaranteed to answer correctly, let alone a co-finite set. Hint: the solution uses the axiom of choice. - 15 I would guess that Banach-Tarski is about as unintuitive a result as there is. – Steve Huntsman Apr 10 2010 at 1:24 15 The Banach-Tarski paradox loses some its counterintuitive appeal once you know how it works, but then the fact that it doesn't work in for disks in the plane becomes a little shocking. – François G. Dorais♦ Apr 10 2010 at 1:40 3 True, but I could say that any apparently counterintuitive theorem loses either the property of counterintuitiveness or of truth once you know how it either works or doesn't, respectively. – Steve Huntsman Apr 10 2010 at 1:49 3 Not Banach-Tarski, its ramifications never stopped surprising me. – François G. Dorais♦ Apr 10 2010 at 1:57 5 This blog post is somewhat relevant: cornellmath.wordpress.com/2007/09/13/…. I especially like Terence Tao's comment. – Qiaochu Yuan Apr 10 2010 at 2:06 show 9 more comments ## 15 Answers I have enjoyed the other answers very much. But perhaps it would be desirable to balance the discussion somewhat with a counterpoint, by mentioning a few of the counter-intuitive situations that can occur when the axiom of choice fails. For although mathematicians often point to what are perceived as strange consequences of AC, many of the situations that can arise when one drops the axiom are also quite bizarre. • There can be a nonempty tree $T$, with no leaves, but which has no infinite path. That is, every finite path in the tree can be extended one more step, but there is no path that goes forever. • A real number can be in the closure of a set $X\subset\mathbb{R}$, but not the limit of any sequence from $X$. • A function $f:\mathbb{R}\to\mathbb{R}$ can be continuous in the sense that $x_n\to x\Rightarrow f(x_n)\to f(x)$, but not in the $\epsilon\ \delta$ sense. • A set can be infinite, but have no countably infinite subset. 
• Thus, it can be incorrect to say that $\aleph_0$ is the smallest infinite cardinality, since there can be infinite sets of incomparable size with $\aleph_0$. (see this MO answer.) • There can be an equivalence relation on $\mathbb{R}$, such that the number of equivalence classes is strictly greater than the size of $\mathbb{R}$. (See François's excellent answer.) This is a consequence of AD, and thus relatively consistent with DC and countable AC. • There can be a field with no algebraic closure. • The rational field $\mathbb{Q}$ can have different nonisomorphic algebraic closures (due to Läuchli, see Timothy Chow's comment below). Indeed, $\mathbb{Q}$ can have an uncountable algebraic closure, as well as a countable one. • There can be a vector space with no basis. • There can be a vector space with bases of different cardinalities. • The reals can be a countable union of countable sets. • Consequently, the theory of Lebesgue measure can fail totally. • The first uncountable ordinal $\omega_1$ can be singular. • More generally, it can be that every uncountable $\aleph_\alpha$ is singular. Hence, there are no infinite regular uncountable well-ordered cardinals. - 2 My personal favorite is the partition of $\mathbb{R}$ described in this MO post - mathoverflow.net/questions/22927/… – François G. Dorais♦ Jul 15 2011 at 13:54 1 Counterpoint is appreciated. – Jim Conant Jul 15 2011 at 14:14 1 Asaf, I was referring to the distinction between "minimal" and "smallest" in a partial order (without AC the natural order on cardinalities may not be linear). In this terminology, it can be incorrect to say that $\aleph_0$ is the (or even a) smallest infinity, since when there are infinite Dedekind finite sets, then $\aleph_0$ is merely minimal as opposed to smallest among infinite cardinalities. – Joel David Hamkins Jul 15 2011 at 14:44 6 Another example, due to Läuchli, is that $\mathbb Q$ can have non-isomorphic algebraic closures. Here, an algebraic closure of a field $K$ is defined to be an algebraically closed extension of $K$ that contains no algebraically closed subfield. In the absence of choice, you can have an algebraic closure of $\mathbb Q$ that is a countable union of finite sets but is not itself countable. – Timothy Chow Jul 15 2011 at 18:35 4 Resurrecting an old answer here: It should be noted that many of these can be ruled out by resorting to countable AC or dependent choice, which avoid many of the strange consequences of full AC. For example, "A set can be infinite, but have no countably infinite subset", is ruled out by countable AC. – Chad Groft Oct 14 2011 at 1:49 show 2 more comments ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. I highly recommend reading this paper by Chris Hardin and Al Taylor, A Peculiar Connection Between the Axiom of Choice and Predicting the Future, as well as this shorter piece by Mike O'Connor Set Theory and Weather Prediction. - NB. The O'Connor piece covers what Tony was talking about with hats. – Steve Huntsman Apr 10 2010 at 1:52 (So do Hardin and Taylor.) – François G. Dorais♦ Apr 10 2010 at 1:59 1 Thanks Francois, the weather prediction angle is quite interesting. – Tony Huynh Apr 10 2010 at 3:01 Maybe this is not the kind of application you have in mind, but a well-ordering of the reals seems highly counterintuitive to me. 
I would argue that well-ordering of $\mathbb{R}$ is the essence of many of the other counterintuitive results that have been mentioned. - 2 Some years ago I also thought that well-ordering the reals is unintuitive, but now I know better: When you speak of the reals, you think of the structure of a complete ordered field. But well-ordering only refers to the size of the set. It's merely that first it's hard to imagine a uncountable well-ordering at all. – Martin Brandenburg Apr 10 2010 at 9:33 12 I'm not concerned with the algebraic structure of the reals, only the size of the set. Nor do I have trouble with uncountable well-orderings per se; $\omega_1$ is fairly intuitive. I just find it hard to claim intuition about well-ordering the continuum when the known independence results for ZF are so stacked against it. How can its well-ordering be intuitive when we can't say what its ordinal number is? And when it is consistent for the continuum to have no well-ordering? – John Stillwell Apr 10 2010 at 11:45 John, does this mean that you find the continuum hypothesis highly counterintuitive? And does the Banach-Tarski paradox follow from the well-orderability of the reals? – Timothy Chow Jul 16 2011 at 23:11 Timothy, I'm torn about CH. I'd like it to be true, for the sake of simplicity, but it seems too simple to be true. As for Banach-Tarski, I think it follows from well-ordering of $\mathbb{R}$. One has to choose from subsets of $\mathbb{S}^2$, but I presume there is a definable bijection between $\mathbb{R}$ and $\mathbb{S}^2$. – John Stillwell Jul 17 2011 at 1:00 I don't claim to have much intuition about what a well-ordering of $\mathbb{R}$ would look like (whatever that means), but the existence (even in the absence of AC) of a whole range of well-orderable uncountable sets makes it fairly believable that there's a bijection from $\mathbb{R}$ to one of them. Banach–Tarski, on the other hand, offers me no intuition even while I stare at its proof. – Z Norwood Nov 13 2011 at 16:46 The fact that there exist non-measurable sets is highly counter-intuitive; the reason we don't find it so is that we've all been conditioned from day 1 to do measure theory very carefully, and define Borel sets, measurable sets, etc, so we all know that non-measurable sets exist because what would be the point of doing it all so carefully otherwise. At high school we were all taught that the probability of an event occurring was "do it a million times, count how often it happened, divide by a million, and now let a million tend to infinity". And no-one thought to ask "what if this process doesn't tend to a limit?". I bet if anyone asked their teacher they'd say "well it always tends to a limit, that's intuitively clear". But am I right in thinking the following: if we take a subset $X$ of [0,1] with inner measure 0 and outer measure 1, and we keep choosing random reals uniformly in [0,1] and asking whether they land in $X$, and keep a careful table of the result, then the number of times we land in $X$ divided by the number of times we tried just oscillates around between 0 and 1 without converging? That is fundamentally counterintuitive and in some sense completely goes against the informal (non-rigorous) training that we all got in probability at high school. [if I've got this right!] - 2 Suppose you construct a non-measurable set of 01-sequences as follows. Call two sequences equivalent if they differ in finitely many places. 
Choose a sequence from each equivalence class, and then let your set be the set of all sequences that have even symmetric difference with the chosen representative of its equivalence class. It feels as though a random sequence should have probability 1/2 of belonging to the set. And if you change the definition to "symmetric difference has size congruent to 0 mod 100, it feels as though the probability should be 1/100. I'm confused. – gowers Apr 10 2010 at 16:45 4 Another point I wanted to make was that if your set is chosen using AC, then it's not really clear what it means to choose a random real and ask whether it lands in the set. After all, you haven't said what the set is. So I'm not sure we would get this oscillatory behaviour because I'm not sure it's possible to make mathematical sense of the experiment in the first place. I do like it though ... – gowers Apr 10 2010 at 17:25 2 The fact that a nonprincipal ultrafilter on $\omega$ is weaker than a well-ordering of $\mathbb{R}$ has been confirmed by Simon Thomas here: mathoverflow.net/questions/21031/… – John Stillwell Apr 12 2010 at 23:19 4 I agree with Ron that the answer is wrong, but I don't agree with his reason. First, one can adjoin random reals (in the sense of Solovay) to models that satisfy choice, and the resulting models will still satisfy choice. What one cannot reasonably do is to ask whether such a random real belongs to a set $X$ from the ground model. If one takes that question literally, the answer is no; ground-model sets have only ground-model members. Some ground-model sets, Borel sets, have canonical extensions $X'$ to the forcing model, and one can ask (see next comment) – Andreas Blass Jul 14 2011 at 18:39 4 whether a random real is in $X′$, but that doesn't work for "wild" sets $X$. Second, to say that a random real has some property (like being in X) doesn't require Solovay forcing. (People talked sensibly about properties of random reals before Solovay came along). In particular, Kevin's answer can be interpreted as saying that the set $Z$ of those sequences $z$ of reals for which the proportion among the first $n$ terms in $z$ that are in $X$ has liminf=0 and limsup=1 as $n\to\infty$ has measure 1. But I'm fairly confident that this is not the case; $Z$ is also non-measurable. – Andreas Blass Jul 14 2011 at 18:45 show 8 more comments There can be graphs all of whose cycles have even length and whose chromatic number is greater than two. In fact, let $G$ be the graph whose vertices are the real numbers, with $x$ and $y$ adjacent if $|x-y|=\sqrt{2}+r$, where $r$ is rational. Then $G$ has only even length cycles. Assuming that every subset of $\mathbb{R}$ is measurable (which is consistent with ZF), then the chromatic number of $G$ is uncountable. This is a result of Shelah and Soifer. If we assume the Axiom of Choice, then the chromatic number of $G$ is two. - 2 A graph with zero edges is bipartite but its chromatic number is not 2 :-) That nitpick aside, it's a neat example, though we must be careful to define "bipartite" as "no odd cycles", not the definition suggested by "bipartite = two parts" which is immediately equivalent to "chromatic number at most 2". – Noam D. Elkies Jan 3 2012 at 5:00 @Noam, yes I realized this also and have changed the wording. – Richard Stanley Jan 3 2012 at 13:46 I find this result quite intuitive. What intuition is it supposed to violate? 
“Every nice property of finite graphs must fail for infinite graphs?” Anyway, the wording is misleading: since the question asks for unintuitive consequences of the axiom of choice, the first sentence should read “There is no graph ...”, otherwise it looks like the pathological example is constructed using choice. – Emil Jeřábek Jan 3 2012 at 15:29

One counterintuitive aspect of the axiom of choice is a theorem of Diaconescu and independently Goodman and Myhill that, in some constructive set theories that don't begin with the law of the excluded middle, the axiom of choice implies the law of the excluded middle. But in other systems such as Martin-Löf type theory, the corresponding form of the axiom of choice is completely constructive and does not imply the law of the excluded middle. -

The Axiom of Determinacy (AD) fails. What that means: Partition the set ${}^\omega\omega$ into two sets $S$ and $T$, and think of this partition as a game $(S, T)$ with two players. To play, player 1 picks a natural number $a_0$, then player 2 picks $b_0$ (as a function of $a_0$), then player 1 picks $a_1$ (as a function of $b_0$), then player 2 picks $b_1$ (as a function of $a_0$ and $a_1$), and so on until $a_n$ and $b_n$ are selected for all $n \in \omega$. Then the sequence $a_0, b_0, a_1, b_1, \dots$ is either in $S$ (in which case player 1 wins), or in $T$ (in which case player 2 wins). The game $(S, T)$ is determined if either player 1 or player 2 has a winning strategy, i.e., if there are functions $f_n\colon {}^n\omega \to \omega$ where choosing $a_n = f_n(b_0, \dots, b_{n-1})$ guarantees player 1 victory, or similarly for player 2. (We can't have both.) AD is just the statement that every such game is determined, which is false in ZFC. As with most of the weird examples, the undetermined game is constructed with a well-ordering of $\mathbb{R}$. What makes this so unintuitive to me is that both AC and AD are generalizations of statements that are easily seen for finite objects. (Any finite game, or even any game with finite depth, is determined, by an easy induction on the depth.) There are apparently many set theorists that agree with this assessment, since they try to rescue AD as relativized to $L(\mathbb{R})$. That the relative consistency strength of this statement is equivalent to that of large cardinals is considered good evidence that those large cardinals are, in fact, consistent. More precisely, ZF + AD is consistent iff ZFC + "there are infinitely many Woodin cardinals" is consistent, and $\mathrm{AD}^{L(\mathbb{R})}$ is outright provable in ZFC + "there is a measurable cardinal which is greater than infinitely many Woodin cardinals". -

1 I am unfamiliar with the notation `${}^n\omega$`. What does it mean? – Harald Hanche-Olsen Apr 10 2010 at 13:51 1 Just the set of $n$-tuples in $\omega$. It's usually written as $\omega^n$, but that notation could also refer to ordinal exponentiation, which is not quite the same. – Chad Groft Apr 10 2010 at 16:12 2 We want to have the cake and eat it too :D. – Tran Chieu Minh Apr 10 2010 at 16:13 7 @Tran: No problem, just apply the Banach-Tarski theorem to your cake. – Andrej Bauer Jul 14 2011 at 21:43 1 Chad, to support your case for the naturality of AD a bit more: AD is precisely the statement $\neg\forall x_0\exists x_1\forall x_2\exists x_3\cdots A(\vec x)\iff \exists x_0\forall x_1\exists x_2\forall x_3\cdots \neg A(\vec x)$, an infinitary deMorgan law. The truth of the finitary deMorgan law constitutes a proof of determinacy for finite games.
– Joel David Hamkins Jul 15 2011 at 13:24 show 1 more comment The most destructive aspect of uncountable choice is that it conflicts with random choice. With uncountable choice, any object which is constructed using randomness, like a random walk, a random field, or even a randomly picked real number, cannot exist, because there are sets which it cannot consistently be assigned membership to. In order to define what it means to have a random walk, or a random graph, or a random infinite Ising model configuration, or whatever, you need to define what it means to have an infinite sequence of random coin flips. The result can be encoded as a real number, the binary digits of which are the results of the coin flip, and if this real number really exists, as an actual mathematical object, then this object either belongs to any given set S, or it doesn't. It is so intuitive to think of random objects this way, that they are often illustrated with pictures, showing us what they look like (see http://en.wikipedia.org/wiki/Wiener_process for a picture of a "realization" of a random walk). These pictures do not signify anything when the axiom of choice is present. The reason is that once you have actual random objects, for which you can assign membership to any set S, then you can define the probability of landing in S by choosing random objects again and again, and asking what fraction of the time you land in S. This always converges, because given any long finite sequence of 1's and 0's which represent independent random events, any permutation of the 1's and 0's has the same likelihood. This means that it is probability 0 that the seqeunce will oscillate in any way, and with certainty it will converge to a unique answer. This answer is the measure of the set S, and every set is measurable in this universe. This makes analysis much easier, because everything is integrable, measurable, etc. This is so intuitive, that if you look at any probability paper, they will illustrate with random objects without hesistation, implicitly denying choice. (I realize that this answer overlaps with a previous one, but it corrects a serious central mistake in the former.) - 2 Ron, I don't understand how you conclude that "this means that it is probability 0 that the sequence will oscillate in any way." Could you expand on this? – François G. Dorais♦ Jul 14 2011 at 16:02 If you have a sequence of independent 0-1 events which are in every way identical, then any permutation of the 0s and 1s is just as likely to occur as any other. In order for the limit not to exist, there must be long segregated 0's followed by segregated 1's. But any segregated sequence of length N is segregated in only a negligible fraction of all permutations of this sequence, and the 0s and 1s can come in any permutation. There is no way in the world that an independent sequence of identical probabilistic 0-1's can fail to converge. – Ron Maimon Jul 14 2011 at 23:09 1 François, I think Ron is saying that classically, if $S$ is a non-measurable subset of $[0,1]$ and $X$ is a random variable that is uniformly distributed in $[0,1]$, then we can't sensibly speak of "the event $X \in S$"; $a fortiori$, any attempt to formulate a strong law of large numbers for $S$ (by taking, as Kevin Buzzard suggested, a sequence $(X_n)$ of i.i.d. random variables, noting when "the event $X_n \in S$ happens", and seeing what almost surely happens to the proportion of successes as $n\to\infty$) fails. But... 
– Timothy Chow Jul 15 2011 at 0:51 1 ...if we discard the axiom of choice in favor of "all sets are Lebesgue measurable," then this sort of reasoning becomes legitimate for arbitrary sets $S$. Attempts to construct a counterexample by finding "bad" sequences of successes and failures whose proportion of successes oscillates wildly won't work, because such bad sequences will occur with probability 0. – Timothy Chow Jul 15 2011 at 0:55 1 @Chow-- Thank you for stating it so clearly. I wanted to also emphasize that this is really the only counterintuitive aspect of uncountable choice, all the other examples are special cases. For example, the "predict the future/hats" business is counterintuitive because our intuition suggests that we can choose an infinite sequence of future-events/hat-colors at random. The axiom of choice forbids us from choosing at random. We must choose in an L-like model where choice holds. Similarly, random picks forbid Hamel bases for R over Q and for well-ordering of R, so this is the serious conflict. – Ron Maimon Jul 15 2011 at 3:09 I think that it might not be the most unintuitive but the fact there exists sets which intersect with every perfect subset (but contains none of them!) of the reals is fairly bizarre. - Using AC you can construct a (non-continuous) function that intersects any continuous function on any open interval (or even on any set with positive measure). - Not an answer but I think that AC itself is itself not intuitive if we look at it closely enough. The reason we think that AC is intuitive is because we have its counterpart for a finite collection, and we assume that an infinite collection should behave in the same way a finite collection does. That later assumption seems to require some faith, if we don't want to say entirely baseless. - Every Vector Space has a Hamel basis. This is something that follows from the Axiom of Choice (though it is usually proved by Zorn's Lemma, which is equivalent to the AC). From this follows that $(\mathbb{C},+)$ is isomorphic to $(\mathbb{R},+)$, by considering $\mathbb{C}$ and $\mathbb{R}$ as $\mathbb{Q}$-Vectorspaces. I found this quite unintuitive. - 2 This doesn't strike me as any more counter-intuitive than the fact that [0, 1) and (0,1) have the same cardinality... – Simon Rose Jul 16 2011 at 17:18 Lebesgue measure exists for every Borel set, and is countably additive. I've always found it more surprising that our fuzzy intuitive ideas of area and volume can be pushed as far as they can than that they break when you push even further. - Since this question has been resurrected... One of my favorite things about the hat-guessing problem in the question is what happens when you think about it probabilistically. Let's say the hats are labeled by an adversary, whose goal is to make the players lose. Let's also say the number of players is countable, which should only make the game easier to win. A natural strategy for the adversary would be to choose the numbers randomly: say the $i$th hat is labeled with a random value $X_i$, where the $X_i$ are independently chosen from some continuous probability distribution. Let $Y_i$ be the $i$th player's guess. $Y_i$ can depend on all the $X_j$, $j \ne i$, but not on $X_i$, so clearly $Y_i$ is independent of $X_i$. Thus for each $i$, $P(Y_i = X_i) = 0$ since $X_i$ has a continuous distribution. So each player guesses correctly with probability 0. By countable additivity, almost surely, no player guesses correctly. 
So this is a "proof" that there can't be a strategy that guarantees that even one player guesses correctly. Can you spot the flaw? Rot13: Hfvat gur nkvbz bs pubvpr fgengrtl, gur qrcraqrapr bs gur Lf ba gur Kf vf abg zrnfhenoyr, fb va snpg gur Lf ner abg enaqbz inevnoyrf naq bar pnaabg qb cebonovyvgl jvgu gurz. - 2 Rather than say "flaw", as if the proof is inherently wrong, you can ask "can you spot where choice ruins this probablistic argument?". Proofs like these are perfectly OK in a Solovay universe, and probability working correctly can be argued to be vastly more important than ineffable Hamel bases/well-orderings. – Ron Maimon Jul 15 2011 at 3:32 To me, the only "unintuitive" applications of uncountable choice is when it turns up in physics. The only case I know where this happens is in the maximal-extension theorem of Choquet-Bruhat (QM does not use uncountable choice). This uses local extension properties of solutions to General Relativity to prove, using Zorn's lemma, that there exists a maximal extension. The use of axiom of choice is, I think, essential. I couldn't see how to sidestep it when I read the paper a long time ago (somebody please correct me if I am wrong). What is the axiom of choice doing in physics? I believe that it is entirely due to the issue of double-sided maximally extended black holes. A maximal extension of General Relativity can contain "wormhole" like solutions (for example, a charged black holes with two patches connected by an interior region), and there can be countably many such bridges in any asymptotically flat patch. But each of these branches can connect you to another different asymptotically flat region, which might have its own countably infinite collection of bridges to other flat regions. The resulting spacetime is like a tree with countably many branches at each node, where each node represents an asymptotically flat spacetime, and each edge is a double-sided maximally extended black hole bridging the two nodes. Such a tree can have infinite depth, and you must extend the solution to the whole tree. It seems intuitive that to patch the solutions together you need to extend the local solutions over continuum many nodes, and since GR is hyperbolic, you will get to make some arbitrary choices at each extension step. The dependence on choice then simply shows how unreasonable the maximally extended model of General Relativity is for physics. - Since this got downvoted, I started to worry that it might be incorrect. In particular, if one considers only regions which are a bounded geodesic distance from a given point, can the tree have infinitely many branchings? But it can, because the scale invariance allows ever smaller maximally extended black holes along a finite length geodesic to accumulate with nothing going wrong at the accumulation point. – Ron Maimon Jul 16 2011 at 23:35 Perhaps this should be a question: is the maximal extension of Chonquet Bruhat always separable? The answer I give says no, but I am doubting now. – Ron Maimon Jul 17 2011 at 13:40 The use of Zorn's lemma is not so shocking once you realise that causal inclusion gives a partial order on the set of all past sets. BTW, you seem to be talking about two different notions of maximal extensions here. The existence theorem of Choquet-Bruhat discusses maximally Globally Hyperbolic extensions, whereas your examples about black holes are talking about Analytic extensions. They are quite different beasts. 
– Willie Wong Jan 3 2012 at 16:20 @WillieWong: They are not different: the black holes are not analytic extensions, this is just a mischaracterization; they are globally hyperbolic extensions when you have a collapse, and there is no analyticity required (although people say so a lot for no good reason). – Ron Maimon Oct 8 at 18:26
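For reference, here is a sketch of the choice-based strategy behind the hat puzzle in the question at the top of this thread, which is also what the probabilistic "disproof" a few answers above runs into (this summary is an addition; the argument itself is the standard one). Call two assignments of reals to players equivalent if they differ on only finitely many players, and use the axiom of choice to fix, in advance, one representative of each equivalence class. In the room, each player sees every hat except their own; this already determines the equivalence class of the true assignment, so each player announces the value that the chosen representative takes at their own position. Since the true assignment differs from the representative in only finitely many places, only finitely many players guess incorrectly. The resulting guessing functions are badly non-measurable, which is exactly where the probabilistic argument (and its rot13 spoiler) breaks down.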
http://mathoverflow.net/revisions/15666/list
Return to Question

4 added 479 characters in body

If $K$ is a finite extension of $\mathbb Q_p$ for some prime number $p$ (possibly we need $p \neq 2$), $L_1$ and $L_2$ are totally ramified abelian extensions of $K$, and $\pi_1, \pi_2$ are respectively the uniformizers that generate each field, is it true that $L_1 L_2$ is totally ramified iff $Nm_{L_1 / K}(\pi_1)$ and $Nm_{L_2 / K}(\pi_2)$ differ by an element in the intersection of the two norm groups? Is there a proof of this result (or the correct version of the result) without employing big tools? Add at 6:49 pm 18th Feb: From Class Field Theory we know that there is one maximal totally ramified abelian extension of a local number field $K$ corresponding to each uniformizer, so I would expect that some version of the above statement is true. At least when $Nm_{L_1 / K}(\pi_1)=Nm_{L_2 / K}(\pi_2)=x$, $L_1 L_2$ is totally ramified, because both are contained in $K^{ram}_x$.

3 Fixed title

2 edited tags

1 When is the composition of two totally ramified extensions totally ramified?

If $L_1$ and $L_2$ are totally ramified abelian extensions of $K$, and $\pi_1, \pi_2$ are respectively the uniformizers that generate each field, is it true that $L_1 L_2$ is totally ramified iff $Nm_{L_1 / K}(\pi_1)$ and $Nm_{L_2 / K}(\pi_2)$ differ by an element in the intersection of the two norm groups? Is there a proof of this result (or the correct version of the result) without employing big tools?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9365319609642029, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/163700/regular-local-ring-and-a-prime-ideal-generated-by-a-regular-sequence-up-to-radic/177684
# Regular local ring and a prime ideal generated by a regular sequence up to radical

Let $R$ be a regular local ring of dimension $n$ and let $P$ be a height $i$ prime ideal of $R$, where $1< i\leq n-1$. Can we find elements $x_1,\dots,x_i$ such that $P$ is the only minimal prime containing $x_1,\dots,x_i$? - I've given this a fair bit of thought, and the best I can do is give you an upvote. – Alex Becker Jul 7 '12 at 8:21 I also thought about it. I had asked a similar question before for which I got a counterexample, so I thought there should be a counterexample for this too, but I can't find it, so now I am not even sure if this is true or false; perhaps it should be false, but I don't know. Thanks Alex for trying. – messi Jul 7 '12 at 10:55

## 1 Answer

Since $P$ has height $i$, the elements $x_1,...,x_i$ must be a regular sequence. Thus what you are asking is whether $V(P)$ is a set-theoretic complete intersection. This is a notoriously difficult question in general. For example, it is not known for curves in $\mathbb{A}^3_{\mathbb{C}}$. In general, the answer is NO. The simplest example is perhaps $R=\mathbb C[x_{ij}]_{1\leq i\leq 3,\,1\leq j\leq 2}$ and $P$ generated by the $2$ by $2$ minors. To prove that $P$ is not generated up to radical by $2$ elements one has to show that the local cohomology module $H_P^3(R)$ is nonzero (basic properties of local cohomology dictate that $H_I^n(R)=0$ if $n$ is bigger than the number of elements that generate $I$ up to radical; that is because local cohomology can be computed with the Čech complex on these generators). Even then, the cleanest way to show $H_P^3(R)\neq 0$ involves a topological argument (the non-vanishing is not true in characteristic $p>0$, by the way). If you want to know more, the key words are: set-theoretic complete intersection, analytic spread, local cohomology. - Thank you, I just noticed the answer right now. – messi Aug 2 '12 at 5:08
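To make the vanishing principle quoted in the answer explicit (this is just the standard Čech description of local cohomology, restated here for convenience; it is not part of the original thread): if $I$ is generated up to radical by $f_1,\dots,f_t$, then $H_I^\bullet(R)$ is the cohomology of the Čech complex $$0 \to R \to \bigoplus_i R_{f_i} \to \bigoplus_{i<j} R_{f_i f_j} \to \cdots \to R_{f_1\cdots f_t} \to 0,$$ which has length $t$, so $H_I^n(R) = 0$ for every $n > t$. In the answer's example this is exactly why $H_P^3(R)\neq 0$ rules out generating $P$ up to radical by two elements.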
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 26, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9403169751167297, "perplexity_flag": "head"}
http://crypto.stackexchange.com/questions/tagged/random-number-generator?sort=faq&pagesize=30
# Tagged Questions

Creation of (real or pseudo) random numbers (or bits).

### How to fairly select a random number for a game without trusting a third party? (4 answers, 644 views)
Several people are playing a game with random events and require a way to produce a random number. (Such as dice rolls or a lottery.) Can this be done such that each player has the power to be ...

### Creating a small number from a cryptographically secure random string (2 answers, 304 views)
I'm trying to figure out the best way to generate a cryptographically secure random number between 0 and 200 from a cryptographically random string of bytes (i.e. read from /dev/urandom or some such) ...

### Predicting values from a Linear Congruential Generator (2 answers, 1k views)
I have learnt that Linear Congruential Random Number Generators are not cryptographically secure - my understanding is that given an LCG of the form: ...

### Correct way to map random number to defined range? (1 answer, 121 views)
Say that we have a secure random number generator that outputs 32-bit random numbers, so its output is a true random number between 0 and a MAX. What is the best way to map this random number to a ...

### Blum Blum Shub vs. AES-CTR or other CSPRNGs (2 answers, 534 views)
Following on from D.W.'s comments on a previous question, what properties does Blum Blum Shub have that make it better / worse than other PRNGs? Are there significant implementation difficulties or ...

### Properties of PRNG / Hashes (4 answers, 608 views)
There are a lot of quite elaborate PRNGs out there (e.g. Mersenne Twister et al.), and they have some important properties, especially when it comes to crypto applications. So, I was wondering how ...

### What tests can I do to ensure my PRNG is working correctly? (3 answers, 381 views)
In the past I have used the Chi-squared test to check the statistical randomness of my generator. Is this a good test to use? Are there other tests?

### Is there some way to generate a non-predictable random number in a decentralised network? (3 answers, 248 views)
Is there a way to generate a random number with given restrictions: It will be used in a decentralised network with a big number of peers (no central authority to generate it). Its generation should ...

### Is the following statement about PRG true or false? (2 answers, 899 views)
Is the following statement true? If $G: \{0,1\}^k \rightarrow \{0,1\}^n$ is a PRG, then so is $G':\{0,1\}^{k+l} \rightarrow \{0,1\}^{n+l}$ defined as $$G'(x.x')=G(x).x'$$ where $x \in \{0,1\}^k$ and ...

### Are there secure stream ciphers that cannot be parallelized? (1 answer, 180 views)
Are there any stream ciphers (or deterministic random number generators, which should work as well I guess?) that cannot be parallelized? So for example if I seed it with a specific value, and then ...

### Feedback on rolling my own entropy gatherer (2 answers, 194 views)
First of all, I don't recommend doing this. This was something I created when I didn't know better and didn't have a solution available to me. Long ago I created my own entropy gatherer for a ...

### PRNG taking advantage of very large seed (3 answers, 225 views)
Can anyone suggest a good (CS)PRNG algorithm that takes advantage of having a very large (ideally arbitrarily large) seed? I'd like to use several kilobytes, perhaps several hundred kilobytes, of ...

### Avalanche noise RNG for one-time pad use (2 answers, 210 views)
I came across this little HRNG widget and was really intrigued as I have been looking for a decent but affordable source for truly random bits to use in a one-time pad. The question is, would a HRNG ...
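Several of the questions above (mapping a 32-bit random word to a range, deriving a number between 0 and 200 from /dev/urandom bytes) revolve around the same pitfall: naively reducing random bytes modulo the range size introduces bias. The following is only a minimal sketch of the standard rejection-sampling workaround, not a summary of the linked answers; the function name and the choice of `os.urandom` as the byte source are illustrative assumptions.

```python
import os

def unbiased_randint(upper):
    """Return a uniform integer in [0, upper) using rejection sampling.

    Bytes are read from os.urandom; raw values falling in the biased tail of
    the byte range are discarded so every residue class is equally likely.
    """
    nbytes = (upper.bit_length() + 7) // 8
    limit = (256 ** nbytes // upper) * upper   # largest multiple of `upper` that fits
    while True:
        value = int.from_bytes(os.urandom(nbytes), "big")
        if value < limit:                      # accept only the unbiased region
            return value % upper

# Example: a uniform value in [0, 201), as asked in one of the questions above.
print(unbiased_randint(201))
```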
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9248200058937073, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/123140/artin-approximation-theorems-over-non-regular-rings-non-noetherian-rings
Artin approximation theorems over non-regular rings/non-Noetherian rings

1. In Artin1968 he considers $\underline{analytic}$ equations, but over the ring `$R=k\{x_1,..,x_n\}$`. In Artin1969 he works with `$R=k\{x_1,..,x_n\}/I$`, not necessarily regular, but considers $\underline{polynomial}$ equations. Is there some version like this: "Let $R$ be a local Noetherian Henselian ring (not necessarily regular) over a normed field. Given an arbitrary (possibly countable) system of analytic equations over $R$, with a solution over the completion of $R$, there exists also a solution in $R$, sufficiently close to the formal solution"?

2. What is known for non-Noetherian rings? e.g. for $C^\infty$, $C^r$? (Actually, for $C^\infty$ I learned about one approximation theorem, unpublished in the old USSR times...)

- For (1), it still seems reasonable only to consider finitely many analytic equations (since the local ring is noetherian, after all), and at least in the non-archimedean case I believe there is a paper of Siegfried Bosch on this generalization of Artin's result (i.e., considering analytic equations over $R$, not just polynomial equations over $R$). I don't remember the exact title, but if you search for papers of Bosch with "Artin" or "approximation" in the title then you should find it. – kreck Feb 27 at 22:36 Probably I am missing something, but in the paper "A rigid analytic version of M. Artin's theorem on analytic equations" he seems to consider polynomial equations. At least this is the statement on page 1. – Dmitry Kerner Feb 28 at 8:37 @Dmitry: My memory was a bit faulty, sorry. Looking back at that paper (which is indeed the one I had in mind), the 2nd paragraph of section 2 indicates that one can establish the analogues of what Artin proved in his earlier paper(s), but not something stronger. – kreck Feb 28 at 12:02

1 Answer

Concerning (2), here are some references: For certain subrings of $R[[T_1,\dots,T_N]]$ where $R$ is a complete valuation ring of rank 1, see: H. Schoutens: Approximation properties for some non-Noetherian local rings. Pac. J. Math. 131(2), 331–359 (1988). For any henselian valuation ring, with fraction field $K$, such that the completion $\widehat{K}$ is separable over $K$, see my paper: An extension of Greenberg's theorem to general valuation rings, Manuscripta Math. 139, 153–166 (2012). The case of a henselian valuation ring of rank 1 is already mentioned in Elkik's thesis: R. Elkik: Solutions d'équations à coefficients dans un anneau hensélien. Ann. Sci. École Norm. Sup. (4) 6, 553–603 (1974) (see Remarque 2, p. 587), and treated in more detail in chapter 1 of A. Abbes: Éléments de géométrie rigide I. Progress in Mathematics. Birkhäuser, Boston (2011). For rings of differentiable functions, perhaps you should look at Tougeron's papers.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8833547830581665, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?s=ced53e6aaaac6d14bde5446fb130a935&p=3796232
Physics Forums

## Statistical mechanics: multiplicity

1. The problem statement, all variables and given/known data

We have a surface that can adsorb identical atoms. There are N possible adsorption positions on this surface and only 1 atom can adsorb on each of those. An adsorbed atom is bound to the surface with negative energy $-\epsilon$ (so $\epsilon > 0$). The adsorption positions are far enough apart not to influence each other.

a) Give the multiplicity of this system for $n$ adsorbed atoms, with $0 \leq n \leq N$.
b) Calculate the entropy of the macrostate of n adsorbed atoms. Simplify this expression by assuming N >> 1 and n >> 1.
c) If the temperature of the system is T, calculate the average number of adsorbed atoms.

3. The attempt at a solution

a) $\Omega(n) = \frac{N!}{n! (N - n)!}$

b) $S = k_b \ln \Omega(n) = k_b \ln \left(\frac{N!}{n! (N - n)!}\right)$

Using Stirling's approximation: $S \approx k_B ( N \ln N - N - n \ln n - n - (N - n) \ln (N - n) - (N - n)) = k_B ( N \ln N - n \ln n - (N - n) \ln (N - n))$

A Taylor expansion around n = 0 then gives: $S \approx k_B (- \frac{n^2}{2N} + ...)\approx -\frac{k_b n^2}{2N}$

c) I'm not even sure if the previous stuff is correct, but I have no idea how to do this one. Any hints?

Quote by SoggyBottoms (the problem statement and attempt quoted above)

Part (a) looks good, but you made a couple of sign mistakes in part (b). In the terms in the denominator you forgot to distribute the negative sign to the second term in n ln n - n and (N-n) ln(N-n). I've corrected the signs here: $$S \approx k_B ( N \ln N - N - n \ln n + n - (N - n) \ln (N - n) + (N - n)) = k_B ( N \ln N - n \ln n - (N - n) \ln (N - n))$$ This change will result in some cancellations that help simplify your expression. Rewriting $$(N-n)\ln(N-n) = \left(1-\frac{n}{N}\right)N\ln N + (N-n)\ln\left(1-\frac{n}{N}\right)$$ Another mistake you made in your original attempt was that you expanded around n = 0, but you are told n is much greater than 1, so you can't do that expansion.
What you can do, however, is assume that while n is much greater than 1, it is still much less than N, such that n/N is small, and you can expand the above logarithms in n/N. This will give you a simple expression for the entropy. To get the temperature, you need to write the entropy as a function of the total energy. Right now your entropy is a function of number. However, you are told how much energy there is per site, so you can figure out what the total energy is for n adsorbed atoms. Use this to rewrite the entropy in terms of the total energy.
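Not part of the original thread: a short symbolic sketch of the last step suggested above, using the un-expanded Stirling form of the entropy and the substitution E = -nε (so it bypasses the n ≪ N expansion and goes straight to 1/T = ∂S/∂E). The final Fermi-like expression for the average occupation is only verified, not derived, by the script.

```python
import sympy as sp

n, N, eps, kB, T = sp.symbols('n N epsilon k_B T', positive=True)

# Simplified Stirling entropy from part (b)
S = kB * (N*sp.log(N) - n*sp.log(n) - (N - n)*sp.log(N - n))

# With E = -n*epsilon, the condition 1/T = dS/dE becomes dS/dn = -epsilon/T.
dSdn = sp.diff(S, n)

# Candidate answer (an assumption to be checked): <n> = N / (1 + exp(-eps/(kB*T)))
n_avg = N / (1 + sp.exp(-eps/(kB*T)))

residual = sp.simplify(sp.expand_log(dSdn.subs(n, n_avg)) + eps/T)
print(residual)   # prints 0, i.e. the candidate satisfies the temperature condition
```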
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9191608428955078, "perplexity_flag": "middle"}
http://www.digplanet.com/wiki/Angular_displacement
# Angular displacement

Figure: Rotation of a rigid object P about a fixed axis O.

Angular displacement of a body is the angle in radians (degrees, revolutions) through which a point or line has been rotated in a specified sense about a specified axis. When an object rotates about its axis, the motion cannot simply be analyzed as that of a particle, since in circular motion it undergoes a changing velocity and acceleration at any time (t). When dealing with the rotation of an object, it becomes simpler to treat the body itself as rigid. A body is generally considered rigid when the separations between all of its particles remain constant throughout the object's motion, so that, for example, parts of its mass are not flying off. In reality all bodies are deformable, but this effect is minimal and negligible. Thus the rotation of a rigid body about a fixed axis is referred to as rotational motion.

In the example illustrated to the right, a particle P on the object lies at a fixed distance r from the origin O and rotates counterclockwise. It is then convenient to represent the position of the particle P in terms of its polar coordinates (r, θ). In this particular example, the value of θ is changing, while the value of the radius remains the same. (In rectangular coordinates (x, y) both x and y vary with time.) As the particle moves along the circle, it travels an arc length s, which is related to the angular position through the relationship:

$s=r\theta \,$

Angular displacement is measured in radians rather than degrees, because this gives a very simple relationship between the distance traveled around the circle and the distance r from the centre:

$\theta=\frac sr$

For example, if an object rotates 360 degrees around a circle of radius r, the angular displacement is given by the distance traveled around the circumference, 2πr, divided by the radius: $\theta= \frac{2\pi r}r$, which simplifies to $\theta=2\pi$. Therefore 1 revolution is $2\pi$ radians.

When the object travels from point P to point Q, as in the illustration to the left, the radius sweeps out a change in angle $\Delta \theta = \theta_2 - \theta_1$ over the time interval $\Delta t$; this change in angle is the angular displacement.

## Three dimensions

In three dimensions, angular displacement is an entity with a direction and a magnitude. The direction specifies the axis of rotation, which always exists by virtue of Euler's rotation theorem; the magnitude specifies the rotation in radians about that axis (using the right-hand rule to determine direction). Despite having direction and magnitude, angular displacement is not a vector because it does not obey the commutative law for addition.[1]

### Matrix notation

Given that any frame in space can be described by a rotation matrix, the displacement between two frames can also be described by a rotation matrix. Given two rotation matrices $A_0$ and $A_f$, the angular displacement matrix between them can be obtained as $dA = A_f A_0^{-1}$.

## References

1. Kleppner, Daniel; Kolenkow, Robert (1973). An Introduction to Mechanics. McGraw-Hill. pp. 288–89.
Original courtesy of Wikipedia: http://en.wikipedia.org/wiki/Angular_displacement
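As a small numerical illustration of the "Matrix notation" subsection above (this sketch is my own addition, not part of the Wikipedia text): two planar orientations are encoded as rotation matrices, and the angular displacement is recovered from $dA = A_f A_0^{-1}$.

```python
import numpy as np

def rot2d(theta):
    """2x2 rotation matrix for a counterclockwise rotation by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

A0 = rot2d(np.deg2rad(30))        # initial orientation
Af = rot2d(np.deg2rad(135))       # final orientation

dA = Af @ np.linalg.inv(A0)       # angular displacement matrix dA = Af A0^{-1}
dtheta = np.arctan2(dA[1, 0], dA[0, 0])

print(np.rad2deg(dtheta))         # 105.0 degrees, i.e. 135 - 30
print(2 * abs(dtheta))            # arc length s = r*theta swept by a point at r = 2
```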
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 10, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8291025161743164, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/92554/hypercohomology-of-logarithmic-de-rham-complex-of-complement-of-smooth-divisor-in/92559
## hypercohomology of logarithmic de Rham complex of complement of smooth divisor in smooth variety

Let $V$ be a complex manifold and $D \subset V$ a smooth divisor.

Question 1 Is $H^i(V \setminus D, \mathbb{C}) \simeq \mathbb{H}^i ( V, \Omega^{\bullet}_V(\log D))$ ?

Question 2 (Edited) Ok, 1 is true. Is it possible to define naturally a homomorphism $H^2(V \setminus D, \mathbb{C}) \rightarrow H^1(V, \Omega_V^1 (\log D))$? (In my case $V$ is of the form $U \setminus p$ where $U$ is a $3$-dimensional smooth Stein space and $D$ is of the form $\Delta \setminus p$ where $\Delta \subset U$ is a divisor with an isolated singularity at $p$. Then $H^1(V, \Omega_V^1 (\log D))$ is the set of 1st order deformations of the pair $(U, \Delta)$. Since $\Delta$ has only isolated singularities, this is finite dimensional.) I think it is true when $V$ is compact. How about the non-compact case?

- 3 Q1: yes. See Deligne Theorie de Hodge II for the proof. Q2: I don't understand this. What maps of complexes??? Anyway, in some cases, e.g. if $V$ is compact Kahler, you can get something along the lines of what you're suggesting, but for more complicated reasons. – Donu Arapura Mar 29 2012 at 12:33 Thank you for the comment. I edited Question 2. I realized that the map I considered is not a complex homomorphism. If you have comments on the new question, please let me know. – tarosano Mar 29 2012 at 13:07 Thanks for the comment. Do you mean $\ker d: H^1(\Omega^1_V(\log D)) \rightarrow \ldots$? Is it really $H^0$? And I want to ask you one more. Is Question 2 treatable if $H^2( V, \mathcal{O}_V) =0$? – tarosano Mar 29 2012 at 14:20 Thanks. If you have comments and time, please let me know later. – tarosano Mar 29 2012 at 16:04 I deleted my earlier comments which were not quite accurate. Anyway, it looks like you have what you need modulo the assumption of vanishing of $H^i(X,\mathcal{O})$. – Donu Arapura Mar 30 2012 at 13:04

## 1 Answer

Question 1: Sure, this is true. Another reference (beyond what Donu pointed out) is chapter 8 of Claire Voisin's Hodge theory of .... But the point is $\Omega_X^{\bullet}(\text{log} D)$ is quasi-isomorphic to the pushforward of a resolution of $\mathbb{C}$ on $X \setminus D$. By the way, this holds not just for smooth $D$, but also for normal crossing $D$.

Question 2: I don't think you have maps of complexes as you describe. For example, why do we have the map $\Omega_V^1(\text{log} D) \to \Omega^{\bullet}_V(\text{log} D)$? If I had a map of complexes, then the image of $d : \Omega_V^0(\text{log} D) \to \Omega_V^1(\text{log} D)$ would be zero (ie, the diagram would commute). EDIT: Whoops, it looks like Donu beat me to this in the comments.

Revised Question 2: I don't see why this should hold in general. However, if you write down the relevant spectral sequence, and enough terms vanish (maybe the spectral sequence degenerates), you can be ok.

EDIT (Response to the comment below): No, there isn't a map in general. Even for a projective variety and $D = 0$, we only have an $E^1$ degeneration of the spectral sequence. This means that in some sense, $H^2(X, O_X)$, $H^1(X, \Omega_X^1)$ and $H^0(X, \Omega_X^2)$ make up ${H}^2(X, \mathbb{C})$ (there is a filtration of the latter such that these terms make up the filtration).
But we have maps: $$H^2(X, \mathbb{C}) \to H^2(X, O_X), \text{ and } H^0(X, \Omega_X^2) \to H^2(X, \mathbb{C}).$$ There isn't going to be a map to the $H^1(X, \Omega_X^1)$ term in general, unless for some reason $H^2(X, O_X) = 0$ (in the non-compact/non-Kahler setting, things get more complicated, as Donu mentioned above). Anyways, if you read a little about spectral sequences and do a couple of examples from that perspective, I'm sure you'll see what's going on. - Thank you for the comment. If you have some comments on the new question, please let me know. – tarosano Mar 29 2012 at 13:07 Thank you for the revision. Actually, what I really want to know is the construction of the homomorphism. I don't mind about the direct summand question. Is that homomorphism not natural? – tarosano Mar 29 2012 at 13:49 Thanks. In my case, I can assume $H^2(V, \mathcal{O}_V) =0$. So it's OK for me. – tarosano Mar 29 2012 at 20:47 1 Great! Just notice that I need some conditions in order to use this (to know that the spectral sequence degenerates). – Karl Schwede Mar 30 2012 at 0:20 Moreover, I can assume that $H^1(V, \mathcal{O}_V) =0$. Then there is a map $H^2(V \setminus D, \mathbb{C}) \rightarrow {\rm Gr}_F^1 H^2 \subset H^1(V, \Omega^1_V(\log D))$. $\subset$ is implied by $H^1(\mathcal{O})=0$. Do you mean such a condition? – tarosano Mar 30 2012 at 10:06
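To spell out the spectral-sequence bookkeeping that the answer and the comment thread allude to (this only restates their argument, under the vanishing assumptions mentioned there): the logarithmic Hodge-de Rham spectral sequence has first page $E_1^{p,q} = H^q(V, \Omega_V^p(\log D))$ and abuts to $H^{p+q}(V\setminus D, \mathbb{C})$. If $H^2(V,\mathcal{O}_V)=0$, then ${\rm Gr}_F^0 H^2$ (a subquotient of $E_1^{0,2}$) vanishes, so $H^2(V\setminus D,\mathbb{C}) = F^1H^2$ surjects onto ${\rm Gr}_F^1 H^2$; if moreover $H^1(V,\mathcal{O}_V)=0$, the differential $d_1 : E_1^{0,1} \to E_1^{1,1}$ is zero, so ${\rm Gr}_F^1 H^2$ embeds into $E_1^{1,1}$. Combining the two, $$H^2(V\setminus D,\mathbb{C}) \twoheadrightarrow {\rm Gr}_F^1 H^2 \hookrightarrow H^1(V, \Omega_V^1(\log D)).$$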
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 41, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9305346012115479, "perplexity_flag": "head"}
http://www.scholarpedia.org/article/Lyapunov_exponent
# Lyapunov exponent

From Scholarpedia. Antonio Politi (2013), Scholarpedia, 8(3):2722.

### Historical remarks

As soon as scientists realized that the evolution of physical systems can be described in terms of mathematical equations, the stability of the various dynamical regimes was recognized as a matter of primary importance. The interest in this question was not only motivated by general curiosity, but also by the need to know, in the XIX century, to what extent the behavior of suitable mechanical devices remains unchanged once their configuration has been perturbed. As a result, illustrious scientists such as Lagrange, Poisson, Maxwell and others thought deeply about ways of quantifying stability both in general and in specific contexts. The first exact definition of stability was given by the Russian mathematician Alexandr Lyapunov, who addressed the problem in his PhD thesis in 1892, where he introduced two methods, the first of which is based on the linearization of the equations of motion and originated what were later termed Lyapunov exponents (LE) (Lyapunov 1992).

### Definition

LEs measure the growth rates of generic perturbations, in a regime where their evolution is ruled by linear equations, $$\tag{1} \dot {\bf u} = {\bf J}(t) {\bf u}$$ where $$\bf u$$ is an $$N$$-dimensional vector and $${\bf J}$$ is a (time-dependent) $$N\times N$$ matrix. In some contexts, such as that of linear stochastic differential equations, $${\bf J}$$ fluctuates because of the presence of disorder or multiplicative noise (Arnold, 1986). More commonly, in the context of deterministic dynamical systems, $${\bf J}$$ is the Jacobian of a suitable velocity field $$\bf F$$, computed along a trajectory $${\bf x}(t)$$ that satisfies the ordinary differential equation $$\tag{2} \dot {\bf x} = {\bf F} ({\bf x}) \quad .$$

If $${\bf x}(t)={\bf x}_0$$ is a solution (i.e. if $${\bf F}({\bf x}_0)=0$$), then the stability of this fixed point is quantified by the eigenvalues of the (constant) operator $${\bf J}$$. In this simple case, the LEs $$\lambda_i$$ are the real parts of the eigenvalues. They measure the exponential contraction/expansion rate of infinitesimal perturbations. A slightly more complicated example is that of a periodic orbit $${\bf x}(t+T) = {\bf x}(t)$$. In this case, it is necessary to integrate Eq. (1) over a time $$T$$, to obtain the discrete-time evolution operator $$\bf M$$, $${\bf u}(t+T)= {\bf M} {\bf u}(t) \quad .$$ From the eigenvalues $$m_i$$ of $$\bf M$$, one can thereby determine the Floquet exponents $$\mu_i=(\ln m_i)/T$$; the LEs $$\lambda_i$$ are their real parts.

Since trajectories are not, in general, periodic, a different approach is required. The most general definition involves the computation of the eigenvalues $$\alpha_i$$ of yet another matrix, namely $${\bf M}(t){\bf M}^T(t)$$. A typical instance of the behavior of $$\alpha_i$$ is illustrated in the upper part of Figure 1. From the knowledge of $$\alpha_i$$, one naturally introduces the finite-time LE as $$\tag{3} \lambda_i(t) = \frac{\ln \alpha_i(t)}{2t} .$$ Since $$\lambda_i(t)$$ is, in general, a fluctuating quantity (see the lower part of Figure 1), it is necessary to consider the infinite-time limit to determine the asymptotic (in time) behaviour.
This leads to the following definition of LE, $$\tag{4} \lambda_i = \limsup_{t \to\infty} \lambda_i(t)$$ where the $$\limsup$$ is considered to account for the worst possible fluctuations: this is important whenever the stability of a given regime must be assessed. The Oseledets multiplicative ergodic theorem guarantees that LEs are independent of the initial condition (Oseledets 1968).

Figure 1: Time dependence of a generic perturbation amplitude

It is interesting to notice that while it makes sense to determine the imaginary part of the Lyapunov exponents for fixed points and periodic orbits, this question cannot, in general, be addressed for an aperiodic motion. In fact the $$\alpha_i$$'s are, by definition, real quantities and there is no way to extend the definition to include rotations. One can at most introduce a rotation number, to characterize the rotation of a generic perturbation around the reference trajectory (Ruelle 1985).

In practice, Lyapunov exponents can be computed by exploiting the natural tendency of an $$n$$-dimensional volume to align along the $$n$$ most expanding directions. From the expansion rate of an $$n$$-dimensional volume, one obtains the sum of the $$n$$ largest Lyapunov exponents. Altogether, the procedure requires evolving $$n$$ linearly independent perturbations, and one is faced with the problem that all vectors tend to align along the same direction. However, as shown in the late '70s, this numerical instability can be counterbalanced by orthonormalizing the vectors with the help of the Gram-Schmidt procedure (Benettin et al. 1980, Shimada and Nagashima 1979) (or, equivalently, with a QR decomposition). As a result, the LEs $$\lambda_i$$, naturally ordered from the largest to the most negative one, can be computed: they are altogether referred to as the Lyapunov spectrum (a minimal numerical sketch of this procedure is given at the end of the section on the characterization of deterministic chaos below).

### Properties

• The LEs are independent of both the metric used to determine the distance between perturbations and the choice of variables. This property implies they are dynamical invariants and thereby provide an objective characterization of the corresponding dynamics.
• A strictly positive maximal Lyapunov exponent is synonymous with exponential instability, but one should be warned that in some special cases this may not be true (see, e.g., the so-called Perron effect) (Leonov and Kuznetsov 2006).
• A strictly positive maximal Lyapunov exponent is often considered as a definition of deterministic chaos. This makes sense only when the corresponding unstable manifold folds back, remaining confined within a bounded domain (an unstable fixed point is NOT chaotic).
• Typical trajectories are characterized by the same LEs, but there exists a zero-measure subset with different stability properties. The infinitely many periodic orbits embedded in a chaotic attractor are one such example.
• One-dimensional maps $$x_{n+1}= G(x_n)$$ are characterized by just one LE, which is equal to the average value of $$\ln |dG/dx|$$. In other words, the LE can be determined as an ensemble average, rather than a time average. In principle, it is possible to extend the idea to higher dimensions, but it would result in a rather impractical method, because of the difficulty of reconstructing the invariant measure together with the need to identify the local directions of the various vectors.
• The sum $$\Sigma$$ of all LEs measures the contraction rate of volumes in the whole phase space.
In so-called dissipative systems, $$\Sigma<0$$, meaning that volumes visited by generic trajectories shrink exponentially to zero. In Hamiltonian systems, $$\Sigma=0$$, i.e. volumes are preserved (see Liouville theorem).
• In symplectic systems, LEs come in pairs ($$\lambda_i,\lambda_{2N-i+1}$$) such that their sum is equal to zero. This means that the Lyapunov spectrum is symmetric. It is a way of emphasizing the invariance of Hamiltonian dynamics under change of the time arrow.
• Any (bounded) infinite trajectory that does not converge towards a fixed point is characterized by at least one zero LE: it corresponds to a perturbation of the phase point along its own trajectory. Other vanishing exponents may signal the existence of constants of motion. Zero exponents may also (non-generically) occur at bifurcation points, where some direction is marginally stable. In such cases, it is necessary to go beyond the linear approach to determine the stability.

### Characterization of deterministic chaos

The knowledge of the LEs allows determining additional invariants such as the fractal dimension of the underlying attractor and its dynamical entropy. The Kaplan-Yorke formula provides an upper bound for the information dimension of the attractor (Kaplan and Yorke 1979), $$\tag{5} D_{KY}= J + \frac{\Lambda_J}{|\lambda_{J+1}|}$$ where $$\Lambda_j\equiv \sum_{i=1}^j\lambda_i$$ and $$J$$ is the largest $$j$$-value such that $$\Lambda_j>0$$. This equation can be understood in the following way. A strictly positive $$\Lambda_j$$ implies that the hyper-volume of a generic $$j$$-dimensional box diverges while spreading over the attractor. This implies that the dimension is larger than $$j$$, since it is like asking to measure the "length" of a square: the length of a line covering the square is obviously infinite! For the same reason, $$\Lambda_j<0$$ signals that the dimension is smaller than $$j$$. Altogether, one can view the Kaplan-Yorke formula as a linear interpolation between the largest $$j$$ such that $$\Lambda_j>0$$ and the smallest such that the opposite is true (the procedure is schematically reproduced in Figure 2). In general, $$D_{KY}$$ provides an upper bound to the information dimension, but in three-dimensional flows (two-dimensional maps) and in random dynamical systems it has been proved to coincide with it.

The Kaplan-Yorke formula also provides approximate information on the number of active degrees of freedom. In fact, in typical dissipative models, the phase-space dimension is infinite, but the number of independent variables that are necessary to uniquely identify the different points of the attractor is finite and sometimes even small.

Another dynamical invariant that is connected with the LEs is the Kolmogorov-Sinai entropy $$H_{KS}$$, which measures the growth rate of the entropy due to the exponential instability of the chaotic motion. In this case, the relationship is expressed by the Pesin formula (Pesin 1977) $$\tag{6} H_P \equiv \Lambda_J \geq H_{KS}$$ where the sum in $$\Lambda_J$$ is restricted to the strictly expanding directions (see Figure 2 for a schematic representation). In order to take into account the possible fractal structure along the unstable directions (this happens in the case of repellors, i.e. transient chaos), this formula must be extended to $$\tag{7} H_P = \sum_{\lambda_i>0} d_i \lambda_i$$ where $$d_i$$ represents the fractal dimension along the ith direction (in standard chaotic attractors $$d_i=1$$).
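As a concrete illustration of the QR/Gram-Schmidt procedure described earlier and of the Kaplan-Yorke formula (5), here is a minimal numerical sketch (my own addition, not part of the original article); the Hénon map with its standard parameters is used only as a familiar test case, and the quoted values are approximate.

```python
import numpy as np

# Lyapunov spectrum of the Henon map x' = 1 - a x^2 + y, y' = b x, estimated by
# evolving two tangent vectors and re-orthonormalizing them with a QR decomposition
# (the Benettin et al. / Shimada-Nagashima procedure described in the text).
a, b = 1.4, 0.3
x, y = 0.1, 0.1
Q = np.eye(2)
log_r = np.zeros(2)
n_steps = 50_000

for _ in range(1000):                       # discard the transient
    x, y = 1 - a*x*x + y, b*x

for _ in range(n_steps):
    J = np.array([[-2*a*x, 1.0],
                  [b,      0.0]])           # Jacobian at the current point
    x, y = 1 - a*x*x + y, b*x
    Q, R = np.linalg.qr(J @ Q)
    log_r += np.log(np.abs(np.diag(R)))     # accumulate local stretching rates

lyap = np.sort(log_r / n_steps)[::-1]
print("Lyapunov exponents:", lyap)          # roughly 0.42 and -1.62 (their sum is ln b)

# Kaplan-Yorke estimate, Eq. (5): D_KY = J + (lambda_1 + ... + lambda_J) / |lambda_{J+1}|
cum = np.cumsum(lyap)
J_idx = int(np.sum(cum > 0))                # largest j with Lambda_j > 0
if 0 < J_idx < len(lyap):
    D_KY = J_idx + cum[J_idx - 1] / abs(lyap[J_idx])
else:
    D_KY = float(J_idx)
print("Kaplan-Yorke dimension:", D_KY)      # about 1.26 for these parameters
```

The sum of the positive exponents (here just the first one) also gives the Pesin estimate $$H_P$$ of Eq. (6).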
### Fluctuations

As schematically illustrated in Figure 1, the finite-time LE fluctuates. The central limit theorem guarantees that such fluctuations vanish when time goes to infinity. However, the so-called generalized LE (Fujisaka 1983, Benzi et al. 1985) $$\mathcal{L}(q)$$, $$\tag{8} \mathcal{L}(q) = \limsup_{t\to\infty} \frac{1}{qt} \ln \left\langle {\rm e}^{q\lambda(t)\,t} \right\rangle$$ (in this section, for simplicity, we drop the dependence on the index $$i$$) is sensitive to such fluctuations. It is easy to see that in the limit $$q\to 0$$, the usual LE definition is recovered.

The same problem can be approached in a more transparent way, by expressing the probability $$P(\lambda,t)$$ that a trajectory of length $$t$$ is characterized by an exponent $$\lambda$$ (in the limit of finite but large enough $$t$$) in terms of the large-deviation function $$g(\lambda)$$, $$\tag{9} P(\lambda,t) \simeq {\rm e}^{-g(\lambda)t} .$$

Figure 2: Lyapunov spectrum and its integrated version

$$g(\lambda)$$ is a nonnegative function with a typically quadratic minimum at the usual LE $$\overline \lambda$$, where $$g(\overline \lambda) = 0$$. This condition implies that the probability of observing $$\lambda=\overline \lambda$$ does not vanish (exponentially) for increasing time. $$g(\lambda)$$ and $$\mathcal{L}(q)$$ are related to one another by a Legendre transform. The large-deviation function $$g$$ is a powerful tool to detect deviations from a perfectly hyperbolic behaviour (for instance, discovering that the domain of definition of $$g$$ extends to negative values even when the average exponent is positive, as a result of homoclinic tangencies). Generalized LEs are important to establish the connection with different definitions of fractal dimensions: for instance, the correlation dimension, which is measured by implementing the Grassberger-Procaccia algorithm (Grassberger and Procaccia 1983), is connected with $$\mathcal{L}(1)$$.

### Spatially extended systems

For simplicity we refer to one-dimensional lattices of length $$N$$ and assume that a single variable $$x_i$$ is defined on each lattice site. As a result, the phase-space dimension is $$N$$. There are two natural limits that one wishes to consider: the thermodynamic and the continuum limit. In the former case, we let $$N$$ go to infinity by increasing the number of sites and leaving their mutual distance constant. In the latter case, $$N$$ is increased by reducing the spatial separation. In the thermodynamic limit, it has been observed and proven that the LEs come closer to each other in such a way that it makes sense to speak of a Lyapunov spectrum (Ruelle 2004, Grassberger 1989) $$\tag{10} \lambda(\rho=i/N) = \lambda_i$$ The existence of a Lyapunov spectrum can be interpreted as evidence of the extensive character of space-time chaos. In fact, this means that the entropy $$H_P$$ and the fractal dimension $$D_{KY}$$ are proportional to the system size. In other words, the dynamics in sufficiently separated regions (of the physical space) are independent of one another. In the continuum limit, additional (negative) exponents appear, which characterize the fast relaxation phenomena occurring on short spatial scales.

### Chronotopic approach

Lyapunov exponents have been introduced with the goal of characterizing the time evolution of perturbations of lumped dynamical systems. However, in spatially extended systems it is important to describe the spatial evolution as well.
A first generalization of the LE is obtained by introducing the convective exponent, which describes the growth of an initially localized perturbation (Deissler and Kaneko 1987), $$\tag{11} u(x,t) = {\rm e}^{ L(v)t} u(x,0)$$ where $$v=x/t$$ is the world line along which the evolution is measured and $$u(x,0)$$ is restricted to some finite interval around $$x=0$$.

Figure 3: Two different instances of the convective Lyapunov spectrum

Figure 4: Geometric construction to determine the convective exponent

In chaotic systems with left-right symmetry, $$L(v)$$ is symmetric too and attains its maximum value for zero velocity; $$L(0)$$ coincides with the standard maximum LE (see Figure 3, left panel). As the velocity increases (in absolute value), $$L(v)$$ decreases to eventually become negative beyond some critical value $$v_0$$, which can be interpreted as the maximal propagation velocity of (infinitesimal) perturbations. Whenever there is no left-right symmetry, it may happen that only perturbations propagating with some finite velocity expand. In such cases, one speaks of convective instabilities (see the right panel in Figure 3). If the system is open, it locally relaxes back to the previous equilibrium state once the perturbation has travelled away.

Convective exponents are an example of the additional information that can be extracted by implementing the so-called chronotopic approach (Lepri et al. 1996), which is based on the definition of the growth rate of exponentially distributed perturbations $$u(x) = {\rm e}^{\mu x}u_\mu(x)$$ (standard LEs are obtained by assuming $$\mu =0$$). By assuming a generic $$\mu$$-value in the original evolution equations in tangent space, one can determine the generalized temporal Lyapunov spectrum $$\lambda(\rho,\mu)$$. The convective exponents can be obtained by Legendre transforming $$\lambda(0,\mu)$$, i.e. $$L(v) = \lambda(0,\mu) - \mu v \qquad v = \frac{d\lambda}{d\mu}$$ The corresponding geometrical construction is presented in Figure 4. Notice that one can equivalently proceed from $$L(v)$$ to $$\lambda(0,\mu)$$, in which case $$\mu$$ is determined as $$\mu = dL/dv$$. By exchanging the role of space and time variables, one can define the complementary spatial Lyapunov exponents $$\mu(\lambda,\rho)$$. In one-dimensional systems, it has been conjectured that the two kinds of spectra are related to one another and follow from the existence of a superinvariant (as it is independent of the space-time parametrization) entropy potential (Lepri et al. 1997).

### Lyapunov vectors

While the LEs correspond to the limit eigenvalues of a suitable product of matrices, there is no corresponding unique set of eigenvectors, as they depend on the current position of the phase point. In fact this dependence reflects the typically nonlinear shape of both stable and unstable manifolds. However, one cannot directly invoke the vectors $${\bf V}_i$$ arising from the Gram-Schmidt orthogonalization procedure, as they are not covariant, i.e. the vector $${\bf V}_i({\bf x})$$ defined in $$\bf x$$ is not transformed into $${\bf V}_i({\bf y})$$ when $$\bf x$$ is mapped onto $$\bf y$$. A proper definition requires generalizing the concept of eigenvectors of linear operators (Eckmann and Ruelle 1985). Roughly speaking, the covariant vectors can be obtained by iterating forward and backward along the same trajectory to identify the $$i$$th vector $${\bf W}_i$$ as the (backward) most expanding direction within the (forward) most expanding subspace of dimension $$i$$.
Effective algorithms for the determination of the covariant vectors have been proposed only recently (Wolfe and Samelson 2007, Ginelli et al. 2007).

### Finite amplitude Lyapunov exponents

Figure 5: Growth of a generic finite-amplitude perturbation

In some cases it is useful, if not even necessary, to consider finite-amplitude perturbations. Apart from experimental time series, where, in the absence of a model, one is forced to consider finite distances, it is useful to extend the concept of Lyapunov exponents to regimes where nonlinearities are possibly relevant. Finite-amplitude exponents may be defined in the following way. Given any two nearby trajectories, let $$\Delta(t)$$ denote their mutual distance and measure the times $$t_n$$ when $$|\Delta(t_n)|$$ crosses (for the first time) a series of exponentially spaced thresholds $$\theta_n$$ ($$\theta_n = r \theta_{n-1}$$; see Figure 5). By averaging the time separation between consecutive crossings over different pairs of trajectories, one obtains the finite-amplitude Lyapunov exponent (Aurell et al. 1996) $$\ell = \frac{\ln r}{\langle t_n-t_{n-1}\rangle}$$ For small enough thresholds, one recovers the usual (maximum) Lyapunov exponent, while for large amplitudes $$\ell$$ saturates to zero, since a perturbation cannot be larger than the size of the accessible phase space. In the intermediate range, $$\ell$$ tells us how the growth of a perturbation is affected by nonlinearities. Since the definition of the finite-amplitude LE involves neither an infinite-time limit nor that of infinitesimal perturbations, it is not mathematically well posed: the result depends on the choice of variables. Nevertheless, it may profitably be used to extract useful information on the presence of collective dynamics, where one would like to distinguish between the stability of microscopic and macroscopic perturbations, or in the presence of different time scales, when some directions saturate very rapidly.

### Applications

LEs prove useful in various contexts. Within dynamical systems, LEs, besides providing a detailed characterization of chaotic dynamics, can help to assess various forms of synchronization (Pikovsky 2007). Another context where LEs help to clarify the underlying dynamics is chaotic advection, i.e. the evolution of particles transported by a (possibly time-dependent) velocity field, $$\dot {\bf x} = {\bf F}({\bf x},t)$$ where $${\bf x}(t)$$ denotes the Lagrangian trajectory of a generic particle in the physical space. In this case, the existence of a positive Lyapunov exponent is synonymous with chaotic mixing (Ottino, 1989). Another prominent example is Anderson localization of the eigenfunctions $$\psi(x)$$ of the Schroedinger equation in the presence of disorder. In this case, the object of study is the spatial dependence of $$\psi(x)$$ (see also the section on the chronotopic approach). In one-dimensional systems, in the tight-binding approximation, $$x$$ is an integer variable and the spatial evolution corresponds to multiplying by a $$2\times2$$ random matrix. This is the so-called transfer matrix approach: the invariance under spatial reversal implies that the two (spatial) LEs are opposite of each other. The most important result is that the positive LE coincides with the inverse of the localization length $$\ell_c$$ (Borland 1963, Furstenberg 1963). The transfer-matrix approach can also be applied in higher-dimensional spaces, in which case the inverse localization length coincides with the minimal positive LE.
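To illustrate the transfer-matrix statement in the last paragraph, here is a short sketch of my own (the disorder strength, energy and sample length are arbitrary illustrative choices): the positive spatial LE of the product of $$2\times2$$ transfer matrices for a 1D Anderson tight-binding chain is estimated by repeated renormalization, and its inverse is read off as the localization length.

```python
import numpy as np

# 1D Anderson tight-binding chain: psi_{n+1} = (E - eps_n) psi_n - psi_{n-1},
# rewritten as (psi_{n+1}, psi_n) = T_n (psi_n, psi_{n-1}) with a random 2x2
# transfer matrix T_n.  The growth rate of a generic vector under the product
# of the T_n is the positive (spatial) Lyapunov exponent, i.e. 1/localization length.
rng = np.random.default_rng(0)
E, W = 0.0, 1.0                    # energy and disorder strength (illustrative values)
N = 100_000

v = np.array([1.0, 0.0])
log_growth = 0.0
for _ in range(N):
    eps_n = rng.uniform(-W/2, W/2)
    T = np.array([[E - eps_n, -1.0],
                  [1.0,        0.0]])
    v = T @ v
    norm = np.linalg.norm(v)
    log_growth += np.log(norm)     # accumulate the stretching factor
    v /= norm                      # and renormalize to avoid overflow

lam = log_growth / N
print("positive Lyapunov exponent:", lam)
print("localization length 1/lambda:", 1.0 / lam)
```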
### References

• L. Arnold, Lyapunov exponents, Lecture Notes in Mathematics, 1186 (1986).
• E. Aurell, G. Boffetta, A. Crisanti, G. Paladin, and A. Vulpiani, Growth of Noninfinitesimal Perturbations in Turbulence, Phys. Rev. Lett. 77:1262 (1996).
• G. Benettin, L. Galgani, A. Giorgilli, and J. M. Strelcyn, Lyapunov characteristic exponents for smooth dynamical systems: a method for computing all of them, Meccanica 15:9 and 15:21 (1980).
• R. Benzi, G. Paladin, G. Parisi, and A. Vulpiani, Characterisation of intermittency in chaotic systems, J. Phys. A 18:2157 (1985).
• R.E. Borland, The nature of the electronic states in disordered one-dimensional systems, Proc. R. Soc. London A274:529 (1963).
• R. J. Deissler and K. Kaneko, Velocity-Dependent Liapunov Exponents as a Measure of Chaos for Open Flow Systems, Phys. Lett. 119A:397 (1987).
• J.-P. Eckmann and D. Ruelle, Ergodic theory of chaos and strange attractors, Rev. Mod. Phys. 57:617 (1985).
• H. Fujisaka, Statistical Dynamics Generated by Fluctuations of Local Lyapunov Exponents, Prog. Theor. Phys. 70:1264 (1983).
• H. Furstenberg, Noncommuting random products, Trans. Amer. Math. Soc. 108:377 (1963).
• F. Ginelli, P. Poggi, A. Turchi, H. Chaté, R. Livi, A. Politi, Characterizing dynamics with covariant Lyapunov vectors, Phys. Rev. Lett. 99:130601 (2007).
• P. Grassberger and I. Procaccia, Characterization of strange attractors, Phys. Rev. Lett. 50:346 (1983).
• P. Grassberger, Information Content and Predictability of Lumped and Distributed Dynamical Systems, Physica Scripta 40:346 (1989).
• J.L. Kaplan and J.A. Yorke, In Functional Differential Equations and Approximations of Fixed Points, ed. H.-O. Peitgen and H.-O. Walther, 2049 (Berlin, Springer-Verlag, 1979).
• G.A. Leonov and N.V. Kuznetsov, Time-varying linearization and the Perron effects, International Journal of Bifurcation and Chaos 17:1079 (2006).
• S. Lepri, A. Politi, A. Torcini, Chronotopic Lyapunov analysis: (I) A comprehensive characterization of 1D systems, J. Stat. Phys. 82:1429 (1996).
• S. Lepri, A. Politi, A. Torcini, Entropy potential and Lyapunov exponents, CHAOS 7:701 (1997).
• A.M. Lyapunov, The General Problem of the Stability of Motion, Taylor & Francis, London (1992).
• V.I. Oseledets, A multiplicative ergodic theorem. Lyapunov characteristic numbers for dynamical systems, Trans. Moscow Math. Soc. 19:197 (1968).
• J.M. Ottino, The Kinematics of Mixing: Stretching, Chaos and Transport (Cambridge University Press, 1989).
• Y. Pesin, Characteristic Lyapunov exponents and smooth ergodic theory, Russian Math. Surveys 32:55 (1977).
• D. Ruelle, Rotation numbers for diffeomorphisms and flows, Annales de l'IHP sec. 4, 42:109 (1985).
• D. Ruelle, Thermodynamic formalism (Cambridge University Press, 2004).
• I. Shimada and T. Nagashima, A numerical approach to ergodic problem of dissipative dynamical systems, Prog. Theor. Phys. 61:1605 (1979).
• C.L. Wolfe, R.M. Samelson, An efficient method for recovering Lyapunov vectors from singular vectors, Tellus 59A:355 (2007).

### Internal references

• Shmuel Fishman, Scholarpedia, 5(8):9816 (2010).
• Edward Ott, Scholarpedia, 3(3):2110 (2008).
• Arkady Pikovsky, Misha Rosenblum, Scholarpedia, 2(12):1459 (2007).
• Yakov Sinai, Scholarpedia, 4(3):2034 (2009).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 131, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8861029744148254, "perplexity_flag": "head"}