http://homotopical.wordpress.com/2009/05/20/homotopical-categories-and-simplicial-sheaves/
# Motivic stuff

## Homotopical categories and simplicial sheaves

Posted by Andreas Holmstrom on May 20, 2009

(This is an expanded version of the 2nd part of a talk I gave last month. For the first part, see this post.)

Homotopical categories

The topic for this post is "homotopical categories" and their role in algebraic geometry. I want to emphasize that I am very much in the process of learning about all these things, so this post is based more on interest and enthusiasm than actual knowledge. I hope to convey some of the main ideas and why they could be interesting, and come back to the details in many future posts, after having learned more. I apologize for not defining everything carefully, and for brushing the "stable" aspects of the theory, i.e. spectra and sheaves of spectra, under the carpet.

There are many different ways to speak of "homotopical categories", and I only use this expression because I don't know of a better thing to call them. The most well-known approach is the language of model categories, invented by Quillen and developed by many others. There are many excellent online introductions, for example Dwyer-Spalinski, Goerss-Schemmerhorn, and appendix A2 of Jacob Lurie's book on higher topos theory, available on his webpage. Other languages are given by the many different approaches to higher categories; see the nLab page and the survey of Bergner. Still other languages include Segal categories, A-infinity categories, infinity-stacks, and homotopical categories in the precise sense of Dwyer-Hirschhorn-Kan-Smith.

Although I don't want to go into the details of all these different homotopical/higher-categorical subtleties, I will try to list some of the basic features that "homotopical" categories typically have.

• A homotopical category should behave like a nice category of topological spaces.

• In particular, there should be a class of morphisms called weak equivalences, and:

• To any homotopical category $M$, one should be able to associate a "homotopy category" $H$ and a functor $M \to H$ which is universal among functors sending weak equivalences to isomorphisms. Morally, $H$ is obtained from $M$ by "formally inverting the weak equivalences".

• A homotopical category should admit all limits and colimits, and also homotopy limits and homotopy colimits.

• A homotopical category should be enriched over some kind of spaces, i.e. for any two objects $A,B$, the set $Hom(A,B)$ should be a "space" in some sense, for example a simplicial set, a topological space, or a chain complex of abelian groups.

Simplicial objects

Before talking about algebraic geometry, we need to recall some "simplicial language". The category $\Delta$ is defined as follows. Objects are the finite ordered sets of the form $[n] := \{ 0,1,2, \ldots , n \}$. Morphisms are order-preserving functions $[m] \to [n]$, i.e. functions such that $x \leq y \implies f(x) \leq f(y)$. If $C$ is any category, we define the category $sC$ of simplicial $C$-objects to be the category in which the objects are the contravariant functors from $\Delta$ to $C$, and the morphisms are the natural transformations of functors. There is a functor from $C$ to $sC$ given by sending an object $X$ of $C$ to the corresponding constant functor, i.e. the functor sending all objects to $X$ and all morphisms to the identity morphisms of $X$.

Some examples:

• Take $C = Set$, the category of sets. The above construction gives us the category $sSet$ of simplicial sets.
This category is "sort of the same as the category $Top$ of topological spaces". The precise statement is that there is a pair of adjoint functors which make $Top$ and $sSet$ into Quillen equivalent model categories; in particular, their homotopy categories are equivalent (as categories). For the purposes of algebraic topology, we can work with any of these categories. For example, we can define homotopy groups and various generalized homology and cohomology groups of a simplicial set. The inclusion of $C$ into $sC$ corresponds to viewing a set as a discrete topological space. A weak equivalence between two simplicial sets is a morphism inducing isomorphisms on all homotopy groups.

• Take $C = Ab$, the category of abelian groups. There is a forgetful functor from $sAb$ to the category $sSet$, induced by the forgetful functor from $Ab$ to $Set$. The Dold-Kan correspondence tells us that there is an equivalence between $sAb$ and the category of (non-negatively graded) chain complexes of abelian groups. Under this equivalence, homotopy groups of a simplicial abelian group correspond to homology groups of a chain complex.

• Take $C = k\text{-}Alg$, the category of $k$-algebras for a commutative ring $k$. Then there is some kind of Dold-Kan correspondence between simplicial algebras and DG-algebras. See Schwede-Shipley for precise statements.

• Take $C = Shv$, the category of sheaves of sets on some topological space or site. Then $sShv$ is the category of simplicial sheaves. This category can also be viewed as the category of sheaves of simplicial sets on the site. Any category of simplicial sheaves is a "homotopical category" (I am not making this precise here). For example, one way of defining weak equivalences is to say that a morphism of simplicial sheaves is a weak equivalence iff it induces weak equivalences of simplicial sets on all stalks.

Homotopical categories in algebraic geometry

Now to algebraic geometry. Through a few examples I want to argue that homotopical categories (in particular categories of simplicial sheaves) provide a useful and natural setting for certain aspects of algebraic geometry.

Firstly, let's consider the general problem of viewing a cohomology theory as a representable functor. In algebraic topology, the Brown representability theorem says that any generalized cohomology group is representable, when viewed as a functor on the homotopy category $Hot$ of topological spaces. In other words, there is a space $K$ such that the cohomology of a space $X$ is given by $Hom(X,K)$, where the $Hom$ is taken in the homotopy category. Examples include the Eilenberg-MacLane spaces $K(G, n)$, which represent the singular cohomology groups $H^n(X, G)$, and the space $BU \times \mathbf{Z}$, which represents K-theory. The existence of a long exact sequence relating the cohomology groups for various $n$ corresponds to the fact that the different Eilenberg-MacLane spaces fit together to form a so-called spectrum. The Brown representability theorem is best expressed using the language of spectra, i.e. stable homotopy theory, but I want to postpone a discussion of this to a future post.

An interesting aspect of Brown representability for singular cohomology is that by identifying the coefficient group $G$ with the corresponding Eilenberg-MacLane space, the two arguments of a singular cohomology group $H^n(X, G)$, namely the space $X$ and the coefficient group $G$, suddenly are on equal footing.
By this I mean that they both live in the same category of topological spaces, rather than in the two separate worlds of topological spaces and abelian groups, respectively.

In classical algebraic geometry, there is no analogue of Brown representability. Most cohomology theories are of the form $H^n(X, F)$, where $X$ is some kind of variety, and $F$ is a sheaf of abelian groups. One may ask if there is a way to express such a cohomology group as a representable functor. In order to obtain a picture parallel to the topological picture above, a necessary requirement is to have a homotopical category in which the variety $X$ and the sheaf $F$ both live as objects, "on equal footing". One possibility for such a category is some category of simplicial sheaves.

In order to explain how this works, let us fix some category $Var$ of varieties, for example the category of smooth varieties over some base field $k$. Let us also fix some Grothendieck topology on this category, for example the Zariski topology, the Nisnevich topology, the etale topology, or some flat topology. This defines a site, and we can speak of sheaves on this site, i.e. contravariant functors on $Var$ satisfying a "gluing" or "descent" condition with respect to the given topology. Since Grothendieck, we are familiar with the idea of identifying a variety with the sheaf of sets that it represents, by the Yoneda embedding. We mentioned earlier that for any category $C$, there is a functor $C \to sC$. Taking $C$ to be the category of sheaves of sets, we get a functor from sheaves of sets to simplicial sheaves. In particular, any variety can be viewed as a simplicial sheaf, by composing the Yoneda embedding with the canonical functor from sheaves of sets to simplicial sheaves.

We also want to show that a sheaf of abelian groups can be viewed as a simplicial sheaf. We can regard any abelian group as a chain complex, by placing it in degree zero, and placing the zero group in all other degrees. This gives an embedding of the category of abelian groups into the category of chain complexes, and by composing with the Dold-Kan equivalence we get a functor from abelian groups to simplicial abelian groups, and hence to simplicial sets. This induces a functor from sheaves of abelian groups to simplicial sheaves. More generally, any complex of sheaves of abelian groups can be viewed as a simplicial sheaf.

Now one could hope for an analogue of Brown representability, namely that the sheaf cohomology group $H^n(X, F)$ could be expressed as $Hom(X, K(F,n))$ for a suitable "Eilenberg-MacLane" simplicial sheaf built from $F$ (for instance, from the complex with $F$ placed in degree $n$), where the Hom is taken in the homotopy category of simplicial sheaves. It seems to be the case that something along these lines should be true. For example, the nLab page on cohomology seems to imply that all forms of cohomology should be of this form, at least sheaf cohomology groups of the type just described. Also, Hornbostel has proved a Brown representability theorem in the setting of motivic homotopy theory.

There are many other phenomena in algebraic geometry which also seem to indicate that categories of simplicial sheaves might be more natural to study than the smaller categories of schemes and varieties we typically consider. Some examples (longer explanations of these will have to wait until future posts):

• It seems to be the case that almost any geometric object generalizing the concept of a variety can be thought of as a simplicial sheaf. Examples: simplicial varieties, stacks, algebraic spaces.
• Deligne's groundbreaking work on Hodge theory in the 70s (see Hodge II and Hodge III) uses in a crucial way that the singular cohomology of a complex variety can be defined on the larger category of simplicial varieties. Simplicial varieties are special cases of simplicial sheaves, and I believe it should be true that functors on simplicial varieties can be extended to simplicial sheaves.

• Simplicial varieties/schemes also pop up naturally in other settings. For example, Huber and Kings need K-theory of simplicial schemes for their work on the motivic polylogarithm.

• As already indicated, simplicial sheaves appear to be the most natural domain of definition for many different kinds of cohomology theories.

• Morel and Voevodsky's A1-homotopy theory (also known as motivic homotopy theory) is based on categories of simplicial sheaves for the Nisnevich topology.

• Brown showed that Quillen's algebraic K-theory can be thought of as "generalized sheaf cohomology", where the coefficient object is no longer a sheaf of abelian groups but a simplicial sheaf.

• The work of Thomason relating algebraic K-theory and etale cohomology uses the language of simplicial sheaves.

• Simplicial sheaves provide a natural language for "resolutions". For example, they give a unified picture of the two methods for computing sheaf cohomology: Cech cohomology and injective resolutions.

• Simplicial sheaves seem to be the most natural language for descent theory.

• Toen's work on higher stacks can be formulated in terms of simplicial sheaves.

• Homotopy categories of simplicial sheaves can be thought of as a generalization of the more classical derived categories of sheaves. The homotopical point of view seems to clarify some unpleasant aspects of the classical theory of triangulated categories.

See also the nLab entry on motivation for sheaves, cohomology, and higher stacks.

Questions

I hope to come back to many of these examples in detail. For now, I just want to list a few questions which I find intriguing.

• To define a category of simplicial sheaves, we must choose a Grothendieck topology. How does this choice affect the properties of the category we obtain? Morel and Voevodsky work with the Nisnevich topology, Huber and Kings work with the Zariski topology, and Toen (at least sometimes) works with some flat topology. For some purposes, it seems to be the case that we don't need a topology at all; instead we can just work with simplicial presheaves. What is the role of the Grothendieck topology?

• Most of the above examples are developed for varieties over a base field of characteristic zero. Based on the above, it seems reasonable to believe that simplicial sheaves are useful in this case, but what if the base scheme is a field of characteristic p, a local ring, a Dedekind domain, or something even more general? Is it the case that simplicial sheaves are the most natural language for understanding cohomology theories for arithmetic schemes, such as schemes which are flat and of finite type over $Spec(\mathbb{Z})$? Are simplicial sheaves important in number theory/Arakelov theory/geometry over the field with one element? What are the obstacles to "doing homotopy theory over an arithmetic base"?

Obviously I hope that there will be interesting answers to these questions, but I am still completely in the dark as to what these answers might be.
1. ### Ben Antieau said

May 20, 2009 at 11:58 pm

One of the key points about simplicial presheaves is that there is [Jardine, Simplicial Presheaves] a closed model category structure on simplicial presheaves that uses the data of the Grothendieck covers. The fibrant objects are the presheaves that satisfy descent with respect to all hypercovers in the given topology [Dugger-Hollander-Isaksen]. Thus, the [Brown-Gersten] result states that the algebraic K-theory simplicial sheaf satisfies Zariski descent. But it is also known that this is no longer true for the etale topology. So one doesn't think of the topology as affecting the category of simplicial presheaves, but rather the model category structure on that category.

2. ### Sam said

May 21, 2009 at 2:21 am

Let me point out that "homotopical" categories in full generality need not be homotopy cocomplete and complete, and might behave quite differently from Top. Examples arise in a silly way: every category ought to be a homotopical category with discrete mapping spaces, so one could choose a category without some small limits. They also arise in applications: given an arbitrary PROP in spaces, its homotopy algebras may not be homotopically complete or cocomplete. (In particular, the Hopkins-Lurie theorem identifies 0-1-2 field theories with a certain class of dualizable objects in a symmetric monoidal (infinity,1)-category; off the top of my head, I'm not sure that the resulting category has all homotopy limits and colimits.)

3. ### Novice said

May 21, 2009 at 3:55 pm

If you get any more insights on doing homotopy theory over an arithmetic base, please take the time to write a post on it. I also find the question of interest and will be grateful for any explanations.
http://mathoverflow.net/questions/104640/where-else-has-proposition-b1-3-17-in-the-elephant-been-proved
## Where else has Proposition B1.3.17 in the Elephant been proved?

(I asked the same question here and got some helpful comments, but thought I'd re-ask in case I get a more direct response.)

This is a sort of reference request. Proposition B1.3.17 in Johnstone's Elephant reads:

Proposition 1.3.17 Let $\mathcal{S}$ and $\mathcal{T}$ be categories with pullbacks, $F \colon \mathcal{S} \rightarrow \mathcal{T}$ a functor having a right adjoint $R$, and $\Pi \colon \mathcal{C} \rightarrow \mathcal{T}$ a fibration. Then $F^* \Pi \colon F^* \mathcal{C} \rightarrow \mathcal{S}$ satisfies any comprehension scheme satisfied by $\Pi$.

I find the proof Johnstone offers very confusing (partly because it's very elliptical). Does anyone know where this was originally proved? I haven't found the result in Benabou's writing (unless it's in the paper in French, referenced as [101] in Johnstone, which I cannot read), nor anywhere else. Is there another version of this proof in print? Or was Johnstone the first to prove this result in this generality?

Furthermore, if someone patient among you actually looks at the proof, could you possibly clarify this for me: what is the notation $\mathcal{T}^{\pi_0\mathcal{D}}$ supposed to describe ($\pi_0 \mathcal{D}$ is described as the "set of connected components of $\mathcal{D}$")? At first I suspected it was (collections of) connected diagrams in $\mathcal{T}$, i.e. I interpreted it as something like a component-wise $[\pi_0 \mathcal{D},\mathcal{T}]$, where the diagrams are sent to $\mathcal{T}$ via $\Pi$ (because of what Johnstone says right before the diagram, namely that "applying $\Pi$ to objects and morphisms of Rect($\mathcal{D},\mathcal{C}$) yields a functor Rect($\mathcal{D},\mathcal{C}) \rightarrow \mathcal{T}^{\pi_0 \mathcal{D}}$"). But then I cannot really make sense of what he says in the last sentence of the first full paragraph of pg. 278, for which interpreting it as the $\pi_0 \mathcal{D}$-fold product of $\mathcal{T}$ makes sense. Basically, what categories are we dealing with in the bottom square of the diagram on pg. 277?

- 2 I haven't been through this, but here's my guess as to what the notation means. (I assume that D is a category.) Every category D has a set of connected components: it's the set of equivalence classes of the equivalence relation ~ on ob(D) generated by x~y whenever there exists a map x --> y. So I guess T^{pi_0 D} is the product of (pi_0 D) copies of T, as you say in your penultimate sentence. – Tom Leinster Aug 13 at 21:26
- (Sorry, I guess I should have said "Every small category D", or something. Anyway, that doesn't seem to be the issue here.) – Tom Leinster Aug 13 at 21:51
- @Tom Leinster Yes $\mathcal{D}$ in this case is a finite category, so that's fine. – Chuck Aug 14 at 1:38
- What is reference [101], for those without the Elephant at hand? – David Roberts Aug 14 at 3:07
- @David Roberts "Fibrations petites et localement petites", J. Benabou, C. R. Acad. Sci. Paris 281 (1975). – Chuck Aug 14 at 14:29
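For what it's worth, here is the reading Tom Leinster's comment suggests, spelled out (my own paraphrase, so treat it as a guess rather than Johnstone's gloss):
$$\pi_0 \mathcal{D} \;=\; \mathrm{ob}(\mathcal{D})/\sim, \qquad x \sim y \iff x \text{ and } y \text{ are joined by a zigzag of morphisms},$$
$$\mathcal{T}^{\pi_0 \mathcal{D}} \;=\; \prod_{c \,\in\, \pi_0 \mathcal{D}} \mathcal{T},$$
i.e. the $\pi_0\mathcal{D}$-fold product of $\mathcal{T}$, with one factor for each connected component of $\mathcal{D}$.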
http://mathhelpforum.com/advanced-algebra/151975-kernel-range-linear-transformation.html
# Thread:

1. ## Kernel and range of a linear Transformation

Let $L: M_{22} \to M_{22}$ be defined by
$$L(A) = \begin{bmatrix} 1 & 2 \\ 1 & 1 \end{bmatrix} A - A \begin{bmatrix} 1 & 2 \\ 1 & 1 \end{bmatrix}.$$
Find a basis for $\ker L$ and a basis for $\operatorname{range} L$.

My work:

$$\ker L = \left\{ A \;\middle|\; \begin{bmatrix} 1 & 2 \\ 1 & 1 \end{bmatrix} A - A \begin{bmatrix} 1 & 2 \\ 1 & 1 \end{bmatrix} = \vec{0} \right\}$$

$$\begin{bmatrix} 1 & 2 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} a & b \\ c & d \end{bmatrix} - \begin{bmatrix} a & b \\ c & d \end{bmatrix}\begin{bmatrix} 1 & 2 \\ 1 & 1 \end{bmatrix} = \begin{bmatrix} a+c \\ b+d \end{bmatrix} - \begin{bmatrix} a+c \\ b+d \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} = \ldots ?$$

Sorry, I don't know how to find the kernel for an $M_{nn} \to M_{nn}$ map with an expression like this. Can someone please help me out with this? Thanks in advance.

2. You are fine all the way up to here:

$$\begin{bmatrix} 1 & 2 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} a & b \\ c & d \end{bmatrix} - \begin{bmatrix} a & b \\ c & d \end{bmatrix}\begin{bmatrix} 1 & 2 \\ 1 & 1 \end{bmatrix} = \vec{0}$$

The step after this does not make sense. Check your matrix multiplication!

3. I apologize. I got pretty sloppy there. I was doing that late at night and so badly wanted to crash. OK, here is what I got so far. Hopefully there won't be typos...

$$L(A) = \begin{bmatrix} 1 & 2 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} a & b \\ c & d \end{bmatrix} - \begin{bmatrix} a & b \\ c & d \end{bmatrix}\begin{bmatrix} 1 & 2 \\ 1 & 1 \end{bmatrix} = \begin{bmatrix} a+2c & b+2d \\ a+c & b+d \end{bmatrix} - \begin{bmatrix} a+b & 2a+b \\ c+d & 2c+d \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$$

$$L(A) = \begin{bmatrix} 2c-b & 2d-2a \\ a+d & b-2c \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$$

I turned this into a matrix and performed rref:

$$\begin{bmatrix} 0 & -1 & 2 & 0 & 0 \\ -2 & 0 & 0 & 2 & 0 \\ 1 & 0 & 0 & 1 & 0 \\ 0 & 1 & -2 & 0 & 0 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & -2 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}$$

a = 0, b = 2c, c arbitrary, d = 0.

However, the correct answer for the basis for $\ker L$ is:
$$\left\{ \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \begin{bmatrix} 0 & 1 \\ 1/2 & 0 \end{bmatrix} \right\}$$

I don't know how to answer this. I also have to find the basis for $\operatorname{range} L$. The correct answer for the basis for $\operatorname{range} L$ is:
$$\left\{ \begin{bmatrix} 0 & -2 \\ 1 & 0 \end{bmatrix}, \begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix} \right\}$$

But I don't know how to get that, since I don't know how to get $\ker L$ for this problem. I know that range L = span S. Not sure how to set this up... Can someone please help me figure out this problem. Thanks in advance.

4. Quote: Originally Posted by wilday86

$$L(A) = \begin{bmatrix} 2c-b & 2d-2a \\ a+d & b-2c \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$$

The lower left entry should be "a - d", not "a + d". Rather than go to 5 by 4 matrices, note that this says $2c-b=0$, $2d-2a=0$, $a-d=0$, and $b-2c=0$. The second and third equations both give $a=d$. From the first equation $b=2c$, so the fourth equation becomes $2c-2c=0$, which is true for all $c$. These will be true for all $d=a$ and $b=2c$:
$$\begin{bmatrix} a & 2c \\ c & a \end{bmatrix} = a\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + c\begin{bmatrix} 0 & 2 \\ 1 & 0 \end{bmatrix}.$$

Quote: Originally Posted by wilday86

I turned this into a matrix and performed rref ... However, the correct answer for the basis for $\ker L$ is $\left\{ \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \begin{bmatrix} 0 & 1 \\ 1/2 & 0 \end{bmatrix} \right\}$.

Of course $\begin{bmatrix} 0 & 2 \\ 1 & 0 \end{bmatrix} = 2\begin{bmatrix} 0 & 1 \\ \frac{1}{2} & 0 \end{bmatrix}$, so this is just a variation on my answer.

Quote: Originally Posted by wilday86

I also have to find the basis for range L ... I know that range L = span S. Not sure how to set this up...

range L = span S? You haven't said what S is. The range of L is all matrices $\begin{bmatrix} u & v \\ w & x \end{bmatrix}$ such that
$$\begin{bmatrix} u & v \\ w & x \end{bmatrix} = L(A) = \begin{bmatrix} 2c-b & 2d-2a \\ a-d & b-2c \end{bmatrix}$$
for some a, b, c, d. We need to reduce $u = 2c-b$, $v = 2d-2a$, $w = a-d$, and $x = b-2c$ to equations in u, v, w, and x only. Since $v = 2d-2a = -2(a-d)$, the second and third equations say $v = -2w$. From the first equation $b = 2c-u$, and then the fourth equation becomes $x = (2c-u)-2c = -u$. We can write any matrix in the range of L as
$$\begin{bmatrix} u & -2w \\ w & -u \end{bmatrix} = u\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} + w\begin{bmatrix} 0 & -2 \\ 1 & 0 \end{bmatrix},$$
which is equivalent to the basis you give.
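For readers who want to sanity-check the algebra, here is a small self-contained Python sketch (mine, not part of the thread). It encodes $L(A) = MA - AM$ as a $4 \times 4$ matrix acting on the row-major flattening of $A$, then asks sympy for the kernel and column space:

```python
import numpy as np
from sympy import Matrix

M = np.array([[1, 2],
              [1, 1]])
I2 = np.eye(2, dtype=int)

# Row-major vec identities: vec(M A) = (M kron I) vec(A),
#                           vec(A M) = (I kron M^T) vec(A).
L = np.kron(M, I2) - np.kron(I2, M.T)

K = Matrix(L.tolist())
print(K.nullspace())    # spans {vec(I), vec([[0, 2], [1, 0]])}  -> kernel basis
print(K.columnspace())  # spans {vec([[0, -2], [1, 0]]), vec([[-1, 0], [0, 1]])}

# Direct check of the textbook kernel basis:
for A in (np.eye(2), np.array([[0.0, 1.0], [0.5, 0.0]])):
    assert np.allclose(M @ A - A @ M, 0)
```

The printed bases agree, up to scaling, with the textbook answers quoted above.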
http://dsp.stackexchange.com/questions/7859/relationship-between-dct-and-pca
Relationship between DCT and PCA

I have a basic implementation knowledge of the 2D 8x8 DCT used in image & video compression. Whilst reading about Principal Component Analysis, I can see a lot of similarity, albeit PCA is clearly more generic. When I've read about DCT previously, it was always presented in relation to DFT. So my question is: how can the DCT be derived from a PCA perspective? (Even a hand-wavy explanation is sufficient.) Many thanks

1 Answer

The main difference between DCT and PCA (more precisely, representing a dataset in the basis formed by the eigenvectors of its correlation matrix - also known as the Karhunen-Loeve Transform) is that the PCA must be defined with respect to a given dataset (from which the correlation matrix is estimated), while the DCT is "absolute" and is only defined by the input size. This makes the PCA an "adaptive" transform, while the DCT is data-independent.

One might wonder why the PCA is not used more often in image or audio compression, because of its adaptivity. There are two reasons:

1. Imagine an encoder computing a PCA of a dataset and encoding the coefficients. To reconstruct the dataset, the decoder will need not only the coefficients themselves, but also the transform matrix (it depends on the data, which it does not have access to!). The DCT or any other data-independent transform might be less efficient in removing statistical dependencies in the input data, but the transform matrix is known in advance by both the coder and decoder, without the need for transmitting it. A "good enough" transform which requires little side information is sometimes better than an optimal transform which requires an extra load of side information...

2. Take a large collection of $N$ 8x8 tiles extracted from photos. Form an $N \times 64$ matrix with the luminosity of these tiles. Compute a PCA on this data, and plot the principal components that will be estimated. This is a very enlightening experiment! There is a very good chance that most of the higher-ranked eigenvectors will actually look like the kind of modulated sine-wave patterns of the DCT basis. This means that for a sufficiently large and generic set of image tiles, the DCT is a very good approximation of the eigenbasis. The same thing has also been verified for audio, where the eigenbasis for log-signal energy in mel-spaced frequency bands, estimated on a large volume of audio recordings, is close to the DCT basis (hence the use of DCT as a decorrelation transform when computing MFCC).

- 1 It is interesting; however, might not a different basis set be constructed based on the "usual" statistics of images to begin with, and those used instead of DCT? I imagine such a basis would not be as good as PCA, but better than DCT, no? – Mohammad Feb 15 at 14:41
- Btw @pichenettes many thanks for your insightful answer. I was aware of point 1, but hadn't really considered point 2. – trican Feb 15 at 15:17
- 1 @Mohammad: this is a good question, and I don't know the answer.
I see advantages in using the DCT: easier to write specs (it's easier to print "our transform is this closed-form function" than "our transform is this 64x64 matrix published in the annex"), no standardization committee meetings about which dataset to train the transform on, fewer lookup tables to embed in decoders' ROM, and probably "symmetries" in the transform matrix that make its hardware acceleration possible compared to a brutal 64x64 matrix multiplication - these advantages might outweigh marginal compression gains. – pichenettes Feb 15 at 15:57
- 1 @trican: the image you linked to represents the 2-D DCT basis for 8x8 tiles. Each of the 64 small tiles is a basis function. If you take a large collection of 8x8 tiles from actual images and perform a PCA on the data, the eigenbasis you will get will be quite similar to that. – pichenettes Feb 15 at 16:02
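The experiment in point 2 of the answer is easy to try in one dimension. The sketch below is my own illustration, not from the thread: it uses the AR(1) covariance $C_{ij} = \rho^{|i-j|}$ as a standard toy model of pixel correlation, computes the PCA/KLT eigenbasis for 8 samples, and measures its overlap with the orthonormal DCT-II basis. For $\rho$ close to 1 the overlaps come out close to 1.

```python
import numpy as np
from scipy.fft import dct
from scipy.linalg import toeplitz

n, rho = 8, 0.95
# Covariance of a stationary first-order Markov (AR(1)) signal: C[i, j] = rho^|i-j|.
C = toeplitz(rho ** np.arange(n))

# KLT / PCA basis: eigenvectors of the covariance, sorted by decreasing variance.
w, V = np.linalg.eigh(C)   # eigh returns eigenvalues in ascending order
V = V[:, ::-1]

# Orthonormal DCT-II matrix: column k of dct(I, axis=0) is the transform of e_k,
# so the result is the DCT matrix itself and its rows are the basis vectors.
D = dct(np.eye(n), type=2, norm="ortho", axis=0)

for k in range(n):
    sim = abs(D[k] @ V[:, k])   # |cosine similarity|; the sign is arbitrary
    print(f"component {k}: overlap with DCT basis vector = {sim:.3f}")
```

This is the 1-D analogue of the 8x8 tile experiment; the same comparison on real image tiles needs actual photo data, which is why it is only sketched here.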
http://blog.brilliant.org/2013/01/09/permutations-ii/
[This post is targeted at a Level 2 student. You should be familiar with Permutations before proceeding.]

We are already familiar with using basic permutations, and will work on generalizing this result.

Let's think again about Lisa's mantle. She still wants to put 5 ornaments on it, but she actually has 12 ornaments in total she can use. How many ways can she do this? Once again, we use the Rule of Product. Lisa has 12 choices for what to put in the first position, 11 for the second, 10 for the third, 9 for the fourth and 8 for the fifth. So the total number of choices she has is $12 \times 11 \times 10 \times 9 \times 8$. Using the factorial notation, we could write this more compactly as $\frac{12!}{7!}$.

Using the same argument, we can proceed with the general case. If we have $n$ objects and we want to arrange $k$ of them in a row, there are $\frac{n!}{(n-k)!}$ ways to do this. This is also known as a k-permutation of n, and the number of ways is sometimes denoted as $P_k^n$.

Let's consider a different extension of the permutation problem. What happens if Lisa has some ornaments that are the same? If she has 2 identical cat ornaments, 3 identical dog ornaments and 2 other completely different ornaments, how many ways can they all be arranged on her mantle? In total there are 7 objects, and if we pretend they are all distinct, there are $7!$ ways to arrange them on the mantle. For any arrangement, we can swap the pair of cats and get the same arrangement back. Also, we can move the dogs around and again get the same arrangement. How many ways can the dogs be moved around? Since the positions of the dogs are fixed, it is just the number of permutations of the dogs, which is $3!$. Thus, to account for these repeated arrangements, we divide out by the number of repetitions to obtain that the total number of permutations is $\frac{7!}{3!2!}$.

Worked Examples

1. Out of a class of 30 students, how many ways are there to choose a class president, a secretary and a treasurer? A student may hold at most 1 post.

Solution 1: There are 30 students to pick from for the class president, which leaves 29 students for the secretary and 28 students for the treasurer. Hence, by the rule of product, there are $30 \times 29 \times 28 = 24360$ ways.

Solution 2: By the above discussion, there are $P_3^{30} = \frac{30!}{(30-3)!}$ ways. While it is extremely hard to evaluate $30!$ and $27!$, we notice that dividing out gives $30 \times 29 \times 28 = 24360$.

2. How many ways can the letters in the name RAMONA be arranged?

Solution: As before, if we treat the A's as distinct from each other (say $A_1$ and $A_2$), then there are $6! = 720$ ways to rearrange the letters. However, since the letters are the same, we have to divide by $2!$ to obtain $\frac{720}{2!} = 360$ ways.

3. 6 friends go out for dinner. How many ways are there to sit them around a round table? Rotations of a sitting arrangement are considered the same, but a reflection will be considered different.

Solution 1: Since rotations are considered the same, we may fix the position of one of the friends, and then proceed to arrange the 5 remaining friends clockwise around him. Thus, there are $5! = 120$ ways to arrange the friends.

Solution 2: There are $6!$ ways to seat the 6 friends around the table. However, since rotations are considered the same, there are 6 arrangements which would be the same.
Hence, to account for these repeated arrangements, we divide out by the number of repetitions to obtain that the total number of arrangements is $\frac{6!}{6} = 120$.

Both solutions are equally valid and illustrate how thinking of the problem in a different manner can yield another way of calculating the answer.

Test Yourself

1. Samir and Mirdula each have 7 distinct books that they want to display on their shelves. Mirdula has lots of room on her shelf; however, Samir only has room to put 6 of his 7 books. Assuming that they fill up the entire shelf, who has more possible ways to arrange the books on their shelf? Why is this the case?

2. A waiter at a restaurant takes the drink order for a table of ten people but forgets to write down who ordered which drink. The orders are 5 cups of coffee, 3 glasses of water, 1 glass of milk, and 1 apple juice. How many different ways can the waiter deliver the drinks to the customers so that each customer gets one drink?

3. Consider all the ways to rearrange the letters of the word CALVIN, and arrange them in alphabetical order. In what position will CALVIN appear? How about NIVLAC?

4. (*) Lisa wants to arrange 7 ornaments on her mantle. She has 2 identical cats, one mouse and 4 other different ornaments. If the mouse cannot be next to either of the cats, how many ways can they be arranged? Hint: Principle of Inclusion and Exclusion.

(For a quick numerical check of some of these, see the short script after the comments.)

7 Comments

1. Saurabh Dubey

1 -> Samir has more options because he has to select 6 books out of seven (7!) and then arrange them, so a total of 6!*7!, whereas Mirdula only has to arrange 7 books (7!).
2 -> The number of ways would be 10!/(5!*3!*1!*1!), i.e. 5040 ways.

2. zubayr khalid

No. 1: both have equal options. Samir has 7P6 options and Mirdula has 7! options.
No. 3: the number of ways is 6!. CALVIN occurs at the 131st position and NIVLAC occurs at the 551st position.

3. Prajwal.B.Bharadwaj

1.. both have the same number of ways ... 7P6 = 7P7
2.. 5040
3.. CALVIN in 131 ... NIVLAC .. 550
4..

4. Anonymous

Mr. Calvin, please tell answers for Q 3 and 4. I want to confirm.

• I never discuss numerical answers, especially if you do not show your working. If you explain the steps that you took, I can see what was done well / wrong.

5. Nisha

When the cats are together they can be arranged 6 ways, then the mouse can go 4 ways and the others 4!, total 6*4*4! = 576; and when the cats are not together, the first can be arranged 7 ways, the second 5 ways and the mouse 3. The others would be 4!, total 7*5*3*24 = 2520. Then total = 2520 + 576 = 3096 ways.
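If you want to check answers by brute force, here is a short Python sketch (not part of the original post) that confirms Worked Example 2 and evaluates Test Yourself problems 2 and 3 numerically:

```python
from itertools import permutations
from math import factorial

# Worked Example 2: arrangements of RAMONA with two identical A's.
ramona = set(permutations("RAMONA"))
print(len(ramona), factorial(6) // factorial(2))   # both print 360

# Test Yourself 2: 5 coffees, 3 waters, 1 milk, 1 juice among 10 customers
# (multinomial coefficient 10! / (5! 3! 1! 1!)).
print(factorial(10) // (factorial(5) * factorial(3)))

# Test Yourself 3: 1-based alphabetical rank of CALVIN and NIVLAC
# among all 6! = 720 rearrangements (all letters distinct).
words = sorted("".join(p) for p in permutations("CALVIN"))
print(words.index("CALVIN") + 1, words.index("NIVLAC") + 1)
```

A set is used for RAMONA because swapping the two A's produces duplicate tuples, which is exactly the repetition the $2!$ divides out.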
http://mathhelpforum.com/advanced-statistics/133809-show-process-brownian-motion.html
# Thread:

1. ## Show that a process is a Brownian motion

Question: Show that the process
$$W(s) = \frac{\sqrt{2}}{\pi} \sum_{j=1}^{\infty} \frac{\sin\!\big((j-\tfrac{1}{2})\pi s\big)}{j - \tfrac{1}{2}}\, \zeta_j$$
is a Brownian motion on $[0,1]$.

No idea even where to start.
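A sketch of the standard opening move, assuming (as in the usual statement of this exercise) that the $\zeta_j$ are i.i.d. $N(0,1)$: the series converges in $L^2$, so $W$ is a centered Gaussian process, and it then suffices to check that its covariance is $\min(s,t)$. The functions $e_j(s) = \sqrt{2}\sin((j-\tfrac12)\pi s)$ are orthonormal eigenfunctions of the Brownian covariance kernel on $[0,1]$, with eigenvalues $\lambda_j = ((j-\tfrac12)\pi)^{-2}$, so by Mercer's theorem
$$\mathbb{E}[W(s)W(t)] = \frac{2}{\pi^2}\sum_{j=1}^{\infty}\frac{\sin((j-\tfrac12)\pi s)\,\sin((j-\tfrac12)\pi t)}{(j-\tfrac12)^2} = \sum_{j=1}^{\infty} \lambda_j\, e_j(s)\, e_j(t) = \min(s,t).$$
Continuity of the sample paths needs a separate argument, e.g. almost sure uniform convergence of the series.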
http://mathhelpforum.com/pre-calculus/125405-complex-number-print.html
Complex Number

Printable View

• January 25th 2010, 11:53 AM BabyMilo

Complex Number

Which topic do complex numbers come under? Anyway:

$|z-(3+3i)| = 2$

$\Rightarrow |(x-3)+(y-3)i| = 2$

$\Rightarrow (x-3)^2+(y-3)^2 = 2^2$

Then it asks: find the max and min of $|z|$. Thanks!

• January 25th 2010, 12:03 PM Jhevon

Quote: Originally Posted by BabyMilo

you should post the problem in its entirety. you could solve for $\sqrt{x^2 + y^2}$, you would get it as a function of x and y and you minimize and maximize that. but that requires calc 3. so too much work. another way is to let geometry help you. if you graphed what's happening in the complex plane, you would notice that z must lie on the circle of radius 2 centered at (3,3). draw a line from the origin passing through (3,3) cutting right across the circle. the length of the line segment from the origin to the first place the line cuts the circle is the min |z|; add the diameter of the circle to that, that is, add 4, and you get the max |z|. (by the way, | is a symbol found on your keyboard. hold down shift and press \)

• January 25th 2010, 12:06 PM BabyMilo

Quote: Originally Posted by Jhevon

i'm bad at geometry as well XD... seriously not joking. how do i find the min, or calculate it?

• January 25th 2010, 12:12 PM BabyMilo

Quote: Originally Posted by Jhevon

would it be $\sqrt{3^2+3^2}-2$?

• January 25th 2010, 12:14 PM Jhevon

Quote: Originally Posted by BabyMilo

it's not as hard as you may think. i have attached a very suggestive diagram. think about how you would do it.

min |z| = OA, max |z| = OB

• January 25th 2010, 12:16 PM Jhevon

Quote: Originally Posted by BabyMilo ("would it be $\sqrt{3^2+3^2}-2$?")

for min |z|, yes. of course, you can simplify this. see, not so bad, right?

• January 25th 2010, 12:27 PM BabyMilo

Quote: Originally Posted by Jhevon

what does it simplify to? $3\sqrt{2}-2$? but shouldn't it be in the form of a+bi, since y = imaginary and x = real? thanks for your time. so the final answer would be min(1.59, 1.59) (3sf)?

• January 25th 2010, 12:34 PM Jhevon

Quote: what does it simplify to? $3\sqrt{2}-2$?

yes.

Quote: but shouldn't it be in the form of a+bi, since y = imaginary and x = real?

no, |z| is a real number. it is the modulus of z, which is a magnitude.

Quote: so the final answer would be min(1.59, 1.59) (3sf)?

as mentioned above, |z| is a real number, not a coordinate. and do not use decimals, leave your answer exact as you did above.
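To make the geometric answer explicit (a worked summary, not part of the original thread): the circle has centre $3+3i$ and radius $2$, and the distance from the origin to the centre is $|3+3i| = 3\sqrt{2} > 2$, so the origin lies outside the circle. The line through the origin and the centre meets the circle at the nearest and farthest points, giving
$$\min |z| = 3\sqrt{2} - 2, \qquad \max |z| = 3\sqrt{2} + 2.$$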
http://mathhelpforum.com/advanced-algebra/2206-uncountable-sets-print.html
# Uncountable sets

Printable View

• March 14th 2006, 08:42 AM kennyb

Uncountable sets

I have a cardinality question. If anyone can give me some guidance on this, I would greatly appreciate it. Here it goes: If X={0,1} and w={0,1,2,...}, then show X^w and w^w have the same cardinality. Showing that both these sets are uncountable isn't too bad, but I need them to be the same uncountable size. I'm just not seeing how to set up a bijection between these two sets. I was also thinking that I might be able to show that both sets inject into the reals, then use the continuum hypothesis. Can anyone help?

• March 14th 2006, 09:50 AM CaptainBlack

Quote: Originally Posted by kennyb

This will be a bit informal. First, $X^w$ is the set of all infinite sequences of elements drawn from $\{0,1\}$ and $w^w$ is the set of all sequences of elements drawn from $\{0,1,2, \dots\} = \mathbb{N}$. Clearly $\mathcal{C}(X^w) \le \mathcal{C}(w^w)$.

Now let $x \in w^w$; then $x = x_1, x_2, \dots$. Now we may write each of the $x_i$s in unary, that is, $n$ is represented by $n$ $1$s, but we will write them in unary+ as $(n+1)$ $1$s. Now map $x \in w^w$ to $y \in X^w$ such that $y$ consists of the unary+ representations of the $x_i$s separated by single $0$s. This map takes $w^w$ one-one onto a subset of $X^w$ (in fact the subset where there are no two consecutive $0$s). Hence:

$\mathcal{C}(X^w) \ge \mathcal{C}(w^w)$

so with the earlier result:

$\mathcal{C}(X^w) = \mathcal{C}(w^w)$

RonL

• March 14th 2006, 10:06 AM ThePerfectHacker

Quote: Originally Posted by kennyb

I have an idea, but maybe it is useless here. Use the Axiom of Choice to conclude that there exists a surjective map between $X^w$ and $w^w$.

• March 14th 2006, 10:08 AM CaptainBlack

Quote: Originally Posted by ThePerfectHacker

If a result can be proven without AC it should be, and here we can.

RonL

• March 14th 2006, 10:10 AM kennyb
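To illustrate CaptainBlack's encoding with a concrete example (mine, not from the thread): the sequence $x = (2, 0, 3, \dots) \in w^w$ is mapped to
$$y = \underbrace{111}_{2+1}\,0\,\underbrace{1}_{0+1}\,0\,\underbrace{1111}_{3+1}\,0\,\cdots \in X^w,$$
and $x$ can be recovered from $y$ by counting the $1$s between successive $0$s and subtracting one, so the map is injective.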
http://quant.stackexchange.com/questions/7323/can-a-long-put-trade-be-profitable-through-vega-even-if-the-underlying-moves-upw?answertab=active
# Can a long put trade be profitable through Vega even if the underlying moves upwards?

Generally speaking, I know when implied vol increases, option prices increase for calls. However, does the same occur for puts? If I am expecting implied volatility to increase for an option on an underlying asset (let's say a stock) and I believe the price will decline as implied vol rises, would the best strategy be to buy a put, as opposed to buying a call (forget strangles/straddles for the moment)? Is it possible for a long put position to be profitable if the gain on vega, due to an increase in implied vol on either the upside or downside, is large enough to offset the short delta position?

- 1 not a very clear question... are you asking about the relationship of implieds to delta moves.. look up sticky delta and sticky strike.. – cdcaveman Feb 18 at 5:18
- 1 @jessica, your question is indicating that you are not affiliated with this industry and thus other boards should be serving you better. You can easily google what the impact of changes in implied vol is on option prices. Your question contains logical errors (a: there is no iVol for a stock, only iVol of an option which is linked to the expected future volatility of the underlying asset's price returns, b: price does not always decline when iVol increases, c: a long put position benefits from generally increasing iVol and short delta exposure when prices of the underlying decrease. – Freddy Feb 18 at 6:03
- "c3: a long put position makes money on both, increasing iVol (if it indeed increases) and your short delta exposure, there is no offset." hmm I see. Though isn't it more often than not that iVol and the delta position for a put are going in opposite directions? – jessica Feb 18 at 6:08
- 1 @jessica, iVol is not necessarily positively correlated with the direction of the underlying. That is why I said "generally" (and that does not even apply to certain assets, for example iVol of options on agricultural futures ticks "generally" up when the futures price increases). Re delta, you need to become clear what your delta exposure is: long put = short delta, short call = short delta,... you then know exactly what your delta impact will be from rising or falling underlying asset prices. – Freddy Feb 18 at 6:24

## 1 Answer

First, notice that the two greeks you mentioned in your question are simply the partial derivatives of the value of the option $V$ with respect to two different variables, $S$ (the price of the underlying) and $\sigma$ (the volatility of the underlying):

$$\Delta = \frac{\partial V}{\partial S} \quad \text{and} \quad \nu = \frac{\partial V}{\partial \sigma}$$

As mentioned in the comments, volatility is not per se a sign of declining prices, but, at least under the Black-Scholes model, $\nu$ is positive for both puts and calls, meaning that the value of both types of options is expected to rise if $\sigma$ goes up.

So, from a mathematical point of view, if only $\sigma$ goes up (i.e. $S$ stays the same), then yes, the trade would be profitable even by buying a put option. However, it is fair to say that this situation is not very realistic, and in the real world you would be exposed to wild moves in $S$, which would affect the price of the option through the $\Delta$. As a result, what you would like to do is perform delta hedging, which consists in offsetting your exposure to $S$ by buying or selling an amount of the underlying corresponding to $\Delta$. In the case of the put, you know that $\Delta < 0$ (i.e. the price of the put $V$ decreases as the price of the underlying $S$ increases), and you hence need to buy $-\Delta$ of $S$ (a positive quantity) to make your resulting portfolio have a $\Delta = 0$: it is delta-neutral.

Once you've done that, you've removed your exposure to changes in $S$, and you could consider that your put position is mainly determined by the remaining $\nu$. Some funds such as the Amundi Volatility Funds do exactly that.

This reasoning, though, only works well for small changes in $S$, as bigger changes would also be sensitive to other greeks such as $\Gamma = \frac{\partial^2 V}{\partial S^2}$. You can also hedge against this kind of move using a similar reasoning.

- Sigma is not the symbol for vega or implied volatility.. it's the symbol for standard deviation – cdcaveman Feb 20 at 8:54
- @cdcaveman I use $\nu$ for Vega and $\sigma$ for the volatility inputted in the pricing model. – SRKX♦ Feb 20 at 11:28
- @cdcaveman Um, you do realize that volatility and standard deviation are synonymous in Black Scholes, right? – chrisaycock♦ Feb 20 at 12:24
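To see the answer's first point concretely, here is a minimal Python sketch under standard Black-Scholes assumptions (the parameter values are made up for illustration): price the same put at two implied vols with the spot held fixed.

```python
from math import exp, log, sqrt
from scipy.stats import norm

def bs_put(S, K, T, r, sigma):
    """Black-Scholes price and vega of a European put (no dividends)."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    price = K * exp(-r * T) * norm.cdf(-d2) - S * norm.cdf(-d1)
    vega = S * norm.pdf(d1) * sqrt(T)   # identical for calls and puts
    return price, vega

# Bump implied vol with the spot unchanged: the put gains value.
p_low, vega = bs_put(S=100, K=100, T=0.5, r=0.01, sigma=0.20)
p_high, _ = bs_put(S=100, K=100, T=0.5, r=0.01, sigma=0.25)
print(p_high - p_low, vega * 0.05)  # actual gain vs. first-order vega estimate
```

Since $\nu = S\varphi(d_1)\sqrt{T} > 0$, the vol bump raises the put's value even with $S$ fixed; the two printed numbers agree to first order.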
http://mathoverflow.net/revisions/95520/list
## Return to Answer

Everything is known. In fact as spectra we have canonically $K(\mathsf{finAb}) = \vee_p K(\mathbb{F}_p)$, and the spectra $K(\mathbb{F}_p)$ are identified in the work of Quillen (see e.g. http://www.math.uiuc.edu/K-theory/1006/). In particular on $\pi_0$ we find $K_0(\mathsf{finAb}) = \oplus_p \mathbb{Z}$, agreeing with your claim, and on $\pi_n$ for $n>0$ we find that $K_n(\mathsf{finAb})$ is $0$ for $n$ even and is $\oplus_p \mathbb{Z}/(p^k-1)$ (non-canonically) for $n = 2k-1$.

To justify the claimed equality $K(\mathsf{finAb}) = \vee_p K(\mathbb{F}_p)$, note first that $\mathsf{finAb}$ is the filtered colimit over increasing finite sets of primes $P$ of the variant $\mathsf{finAb}_P$ where only products of $p$-groups for $p \in P$ are allowed; since K-theory commutes with filtered colimits, it then suffices to show that each $K(\mathsf{finAb}_P) = \prod_{p\in P} K(\mathbb{F}_p)$ and that for $P \subseteq P'$ this identification intertwines the inclusion $K(\mathsf{finAb}_P) \to K(\mathsf{finAb}_{P'})$ with the evident map $\prod_{p\in P} K(\mathbb{F}_p) \to \prod_{p\in P'} K(\mathbb{F}_p)$ which is zero outside of $P$. But $\mathsf{finAb}_P$ is just the product over $p \in P$ of the categories $\mathsf{finAb}_p$, whose K-theory identifies with that of vector spaces over $\mathbb{F}_p$ by Quillen's devissage theorem. And K-theory commutes with finite products, so that's that.

Here I guess I was actually arguing using Quillen's Q-construction instead of Waldhausen's $S_{\bullet}$-construction. Otherwise I'm not sure how to justify the last step, the devissage. Actually I'm sure all of the above is in Quillen's paper on the Q-construction.
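For reference, the computation of Quillen's being invoked is (for any finite field $\mathbb{F}_q$, stated here from memory):
$$K_0(\mathbb{F}_q) = \mathbb{Z}, \qquad K_{2i-1}(\mathbb{F}_q) = \mathbb{Z}/(q^i - 1), \qquad K_{2i}(\mathbb{F}_q) = 0 \quad (i \ge 1);$$
taking $q = p$ and summing over primes gives the homotopy groups of $K(\mathsf{finAb})$ quoted above.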
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.908488929271698, "perplexity_flag": "middle"}
http://mathematica.stackexchange.com/questions/tagged/combinatorics
# Tagged Questions Questions on the use of Mathematica in combinatorics, including the Combinatorica add-on package. 0answers 32 views ### How could I find the correct values for every array that would lead me to unique summation number for every unique chain? [migrated] I have 9 arrays, each array has 9 values, I need to get the proper values in every value's position for every array, and that would give my a completely unique summations for every value's chain from ... 0answers 33 views ### Generating partitions of a set with a specified size of the parts [duplicate] I tried the following (inspired by the answer here) myList = {a, b, c}; Needs["Combinatorica`"]; SetPartitions[myList] and I got this answer, ... 1answer 51 views ### How to enumerate multisets Given the eight-element set {1,2,3,4,5,6,7,8}, I would like to enumerate all multisets (subsets with repetition) of size n, where n >=3. For example, with n = 3, the sets {1,1,1}, {1,1,2}, ..., ... 0answers 45 views ### Generating partitions of a set [duplicate] Is it possible to get Mathematica to generate all possible partitions of a set of objects? (..or equivalently if it can be made to do the cumulant expansion or at least the Gaussian special case of ... 1answer 72 views ### How do I expand StirlingS2[n, 10] in terms of elementary functions? I know that it is possible to expand StirlingS2[n, 10] in terms of elementary functions of n. I tried ... 1answer 116 views ### Combinations which do not have elements in common I can choose 2 letters from the four letters $\{A,B,C,D\}$ in 6 combinations using the combination formula $$\frac{n!}{ r! (n-r)!}$$ ... 1answer 66 views ### Looking for a package regarding Schur Polynomials and Kostka numbers I'm currently looking for any Mathematica package that involves Schur polynomials and/or Kostka numbers. More generally, I'd be happy with anything that expands on symmetric polynomials in general. ... 2answers 177 views ### How to determine all cases consistent with constraints Suppose that $(i, j), (k, l)$ and $(m, n)$ are pairs of non-negative integers, satisfying the following constraints: $$i < j,\;\; k < l, \;\; m < n$$ and (i, j) < (k, l) < (m, ... 1answer 83 views ### How can I prevent these warnings while using ParallelTable I have this code to find all the permutations of a set of letters that form legal words. ... 3answers 157 views ### Counting the number of a specific type of permutation In the theory of cumulants of vector-valued random variables, the following types of formulas appear: \begin{equation} \theta^i \theta^{jk} [3] = \theta^i \theta^{jk} + \theta^j \theta^{ik} + \theta^k ... 2answers 140 views ### Solving variant of the knapsack/money-changing problem I'm trying to solve a variant of the knapsack/changing money problem where I have a set of a few numbers and I'm trying to find the linear (integer) combinations of them which are close to a given ... 2answers 134 views ### Find all permutations with a condition How can I find all the permutations of {a, b, c} where a + b + c = n? For instance: if ... 2answers 118 views ### What function returns all possible permutations with repeating list elements? [closed] Let's say I want to find a sample space of such experiment: There are three exits from the box: Left (L), right (R) and front (F). Three mice have been put in that box. Find the sample space of an ... 
2answers 128 views ### Partition a set into $k$ non-empty subsets The Stirling number of the second kind is the number of ways to partition a set of $n$ objects into $k$ non-empty subsets. In Mathematica, this is implemented as ... 1answer 88 views ### How to evaluate the sum over a hyperplane I have difficulties in evaluating the following expression: $$\sum_{\small n_1+...+n_{k}=m-k}\; \prod_{i=1}^{k}\frac{1}{(n_i+1)(n_i+2)}$$ I have tried the function ... 1answer 88 views ### Finding integer partitions one at a time I am writing a small program and I need to calculate integer partitions of a handful of numbers. In my code I just run IntegerPartitions[k,{n}] and then iterate through the results. Since I only use ... 3answers 132 views ### Need Help Writing (a Pascal) Matrix in Mathematica I want to write a function $f[n]$ in Mathematica which gives me an $n\times n$ lower triangular Pascal matrix with a row of zeros in between each nonzero row. That is, I want the matrices ... 2answers 296 views ### Exact cover solution Is it possible to get a exact cover solution(s) and/or number of possible solutions in Mathematica? 6answers 847 views ### Insert $+$, $-$, $\times$, $/$, $($, $)$ into $123456789$ to make it equal to $100$ Looks like a question for pupils, right? In fact if the available math symbol is limited to $+$, $-$, $\times$, $/$ then it's easy to solve: ... 0answers 194 views ### Generating a function which outputs possible chemical reactions I want to make a list of chemical reactions and I write them down in a $\require{mhchem}\LaTeX$ format. They are of the following form NA_n^i+MB_m^j \rightarrow \hat NA_{\hat n}^{\hat i}+\hat ... 3answers 220 views ### How to find all graph isomorphisms in FindGraphIsomorphism I found the second definition of the function FindGraphIsomorphism not working. Here's the definition Mathematica 8 gives: ... 5answers 502 views ### Get number of combinations without “forbidden patterns” I need to get the number of all combinations of binary digits in an 8-digit binary number, but minus some "forbidden" patterns like: xxxx0xx1 x1xxx0xx x1xxx0x0 ... 2answers 736 views ### Plotting an Unreasonable Function Without getting into too much detail, the following (very complicated) function recently appeared as a solution to a combinatorics problem I've been thinking about: P(n) = \frac{52!}{52^{52}} \cdot ... 1answer 274 views ### (Efficiently) Generating graphs with vertex degree 3 for all vertices I'm working on a graph theory problem, and given $n$ vertices, I would like to be able to generate all non-isomorphic connected graphs (not necessarily simple) with $n$ vertices, each having degree 3. ... 0answers 160 views ### Find all permutations with reversals / cyclic permutations removed I have a list of all non-cyclic permutations of n labels. How can I get rid of all elements which are redundant in the sense that they are the inverse of another one. For instance if n=4, the ... 4answers 236 views ### permutation as product of transpositions How can the permutation that takes{-f, -i, i, -e} into {-e, -i, i, -f} be realised as a sequence of nearest neighbour ... 1answer 186 views ... 2answers 173 views ### Combinatorica: Girth[] and FindCycle[] disagreement Warning: run the following code in a fresh Mma session, as some symbols could be shadowed (depending on your Mma version) While trying to answer this question, I fell into the following: ... 4answers 846 views ### What is the fastest way to count square-free words? 
Background A word is a string of letters in an alphabet. A square-free word has no adjacent repeating substring. For example, (in the ternary alphabet of {0,1,2}) the words 00, 012121, and 0212012021 ... 3answers 497 views ### Find cycles of graphs with both directed and undirected edges I need to enumerate all the simple cycles of a graph which has both directed and undirected edges, where we can treat the undirected edges as doubly directed. (Specifically, I am looking at the Cayley ... 2answers 333 views ### Word Squares and Beyond A word square is a set of words which, when placed in a grid, read the same horizontally and vertically. For example, the following is an English word square of order 5: ... 3answers 267 views ### Generating Linear Extensions of a Partial Order Given a set $S$ and a partial order $\prec$ over $S$, I'm looking for a way to "efficiently" generate a list of linear extensions of $\prec$. Suppose the partial order is given by a ... 2answers 414 views ### Finding all partitions of a set I'm looking for straightforward way to find all the partitions of a set. IntegerPartitions seems to provide a useful start. But then things get a bit complicated. ... 0answers 67 views ### BipartiteMatching hangs with floating-point weights in Mathematica 7 (package Combinatorica) I am trying to use Combinatorica's BipartiteMatching algorithm on a weighted graph. I need to use real numbers as weights. If I try the following ... 1answer 374 views ### How to create tournament bracket I'd like to create a tournament bracket using Mathematica. I've looked around online, but haven't found any examples yet. Can someone show me how to do this? Specifically, I'd like to have a ... 4answers 2k views ### Looking for “Longest Common Substring” solution I'm looking for robust code to solve the "Longest Common Substring" problem. I can just code it up from that description, but I'd thought I'd ask here, first, in case someone knows of an ... 3answers 336 views ### Determining all possible traversals of a tree I have a list: B={423, {{53, {39, 65, 423}}, {66, {67, 81, 423}}, {424, {25, 40, 423}}}}; This list can be visualized as a tree using ... 3answers 408 views ### How to load a package without naming conflicts? This question applies to any package, but I encountered this problem while working with graphs. There are symbols in the Combinatorica package (such as ... 1answer 367 views ### Is it possible to generate a Hasse Diagram for a defined relation? I'm looking for a way to create a Hasse Diagram from a given partial order binary relation. The relation will be given explicitly, for example: ... 1answer 280 views ### When to use built-in Graph/GraphPlot vs. Combinatorica What are the pros and cons of using built-in Graph/GraphPlot (and related) types vs. types in the Combinatorica package? 1answer 168 views ### Combinatorica Graph from Edge List Can someone provide an example of a Combinatorica-based graph which uses ShowGraph and Graph and takes in an explicitly defined list of edges (not some auto-generated graph). I have not been able to ... 1answer 193 views ### Mathematica function/package for shuffle permutations Does anyone knows a way to compute a list of all (p,q)-shuffles in mathematica? For a definition of the shuffle permutations see for example http://ncatlab.org/nlab/show/shuffle I'm dreaming of a ... 
1answer 201 views ### Finding all length-n words on an alphabet that have a specified number of each letter For example, I might want to generate all length n=6 words on the alphabet {A, B, C} that have one ... 2answers 148 views ### NumberOfSpanningTrees command not working correctly I am attempting to use the Combinatorica NumberOfSpanningTrees command for all n-cycle graphs from 3 to 30. I am trying to get a ... 4answers 430 views ### How do I generate the upper triangular indices from a list? I have some list {1,2,3}. How do I generate nested pairs such that I get {{1,2},{1,3},{2,3}}? That is I'd like a way to ... 5answers 721 views ### Partition a set into subsets of size $k$ Given a set $\{a_1,a_2,\dots,a_{lk}\}$ and a positive integer $l$, how can I find all the partitions which includes subsets of size $l$ in Mathematica? For instance, given ... 4answers 442 views ### Efficiently Visualising Very Large Data Sets (without running out of memory) I have put a few really hard problems in combinatorics up against Mathematica 8. I'd have to say that it works really well, until you want to view the data. If you look at my question Advanced ... 3answers 583 views ### Advanced Tupling I had asked this question before but I guess I did not make the question clear enough and I apologize for that. Here is the problem: Tuples gives me more data than ... 1answer 477 views ### How to apply a permutation to a symmetric square matrix? Given a symmetric square matrix, how can I apply a permutation to the rows and columns (i.e. the same permutation to both the rows and the columns) such a way that the new structure of the matrix ... 7answers 809 views ### How to Derive Tuples Without Replacement Given a couple of lists like a={1,2,3,4,6} and b={2,3,4,6,9} I can use the built-in Mathematica symbol ...
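Many of the enumeration tasks listed above reduce to one-liners with built-in Mathematica functions; for instance (illustrations of mine, not drawn from the linked answers):

```
Subsets[{1, 2, 3}, {2}]            (* upper-triangular index pairs: {{1,2},{1,3},{2,3}} *)
IntegerPartitions[10, {3}]         (* partitions of 10 into exactly 3 parts *)
Length@Select[Tuples[{"A", "B", "C"}, 6],
  Count[#, "A"] == 1 &]            (* length-6 words with exactly one A: 192 *)
```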
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 34, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8947526812553406, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/41628/the-relation-su1-iuj-1-q-1uj-1su1-i
## The relation $S(u^1_i)u^j_1 = q^{-1}u^j_1S(u^1_i)$

In the Hopf algebra $SL_q(N)$, it can be shown, using direct calculations, that $S(u^1_i)u^j_1 = q^{-1}u^j_1S(u^1_i)$. Can anyone see a more elegant way of establishing this? Moreover, does anyone know of a similar relation in the more general case of $S(u^1_r)u^i_j$?

Edit (references): By $SL_q(N)$ I mean the quantized coordinate algebra (not the quantized enveloping algebra). I am using the conventions of Klimyk and Schmudgen, Chpt 4 for the $N=2$ case, or Chpt 9 for the general case.

- Could you add a reference for this model of quantum groups? It is not exactly the same as the usual $U_q(sl(N))$. And your question depends on precise conventions. – Greg Kuperberg Oct 15 2010 at 18:15 Sorry for the confusion. It's not the quantum enveloping algebra, it's the quantised coordinate algebra. I've added the references. – John McCarthy Oct 15 2010 at 19:53

## 2 Answers

This is a reasonably known result. That $S(u^1_i)u^j_1 = q^{-1}u^j_1S(u^1_i)$ was originally proven (to the best of my knowledge) in FRT's '89 paper "Quantum Groups and Lie Algebras" - the paper is in Russian though. The only English write-up of the proof that I know of is in Theorem 1 of Vainermann and Podkolzin's '99 paper on Quantum Stiefel Manifolds. It gives a general commutation relation for $[S(u^i_j),u^r_s]$, for the general $N$ case, using just the $R$-matrix construction of $SU_q(N)$. I am sure there are other versions around somewhere though. -

For the first question, I would use the dual pairing with $U_q(\mathfrak{sl}_N)$. The $u_i^j$'s are defined to be matrix coefficients of the vector representation of $U_q(\mathfrak{sl}_N)$ with respect to some distinguished basis, usually a basis of weight vectors. There are unfortunately a lot of different conventions in use. My standard reference is Klimyk and Schmudgen. See, for example, Theorem 19 of Chapter 9 of their book. It states: There is a unique dual pairing $( , )$ of Hopf algebras between $U_q^{ext}(\mathfrak{sl}_N)$ and $\mathcal{O}(SL_q(N))$ such that $(f, u^k_l) = t_{kl}(f)$ for all $f \in U_q^{ext}(\mathfrak{sl}_{N})$. Here $(t_{kl}(f))$ is the matrix for $f$ in the vector representation. OK, this theorem is a little bogus in the sense that it is more of a definition. But the point is that $\mathcal{O}(SL_q(N))$ is generated by the matrix coefficients of all finite-dimensional irreducible representations of $U_q^{ext}(\mathfrak{sl}_{N})$, and these separate points of $U_q^{ext}(\mathfrak{sl}_{N})$, so the pairing is nondegenerate. So, to show that your two guys are equal, just show that they pair the same way with $U_q^{ext}(\mathfrak{sl}_{N})$. Since it is a pairing of Hopf algebras, you just need to check on the generators $E_i, F_i, K_\lambda$. This just requires you to have a handle on the vector representation. In my opinion this is much cleaner than doing the calculations directly. - Great, this is just what I was looking for. Just one thing though, I don't see why the fact that it's a Hopf algebra pairing implies that I only need to check it on the generators. How do I get the value of $\langle x,E^2\rangle = \langle x_{(1)},E\rangle \langle x_{(2)}, E \rangle$ from the pairings of $x$ with the generators? – John McCarthy Oct 12 2010 at 21:48 ...
take the example, for $N=2$, of $\langle b^2,K\rangle=\langle b^2,E\rangle=\langle b^2,F\rangle=0$, while of course $b^2 \neq 0$. – John McCarthy Oct 13 2010 at 0:28 Yeah, you're right. Hmmm... not quite sure how to resolve that. I guess the generators aren't enough. So perhaps this doesn't work after all. My bad. – MTS Oct 13 2010 at 20:34
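For reference, the compatibility conditions for a dual pairing of Hopf algebras, which is what this exchange turns on (standard facts, stated in Sweedler notation; this note is mine):

$$\langle x,\,uv\rangle=\sum \langle x_{(1)},\,u\rangle\,\langle x_{(2)},\,v\rangle,\qquad \langle xy,\,u\rangle=\sum \langle x,\,u_{(1)}\rangle\,\langle y,\,u_{(2)}\rangle.$$

So $\langle x,E^2\rangle=\sum\langle x_{(1)},E\rangle\langle x_{(2)},E\rangle$ requires knowing the coproduct of $x$, not merely the pairings of $x$ itself with the generators; as the $b^2$ example shows, the values on generators alone do not determine the pairing.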
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 31, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9365591406822205, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-algebra/188181-determinant-block-matrix.html
# Thread:

1. ## Determinant of Block Matrix

Let $M=\begin{pmatrix} A & B \\ O & C\end{pmatrix}$, where $A$ and $C$ are square matrices. Show that $\det(M)=\det(A)\det(C)$.

I have the following idea but I don't have a concrete proof. We can perform elementary row operations on the matrix $M$. Eventually $A$ and $C$ will be upper triangular matrices. Hence $\det(A)$ and $\det(C)$ will just be the product of their diagonal entries. On the other hand, $M$ will also become an upper triangular matrix after the elementary row operations. Therefore, $\det(M)$ = product of its diagonal entries = (product of diagonal entries of $A$) $\times$ (product of diagonal entries of $C$) = $\det(A)\det(C)$.

How do I write a concrete proof using this idea, or is there any other method of doing it (e.g. cofactor expansion, etc.)?

2. ## Re: Determinant of Block Matrix

Originally Posted by H12504106: Let $M=\begin{pmatrix} A & B \\ O & C\end{pmatrix}$, where $A$ and $C$ are square matrices. Show that $\det(M)=\det(A)\det(C)$. [...]

You can perform elementary row operations on the matrix $M$, but then you may obtain a matrix $M_1$ with $\det(M_1) \neq \det(M)$: row operations change the determinant by known factors, and these would have to be tracked.

Originally Posted by H12504106: How do I write a concrete proof using this idea, or is there any other method of doing it (e.g. cofactor expansion, etc.)?

One method is by induction on $n$ - using cofactor expansion - where $n$ is the size of $A$, i.e. $A$ is an $n\times n$ matrix.
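To flesh out the suggested induction (a sketch of my own, not from the thread): expand $\det M$ along the first column. If $A$ is $n\times n$, the entries of that column below row $n$ are zero, and deleting row $i\leq n$ and column $1$ leaves a matrix of the same block shape, so

$$\det M=\sum_{i=1}^{n}(-1)^{i+1}a_{i1}\det\begin{pmatrix}A_{i1} & B_{i}\\ O & C\end{pmatrix}=\sum_{i=1}^{n}(-1)^{i+1}a_{i1}\det(A_{i1})\det(C)=\det(A)\det(C),$$

where $A_{i1}$ is $A$ with row $i$ and column $1$ deleted and $B_i$ is $B$ with row $i$ deleted; the middle equality is the induction hypothesis, and the base case $n=1$ is a direct first-column expansion giving $a_{11}\det(C)$.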
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8869234323501587, "perplexity_flag": "head"}
http://stats.stackexchange.com/questions/tagged/regression-coefficients+standard-error
# Tagged Questions 2answers 301 views ### How do I calculate standard errors for sums of OLS coefficients? I'm estimating a simple OLS regression model of the type: $y = \beta X + u$ After estimating the model, I need to generate a weighted combination of coefficients (e.g. $w_1 \beta_1 + w_2 \beta_2$) ... 1answer 242 views ### What is the impact of low predictor variance on logistic regression coefficient estimates? Let's say I am using a logistic model to predict whether it rains (yes or no) based on the high temperature and have collected data for the past 100 days. Let's say that it rains 30/100 days. ... 1answer 4k views ### Standard errors for multiple regression coefficients? I realize that this is a very basic question, but I can't find an answer anywhere. I'm computing regression coefficients using either the normal equations or QR decomposition. How can I compute ... 2answers 2k views ### Extract standard errors of coefficient linear regression R [duplicate] Possible Duplicate: How do I reference a regression model's coefficient's standard errors? If I have a dataset: ... 0answers 95 views ### Can I combine Standard errors of coefficients with an unbalanced data set? I have just read the answers to the question ...... Adding coefficients to obtain interaction effects - what to do with SEs? .......and found this really helpful! I am looking to do something similar ...
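On the first question listed above: since $\text{Var}(w^\top\hat\beta)=w^\top\Sigma\,w$, where $\Sigma$ is the estimated covariance matrix of the coefficients, the standard error of $w_1\beta_1+w_2\beta_2$ is one matrix product away. A minimal sketch in Mathematica (the synthetic data and names are mine):

```
(* SE of a weighted sum of OLS coefficients: Var[w.b] = w.Cov[b].w *)
data = Flatten[Table[{x1, x2, 2 x1 - x2 + RandomReal[{-1, 1}]},
    {x1, 10}, {x2, 10}], 1];
lm = LinearModelFit[data, {u, v}, {u, v}];
w = {0, 1, 1};                        (* weights on {intercept, u, v} *)
se = Sqrt[w . lm["CovarianceMatrix"] . w]
```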
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8702192902565002, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/115891?sort=oldest
## Hadamard's product formula for the derivative

Let $f$ be an entire function of order $\rho<\infty$. Assume that $f$ does not vanish identically on $\mathbb{C}$. Then we know that $f$ has a Hadamard product formula $$f(s)=e^{g(s)}s^{r}\prod_{k=1}^{\infty}\frac{s_{k}-s}{s_{k}}\, e^{s/s_k}$$ where the integer $r$ is the order of vanishing of $f$ at $s=0$, the $s_{k}$ are the other zeros of $f$ listed with multiplicity, $g$ is a polynomial of degree at most $\rho$, and the product converges uniformly on bounded subsets of $\mathbb{C}$. My question is how I can deduce directly a Hadamard product formula for the derivative $f'$ from the one for the function $f$.

- 1 I don't think there is a direct way. Why should knowing the zeros of $f$ help in finding the zeros of $f'$? – Pietro Majer Dec 9 at 19:03

## 2 Answers

(1) It seems your formula for the Hadamard product is only correct for $\rho<2$; more generally the exponent of $e^{s/s_k}$ contains a power series in $s$ of order $q={\rm Int}\;\rho$; see for example Eq. 1 in these lecture notes. (2) To find a similar expression for $f'$, just take the logarithmic derivative: $$f'(s)/f(s)=g'(s)+r/s+\sum_{k=1}^{\infty}\frac{(s/s_k)^q}{s-s_k}.$$ -

The statement of the question must be corrected. First, as Carlo pointed out, the Hadamard representation as in the question is not valid for all functions of finite order. The correct Hadamard representation is $$f(z)=z^me^{P(z)}\prod_{n=1}^\infty \left( 1-\frac{z}{z_n}\right) \exp\left(\frac{z}{z_n}+\ldots+\frac{1}{q}\left(\frac{z}{z_n}\right)^q\right).$$ Here $q$ is the genus of the function. You have to specify whether you are talking about functions of finite order (and thus finite genus) or functions of genus $1$. Second, what does it mean "to deduce" a representation for the derivative? The derivative of a function of finite order is of finite order, so there is a similar representation for the derivative. To "find" it means to find the zeros of the derivative in terms of the zeros of the function, and to find the number $q$ and the polynomial $P$. Can you "deduce" the zeros of the derivative of a polynomial in terms of the zeros of this polynomial? Of course, by taking the log and differentiating the Hadamard formula, you obtain a formula for $f'$ which Carlo wrote, but this is not the Hadamard representation of $f'$. By the way, in the 1920s, the question of whether the genus of $f$ is the same as that of $f'$ was intensively discussed. If I remember correctly, it can be different. -
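One can check the summand in Carlo's formula directly; this verification is my own addition. Writing the $k$-th factor as $\left(1-\frac{s}{s_k}\right)\exp\left(\frac{s}{s_k}+\cdots+\frac{1}{q}\left(\frac{s}{s_k}\right)^q\right)$ and taking the logarithmic derivative gives

$$\frac{1}{s-s_k}+\sum_{j=1}^{q}\frac{s^{j-1}}{s_k^{\,j}} \;=\; \frac{1}{s-s_k}+\frac{1-(s/s_k)^q}{s_k-s} \;=\; \frac{(s/s_k)^q}{s-s_k},$$

using the finite geometric sum $\sum_{j=1}^{q} s^{j-1}/s_k^{\,j} = \big(1-(s/s_k)^q\big)/(s_k-s)$; summing over $k$ and adding the contributions of $e^{g(s)}$ and $s^r$ yields the displayed expression for $f'/f$.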
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 30, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9447537064552307, "perplexity_flag": "head"}
http://mathoverflow.net/questions/113780/proof-without-words-for-surface-area-of-a-sphere
## Proof without words for surface area of a sphere [closed]

I love the book Proofs Without Words by Roger B. Nelsen. One of the proofs I liked the most was this: Area under one arch of a cycloid is 3 times the area of the wheel that traces it. You break the cycloid into three parts and show that each part has an area equal to that of the wheel. I have always wondered if there is a similar proof for why the surface area of a sphere is equal to the area of a circle with radius twice that of the sphere. Is there a nice way to see this? I would also like to know if one can show without doing any calculus that the length of the arch of the cycloid is 8 times the radius of the wheel. (Note: $\pi$ does not show up here, so this sounds tricky!)

- This question is off topic for this site, but why not look for some of these proofs yourself? – Anthony Quas Nov 18 at 23:59 2 @Anthony, I think this is on topic. The standard proofs do not convey the level of direct geometric insight which the OP desires. A large number of mathematicians (myself included) treasure these kinds of proofs, and I think it is reasonable to ask for one here. There was a very successful question asking for proofs without words, but neither of these appear as answers. – Steven Gubkin Nov 19 at 0:24 I remember this from a while ago, not sure if the methods will satisfy you: johncarlosbaez.wordpress.com/2010/10/11/… – Steven Gubkin Nov 19 at 0:37 I do think this question is off topic, even if other questions that are not closed might be as well. – Benoît Kloeckner Nov 19 at 20:22

## 4 Answers

It seems that you are more or less asking for a proof-without-words of Archimedes's theorem equating the area on a sphere between two horizontal slices with the area of the circumscribed cylinder between the same two slices. But this is a notoriously non-obvious theorem, so I'll be very surprised if such a proof exists. Here's why I say that proving the $4\pi r^2$ formula is more or less the same as proving Archimedes's theorem: The theorem is clearly approximately true for a very thin slice symmetric around the equator. As you move from the equator to a pole, taking slices of equal width, it would be at least mildly surprising if the corresponding spherical surface areas did not change monotonically. If they decrease, then the total area of the sphere is less than that of the cylinder; if they increase, then the total area of the sphere is more than that of the cylinder. So (at least given the expectation of monotonicity), having them all equal (i.e. the full strength of Archimedes's theorem) is equivalent to the sphere having the same surface area as the cylinder, which in turn is clearly $4\pi r^2$.

- That's a whole lot of words for a proof-without-words. – Ryan Budney Nov 19 at 1:36 5 @Ryan - he is arguing that he doesn't think there is such a proof. – Steven Gubkin Nov 19 at 5:16

I will describe in words a proof which might as well be illustrated without words (I would provide it in such visual form instead had I only the skills to make diagrams any nicer than MS Paint scrawls): Imagine a sphere as the Earth, oriented in the usual way with the North pole on top.
[This makes no difference except to my linguistic convenience, of course] Consider an infinitesimal "square" patch of the Earth's surface, whose sides are oriented along latitudinal and longitudinal lines, and the distortion this square undergoes when projected horizontally outward to the cylinder circumscribing the Earth (with the polar axis as its axis). The ratio of the square's horizontal distance from the polar axis to the radius of the Earth is equal to the ratio of the square's vertical span to the length of its longitudinally oriented sides. (These are both the cosine of the angle between the square's position and the equator (equivalently, the angle between the square's orientation and vertical)). Accordingly, the factor by which the square's latitudinally oriented sides are stretched in our cylindrical projection is equal to the factor by which its longitudinally oriented sides are squashed. Thus, our cylindrical projection is area-preserving, from which we have that the area of the entire sphere is the same as the side area of its circumscribing cylinder. This, it seems to me, is a perfectly "directly geometric" account of the desired fact.

- I like your proof. The proof for area under a cycloid also required some simple geometrical calculations which I am willing to still consider as a "proof without words". It is still intriguing though that it is not easy to find a natural proof mapping the area of a circle with twice the radius of the sphere to the surface area of the sphere. – Sudeep Kamath Nov 20 at 1:48

Not quite no words, but perhaps $$\frac{dy}{ds} = \frac{r}{R} \ \mbox{so} \ 2 \pi r \cdot ds =2 \pi R \cdot dy$$ combined with Joseph O'Rourke's picture would count? -

(From MathWorld: Archimedes Hat-Box Theorem)

- 2 How does this picture prove that the light blue areas are equal? – YangMills Nov 19 at 16:26 @YangMills: It doesn't. It is just a nice image illustrating the theorem's claim. – Joseph O'Rourke Nov 19 at 17:05
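In coordinates, the equal-stretch argument above (and the $dy/ds$ formula below it) is a one-line computation; this elaboration is mine. At latitude $\varphi$ a coordinate patch on a sphere of radius $R$ has sides $R\,d\varphi$ and $R\cos\varphi\,d\theta$, while its horizontal projection onto the circumscribing cylinder has height $dy = R\cos\varphi\,d\varphi$ (since $y = R\sin\varphi$) and width $R\,d\theta$, so

$$dA_{\mathrm{cylinder}} = (R\cos\varphi\,d\varphi)(R\,d\theta) = dA_{\mathrm{sphere}}.$$

Hence the sphere's area equals the lateral area of the cylinder, $2\pi R\cdot 2R = 4\pi R^2$, which is also the area of a circle of radius $2R$, as asked in the question.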
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9552265405654907, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/tagged/ac.commutative-algebra
## Tagged Questions

1answer 70 views ### Embedded associated prime $\underline{\textbf{Embedded associated prime}}$ I am reading the book "Joins and Intersections". In the proof of Rees theorem I have some doubt. Let $\mathbf M$ be a finitely ge … 0answers 86 views ### Lattices as invertible modules. I have asked this question in Math Stack Exchange but got no answer. Maybe it fits Mathoverflow better. All rings below are assumed to be Noetherian. Let $E$ be an etale algeb … 2answers 144 views ### Germs at infinity of sequence of integers Consider the $\mathbb Z$-module $\mathcal Z$ obtained as the set of sequences of integers $\mathbb Z ^ \mathbb N$ modulo the relation that two sequences are deemed equivalent when … 1answer 94 views ### General and translational Birkhoff lattices. Equational classes. By lattice I'll mean Birkhoff lattice. The two classical equational classes of lattices are modular lattices and distributive lattices. The old problem used to b … 1answer 211 views ### Examples of polynomial rings $A[x]$ with relatively large Krull dimension If $A$ is a commutative ring we have the estimate $$\dim (A)+1 \le \dim (A[x])\le 2\dim (A)+1$$ for the Krull dimension, with $\dim (A)+1 = \dim (A[x])$ for Noetherian rings. I … 0answers 102 views ### Identity on topological space but not on scheme I have this question just out of curiosity. If $X$ is a scheme, then a morphism $f: X \rightarrow X$ can be the identity on the underlying topological space of $X$, but not the identi … 1answer 138 views ### An example of a tensor product consisting of only simple tensors? Hi guys. I'm doing some independent analysis which makes use of the tensor product of modules (over commutative rings with unit 1, and ring homomorphisms map $1 \mapsto 1$). Let … 0answers 67 views ### Flatness over Jacobson ring This is an elementary question which did not get answered on math.stackexchange. I would like to know the answer for expository purposes. I need either a reference or a counter-e … 0answers 92 views ### ideals of a noetherian ring $R$ Cohen-Macaulay as $R$-modules When are (prime) ideals of a noetherian ring $R$ Cohen-Macaulay as $R$-modules? That is, $\operatorname{depth}_R(Ann_R(P))=\dim_R(R/Ann_R(P))$ for each $P\in {\rm Spec}(R)$ 0answers 126 views ### Is a complete intersection satisfying the Jacobian matrix smoothness criterion a smooth variety? Every scheme here is over the complex numbers. Let $X \subset (\mathbb{C}^*)^n$ be a complete intersection with $X$ defined by the ideal $I \subset \mathbb{C}[x_{1}^{\pm},\dots,x_{n}$ … 1answer 80 views ### An example of a ring $R$ with the property that for each $P=Ann_R(r)\in {\rm Min}(R)$ we have $Ann_R(P)=Rr$. I'm looking for an example of a commutative (preferably local) ring $R$ such that ${\rm dim}R>0$ and $R$ has the property that for each $P=Ann_R(r)\in {\rm Min}(R)$ we have $Ann_R$ … 0answers 32 views ### A question from the paper "A Numerical characterization of reduction ideals" I am currently reading the paper "A Numerical characterization of reduction ideals" by Hubert Flenner and Mirella Manaresi. In this paper they have quoted two results from "Joins … 0answers 90 views ### ideal generated by highest weight vectors Let $S$ be a polynomial ring which carries the action of a semi-simple linear algebraic group $G$ (I'm interested in a product of $GL$'s). Take $S$ and $G$ to be over an algebraica … 1answer 441 views ### Solve for $A$ and $B$ in $AXB=Y$ Let $R = \mathbb{Z}[x_{1}, \dots, x_{r}]$. Let $X$ be an $n \times n$ matrix with entries in $R$.
Let $Y$ be an $m \times m$ matrix with entries in $R$ formed from $\mathbb{Z}$-linear or … 1answer 350 views ### First order decidability of rings vs Diophantine decidability Are there known (preferably "concrete") examples of a ring $R$ (commutative, with 1) such that: $\bullet$ the first order theory of $R$ is undecidable, but $\bullet$ the posit …
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 46, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8912491202354431, "perplexity_flag": "middle"}
http://mathematica.stackexchange.com/questions/1487/how-to-unload-automatically-loaded-packages
# How to unload automatically loaded packages?

I know that this has been discussed here (How do I clear all user defined symbols?), but my case is somewhat different. How does one unload packages during runtime that were loaded with the start of Mathematica? Occasionally, for deployment reasons, I would like to revert my system to the default state, to check whether a function works in that environment as well. I have a package that starts when Mathematica starts, and it contains some modified system symbols too (e.g. extended usage messages for options). Should I include CleanSlate in the same init.m file before loading my own package and then later refer to it when I want to return to the default Mathematica state? Would it revert modified System context symbols too? At the moment I manually have to edit the init.m to remove package loading and then the kernel must be restarted. This is quite tedious. -

## 2 Answers

If you want to revert the entire system to some state, then CleanSlate may be the best option. If you want to unload a few specific packages though, you can use my package PackageManipulations, available here. It has a function PackageRemove, which does exactly that. It has an accompanying notebook with explanations. Some additional notes on it are in this and this answer. If you want to just clear all the package and subpackage symbols but not Remove them, you can use the functions PackageClear and PackageClearComplete from the same package (a disclaimer: the package may contain bugs, although I used it successfully many times). Note that removing all symbols in the package (and subsequently removing its context) - which is what PackageRemove does and what you seem to ask for - may be unsafe if other packages use some of those symbols, since those symbols in the definitions of symbols in dependent packages would turn to Removed[symbs] and won't be usable any more, in a subtle way. To check that there are no such dependencies, you can use another package I wrote, PackageSymbolsDependencies`, available from the same place (it also has a notebook with explanations and examples). -

The cleanest possible kernel you can get is starting a kernel only (no front end) with the -noinit option.

math -noinit

The front end will evaluate a lot of code when it connects to the kernel. -noinit prevents the kernel from loading initialization files (init.m). I don't believe you can get to a purer state than this unless you start surgically removing components that are practically built in (Remove, Clear, etc. applied to pre-defined symbols). - +1 I have a kernel configuration called "SafeMode" that starts with -noinit. I use it whenever testing code for posting or debugging. – Mr.Wizard♦ Feb 8 '12 at 22:28
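As a concrete illustration of the CleanSlate route (a sketch from memory of that package's documented interface, so treat the exact calls as assumptions; in older versions the package loads as Utilities`CleanSlate`):

```
<< Utilities`CleanSlate`       (* load early, e.g. near the top of init.m *)
(* ... load your own packages and work as usual ... *)
CleanSlate[]                   (* purge contexts added after CleanSlate` was read in *)
CleanSlate["MyPackage`"]       (* or target a single context; "MyPackage`" is hypothetical *)
```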
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 2, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9452815055847168, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/59939?sort=oldest
## Identifying poisoned wines

The standard version of this puzzle is as follows: you have $1000$ bottles of wine, one of which is poisoned. You also have a supply of rats (say). You want to determine which bottle is the poisoned one by feeding the wines to the rats. The poisoned wine takes exactly one hour to work and is undetectable before then. How many rats are necessary to find the poisoned bottle in one hour? It is not hard to see that the answer is $10$. Way back when math.SE first started, a generalization was considered where more than one bottle of wine is poisoned. The strategy that works for the standard version fails for this version, and I could only find a solution for the case of $2$ poisoned bottles that requires $65$ rats. Asymptotically my solution requires $O(\log^2 N)$ rats to detect $2$ poisoned bottles out of $N$ bottles. Can anyone do better asymptotically and/or prove that their answer is optimal and/or find a solution that works for more poisoned bottles? The number of poisoned bottles, I guess, should be kept constant while the total number of bottles is allowed to become large for asymptotic estimates.

- 17 From the viewpoint of a rat, any bottle of wine sounds like alcohol poisoning. – KConrad Mar 29 2011 at 4:26 3 I don't know how to do the problem, but I can't help feeling it's equivalent to finding a 2-error-correcting binary code of minimal dimension given ... something-or-other. I'm pretty sure the solution to the 1-bottle problem can be expressed in terms of the Hamming codes, and if you do that then maybe what I'm saying about 2-error-correcting codes will become clear. – Gerry Myerson Mar 29 2011 at 5:12 2 @Thomas: sure. After an hour, the only information you have if you used $m$ rats is the information about which rats are dead or alive, so you can only get $m$ bits of information this way. So $\lceil \log N \rceil$ is optimal for one bottle. – Qiaochu Yuan Mar 29 2011 at 5:47 4 @Roland You have only one hour to perform the test so you cannot wait to see which rats survive to reuse them. Otherwise, it would be easy: one would simply administrate the different wines successively to a single rat until it dies. – Anthony Leverrier Mar 29 2011 at 9:58 3 Isn't this what postdocs are for? – Mariano Suárez-Alvarez Jun 8 2011 at 4:23

## 6 Answers

Each bottle of wine corresponds to the set of rats who tasted it. Let $\mathcal{F}$ be the family of the resulting sets. If bottles corresponding to sets $A$ and $B$ are poisoned then $A \cup B$ is the set of dead rats. Therefore we can identify the poisoned bottles as long as for all $A,B,C,D \in \mathcal{F}$ such that $A \cup B = C \cup D$ we have $\{ A, B \} = \{ C, D \}$. Families with this property are called (strongly) union-free and the maximum possible size $f(n)$ of a union-free family $\mathcal{F} \subset 2^{ [n] }$ has been studied in extremal combinatorics. In the question context, $f(n)$ is the maximum number of bottles of wine which can be tested by $n$ rats. In the paper "Union-free Hypergraphs and Probability Theory" Frankl and Furedi show that $$2^{(n-3)/4} \leq f(n) \leq 2^{(n+1)/2}.$$ The proof of the lower bound is algebraic, constructive, and, I think, very elegant. In particular, one can find $2$ poisoned bottles out of $1000$ with $43$ rats.

- 2 +1: very nice, now the rats have no chance ;-) – S.
Sra Apr 1 2011 at 21:49 4 Perhaps oddly, I find myself caring more about the abstract formulation than about the puzzle formulation. Will have to look at the FF paper when I find time! – Yemon Choi Apr 1 2011 at 23:03 Many thanks! This question has been bothering me for quite some time. – Qiaochu Yuan Apr 2 2011 at 1:56

For the case of one poisoned bottle I would expect the answer to be $\log_2(N)$ because we could then have each rat drink from the bottles whose positions' binary digits include the rat's position, and the rats that die will be those whose positions give the binary representation of the bad bottle's position. If that is correct then for two bad bottles I would be inclined to think of doing the same thing with the list of all bottle pairs, but of course there is not just one poisonous pair. In order to identify the unique doubly poisonous pair we need to replace the rats by something that only dies if it gets a double dose. It seems that pairs of rats would suffice for this, but then the total number needed would be $2\log_2\binom{N}{2}$, which gives only 38 for $N=1000$, so if your answer is optimal I must have missed something. Where did I go wrong? - Try it for something smaller, like $N=5$, where you can write everything out, and see if your proposed protocol (which you haven't actually given) works. – Gerry Myerson Mar 29 2011 at 6:15 It is not known (by me) that Qiaochu's answer is optimal. Also, an obvious use of pairs of rats only pins down those digits in the encoding that the two bottles share. You need a parallel encoding and additional rats to determine which two bottles are affected, within an hour. I suspect, using three randomly chosen encodings and 60 rats, one has a high probability of success of identifying the two bottles. Now if I only had a proof... . Gerhard "Off To See The Wizard" Paseman, 2011.03.28 – Gerhard Paseman Mar 29 2011 at 6:35 Yes, of course you are right. A pair of rats could die from two single doses, each from a bottle-pair with only one bad bottle. – Alan Cooper Mar 29 2011 at 8:10

You yourself give the solution in the link: the probabilistic method. Without trying to optimize, take $r$ rats and for each one choose the subset of wines randomly, each wine with probability $1/2$ (say). Call these subsets $A_m$. Now let $\{i,j\}$ and $\{k,l\}$ be two possibilities for which bottles are poisoned. We want to know whether there's a rat separating these two possibilities, that is, such that the outcome for that rat if the $i$-th and $j$-th bottles are the poisoned ones will be different from the outcome if the $k$-th and $l$-th bottles are poisoned. For any specific rat, this happens if $A_m$ intersects $\{i,j\}$ but not $\{k,l\}$ or vice versa. This happens with some fixed positive probability (again, not optimizing). Therefore, it fails with probability $q$ which is strictly less than 1. Hence, the probability for it to fail for all $\{A_m\}_{m=1}^r$ is $q^r$. There are less than $n^4$ pairs of possibilities for poisoned wine bottles, hence the probability of having some pair for which this fails is at most $n^4 q^r$, and taking $r=C \log n$ suffices to make this negligible. - I guess the problem is not to find a solution that works on average but one working in the worst case. – Anthony Leverrier Mar 29 2011 at 7:09
With high probability, if you have $r=1000 \log n$ rats and choose the subsets of wine bottles randomly each with probability $1/2$, you get a solution which works in the worst case (i.e. no matter which pair of bottles is poisoned). – Ori Gurel-Gurevich Mar 29 2011 at 7:16 I believe that since there are exponentially many choices for the probabilistic experiment, but only polynomially many ways the poison could be picked, there is some single string of random bits that works with $O(\log n)$ rats for all possible arrangements of the bottles. This requires using a concentration bound, such as Chernoff bounds. – Derrick Stolee Mar 29 2011 at 13:25 If I'm not mistaken, the optimal choice here is probability 1/3, leading to $r\sim\frac{3\log n}{\log(27/19)}$. For $n=1000$, this gives $r=59$. – Emil Jeřábek Mar 29 2011 at 13:27 4 @Zander: It's possible my argument is flawed, but I don't think so. The random variables $\{A_m\}_{m=1}^r$ are, by definition, independent, so the events are also independent. The events aren't independent for different $i,j,k,l$, but for that part I use the union bound. (A general remark: I suggest using a less determined tone in your comments, such as "I think there is a flaw in the argument" or even "I don't understand this part, can you explain it"; see the beginning of this comment.) – Ori Gurel-Gurevich Mar 30 2011 at 5:19

There's a very similar problem in compressed sensing genetic screening for rare alleles (cf. http://nar.oxfordjournals.org/content/38/19/e179.full ). The technique almost works here provided we can determine how much poison a rat gets. Seems reasonable: a rat that gets more poison dies faster. In our problem the idea would be to create a sample for each rat to drink by randomly pooling together wine from many bottles. Specifically, for rat $i$ we draw $A_{ij}$ liters of wine from each of the $j=1,...,N$ bottles, where $A_{ij}$ is $\mathcal{N}(0,1)$ distributed. Let $b_i$ denote the amount of poison measured in rat $i$ and let $x_j$ denote the amount of poison in bottle $j$. This yields the highly underdetermined linear system $$A\vec{x} = \vec{b}$$ where we know a priori that $\vec{x}$ is sparse. The sparsest solution to this linear system may be obtained in polynomial time by solving the convex optimization $$\min |\vec{x}|_1 \text{ s.t. } A\vec{x}=\vec{b}$$ The number of rats required here is $\mathcal{O}(s \log(N))$ where $s$ is the number of poisoned bottles. -

This problem also goes by the name "nonadaptive combinatorial group testing" and has been around since at least World War II, when the U.S. government was trying to isolate syphilis cases in soldiers. ("Nonadaptive" means you have to specify all the tests in advance, whereas "adaptive" means you can use the results from previous tests before deciding which ones to do next.) The standard reference on group testing appears to be Combinatorial Group Testing and Its Applications, by Du and Hwang. Part II, which comprises Chapters 7-9, is on nonadaptive testing. In particular, finding optimal testing structures when there are two or more "defectives" is still an open problem. However, if $t(d,n)$ is the number of tests required to isolate $d$ defectives out of $n$ total subjects, the bounds $\Omega(\frac{d^2}{\log d} \log n) \leq t(d,n) \leq O(d^2 \log n)$ are known. The Wikipedia article on disjunct matrices has a discussion and some proofs. It might be interesting to compare the solution for the adaptive version of this problem, as we can give a definite answer in this case.
Let $n(t)$ denote the maximum number of bottles of wine for which 2 poisoned ones can be identified in $t$ adaptive tests. In "Group testing with two and three defectives" (Annals of the New York Academy of Sciences 576, pp. 86-96, 1989) Chang, Hwang, and Weng give explicit testing procedures that yield the lower bounds $$n(t) \geq 89 \cdot 2^{\frac{t}{2}-6}, t \text{ even, } t \geq 12;$$ $$n(t) \geq 63 \cdot 2^{\frac{t-1}{2}-5}, t \text{ odd, } t \geq 13.$$ In the Du and Hwang text it is shown that, for $t \geq 4$, we have the upper bound $$n(t) \leq 2^{\frac{t+1}{2}} - 1/2.$$ (Note that this is the upper bound on $f(n)$ given in Sergey Norin's answer.) These bounds tell us that $n(18) \leq 723$ but that $n(19) \geq 1008$. Thus 2 poisoned bottles can be identified out of 1000 in 19 adaptive tests but no fewer, using the testing procedure described in the Chang, Hwang, and Weng paper. - I'm kinda confused on how this works. I get log(2) of 1000 ~ 10, but is that only for binary events? Could someone walk me through how out of 1000 bottles, the EXACT poisoned bottle (1/1000) can be found by drinking only 10? Are you breaking up the bottles into groups, and then pouring all of the wine together, effectively poisoning the group? You can only test 1 bottle at a time, so with 10 tests, you will only test 10 bottles right, .1% of the entire group? Thanks! - 2 Hi Tony, 1) answers are not for asking questions, and 2) the original puzzle is not MO-level. Please ask questions about it at math.stackexchange.com instead. – Qiaochu Yuan Dec 27 at 5:18
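To spell out the single-bottle scheme the last post asks about, and the nonadaptive random design from the probabilistic answer above, here is a small sketch in Mathematica (the function names and the small parameters $n=40$, $r=16$ are mine):

```
(* One poisoned bottle out of 2^10 = 1024: rat k drinks from every bottle
   whose k-th binary digit is 1; the dead rats spell out the poisoned index. *)
deadRats[poisoned_Integer] := IntegerDigits[poisoned, 2, 10];
decode[dead_List] := FromDigits[dead, 2];
decode[deadRats[709]]   (* -> 709 *)

(* Two poisoned bottles: draw a random design and check it is (strongly)
   union-free, i.e. the dead-rat pattern determines the pair of bottles. *)
n = 40; r = 16;
design = Table[RandomChoice[{0, 1}, r], {n}];   (* bottle -> rats fed from it *)
outcome[{i_, j_}] := MapThread[Max, {design[[i]], design[[j]]}];
pairs = Subsets[Range[n], {2}];
worksQ = Length[DeleteDuplicates[outcome /@ pairs]] == Length[pairs]
```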
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 67, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9430123567581177, "perplexity_flag": "middle"}
http://stats.stackexchange.com/questions/9913/elasticity-of-log-log-regression/9925
# Elasticity of log-log regression In the log-log regression case, $$\log(Y) = B_0 + B_1 \log(X) + U \>,$$ can you show $B_1$ is the elasticity of $Y$ with respect to $X$, i.e., that $E_{yx} = \frac{dY}{dX}(\frac{Y}{X})$? - Migrating to Stats.stackexchange where this is more suitable. – Willie Wong Apr 23 '11 at 21:31 2 – whuber♦ Apr 23 '11 at 21:54 1 BTW, I believe you need to fix the formula for $E_{y,x}$: it should be a quotient, not a product. – whuber♦ Apr 23 '11 at 22:04 ## 1 Answer whuber has made the point in the comment. If $\log_e(Y) = B_0 + B_1\log_e(X) + U$ and $U$ is independent of $X$ then taking the partial derivative with respect to $X$ gives $\frac{\partial Y}{\partial X}\cdot\frac{1}{Y} = B_1\frac{1}{X}$, i.e. $B_1 = \frac{\partial Y}{\partial X}\cdot\frac{X}{Y}$. $E_{y,x} = \lim_{X \rightarrow x} \frac { \Delta Y} { y} / \frac { \Delta X} { x}$, which is the same thing. Take absolute values if you want to avoid negative elasticities. -
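A quick symbolic check of the answer (my own snippet, in Mathematica): exponentiating the model with the error term suppressed gives $Y = e^{B_0}X^{B_1}$, and the point elasticity simplifies to $B_1$:

```
y[x_] := Exp[b0] x^b1;          (* the log-log model with U omitted *)
Simplify[y'[x] x/y[x]]          (* elasticity (dY/dX)(X/Y) -> b1 *)
```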
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8991766571998596, "perplexity_flag": "middle"}
http://stats.stackexchange.com/questions/27420/does-the-garch-approach-model-prices-or-returns
# Does the GARCH approach model prices or returns?

I have a simple question about the GARCH model. We know that the $\alpha$ and $\beta$ parameters of the model are fitted to the local volatility at each time $t$ as follows: $$\sigma_t^2= \alpha_0 + \sum_{i=1}^q \alpha_i \varepsilon_{t-i}^2 + \sum_{i=1}^p \beta_i \sigma_{t-i}^2$$ with $\varepsilon_t=\sigma_t z_t$ and $z_t \sim N(0,1)$.

However, the Wikipedia article says that the process $y$ behaves as follows: $$y_t=a_0 + \sum_{i=1}^q a_i y_{t-i} + \varepsilon_t$$

I just wanted to make sure that here we assume that $y_t$ is the "original" time series values, and hence that $\varepsilon_t$ is the "return" of the time series at time $t$. Is that correct? In a financial application, would $y_t$ be the price or the return at time $t$?

EDIT: As the answers indicate that $y_t$ models the returns, I'm a bit surprised, because usually you use the maximum log-likelihood: $$\log L = -\frac{1}{2} \sum_{i=1}^n \left(\log (2 \pi) + \log (\sigma_{i}^2) + \frac{y_i^2}{\sigma_i^2} \right)$$ But this is only true if $y_i \sim N(0,\sigma_i^2)$. Now clearly with the setup presented above $\text{Var}(y_i) = \sigma_i^2$, but if $a_i \neq 0$ for some $i$, then $E[y_t] = a_0 + \sum_{i=1}^q a_i y_{t-i} \neq 0$. Is it because the log-likelihood is computed assuming $a_i=0$ for all $i$?

## 2 Answers

This is pretty common notation:

- $y_t$ is the return at $t$.
- $\varepsilon_t$ is the residual from modeling the returns as an $AR(q)$ process as shown in the equations.

Taken together, you have an $AR(q)$-$GARCH(p,q)$ model there (with slight abuse of notation, as we have $q$ twice).

- Ok, but then, when you perform a fitting of the GARCH model, you should have 3 parameters fitted: $a, \alpha, \beta$, but I never see the $a$ in the output after a fitting. Why is that? – SRKX May 1 '12 at 6:53
- For a financial time series, you would say that $y_t$ is the return, not a price? – SRKX May 1 '12 at 7:01
- Was my question misled by the log-likelihood computation included in the edit then? – SRKX May 1 '12 at 19:49
- Can someone answer @SRKX's question about why some implementations (e.g., `fGarch` in `R`) don't give us the fit for the $a$ parameter? (However, EViews does for some reason.) – Jase Dec 9 '12 at 3:08

You want to model returns. The blog post http://www.portfolioprobe.com/2011/01/12/the-number-1-novice-quant-mistake/ explains why you want returns rather than prices in the context of regression. The same sort of logic applies to GARCH -- except maybe even more so.

- Ok, so $y_t$ in the GARCH model is supposed to be the price series, right? – SRKX May 1 '12 at 12:13
- No, the returns. – Patrick Burns May 1 '12 at 17:09
- Could you please have a look at my edit? – SRKX May 1 '12 at 19:48
- Your log-likelihood formula is assuming the mean of the returns is zero. The more general formula would replace $y_i$ with $y_i - \mu$. – Patrick Burns May 2 '12 at 9:02
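To make the answers concrete, here is a minimal simulation sketch in Python of a pure GARCH(1,1), i.e. with all $a_i = 0$ so that $y_t = \varepsilon_t$ is the return series; prices are then obtained by compounding. The parameter values are illustrative assumptions:

````
import numpy as np

rng = np.random.default_rng(0)
alpha0, alpha1, beta1 = 2e-6, 0.10, 0.85   # illustrative; alpha1 + beta1 < 1
n = 1000

y = np.empty(n)                            # y_t: the RETURNS
sigma2 = np.empty(n)
sigma2[0] = alpha0 / (1 - alpha1 - beta1)  # unconditional variance
y[0] = np.sqrt(sigma2[0]) * rng.standard_normal()

for t in range(1, n):
    # sigma_t^2 = alpha0 + alpha1*eps_{t-1}^2 + beta1*sigma_{t-1}^2,
    # and with a_i = 0 we have eps_t = y_t
    sigma2[t] = alpha0 + alpha1 * y[t-1]**2 + beta1 * sigma2[t-1]
    y[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

prices = 100 * np.exp(np.cumsum(y))        # prices follow by compounding returns
````

In this pure case $E[y_t]=0$, so the log-likelihood quoted in the question applies as written; with a nonzero AR part one would use the residual in place of $y_t$, which matches Patrick Burns's remark in the last comment.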
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 28, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9457628130912781, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/46030/regularity-of-asymptotic-cones
## Regularity of asymptotic cones

Are there any general conditions guaranteeing that the asymptotic cone of a group/graph is "regular" in some sense? E.g. for $\mathbb{Z}^d$ we get $\mathbb{R}^d$ as the asymptotic cone, which is even a manifold, but for general groups we only get a metric space without additional structure. Does knowing that the asymptotic cone is regular (e.g. a manifold) imply any properties of the original group?

## 2 Answers

Drutu has shown that if every asymptotic cone of the finitely generated group $G$ is a proper space, then $G$ has polynomial growth; and hence by Gromov's Theorem, it follows that $G$ is nilpotent-by-finite. It seems to be open whether or not the conclusion holds if just one asymptotic cone of $G$ is proper.

- I have a preprint about that problem: arxiv.org/abs/1010.1199 – Alessandro Sisto Nov 14 2010 at 23:38
- @Alessandro: your preprint looks very interesting! – Simon Thomas Nov 15 2010 at 1:00

I would just like to add to the answer by Simon Thomas that:

- If a group is virtually nilpotent, its asymptotic cones are very regular: they have a Lie group structure and their metric is of Carnot-Carathéodory type (these metrics are described in the Wikipedia article "Sub-Riemannian manifold"). Also, the asymptotic cones do not depend on the scaling factor.
- If a group is not virtually nilpotent, its asymptotic cones tend to be VERY large objects. For example, the asymptotic cones of each non-virtually cyclic hyperbolic group are real trees with valency `$2^{\aleph_0}$` at each point (those groups have exponential growth; I have to admit that I know very little about asymptotic cones of groups of intermediate growth).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9278703927993774, "perplexity_flag": "head"}
http://mathoverflow.net/questions/32515/how-to-write-matlabs-dot-operators-in-mathematical-expressions
## How to write Matlab’s dot operators in mathematical expressions?

Matlab has a set of dot operators, such as .*, ./, .^. Each of these operators consists of a dot and a normal algebraic operator. They perform element-wise algebraic operations on a matrix. For example, consider the following code:

````
A = [1 2 3; 3 2 1];
x = [1 2 4];
B = A.^2
y = 1./x
````

The result is

````
B =

     1     4     9
     9     4     1

y =

    1.0000    0.5000    0.2500
````

I find these dot operators very convenient. My question is, how do I write these dot operators in mathematical expressions? (By mathematical expressions, I mean the expressions used in proofs.)

EDIT - One obvious way is to define the result matrix element-wise. But is there a way to write this result in a more compact manner?

- The Matlab notation was heavily influenced by APL. APL was intended originally as a compact notation for doing mathematics. As a result, for the kinds of operations you appear to be talking about, the usual mathematical notation is typically going to be longer than the Matlab code. (en.wikipedia.org/wiki/…) But bear in mind, you're free to write $y=1/x$ to mean $y_k=1/x_k$ in a proof, as long as you state your intentions clearly and unambiguously. – Dan Piponi Jul 19 2010 at 18:12
- Thanks for the comment. I followed the link to APL; it is interesting! – daizhuo Jul 20 2010 at 2:06

## 2 Answers

Your matrix $B$ is the Hadamard product of $A$ and $A$, which uses the notation $B = A \circ A$. However, I don't know of any others, particularly for expressing $y$. See: http://en.wikipedia.org/wiki/Matrix_multiplication#Hadamard_product

I might write those as $B_{ij} = (A_{ij})^2$ and $y_k = \frac{1}{x_k}$ (where $i$, $j$ and $k$ range over the rows and columns of $A$ and the elements of $x$ respectively). Of course, I'm sure other ways exist too.

- Oh thanks for your response! But are there more compact ways to write these facts? – daizhuo Jul 19 2010 at 17:57
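As an aside (not part of the original thread): in Python/NumPy the arithmetic operators are element-wise by default, so Matlab's dot operators and the Hadamard product $A \circ A$ both map onto the plain operators:

````
import numpy as np

A = np.array([[1, 2, 3],
              [3, 2, 1]])
x = np.array([1.0, 2.0, 4.0])

B = A**2       # Matlab A.^2 : B_ij = (A_ij)^2, i.e. the Hadamard product A∘A
y = 1.0 / x    # Matlab 1./x : y_k = 1/x_k

print(B)       # [[1 4 9] [9 4 1]]
print(y)       # [1.   0.5  0.25]
````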
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.91361004114151, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-algebra/170172-factorization-x-z_4-x.html
# Thread: Factorization of x in Z_4[x]

1. ## Factorization of x in Z_4[x]

I was hoping someone could give me a hand with this little problem that's been driving me a bit nuts:

Show that there is a factorization $x = f(x)g(x)$ in $\mathbb{Z}_4[x]$ (the polynomial ring over the integers mod 4) such that neither $f(x)$ nor $g(x)$ is a constant.

Is there some kind of method or good way to find these factors, or is it just a guess-and-check situation? I've been searching for these factors for a while with no luck (haven't strayed beyond quadratics yet).

2. Originally Posted by mtdim:

> Is there some kind of method or good way to find these factors, or is it just a guess-and-check situation? I've been searching for these factors for a while with no luck (haven't strayed beyond quadratics yet).

Taking into account that $2\cdot 2=0$ and $3\cdot 3=1$ in $\mathbb{Z}_4$, we obtain $(2x+3)^2 = 4x^2+12x+9 = 0x^2+0x+1 = 1.$ So, choose $f(x)=x(2x+3)$ and $g(x)=2x+3$.

Fernando Revilla
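The identity is easy to check mechanically. A small Python sketch (names mine) that multiplies polynomials with coefficients reduced mod 4, coefficient lists running from the constant term up:

````
def polymul_mod(p, q, m=4):
    """Multiply polynomials given as coefficient lists (lowest degree first), mod m."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] = (out[i + j] + a * b) % m
    while len(out) > 1 and out[-1] == 0:   # drop trailing zero coefficients
        out.pop()
    return out

g = [3, 2]                       # g(x) = 2x + 3
f = polymul_mod([0, 1], g)       # f(x) = x(2x + 3)
print(polymul_mod(g, g))         # [1]    : (2x+3)^2 = 1 in Z_4[x]
print(polymul_mod(f, g))         # [0, 1] : f(x)g(x) = x
````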
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9537076950073242, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/9309?sort=newest
## In model theory, does compactness easily imply completeness?

Recall the following two fundamental theorems of mathematical logic:

Completeness Theorem: A theory T is syntactically consistent -- i.e., for no statement P can the statement "P and (not P)" be formally deduced from T -- if and only if it is semantically consistent: i.e., there exists a model of T.

Compactness Theorem: A theory T is semantically consistent iff every finite subset of T is semantically consistent.

It is well-known that the Compactness Theorem is an almost immediate consequence of the Completeness Theorem: assuming completeness, if T is inconsistent, then one can deduce "P and (not P)" in a finite number of steps, hence using only finitely many sentences of T.

The traditional proof of the completeness theorem is rather long and tedious: for instance, the book Models and Ultraproducts by Bell and Slomson takes two chapters to establish it, and Marker's Model Theory: An Introduction omits the proof entirely. There is a quicker proof due to Henkin (it appears e.g. on Terry Tao's blog), but it is still relatively involved. On the other hand, there is a short and elegant proof of the compactness theorem using ultraproducts (again given in Bell and Slomson).

So I wonder: can one deduce completeness from compactness by some argument which is easier than Henkin's proof of completeness?

As a remark, I believe that these two theorems are equivalent in a formal sense: i.e., they are each equivalent in ZF to the Boolean Prime Ideal Theorem. I am asking about a more informal notion of equivalence.

UPDATE: I accepted Joel David Hamkins' answer because it was interesting and informative. Nevertheless, I remain open to the possibility that (some reasonable particular version of) the completeness theorem can be easily deduced from compactness.

## 3 Answers

There are indeed many proofs of the Compactness theorem. Leo Harrington once told me that he used a different method of proof every time he taught the introductory graduate logic course at UC Berkeley. There is, of course, the proof via the Completeness Theorem, as well as proofs using ultrapowers, reduced products, Boolean-valued models and so on. (In my day, he used Boolean-valued models, but that was some time ago, and I'm not sure if he was able to keep this up since then!)

Most model theorists today appear to regard the Compactness theorem as the significant theorem, since the focus is on the models -- on what is true -- rather than on what is provable in some syntactic system. (Proof-theorists, in contrast, may focus on the Completeness theorem.) So it is not because Completeness is too hard that Marker omits it, but rather just that Compactness is the important fact. Surely it is the Compactness theorem that has deep applications (or at least pervasive applications) in model theory. I don't think formal deductions appear in Marker's book at all.

But let's get to your question. Since the exact statement of the Completeness theorem depends on which syntactic proof system you set up -- and there are a huge variety of such systems -- any proof of the Completeness theorem will have to depend on those details. For example, you must specify which logical axioms are formally allowed, which deduction rules, and so on.
The truth of the Completeness Theorem depends very much on the details of how you set up your proof system, since if you omit an important rule or axiom, then your formal system will not be complete. But the Compactness theorem has nothing to do with these formal details. Thus, there cannot be a hands-off proof of Completeness using Compactness that does not engage with the details of the formal syntactic proof system. Any proof must establish some formal properties of the formal system, and once you are doing this, then the Henkin proof is not difficult (surely it fits on one or two pages).

When I prove Completeness in my logic courses, I often remark to my students that the fact of the theorem is a foregone conclusion, because at any step of the proof, if we need our formal system to be able to make a certain kind of deduction or have a certain axiom, then we will simply add it if it isn't there already, in order to make the proof go through.

Nevertheless, Compactness can be viewed as an abstract Completeness theorem. Namely, Compactness is precisely the assertion that if a theory is not satisfiable, then it is because of a finite obstacle in the theory that is not satisfiable. If we were to regard these finite obstacles as abstract formal "proofs of contradiction", then it would be true that if a theory has no proofs of contradiction, then it is satisfiable.

The difference between this abstract understanding and the actual Completeness theorem is that all the usual deduction systems are highly effective, in the sense of being computable. That is, we can computably enumerate all the finite inconsistent theories by searching for formal syntactic proofs of contradiction. This is the new part of Completeness that the abstract version from Compactness does not provide. But it is important, for example, in the subject of Computable Model Theory, where they prove computable analogues of the Completeness Theorem. For example, any consistent decidable theory (in a computable language) has a decidable model, since the usual Henkin proof of Completeness is effective when the theory is decidable.

Edit: I found in Arnold Miller's lecture notes an entertaining account of an easy proof of (a fake version of) Completeness from Compactness (see page 58). His system amounts to the abstract formal system I describe above. Namely, he introduces the MM proof system (for Mickey Mouse), where the axioms are all logical validities, and the only rule of inference is Modus Ponens. In this system, one can prove Completeness from Compactness easily as follows: We want to show that T proves $\varphi$ if and only if every model of T is a model of $\varphi$. The forward direction is Soundness, which is easy. Conversely, suppose that every model of T is a model of $\varphi$. Thus, $T+\neg\varphi$ has no models. By Compactness, there are finitely many axioms $\varphi_0, \ldots, \varphi_n$ in T such that there is no model of them plus $\neg\varphi$. Thus, $(\varphi_0\wedge\cdots\wedge\varphi_n)\rightarrow\varphi$ is a logical validity. And from this, one can easily make a proof of $\varphi$ from T in MM. QED!

But of course, it is a joke proof system, since the collection of validities is not computable, and Miller uses this example to illustrate the point as follows:

> The poor MM system went to the Wizard of OZ and said, "I want to be more like all the other proof systems." And the Wizard replied, "You've got just about everything any other proof system has and more. The completeness theorem is easy to prove in your system. You have very few logical rules and logical axioms. You lack only one thing.
> It is too hard for mere mortals to gaze at a proof in your system and tell whether it really is a proof. The difficulty comes from taking all logical validities as your logical axioms."

The Wizard went on to give MM a subset Val of logical validities that is recursive and has the property that every logical validity can be proved using only Modus Ponens from Val. And he then goes on to describe how one might construct Val, and give what amounts to a traditional proof of Completeness.

- Also, for whatever it's worth, I don't understand your remark about completeness being a foregone conclusion. It is certainly not clear that incompleteness can be remedied by adding further axioms on the fly (cf. the Incompleteness Theorem!). – Pete L. Clark Dec 18 2009 at 22:13
- Come to think of it, I guess you can take F a nonprincipal ultrafilter on the set of primes, define for each prime number p an algebraically closed field K_p of characteristic p, and let K be the ultraproduct of the K_p's with respect to F. Then Łoś's theorem asserts that any first-order sentence that is true in every algebraically closed field of positive characteristic is true in every algebraically closed field of characteristic zero. – Pete L. Clark Dec 18 2009 at 23:03
- Yes, that proof was merely about the finite obstacle, which Compactness provides. The situations where one seems to need Completeness over Compactness, as I mentioned in my answer, have to do with the effectivity of the finite obstacle, for example, when the question concerns the computability of a theory or model, or whether there is a computable procedure for eliminating quantifiers, and so on. – Joel David Hamkins Dec 18 2009 at 23:38
- What I meant about Completeness being a foregone conclusion is that when you start proving Completeness, you periodically need to know various things about the formal system you defined. So, if you are not so interested in having the optimal proof system, then you can simply add them to the system on the fly as the proof proceeds. Of course, this method only works because the theorem is true! But it does mean that you don't have to remember the exact proof system in advance, as long as you remember the essential proof outline. – Joel David Hamkins Dec 18 2009 at 23:42
- I added a description of A. Miller's entertaining MM system at the end. – Joel David Hamkins Jan 25 2010 at 3:41

About the equivalence of the compactness, completeness, and prime ideal theorems over ZF: what really matters here is the case when the language $L$ over which the theory $T$ is defined is not well-ordered. Otherwise, the Henkin proof gives a model of $T$ without using any form of the axiom of choice. In particular, it is OK when the considered language is countable.

Now, the implication compactness $\Rightarrow$ completeness in the general case goes as follows (although it still uses completeness for well-ordered theories, which is a theorem of ZF). Fix a first-order language $L$. Let $T$ be a syntactically consistent theory. Then any finite $F\subseteq T$ is syntactically consistent. Define $L_F$ to be the language whose operational and relational symbols are the ones occurring in $F$. Since $F$ is finite, $L_F$ is finite. Then $F$ is a syntactically consistent theory in the language $L_F$.
We have completeness for countable languages, so we have a model $M_F$ of $F$ treated as a theory over $L_F$. The model $M_F$ can easily be extended to a model $M_F'$ of $F$ treated as a theory over $L$ (just give the unused symbols trivial interpretations: empty relations and constant operations). By compactness we now have a model of $T$.

- Very nice. This shows that Compactness provides a clean reduction of Completeness for uncountable languages to finite languages. So if we have Compactness, we can avoid the transfinite issues in Completeness that arise for uncountable languages (which are sometimes difficult issues for students). – Joel David Hamkins Jan 25 2010 at 15:00

I think you're looking for the Fraïssé School of Model Theory, which is based strictly on structures and types as primitives and avoids all syntax. I don't know of a good source for the "extremist Fraïssean approach," but Bruno Poizat's "A Course in Model Theory" is a good bridge (if you can tolerate Poizat's eccentric, and sometimes polemic, style). Poizat starts off defining types (via back & forth) in Chapter 1; then he (apologetically) introduces formulas in Chapter 2. In Chapter 4, he proves the Compactness Theorem using ultrapowers and then presents the Henkin method as an afterthought. (He does more formal deduction later in Chapter 7, but only in order to prove the Incompleteness Theorems.) In the notes at the end of Chapter 4, Poizat writes:

> The compactness theorem, in the forms of Theorems 4.5 and 4.6, is due to Gödel; in fact, as explained in the beginning of Section 4.3 [Henkin's Method], the theorem was for Gödel a simple corollary (we could even say an unexpected corollary, a rather strange remark!) of his "completeness theorem" of logic, in which he showed that a finite system of rules of inference is sufficient to express the notion of consequence (see Chapter 7). It could also have been taken from [Herbrand 1928] or [Gentzen 1934], in which results of the same sort were proven.
>
> This unfortunate compactness theorem was brought in by the back door, and we might say that its original modesty still does it wrong in logic textbooks. In my opinion it is much more essential and primordial (and thus also less sophisticated) than Gödel's completeness theorem, which states that we can formalize deduction in a certain arithmetic way; it is an error in method to deduce it from the latter. If we do it this way, it is by a very blind fidelity to the historic conditions that witnessed its birth. The weight of this tradition is apparent even in a work like [Chang-Keisler 1973], which was considered a bible of model theory in the 1970s; it begins with syntactic developments that have nothing to do with anything in the succeeding chapters. This approach -- deducing Compactness from the possibility of axiomatizing the notion of deduction -- once applied to the propositional calculus gives the strangest proof on record of the compactness of $2^\omega$! It is undoubtedly more "logical," but it is inconvenient, to require the student to absorb a system of formal deduction, ultimately quite arbitrary, which can be justified only much later when we can show that it indeed represents the notion of semantic consequence. We should not lose sight of the fact that the formalisms have no raison d'être except insofar as they are adequate for representing notions of substance.

There are two key points in there.
The first, which comes through rather clearly, is that Model Theory could ultimately be done without any formal syntax and deduction rules. The second, much more subtle point, is present only in the parenthetical remark "and thus also less sophisticated" in the second paragraph. It sounds like Poizat is saying that the Completeness Theorem does not follow from the Compactness Theorem. But it does follow, at least in some abstract sense. The Compactness Theorem does imply that there is some system of finitary rules for deduction which are complete for semantic consequence. The only "sophisticated" part missing is that this set of rules has a simple description. In particular, the Incompleteness Theorems are not consequences of the Compactness Theorem.

- Thank you for this answer. I enjoyed reading it. – Pete L. Clark Jan 7 2010 at 6:36
- A great answer, Francois! – Joel David Hamkins Jan 7 2010 at 13:37
- Thanks! (Although Poizat deserves credit for writing half of it.) – François G. Dorais♦ Jan 7 2010 at 17:47
- Details for this answer can be found here: mathoverflow.net/questions/11752/… – François G. Dorais♦ Jan 16 2010 at 18:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9458253383636475, "perplexity_flag": "head"}
http://mathoverflow.net/questions/4279/interesting-applications-of-the-pigeon-hole-principle/10973
## Interesting applications of the Pigeon-hole Principle

I'm a little late in realizing it, but today is Pigeon-hole Day. Festivities include thinking about awesome applications of the Pigeon-hole Principle. So let's come up with some. As always with these kinds of questions, please only post one answer per post so that it's easy for people to vote on them.

Brouwer's fixed point theorem can be proved with the Pigeon-hole Principle via Sperner's lemma. There's a proof in Proofs from The Book (unfortunately, the Google Books preview is missing page 148).

By the way, if you happen to be in Evans at Berkeley today, come play musical chairs at tea!

- Not an answer, but Terry Tao's blog post terrytao.wordpress.com/2007/05/23/… features the PHP. – Sonia Balagopalan Nov 5 2009 at 18:38
- How strange. I had no idea about pigeon-hole day, but I came here to post a very similar topic (analogues of the pigeonhole principle). But I guess at any given moment, there are lots of people thinking about the pigeonhole principle, and there are only so many websites where such thoughts can reasonably be posted, so applying the... you know. – Darsh Ranjan Nov 5 2009 at 19:18
- Excellent question. Has anyone asked for a list of applications of the inclusion-exclusion principle? The only sexy one that I know frequently appears in probability courses: the probability that a random permutation has no fixed points converges fairly fast to $1/e$ as the size of the set being permuted grows. – Michael Hardy Jun 13 2010 at 0:10

## 21 Answers

Given 5 points on a sphere, there must be a closed hemisphere that contains 4 of them.

- I think this generalizes to: given $n+2$ points on an $n$-dimensional hypersphere, there is a closed hemisphere which contains $n+1$ of them. – Kristal Cantwell Nov 6 2009 at 17:33
- @J. H. S.: yes, that's where I first saw it. – Anton Geraschenko♦ Dec 16 2009 at 0:21
- @J. H. S.: it's problem A2 on the 2002 Putnam. – Michael Lugo Dec 16 2009 at 0:29
- @Anton: Do you know where this proposal first appeared? Wasn't it an A-something problem of a Putnam Examination? – J. H. S. May 7 2010 at 16:37

Kronecker's theorem asserts that if $\lambda$ is irrational then the orbit of $n\lambda$ for $n=1,2,3,\ldots$ is dense in $S^1 \simeq \mathbb{R}/\mathbb{Z}$. A proof uses the Pigeon-hole Principle. It relies on the fact that if you divide $S^1$ into $k$ equal (but very small) segments you must hit one of these segments twice, by the Pigeon-hole Principle.

The pumping lemma, which gives a pretty good test to show that some languages are not regular. (There are also more general pumping lemmas that use pigeonhole for their proofs, but I don't know them.)

If I'm not mistaken, the original application of the Pigeonhole Principle -- the reason we call it Dirichlet's Pigeonhole Principle -- was to Dirichlet's Theorem on Diophantine Approximation, viz., if $\alpha$ is a real irrational then there are infinitely many rationals $p/q$ such that $|\alpha-(p/q)|<1/q^2$. An oldie, but still a goodie.

- Yes, this was the first application, and Dirichlet then used it (together with an application of the infinite pigeonhole principle) to prove the existence of solutions of the Pell equation.
See Supplement VIII of Dirichlet's Vorlesungen über Zahlentheorie. – John Stillwell Aug 19 2010 at 22:55

If we're allowed repeated application of the pigeonhole principle, the infinite version of Ramsey's theorem (finite colourings).

- There are a lot of applications of the pigeonhole principle in Ramsey theory. I found a quote by Terence Tao: "Indeed one can view Ramsey theory as the set of generalizations and repeated applications of the pigeonhole principle." This is from page 254 of the book Additive Combinatorics by Terence Tao and Van Vu. – Kristal Cantwell Nov 6 2009 at 17:41

The easiest proof I know of the Morse Property for word-hyperbolic groups (which says that quasigeodesics are uniformly close to geodesics) uses the pigeon-hole principle several times.

It looks like the following very important example is still not mentioned here. The pigeonhole principle plays a crucial role in K. F. Roth's proof that for any $\kappa>2$ and any algebraic irrational real number $\alpha$ the inequality $|\alpha-p/q| < q^{-\kappa}$ has only finitely many solutions in rational fractions $p/q$. (Lemma 9 in: K. F. Roth, Rational approximation to algebraic numbers, Mathematika 2 (1955), 1-20.)

- Well, the actual author of this application is Siegel (1929). This lemma is known as Siegel's lemma in transcendental number theory. I am surprised it wasn't mentioned earlier. +1 – Wadim Zudilin Dec 3 2010 at 12:58
- This lemma itself easily implies Roth's theorem, and so it does not belong to Siegel. But the argument does, you are right (Roth's contribution concerns other ideas of the proof). – Fedor Petrov Dec 3 2010 at 13:29

Some problems whose solutions involve the pigeonhole principle are at http://math.mit.edu/~rstan/a34/pigeon.pdf

- The provided link is dead. :( – huitseeker Mar 3 2011 at 15:00
- This has been fixed. – Richard Stanley Mar 5 2011 at 16:42
- It seems to be broken again. – Andreas Blass Jan 26 2012 at 15:09
- I found the (currently) correct URL by googling "rstan pigeon.pdf". – Barry Cipra Jan 26 2012 at 16:11

I always find this fun to think about: in any group of six people there are either three mutual friends or three mutual strangers.

A simple application is the following: Every sequence of $n^2+1$ distinct real numbers contains a subsequence of length $n+1$ that is either strictly increasing or strictly decreasing. (Other simple consequences can be found here.)

- This is a special case of the Erdős–Szekeres theorem: en.wikipedia.org/wiki/… – Qiaochu Yuan Nov 5 2009 at 20:21

A couple that I've always found cute even though (or because?) they're completely elementary:

1) Every rational number has an eventually repeating decimal expansion.

2) Every element of a finite group has an order.

The compactness of the Cantor space. This follows directly from König's lemma, which itself is a direct consequence of the infinite version of the pigeonhole principle.

For most cardinals $\kappa \leq \lambda$, it must happen that the infinite symmetric group $S_\kappa$ satisfies exactly the same first-order theory as $S_\lambda$. That is, the groups are elementarily equivalent. This is just because there are only continuum many theories in a countable language, but more cardinals than that. See Elementary equivalence of infinitary symmetric groups.

I remember hearing as an undergrad the "proof" that there are two human beings on the earth with the same number of hairs on their heads. This is done by a few estimations and then applying the pigeonhole principle.
A number of examples, including a version of this one, can be found here: http://www.cut-the-knot.org/do_you_know/pigeon.shtml

- A strengthening is this: there are certainly at least two bald people. – Boris Bukh Jun 2 2010 at 9:48
- @Boris: this may be related to the slogan 'fortune favours the bald'. – Ketil Tveiten Dec 3 2010 at 14:07

I think this proof of Sergei Ivanov (that any $m$-fold cover of a Riemannian manifold has diameter at most $m$ times that of the base) is a spectacular application of the PHP.

Thue's Lemma, which plays a key role in one proof of Fermat's theorem on primes that can be written as the sum of two squares, is based on the pigeonhole principle. (The Wikipedia article does not mention this and I could not find a nice web page on Thue's Lemma to cite here, so I can only suggest LeVeque's Fundamentals of Number Theory.)

- I believe you can find it in "Proofs from the Book", as for the Brouwer theorem. – Gian Maria Dall'Ara Nov 7 2009 at 12:44

Miklós Laczkovich's book Conjecture and Proof has some interesting theorems with pigeonhole proofs, such as the classification of primes that are sums of two squares, and bounds on Sidon sequences.

M. A. Lukomskaya's proof of van der Waerden's theorem on arithmetic progressions is a remarkable application of the Schubfachprinzip and induction.

I think the solutions of these questions are very interesting (they use the pigeonhole principle); the first question is easy, but the second is more advanced:

1) For any integer $n$, there are infinitely many integers whose digits are only $0$ and $1$ that are divisible by $n$.

2) For any digit sequence $s=a_1a_2\cdots a_n$, there is at least one $k$ such that $2^k$ begins with $s$ (see the sketch after this thread).

The impossibility of a lossless compression scheme for binary strings that reduces the size of every input follows easily from the pigeon-hole principle.

This, of course, requires some heavier theorems in Čech cohomology: If $X$ is a separated scheme that's covered by $d$ affine opens and $\mathcal{F}$ is a quasi-coherent sheaf on $X$, then $H^p(X,\mathcal{F}) = 0$ for all $p \geq d$. As a corollary: if $X$ is a quasi-projective scheme over a Noetherian ring $A$, and $\mathcal{F}$ is a quasi-coherent sheaf, then $H^p(X,\mathcal{F}) = 0$ for all $p > d$ where $d = \dim(X)$ (note that $X$ is covered by $d+1$ affines).
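The claim that some power of 2 begins with any prescribed digit string (from the two-question answer above) is fun to probe empirically. A brute-force Python sketch; this is only a search, not the pigeonhole proof, which runs through the equidistribution idea of Kronecker's theorem mentioned earlier in the thread:

````
def power_of_two_starting_with(s, limit=100_000):
    """Return the least k <= limit such that str(2**k) begins with the digits s."""
    value = 1
    for k in range(limit + 1):
        if str(value).startswith(s):
            return k
        value *= 2
    raise ValueError(f"no k <= {limit} found for prefix {s!r}")

for s in ["7", "314", "2013"]:
    k = power_of_two_starting_with(s)
    print(f"2^{k} begins with {s}: {str(2**k)[:10]}...")
````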
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 45, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.924235999584198, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/36642/binary-matrices-with-constant-row-and-column-sums/36660
## Binary matrices with constant row and column sums

My question is about $m \times n$ binary matrices (aka `$\{0,1\}$`-matrices) whose rows all sum to the same value, and whose columns all sum to the same value (but these two values may be different). The first question is simply: is there a standard name for such matrices? They correspond to the biadjacency matrices of so-called "biregular bipartite graphs", but this terminology doesn't appear to be commonly used.

Second, are there any "interesting" constructions of families of such matrices, in particular that are connected to other combinatorial objects? Two simple examples of constructions of these matrices are the $\binom{n}{k} \times n$ matrix whose rows consist of every $n$-bit string with Hamming weight $k$, and the $2^n \times 2^n$ Sylvester-Hadamard matrices with the first row and column removed.

I did find a paper by Brualdi titled "Matrices of Zeros and Ones with Fixed Row and Column Sum Vectors", but this seems to be more concerned with the question of existence of these matrices, rather than constructing them.

- As to the name, a Google search, and a glance at the references in the papers, confirm that the long but self-explanatory combination that you used in the title is very popular. – Pietro Majer Aug 25 2010 at 9:55
- Indeed, the columns must sum to a different value than the rows, unless $m=n$ (or the common sum is zero). There is some discussion of these matrices in Ryser's little book on Combinatorial Mathematics. – Gerry Myerson Aug 25 2010 at 13:00
- You should investigate papers of A. Barvinok on contingency tables. – Andy B Jan 27 2011 at 6:23

## 6 Answers

To answer your question about interesting combinatorial objects: your Sylvester-Hadamard matrix example generalizes in at least two ways.

1. The incidence matrix of any balanced incomplete block design or, more generally, $t$-design has constant row and column sums. Specifically, a BIBD$(v,b,r,k,\lambda)$ represented as a $v\times b$ incidence matrix has row sums $r$ and column sums $k$. Normalizing an $n\times n$ Hadamard matrix so that the first row and column consist entirely of 1s, and then removing this row and column while replacing $-1$s with 0s, gives a design with parameters $v=b=n-1$, $r=k=n/2-1$, $\lambda=n/4-1$. So among Hadamard matrices, Sylvester-Hadamard matrices are not special in this regard. Finite projective planes are also designs, with parameters $v=b=q^2+q+1$, $r=k=q+1$, $\lambda=1$, where $q$ is the order of the plane.

2. Sylvester-Hadamard matrices have the additional property that if the first row and column are removed, and the remaining rows and columns are suitably permuted, one obtains a circulant matrix. (Most Hadamard matrices do not have this property, but Paley-Hadamard matrices constructed using quadratic residues in $\mathbb{F}_p$, $p\equiv3\pmod{4}$, also do. More generally, Hadamard matrices constructed from difference sets do.) But any $n\times n$ circulant matrix whatsoever will have constant row and column sums (with row sums equal to column sums). (This is not combinatorially so interesting in general.)

Addendum: In answer to your first question, design theorists call an incidence structure whose incidence matrix has constant row and column sums a tactical configuration. This is a $t$-design with $t=1$.
The balanced incomplete block designs in the first part of my answer are $t$-designs with $t=2$, but any $t$-design is also a $(t-1)$-design. The condition that any two points be incident with exactly $\lambda$ blocks gives BIBDs a lot of additional interesting structure that tactical configurations do not typically have. 3-, 4-, and 5-designs are more interesting still. See the PlanetMath page for more information.

- Thanks very much - that's very helpful. – Ashley Montanaro Aug 25 2010 at 21:09

If you fix $m,n$ and the row and column values, and then consider the matrices as points in $\mathbb{R}^{mn}$, their convex hull forms a convex polytope known as a transportation polytope. In the particular case of permutation matrices one obtains the famous Birkhoff polytope, for instance. So I would advise you to look up references about those polytopes, since your matrices arise naturally as their vertices; hopefully you can find the names and particular constructions you asked for. (Well, this should probably be a comment, since I do not really answer any of your questions; unfortunately I do not have enough magic points to write one.)

Edit: As pointed out in a comment, what I wrote above is simply not true. In fact, the $\lbrace 0,1\rbrace$-matrices considered are more precisely the lattice points of the intersection of a transportation polytope with the hypercube $[0,1]^{mn}$; this forms another polytope which may actually be interesting in its own right.

- Thanks, this is a nice alternative way to look at these matrices. – Ashley Montanaro Aug 25 2010 at 12:56
- "Magic points" -- I love it! – JBL Aug 25 2010 at 13:06
- "your matrices arise naturally as their vertices": this isn't true due to the restriction to (0,1) matrices. For example, let `$$A=\begin{pmatrix}2 & 0 \\ 0 & 2\end{pmatrix}, B=\begin{pmatrix}0 & 2\\ 2 & 0\\ \end{pmatrix}, C=\begin{pmatrix}1 & 1\\ 1 & 1\end{pmatrix},$$` then the row and column sums of $A,B,C$ are equal to 2, but $C=1/2(A+B)$, so $C$ is not a vertex of the corresponding transportation polytope. You can impose the (0,1) restriction by force, but I think that the resulting polytopes would be quite different from the standard transportation polytopes. – Victor Protsak Aug 25 2010 at 19:07
- Victor: you're absolutely right of course, I'll edit my so-called answer... – Philippe Nadeau Aug 27 2010 at 9:10
- @Philippe, could you expound a little on "transportation polytopes"? Wikipedia has nothing, and the Google search led to papers about the "20 year history of ..." etc. Are they akin or similar to stochastic matrices describing the probabilistic evolution of a Markov chain? When limited to zeros and ones, are they just a permutation matrix? Please forgive me if my question seems stupid. I am familiar with stochastic matrices, but not "transportation polytopes". – sleepless in beantown Aug 27 2010 at 9:29

About your first question: remark that when $m=n$ these matrices are sometimes called "semi-magic squares" (like "magic squares" but without the "sums along the diagonals" condition). About the constructions of families of such matrices, there is another paper by Brualdi: R.A. Brualdi, Algorithms for constructing (0,1)-matrices with prescribed row and column sum vectors, Discrete Math. 306 (2006), no. 23, 3054-3062.
You can also see this paper by Fonseca and Mamede on the same subject: http://www.mat.uc.pt/~cmf/papers/01matrix_fonseca_mamede.pdf Here the construction is related to other combinatorial objects (pairs of semi-standard Young tableaux of conjugate shapes), as you wished.

- Thanks; this connection to other types of object is indeed the sort of thing I was looking for. – Ashley Montanaro Aug 25 2010 at 12:59

See this paper by Canfield and McKay. As the title suggests it focuses on the asymptotic enumeration, but it has lots of useful references.

Added: A simple arithmetic construction that realizes all possible constant line sums is as follows. Let $g=\gcd(m,n)$. Label rows and columns by elements of $\mathbb{Z}/m\mathbb{Z}$ and $\mathbb{Z}/n\mathbb{Z}$ respectively. Fix $0\le s\le g$. Put a $1$ in each position $(j+k,j)$ where $j$ runs through $\mathbb{Z}$ and $0\le k < s$, interpreting $j+k$ and $j$ as integers modulo $m$ and $n$ respectively.

- Thanks for the link; some of those references were useful. In particular, the term "semi-regular bipartite graph" seems to be yet another way of describing these matrices. – Ashley Montanaro Aug 25 2010 at 13:23

An $m \times m$ matrix of this type is called a stochastic matrix if the sum of each of its rows is 1 (a right stochastic matrix) or if the sum of each of its columns is 1 (a left stochastic matrix). A doubly-stochastic matrix has the sums of rows and the sums of columns equal to one. A stochastic matrix is also called a transition matrix, as it can represent the transition probabilities of a Markov chain over a finite state space $s$, where in this case $|s|=m$. Each element of such a stochastic matrix is restricted to be a real number $x_{i,j} \ge 0$. In the case of binary matrices, where every entry is either zero or one ($x_{i,j} \in \{0,1\}$), a doubly-stochastic matrix has exactly one 1 in each row and column; this type of stochastic binary matrix is known as a permutation matrix, as it represents a permutation of a finite state space.

Build the matrix $H = 2A - J$ from your matrix $A$. If this new matrix is Hadamard and satisfies your condition, it is called "regular Hadamard" (google this). $J$ is the matrix with only $1$s everywhere. luis
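The arithmetic construction in the Canfield-McKay answer above is short to implement and check. A Python sketch under the constraint $0 \le s \le \gcd(m,n)$; the function name is mine:

````
import numpy as np
from math import gcd

def constant_line_sum_matrix(m, n, s):
    """Binary m x n matrix; row sums s*n/g and column sums s*m/g, g = gcd(m, n)."""
    g = gcd(m, n)
    assert 0 <= s <= g
    M = np.zeros((m, n), dtype=int)
    for j in range(m * n // g):        # j runs over one full period, lcm(m, n)
        for k in range(s):
            M[(j + k) % m, j % n] = 1
    return M

M = constant_line_sum_matrix(4, 6, 2)
print(M.sum(axis=1))   # row sums:    [6 6 6 6]
print(M.sum(axis=0))   # column sums: [4 4 4 4 4 4]
````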
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 64, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.922614574432373, "perplexity_flag": "head"}
http://crypto.stackexchange.com/questions/1368/is-it-a-good-idea-to-use-bitwise-xor-on-a-set-of-md5-sums/1371
# Is it a good idea to use bitwise XOR on a set of MD5 sums?

I have designed an SQL aggregate function in Oracle that bitwise XORs all MD5 sums of the values stored in a column. For example, if my table is:

````
+-----+----------+---------+
| Key | Sequence | Value   |
+-----+----------+---------+
| 1   | 1        | 'Hello' |
| 1   | 2        | 'World' |
| 2   | 1        | '1234'  |
| 3   | 0        | (empty) |
| 4   | 1        | 'Hello' |
| 4   | 3        | 'World' |
+-----+----------+---------+
````

I can run the following query in Oracle:

````
with t AS (select 1 key, 1 sequence, 'Hello' value FROM dual
           union all select 1, 2, 'World' from dual
           union all select 2, 1, '1234' from dual
           union all select 3, 0, '' from dual
           /* ... */ )
select key, md5_agg(value) from t group by key
````

and get (unfortunately aggregate functions in Oracle ignore NULL values and `''` is considered as NULL):

````
+---+----------------------------------+
|key| md5_agg(value)                   |
+---+----------------------------------+
| 1 | 7EBD0B1DA67F965F802D31DF25C4B321 |
| 2 | 81DC9BDB52D04DC20036DBD8313ED055 |
| 3 | 00000000000000000000000000000000 |
| 4 | 7EBD0B1DA67F965F802D31DF25C4B321 |
+---+----------------------------------+
````

I would like to use this approach to compare whether the contents of some columns are equal when I compare subsets of the same table (think of finding duplicates in complex structures that span multiple rows of the same table). Here, with these results, I know that I have the same subsets for keys 1 and 4.

What are the limits of such an approach? Here are the ones I could list:

- This is interesting only if my column contains distinct values. If my column contains the same string twice, the `xor` operation will be a no-op.
- Due to Oracle limitations, if my column contains empty values, they do not count.

With those limitations in mind, is it still possible to infer, from two equal `md5_agg` results computed from distinct and non-empty values, that the original values make up the same sets? To reformulate: what are the odds that the MD5 sums of distinct strings XOR to 0?

- One property of XOR is its commutativity (and associativity): A XOR B XOR C = B XOR C XOR A, i.e. you can't detect changes in order, but at the same time, you don't have to sort before aggregating. No idea if this is positive or negative for your use case. – Paŭlo Ebermann♦ Nov 30 '11 at 10:14
- @Paŭlo Ebermann♦: You're right. I've chosen `XOR` exactly because of that, because I want to verify that tables have the same contents, not that `SELECT` statements return the same results in the same order (no DBMS guarantees the order of rows returned unless you specify `ORDER BY`). – Benoit Nov 30 '11 at 10:45
- If you do some research in previous questions (in crypto, or in stackoverflow, etc.) you'll see that there's always a possibility that you have some collision, but it's very unlikely, and then you deal with it with different approaches if having a collision is a no-go for you... – woliveirajr Nov 30 '11 at 10:54
- If I understand this right - you're looking to use this method to ensure that two different aggregate values imply that the total source data remains unique? I.e. you're wondering if xoring md5 hashes will result in collisions over the data you're aggregating? If so, it might be worth adding that to the question perhaps? Just a thought - the more detail on these sort of things the better the answers, generally. – Antony Vennard Nov 30 '11 at 11:32
- I understood that the OP is looking for a "given my data, I want to extract subsets.
Can I use MD5 with XOR to verify if the subset contains the same content, without having to worry about their order?" Collisions would be a problem of giving false positives. And this has very little to do with crypto... using MD5 and XOR doesn't mean he wants security, secrecy... – woliveirajr Nov 30 '11 at 12:33

## 2 Answers

There are two points to consider here:

How likely is it that two different strings give the same MD5 hash? This is known as a hash collision. A good hash function makes this probability as small as possible (this would be about $1/2^{128}$ for MD5). If you have a larger number of strings to hash, the probability of any collision between any of them grows, but you need about $2^{64}$ strings to have a non-negligible chance of collision (this is the birthday paradox). Unfortunately, MD5 is not a good hash function: its collision-resistance is basically broken. It doesn't take much work to create two different strings with the same hash. So if you want to ensure this even in the face of an adversary feeding strings into your database, don't use MD5. Use one of the SHA-2 hash functions instead; they are still considered secure. (They also have a larger output size, which also moves the birthday bound higher.)

Assuming an ideal hash function (e.g. a random oracle), we can reduce the question to this: Given two sets $A$, $B$ of $k_A$, $k_B$ random bit-strings of the same length $n$, how likely is it that we have this? $$\bigoplus_{i=1}^{k_A} A_i = \bigoplus_{j=1}^{k_B} B_j$$ Actually, the XOR of random strings (or even of many non-random strings with a random one) is again a random string, so this boils down to: how likely is it that two random strings of size $n$ are identical? Here we have the same answer as above: this probability is $1/2^{n}$ for two strings, and after about $2^{n/2}$ such random strings you'll get a good chance of a collision.

So it looks like your scheme is solid; just make sure to choose a good hash function (not MD5, and even SHA-1 has a bad reputation nowadays) with a big enough output size $n$ so that the number of individual strings hashed stays small compared to $2^{n/2}$. But if the adversary can control a large enough part of your subset (including a subset of a set of around $n$ such strings is enough, and if he can inject those strings arbitrarily, even fewer are needed, but a bit more pre-calculation to choose them), he can choose those strings in a way that their XOR will give any value he wishes, which then translates to an arbitrary collision in the final "hash". Use the method highlighted in fgrieu's answer instead.

- I agree that the scheme has little chance to fail by accident, short of the issue of duplicate strings in an input. But if one cares to use a safer hash than MD5, then one is after avoiding deliberate collisions, and then the scheme is terminally weak; see my answer. – fgrieu Nov 30 '11 at 12:42
- @fgrieu: Thanks for the note. Such occasions are when I wish I could downvote my own answer. – Paŭlo Ebermann♦ Nov 30 '11 at 15:09
- @paulo: but the OP doesn't want protection, collision resistance... it's barely a crypto question, it's much more about the use (and collision possibility) of MD5. – woliveirajr Nov 30 '11 at 16:25
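For concreteness, the OP's aggregate is easy to reproduce outside Oracle. A Python sketch of XOR-of-MD5 (function name mine), showing the order-independence behind the matching keys 1 and 4 above, and the duplicate-cancellation failure mode from the question's first bullet:

````
import hashlib
from functools import reduce

def md5_agg(values):
    """XOR of the MD5 digests of the given strings, as 32 uppercase hex digits."""
    digests = (int.from_bytes(hashlib.md5(v.encode()).digest(), "big")
               for v in values)
    return format(reduce(lambda a, b: a ^ b, digests, 0), "032X")

print(md5_agg(["Hello", "World"]))                 # the key 1 / key 4 value above
print(md5_agg(["World", "Hello"]))                 # identical: order-independent
print(md5_agg(["O", "X", "O"]) == md5_agg(["X"]))  # True: duplicates cancel out
````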
The short answer is no. The scheme gives poor protection against collisions, that is, inputs detected as having the same content (up to order) when they have not. As noted in the question, this can occur when entries in an input are duplicated; e.g. ("O","X","O") and ("X") collide.

This can also occur for maliciously crafted entries. For a start, MD5 is broken, in the sense that it is now easy to create different strings/entries that hash to the same MD5 result; replacing one by the other will go undetected. And even if one uses a collision-resistant hash such as SHA-256, it is very easy to construct a set of messages such that the XOR of their hashes equals a given value, by mere Gaussian elimination. See the answer to this question.

On the other hand, short of the issue of duplicated entries in an input, there is little chance that the scheme fails by accident.

A robust scheme would be:

- hash each string using SHA-256
- sort the hashes by ascending value
- hash the concatenation of the sorted hashes using SHA-256

- Thank you for this answer. I am not concerned about malicious entries for the moment, just about entropy lost when XORing MD5 sums. – Benoit Nov 30 '11 at 15:59
- @Benoit: then rest assured, there is only 1 chance in 10 that there is a collision among $m=2^{63}$ invocations of md5_agg with random distinct arguments (short of obvious issues with duplicates and empties); you likely do not have the computational resources to even approach that; and collision odds for $m$ much below that are only about $m \cdot (m-1)\cdot 2^{-129}$. – fgrieu Nov 30 '11 at 20:59
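And a sketch of the robust scheme from this answer, again in Python (function name mine): SHA-256 each entry, sort the digests, hash the concatenation. Note that, unlike the XOR aggregate, duplicates no longer cancel:

````
import hashlib

def set_digest(values):
    """Order-independent digest: sort the SHA-256 hashes, hash their concatenation."""
    hashes = sorted(hashlib.sha256(v.encode()).digest() for v in values)
    return hashlib.sha256(b"".join(hashes)).hexdigest()

print(set_digest(["Hello", "World"]) == set_digest(["World", "Hello"]))  # True
print(set_digest(["O", "X", "O"]) == set_digest(["X"]))                  # False
````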
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9175570607185364, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/escape-velocity+newtonian-gravity
# Tagged Questions

### Escape velocity to intersection of two gravitational fields
Find the minimal velocity needed for a meteorite of mass $m$ to get to earth from the moon. Hint: the distance between the center of earth and the center of moon is $\approx 60 R_E$, and the ...

### Projectiles and escape velocity
Q: The escape velocity for a body projected vertically upwards from the surface of earth is 11 km/s. If the body is projected at an angle of $45^\circ$ with vertical, the escape velocity will be? ...

### Escape velocity from Earth
We know the escape velocity from the Earth is 11.2 km/s. Isn't it the velocity required to escape from earth if we go normal to the surface of earth? i.e. while we derive the formula for the escape ...

### How long will it take for a bullet to reach a Geostationary orbit?
I'm curious to know this. Neglect air friction and imagine a bullet that were shot normal to the Earth's surface, from the Equator. I will have to consider the Coriolis effect and so I expect the path ...
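As a quick sanity check on the 11.2 km/s figure quoted in these questions, here is a short Python sketch (my own illustration, with rounded standard constants) computing $v_{esc} = \sqrt{2GM/R}$. Since escape velocity comes from an energy balance, it does not depend on the launch angle, which is the point at issue in the second and third questions:

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24  # mass of Earth, kg
R_earth = 6.371e6   # mean radius of Earth, m

# Escape velocity from the surface: set kinetic energy equal to the
# depth of the gravitational potential well, (1/2) m v^2 = G M m / R.
v_esc = math.sqrt(2 * G * M_earth / R_earth)
print(f"{v_esc / 1000:.1f} km/s")  # about 11.2 km/s
```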
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8896193504333496, "perplexity_flag": "middle"}
http://mathhelpforum.com/differential-geometry/160672-about-accumulation-points.html
Thread:

1. About accumulation points.. Hi, I had a problem proving this theorem: Let {sn} be a bounded sequence of real numbers. Assume S = {sn: n= 1, 2, ...} is infinite. Then {sn} is convergent if and only if S has exactly one accumulation point. But on another page of my notebook, it says: Accumulation points can be more than one or even infinitely many while the limit is unique. Are they opposite? Thanks for any help.

2. Originally Posted by truevein I had a problem proving this theorem: Let {sn} be a bounded sequence of real numbers. Assume S = {sn: n= 1, 2, ...} is infinite. Then {sn} is convergent if and only if S has exactly one accumulation point. But on another page of my notebook, it says: Accumulation points can be more than one or even infinitely many while the limit is unique. Are they opposite? No, they are not opposite. A convergent sequence has exactly one accumulation point. A set can have many accumulation points. The set $[0,1]$ has infinitely many accumulation points. The sequence $\left(\frac{1}{n}\right),~n\in \mathbb{Z}^+$ has exactly one.

3. Thanks! Then the problem is that one is a sequence and the other a set. I'll try it again.

4. That doesn't quite resolve the issue. For example, the sequence $a_n=(-1)^n(1-1/n)$ has two accumulation points: 1 and -1.

5. Originally Posted by roninpro That doesn't quite resolve the issue. For example, the sequence $a_n=(-1)^n(1-1/n)$ has two accumulation points: 1 and -1. You missed the qualifying adjective, convergent.

6. I was actually responding to the post above mine. I thought that the poster did not understand that a sequence can also have multiple accumulation points. Maybe I misunderstood the spirit of the question. Either way, sorry for the confusion.

7. Originally Posted by roninpro I was actually responding to the post above mine. This is why we hope that you will use the "reply with quote" option.
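A quick numeric illustration of roninpro's example (a sketch of my own): the terms of $a_n=(-1)^n(1-1/n)$ split into two subsequences clustering at $-1$ and $1$, while $1/n$ clusters only at its limit $0$:

```python
# Terms of a_n = (-1)^n (1 - 1/n): even-indexed terms approach 1,
# odd-indexed terms approach -1, so the sequence has two
# accumulation points and cannot converge.
a = [(-1)**n * (1 - 1/n) for n in range(1, 10001)]
print(a[9998], a[9999])   # n = 9999 and n = 10000: close to -1 and 1

# Terms of 1/n: every tail is trapped near 0, the unique
# accumulation point, so the sequence converges.
b = [1/n for n in range(1, 10001)]
print(b[-1])              # close to 0
```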
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9588561654090881, "perplexity_flag": "middle"}
http://mathhelpforum.com/statistics/58756-insecurity-inductive-reasoning.html
Thread:

1. Insecurity of Inductive Reasoning

I have not encountered this in any math book, but I guess this is where to ask. A generalisation (more accurately, an inductive generalisation) proceeds from a premise about a sample to a conclusion about the population. The proportion Q of the sample has attribute A. Therefore: The proportion Q of the population has attribute A. How much support the premises provide for the conclusion depends on (a) the number of individuals in the sample group compared to the number in the population; and (b) the randomness of the sample. If we know the proportion $Q$, the sample/population ratio $P$ and the randomness $R$ (where R=0 means 0% randomness and R=1 means 100% randomness), would it be possible to calculate/find a formula for the insecurity $I$ of the inductive reasoning above, if we measure the insecurity in the interval [0,1], where I=0 means 100% secure and I=1 means 0% secure? The formula for $I$ would most likely have an overall factor $R$, I think. EDIT: I just realized that Q and P are the same. That means that the formula will most likely be $I=R\cdot Q$, right?

2. Originally Posted by espen180 I have not encountered this in any math book, but I guess this is where to ask. A generalisation (more accurately, an inductive generalisation) proceeds from a premise about a sample to a conclusion about the population. The proportion Q of the sample has attribute A. Therefore: The proportion Q of the population has attribute A. How much support the premises provide for the conclusion depends on (a) the number of individuals in the sample group compared to the number in the population; and (b) the randomness of the sample. If we know the proportion $Q$, the sample/population ratio $P$ and the randomness $R$ (where R=0 means 0% randomness and R=1 means 100% randomness), would it be possible to calculate/find a formula for the insecurity $I$ of the inductive reasoning above, if we measure the insecurity in the interval [0,1], where I=0 means 100% secure and I=1 means 0% secure? The formula for $I$ would most likely have an overall factor $R$, I think. EDIT: I just realized that Q and P are the same. That means that the formula will most likely be $I=R\cdot Q$, right? This is a parody of the way statistics is done. We have a sample with proportion Q; from this we either construct a confidence interval which contains the true proportion with a specified probability (frequentist statistics), or we construct a posterior distribution for the true proportion (Bayesian statistics). From these we may derive a point estimate, but it is associated with an uncertainty interval or equivalent, with an interpretation that depends on your statistical school (but usually comparable). CB
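To make the frequentist alternative in post 2 concrete: instead of a single "insecurity" number, one reports an interval estimate for the population proportion. A minimal Python sketch of my own, using the normal (Wald) approximation, which is reasonable for large samples:

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Approximate 95% confidence interval for a population proportion,
    from a sample of size n with the given number of successes
    (normal approximation to the binomial)."""
    p_hat = successes / n
    half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return max(0.0, p_hat - half_width), min(1.0, p_hat + half_width)

# A sample in which 60% of 1000 individuals have attribute A:
print(proportion_ci(600, 1000))  # roughly (0.57, 0.63)
```

Note how the width shrinks with the sample size n, which is the precise sense in which a larger sample gives the conclusion more "support".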
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9251148700714111, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/8895/how-many-bits-are-needed-to-simulate-the-universe/8898
# How many bits are needed to simulate the universe?

This is not the same as: How many bytes can the observable universe store?

The Bekenstein bound tells us how many bits of data can be stored in a space. Using this value, we can determine the number of unique states this space can be in. Imagine now that we are to simulate this space by enumerating each state along with which states it can transition to, with a probability for each transition. How much information is needed to encode the number of legal transitions and the probabilities? Does this question even make sense? If so, is there any indication that any of these probabilities are or are not computable numbers?

Edit: Here's a thought experiment.

1. Select your piece of space and start recording all the different states you see.
2. If the Bekenstein bound tells us we can store n bits in our space, wait until you see 2^n different states. Now we've seen all the states our space can be in (otherwise we would violate the Bekenstein bound).
3. For any state, record any other state that the space can legally transition to without violating any physical laws.

To simulate this portion of space, take its state and transition it to a legal state. Repeat. We have only used a finite number of bits and we have modeled a section of space.

- 8 What are you going to do with this number? – Vladimir Kalitvianski Apr 19 '11 at 21:50 I'd like to know if the question makes sense and if the number is finite, for use in a thought experiment. – z5h Apr 19 '11 at 22:15 If you know the state of the universe at a point t, you can determine it at any other point t2>t with the Schrödinger equation. – Holowitz Apr 20 '11 at 2:22 Are you using a classical computer or a quantum computer? – Peter Shor May 26 '11 at 20:29 @Peter Shor: Classical. I've added a thought experiment in the hopes it might clarify what I'm getting at. – z5h May 26 '11 at 22:20

## 7 Answers

There's a huge difference between the number of bits you can store in a given space and the number of bits you need to describe that space. Take a single atom of iron with its 26 electrons. For a complete description, you need the many-particle wavefunction $\psi(\vec{x}_1, \vec{x}_2, \vec{x}_3, \dots, \vec{x}_{26})$ (ignoring spin for the moment). Imagine you want to sample it in a given region of space with a very crude grid of 10 points for each direction, so you have $1000$ points in total. This means you need $1000^{26} = 10^{3\cdot 26} = 10^{78}$ numbers to store it. For decent precision, you want to use at least $16$ bits per number, so you end up with approximately $10^{79}$ to $10^{80}$ bits. This is more than (or of the same order as) the number of atoms in the entire universe.

Now taking it up from here, for a super-exact simulation of the universe you need the complete wavefunction of the entire universe, so replace the $26$ from the example above with something much higher, and of course you want it to be more precise, so replace the 1000 with something much higher, and then note that due to quantum field theory, the number of particles isn't even fixed, so a simple wavefunction isn't even enough... In a black body, for example, you can have an infinite number of photons. Although the probability for this decays exponentially, you'd still have to include it in an exact simulation...

- @lagerbaer basic question: if you wish to encode an m parameter function over n points, why would it be $n^m$? – yayu Apr 20 '11 at 3:17 This is the nasty thing about quantum mechanics.
You have to store the value of the wavefunction for each of the possible combinations of your $\vec{x}_i$. You have $n$ for the first, $n$ for the second, $n$ for the third... so in total $n^m$. – Lagerbaer Apr 20 '11 at 3:33 This makes me wonder, isn't there a stochastic (Monte Carlo) way of simulating quantum mechanical systems akin to what is done when solving SDEs and Markov processes and such? – Raskolnikov Apr 20 '11 at 8:27 "in a black body, for example, you can have an infinite number of photons". If we assume finite energy and finite space in the universe, and we can encode 1 bit of data per photon, this disagrees with the Bekenstein bound. The other option is that although we have infinite photons, we cannot decode the information stored. So we can discard them from our simulation, no? – z5h Apr 21 '11 at 21:53

I will add to @Lagerbaer's estimate that the prevailing physics theories do not describe the universe as a LEGO-constructable entity. When you see a man figure sculpted from LEGO bricks, you can ask "how many LEGO pieces went into simulating that man?", because there is a finite size for a man and a finite size for the LEGO brick. Even though the Universe has a finite size, there are no finite LEGO bricks that can simulate it. You need some calculus similar to what mathematicians use when counting and manipulating infinities.

- But this isn't confirmed. Yes, the prevailing theories require the use of uncountable numbers to describe the universe, but I believe that theoreticians are working furiously to find other theories that don't require that. Yes, a LEGO approach is contradicted by evidence to the extent we're able to measure, but can we possibly deal with the implications of the construction of the universe actually requiring uncountable numbers? That is a fundamentally existentialist question - uncountable numbers contradict my intuition as much as existence itself does. – AlanSE May 26 '11 at 18:57 1 Note that infinite $\not=$ uncountable. The natural and rational numbers are perfectly countable. – Lagerbaer May 26 '11 at 22:18

See Tipler, The Physics of Immortality. There you can read one approach.

- The number he estimates has an upper bound and definitely is finite. – TheBlastOne Aug 14 '12 at 10:59 Wasn't it 2^1024^1024? – TheBlastOne Aug 14 '12 at 11:00

Bits, or qubits? Classical or quantum computer? Exact simulation accuracy, or good enough accuracy? If perfect accuracy, the computer can't be part of the universe because no finite entity can simulate itself with perfect accuracy. The measurements also have to be performed at the meta level.

- Bits. What is the difference between "exact" and "good enough"? We only need to be accurate enough to distinguish a number of states in agreement with the Bekenstein bound. If our resolution is higher than that, and we can simulate more states as a result, we violate the Bekenstein bound. – z5h May 27 '11 at 14:58

Any way you look at it, you need an infinite number of bits. This is because if you have only a finite number, then you can't describe the description, as Konard stated. That is, unless the description happens from outside of the universe. In that case the question is simply whether the universe has an infinite amount of information in it or not.
I don't know any proofs that the universe is composed of infinite information, and some views relate energy and information (a demonstration that it is possible to convert information to energy, as in Maxwell's demon thought experiment, was allegedly given here: http://www.livescience.com/8944-maxwell-demon-converts-information-energy.html), so if you believe there is a finite amount of energy, then maybe this means that there is also a finite amount of information, so only a finite amount of information can be used to describe it. However, as far as I know, most physical theories today require continuity and make use of it, so this number of required bits is obviously infinite. So if you believe that, you may be able to describe the universe from within (like Borges' short story "The Aleph" http://www.phinnweb.org/links/literature/borges/aleph.html). It seems to me that in this case the interesting question is whether we have a continuous universe or not. Whether space is enumerable or not. I asked this question yesterday here: Can we have non-continuous models of reality? Why don't we have them?

-

The number of bits, in any form, is so close to infinite that it doesn't make much sense to estimate it. Continuing with Lagerbaer's method, let's suppose we can find a ten-parameter fit to electronic wavefunctions for each electron, not using a grid, but using some parameters that describe the center position and spread, and oscillations. The phenomenon of entanglement means that you need a hundred-parameter fit for 2 electrons, and for 10^80 electrons, you need $$10^{10^{80}}$$ numbers, or if you want to be pedantic in terms of bits, assuming double precision is good enough: $$10^{10^{80}+2}$$ This is an order of magnitude which is totally false, since I left out the far larger number of photons. If you want to describe the wavefunction of photons (and protons and neutrons), then you need a lot more numbers in the double exponent. This estimate is mind-bogglingly absurd: the majority of this wavefunction is describing highly entangled superpositions of particle positions that are nothing like what we observe classically. A classical description requires $$10^{80}$$ bits, give or take, since it scales linearly with the number of particles. This mismatch in scaling between quantum mechanics and the classical approximation to the universe is what makes a lot of people uneasy with taking quantum mechanics seriously as the final theory. What possible use is there in requiring such a vast number of bits for simulation? Wouldn't it be nicer to have a theory which has the right number of bits? The vast computational space of quantum mechanics is also what makes people interpret it as a many-worlds theory: it is spreading into a space of possibilities that is so staggeringly huge, and our status in the theory only lets us see a tiny little subpart of this enormous space. One can take the view that quantum mechanics is complete, and since it is so much vaster than classical mechanics, even a modest-size quantum computer, on the order of 10,000 qubits, can do factoring calculations that exceed the capacity of a classical computer of $10^{80}$ bits. If we build such a computer, it will be hopeless to reduce the description to a classical one. But we haven't done so yet, so a serious question remains: does there exist a theory in which you can reduce quantum mechanics to a manageable size?
Can you reproduce the dinky quantum mechanics which we see, which is essentially just classical mechanics with occasional quantum effects, with a theory which is fundamentally classical? The one thing we know for sure is that we can't do this locally. If you use a local classical model, you will fail to reproduce Bell's inequality violations. But gravity is known to be nonlocal, and one can (barely) imagine a nonlocal classical computer conspiring to produce something that looks like quantum mechanics for some sort of embedded observers. Nobody has such a theory, but if it makes a computation the size of the classical universe, it will predict that quantum computation will fail when factoring numbers that are large enough, yet in principle doable.

- This is not a correct answer. It is true only if you are speaking about a storage that can operate with variables of no less than 1 bit size. In that case you need $10^{80}$ such variables. But the majority of that space is wasted, because in reality each variable does not need a FULL bit. That is, by compressing several such variables into one bit, you'll need much less storage. Unfortunately, classical devices cannot independently manipulate quantities of information less than 1 bit. It is easier to estimate the true entropy of the universe from other considerations, see my answer below – Anixx Dec 29 '12 at 14:46 @Anixx: you are talking about compression, and this is not appropriate for an encoded quantum wavefunction. How are you supposed to compress wavefunction information? For the general case of a quantum computer it is certainly impossible. Anyway, I agree that if the universe is classical, it's what you say in your answer, nothing I said disagrees with you, but you are giving the number of qubits, not the number of bits, in the quantum holographic description. – Ron Maimon Jan 4 at 2:44

Divide the area of the cosmological horizon, measured in Planck units, by 4; this gives the needed information quantity in nats. Convert into bits by dividing by $\ln 2$. You'll get the needed value.

-
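To put numbers on these two extremes, here is a short Python sketch of my own (constants rounded; the horizon radius of roughly $4.4\times10^{26}$ m is an assumed standard value) computing Lagerbaer's grid-storage estimate for the iron atom and the holographic bound from the last answer:

```python
import math

# Lagerbaer's estimate: a 10-point grid per axis (1000 spatial points)
# for a 26-electron wavefunction, at 16 bits per stored number.
numbers = 1000 ** 26                  # 10^78 stored values
grid_bits = 16 * numbers
print(f"grid estimate: ~10^{len(str(grid_bits)) - 1} bits")

# Holographic bound: entropy ~ horizon area / 4 in Planck units (nats).
R = 4.4e26        # rough cosmological horizon radius, m (assumed)
l_p = 1.616e-35   # Planck length, m
nats = 4 * math.pi * R**2 / (4 * l_p**2)
bits = nats / math.log(2)
print(f"holographic bound: ~10^{math.log10(bits):.0f} bits")
```

The first print gives roughly 10^79 bits, matching the answer's "approximately $10^{79}$ to $10^{80}$"; the second lands near the often-quoted 10^123 or so for the observable universe.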
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9305001497268677, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/36471/the-equivalence-of-two-formulae-for-the-laplacebeltrami-operator/36495
# The equivalence of two formulae for the Laplace–Beltrami operator

Let $M$ be a (pseudo-)Riemannian manifold with metric $g_{ab}$. Let $\nabla_a$ be the Levi-Civita connection on $M$. It's well-known that the Laplace–Beltrami operator can be defined in this context as $$\nabla^2 = \nabla^a \nabla_a = g^{ab} \nabla_a \nabla_b$$ where $g^{ab}$ is the dual metric and repeated indices are summed. However, we also have the coordinate formula $$\nabla^2 \phi = \frac{1}{\sqrt{| \det g |}} \partial_a \left( \sqrt{| \det g |} g^{ab} \partial_b \phi \right)$$ which, as I understand, comes from using the formula for the Hodge dual. Without invoking advanced machinery, what is the easiest way to directly prove the equivalence of the two definitions? I can see that if the partial derivatives of $g_{ab}$ vanish, then $$\nabla_{a} \left( g^{ab} \nabla_b \phi \right) = \partial_a \left( g^{ab} \partial_b \phi \right) = \frac{1}{\sqrt{| \det g |}} \partial_a \left( \sqrt{| \det g |} g^{ab} \partial_b \phi \right)$$ and in the general case it suffices to prove that $$\Gamma^{b}_{\phantom{b}ab} = \frac{1}{\sqrt{| \det g |}} \partial_a \left( \sqrt{| \det g |} \right)$$ but then it seems to be necessary to compute the derivative of a determinant. Is there a trick which can be used to avoid this calculation?

- I don't know of a trick to avoid this, but (from a linear algebra or introductory differential geometry class) you probably know how to compute the derivative of $\det$ - right? – Gerben May 2 '11 at 18:17 @Gerben: Yes, I can finish the proof that way. But I think it's not unfair to say that the derivative of $\det$ is esoteric knowledge in the context this problem arose. (It came up in an undergraduate general relativity exam, and I suspect candidates are not expected to be analysts.) – Zhen Lin May 2 '11 at 18:59 I didn't see this comment before I wrote my answer. Still, I think it is quite reasonable to expect people to know how to differentiate the determinant; it is after all apparent from Cramer's rule. – Glen Wheeler May 3 '11 at 19:54

## 2 Answers

For general relativity students I think this might be a reasonable proof. One needs to know that your second expression for $\nabla^2\phi$ (the one using $\sqrt{|\det g|}$) is independent of the choice of coordinates (*). At any point you can then choose a coordinate system such that $\partial_a g_{bc}=0$, and the two formulas (as you notice) give the same result.

(*) means (as you also notice - you just want us to restate it in simpler language - and I'm not sure whether I succeed in doing it) that if $u^a$ is a vector-valued density, i.e. if $u^a=v^a \sqrt{|\det g|}$ where $v^a$ is a vector field, then $\partial_a u^a$ is a density, i.e. $\partial_a u^a=f\sqrt{|\det g|}$ for some function $f$ (with $f$ independent of the choice of coordinates - which implies $f=\nabla_av^a$). If we really want to avoid differential forms then we can invoke Gauss's theorem (in coordinates), notice that the flux of $u^a$ through a hypersurface is independent of the coordinates, and hence that its divergence $\partial_a u^a$ is a well-defined (coordinate-independent) density.

edit: here is (really the same, but) a bit more sensible argument why $f$ (see above) is independent of coordinates.
If $\psi$ is a function with compact support then $\int (\partial_a\psi)\, v^a \sqrt{|\det g|} dx^1\dots dx^n$ is independent of the choice of coordinates, and it is equal to (by integration by parts) $$-\int \psi \partial_a\Bigl( v^a \sqrt{|\det g|}\Bigr)\sqrt{|\det g|}^{-1}\,\sqrt{|\det g|} dx^1\dots dx^n$$ which shows (by choosing $\psi$ with smaller and smaller support and with $\int\psi\sqrt{|\det g|} dx^1\dots dx^n=1$) that $$f=\partial_a\Bigl( v^a \sqrt{|\det g|}\Bigr)\sqrt{|\det g|}^{-1}$$ is independent of coordinates.

- Your first paragraph seems like the most plausible intended solution. I can't help but feel it's circular reasoning though — in some sense, this proof is how we know the second formula is independent of the choice of coordinates, isn't it? – Zhen Lin May 2 '11 at 19:57 @Zhen Lin: you are certainly right - I tried to mumble something in the second paragraph about why the formula is independent of coordinates, and now I added (more or less) a proof of the fact – user8268 May 2 '11 at 20:48

This does not need any fancy trickery or complicated machinery. The moral of the story is: do not be scared of differentiating determinants! Formally, the derivative of a determinant is the trace. This actually happens quite often in geometric analysis, because the measure on a Riemannian manifold is given by $d\mu = \sqrt{\det g} \mathcal{H}^n$, where $n$ is the dimension of the manifold and $\mathcal{H}^n$ is $n$-dimensional Hausdorff measure. We have $$\partial_k \det A = (\partial_k A_{ij})A^{ij} \det A,$$ so in particular $$\partial_i \sqrt{\det g} = \frac{(\partial_ig_{pq})g^{pq}}{2} \sqrt{\det g}.$$ Thus $$\begin{align*} \frac{1}{\sqrt{\det g}}\partial_i\Big(g^{ij}\sqrt{\det g}\partial_j\phi\Big) &= (\partial_ig^{ij})\partial_j\phi + g^{ij}\frac{1}{\sqrt{\det g}}\Big(\partial_i\sqrt{\det g}\partial_j\phi\Big) + g^{ij}\partial_{ij}\phi\\ &= (\partial_ig^{ij})\partial_j\phi + g^{ij}\frac{1}{\sqrt{\det g}}\Big(\frac{(\partial_ig_{pq})g^{pq}}{2} \sqrt{\det g}\partial_j\phi\Big) + g^{ij}\partial_{ij}\phi\\ &= (\partial_ig^{ij})\partial_j\phi + \frac{1}{2}g^{ij}(\partial_ig_{pq})g^{pq}\partial_j\phi + g^{ij}\partial_{ij}\phi\\ &= g^{ij}\partial_{ij}\phi - g^{ij}\Gamma^k_{ij}\partial_k\phi, \end{align*}$$ using the standard expression for the coefficients of the Levi-Civita connection (the Christoffel symbols) in terms of the metric; the last step uses the identity $g^{ij}\Gamma^k_{ij} = -\partial_i g^{ik} - \frac{1}{2}g^{ik}g^{pq}\partial_i g_{pq}$, and the result is exactly $g^{ij}\nabla_i\nabla_j\phi$. -
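As a sanity check on this identity, here is a small SymPy sketch of my own that verifies the two formulas agree on the unit 2-sphere with metric $g = \mathrm{diag}(1, \sin^2\theta)$, for a generic test function:

```python
import sympy as sp

th, ph = sp.symbols('theta phi')
x = [th, ph]
f = sp.Function('f')(th, ph)

# Metric of the unit 2-sphere, its inverse and its determinant.
g = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])
ginv = g.inv()
detg = g.det()

# Christoffel symbols Gamma^k_{ij} of the Levi-Civita connection.
Gamma = [[[sum(ginv[k, l] * (sp.diff(g[l, i], x[j]) + sp.diff(g[l, j], x[i])
                             - sp.diff(g[i, j], x[l])) / 2 for l in range(2))
           for j in range(2)] for i in range(2)] for k in range(2)]

# Covariant formula: g^{ij} (d_i d_j f - Gamma^k_{ij} d_k f).
lap1 = sum(ginv[i, j] * (sp.diff(f, x[i], x[j])
                         - sum(Gamma[k][i][j] * sp.diff(f, x[k])
                               for k in range(2)))
           for i in range(2) for j in range(2))

# Coordinate formula: (1/sqrt(g)) d_i ( sqrt(g) g^{ij} d_j f ).
lap2 = sum(sp.diff(sp.sqrt(detg) * ginv[i, j] * sp.diff(f, x[j]), x[i])
           for i in range(2) for j in range(2)) / sp.sqrt(detg)

print(sp.simplify(lap1 - lap2))  # 0
```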
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 30, "mathjax_display_tex": 9, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9493722319602966, "perplexity_flag": "head"}
http://crypto.stackexchange.com/questions/59/taking-advantage-of-one-time-pad-key-reuse/64
# Taking advantage of one-time pad key reuse?

Suppose Alice wants to send encryptions (under a one-time pad) of $m_1$ and $m_2$ to Bob over a public channel. Alice and Bob have a shared key $k$; however, both messages are the same length as the key $k$. Since Alice is extraordinarily lazy (and doesn't know about stream ciphers), she decides to just reuse the key. Alice sends ciphertexts $c_1 = m_1 \oplus k$ and $c_2 = m_2 \oplus k$ to Bob through a public channel. Unfortunately, Eve intercepts both of these ciphertexts and calculates $c_1 \oplus c_2 = m_1 \oplus m_2$.

What can Eve do with $m_1 \oplus m_2$? Intuitively, it makes sense that Alice and Bob would not want $m_1 \oplus m_2$ to fall into Eve's hands, but how exactly should Eve continue with her attack?

## 5 Answers

There is a great graphical representation of the possible problems that arise from reusing a one-time pad. Reusing the same key multiple times is called giving the encryption 'depth' - and it is intuitive that the more depth given, the more likely it is that information about the plaintext is contained within the encrypted text. The process of 'peeling away' layered texts has been studied, as ir01 mentions, and those methods improve with more layers.

- 2 This picture illustrates things beautifully. I guess the spirit of my question was "how would you actually do the statistical analysis once you have $m_1 \oplus m_2$"; a respectable cryptographer would probably say something like "that's trivial". – Elliott Jul 14 '11 at 0:52

There are two methods: statistical analysis (frequency analysis) and pattern matching. Note that in statistical analysis Eve should compute frequencies for $aLetter \oplus aLetter$ using some tool like this. A real historical example using frequency analysis is the VENONA project. EDIT: Having a statistical analysis of $aLetter \oplus aLetter$ like this one says: if a character has distribution $X$, then with probability $P$ the two characters behind $c_1 \oplus c_2$ are $c_1$, $c_2$.

A recent (2006) paper that describes a method is "A natural language approach to automated cryptanalysis of two-time pads". The abstract: While keystream reuse in stream ciphers and one-time pads has been a well-known problem for several decades, the risk to real systems has been underappreciated. Previous techniques have relied on being able to accurately guess words and phrases that appear in one of the plaintext messages, making it far easier to claim that "an attacker would never be able to do that." In this paper, we show how an adversary can automatically recover messages encrypted under the same keystream if only the type of each message is known (e.g. an HTML page in English). Our method, which is related to HMMs, recovers the most probable plaintext of this type by using a statistical language model and a dynamic programming algorithm. It produces up to 99% accuracy on realistic data and can process ciphertexts at 200ms per byte on a \$2,000 PC. To further demonstrate the practical effectiveness of the method, we show that our tool can recover documents encrypted by Microsoft Word 2002

-

The thing here is: when you just XOR the ciphertexts with each other, what you get is in fact the XOR result of both cleartexts: f(a) ⊕ f(b) = a ⊕ b. And after that point, all that's left is to use statistical analysis, as ir01 has mentioned. In fact, early cell phones used to implement a somewhat similar encryption scheme.
They had a one-byte (if my memory serves me well) key which was used to XOR the voice in blocks. Thus, an attacker could just XOR the voice message with itself phase-shifted by one byte, and get the clear voice communication phase-shifted and XOR'd by itself. That is indeed very easy to crack, even easier than the XOR result of two separate cleartexts. Also, as Tangurena mentioned, the Soviet message traffic was decrypted due to the fact that one-time pads had been re-used. See the Wikipedia article on the VENONA Project. Plus, here's an article with a little more insight into the practical side of the subject: Automated Cryptanalysis of Plaintext XORs of Waveform Encoded Speech

-

If you have $m_1 \oplus m_2$, you can learn about the underlying message format. It is possible to determine patterns in the underlying plaintext and use these patterns to extract data from the ciphertext.

- For example, every zero in the output indicates a matching byte in the two inputs. – David Schwartz Aug 23 '11 at 12:13
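A tiny Python sketch of my own showing the core observation: with a reused key the key cancels, and zero bytes in $c_1 \oplus c_2$ expose positions where the plaintexts match, exactly as David Schwartz notes:

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

m1 = b"attack at dawn!"
m2 = b"attack at dusk!"
key = os.urandom(len(m1))   # a genuine one-time pad... used twice

c1, c2 = xor(m1, key), xor(m2, key)

# The key cancels: Eve recovers m1 XOR m2 without knowing the key.
assert xor(c1, c2) == xor(m1, m2)

# Zero bytes mark positions where the two plaintexts agree.
print([i for i, byte in enumerate(xor(c1, c2)) if byte == 0])
# -> [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 14]
```

From here, crib-dragging (guessing a word of $m_1$ and XORing it into $m_1 \oplus m_2$ to see whether readable text of $m_2$ appears) is the classical manual attack; the paper cited above automates it with a language model.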
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9334092736244202, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/204519/is-the-set-of-all-valid-c-programs-countably-infinite
# Is the set of all valid C++ programs countably infinite?

I have heard that the set of valid programs in a certain programming language is countably infinite. For instance, the set of all valid C++ programs is countably infinite. I don't understand why though. A programming language has open curly braces and corresponding closing ones. Wouldn't one need a stack to track the braces? Hence, how can one design a DFA that accepts valid C++ programs?

- 10 valid c++ $\subset \Sigma^*$ and $\Sigma^*$ (all strings of alphabet $\Sigma$) is countably infinite. – ratchet freak Sep 29 '12 at 20:33

## 5 Answers

Well, a valid C++ program (or really any C++ program) will simply be a finite sequence composed of a finite collection of characters and a few other things (indentation, spaces, etc.). It is a general result that the set of all finite sequences of entries from a finite alphabet will be countably infinite. To show that there are countably infinitely many valid C++ programs, you need only show there is no finite upper bound on the length of valid C++ programs.

Addendum: Another approach (an alternative to showing there is no finite upper bound on length) is to actually explicitly define (in a theoretic sense) countably infinitely many valid C++ programs. For example, for a given positive integer, the program that simply prints said integer, then ends (as I mentioned in the comments below). The following program template should do the trick:

````
#include <iostream>
using namespace std;

int main() {
    cout << "___________";
    return 0;
}
````

That "____" part is the spot where you'd type in whatever positive integer you wanted the program to print out--whether that be $1$, or $23234$, or $1763598730987307865$, or whatever--instead of the underscores. Now, obviously, no matter how fast you can type, there are integers big enough that you couldn't finish typing in your lifetime, so in practice, there are programs of this type that you could never finish. Even if such a program were handed to you, you'll certainly run into memory problems for sufficiently large integers (depending on the computer), but they should still be valid programs. We can say that such programs all exist in a "theoretical" sense. That is, given sufficient memory and power to store and run it--necessarily a finite (though perhaps prohibitively large) amount--and given sufficient time to program and run it--necessarily a finite (though perhaps prohibitively long) amount--this program will do what it's supposed to do. Please don't give me any grief about the heat death of the universe or anything like that. ;)

- 5 I think it is easy to show that the number of valid C++ programs is infinite - there are a number of quite trivial infinite families. The language argument shows that they are countable. How to create an algorithm to recognise a valid C++ program depends on the notion of validity. For example there could be some kind of potentially invalid branch which is provably never called. Is this a valid or invalid program (assuming the rest is OK)? Is the question whether the program can be compiled - in which case one test is to give it to a compiler and see what happens. – Mark Bennet Sep 29 '12 at 20:15 Yeah, if I actually knew more about the language (specifically, validity of a program), I'd have suggested an algorithm. I suppose one could always just refer to the program that just prints a given positive integer, then ends. There are readily countably infinitely many of those, and moreover, they can't be bounded in length....
– Cameron Buie Sep 29 '12 at 21:59 @CameronBuie just with printing positive integers you're running into the simple problem of finite memory on every machine. So you can only represent a limited number of them. But putting these crazy thoughts aside, your argument obviously still holds. ;-) – stefan Sep 30 '12 at 0:28 2 C++ is not bound to a specific implementation. Every single implementation may (and will) have resource limits, that is, for any C++ implementation there will be a valid C++ program which cannot be handled by the given C++ implementation. So not being able to compile and/or execute a program on any given machine doesn't make it an invalid C++ program (nor the implementation of C++ a non-conforming implementation, as long as it documents its resource limits). Since C++ does not put any upper limit on the size of pointers, it also does not put a limit on the memory a C++ program can use. – celtschk Sep 30 '12 at 11:59 1 @Spenser: Every C++ code is necessarily finite, and its contents come from a finite alphabet. Therefore the fact that there is an infinite set of programs means that this set is countably infinite. @Cameron: I think you need a pretty big bignum library for this sort of code `:-)` – Asaf Karagila Oct 27 '12 at 1:27

Countably infinite doesn't mean regular. The C++ grammar isn't regular. In fact, it isn't even context free. Yet, the set of all valid C++ programs is countably infinite. To see why, first notice that it's infinite. No matter what $n \in \mathbb{N}$ you pick, you can always write a C++ program that is longer than $n$. Next, let $S_n$ be the set of all C++ programs of length $n$. Each $S_n$ is finite. The set of all C++ programs (of all possible lengths) is a countable union of sets $S_n$: $$S = \bigcup_{n=0}^\infty S_n$$ Since the countable union of countable (or finite) sets is at most countable, we conclude that the set of all valid C++ programs is countable.

- 1 There are models of ZF in which $\Bbb R$ is a countable union of countable sets, so a countable union of countable sets need not be countable. That's not relevant to the topic at hand, of course, but still true. – Cameron Buie Sep 29 '12 at 19:33 1 @CameronBuie Your comment is slightly misleading, since the reals are countable when viewed outside of the model, but viewed inside the model they are uncountable. Basically the poster's statement is true even in a countable model of ZF when viewed from inside of that model, whereas the reals are not a countable union of countable sets when viewed from inside the model, regardless of the countability of the model. – Tim Nov 2 '12 at 22:55 @Tim: What do you mean by "when viewed outside the model"? – Cameron Buie Nov 2 '12 at 23:56 @CameronBuie Basically, I'm not sure if you understand Skolem's paradox properly. There are countable models of ZF. Using this model it is perfectly possible to prove the uncountability of the real numbers using Cantor's diagonal argument. Thus, in the model, the reals are uncountable, even though the model itself is countable. You are confusing statements about the model with statements in the model. – Tim Nov 3 '12 at 0:45 1 @Tim: I'm way late to this, but Cameron is correct. The fact that the countable union of countable sets is countable does use the axiom of choice. There are models of ZF (but not of ZFC!) where $\mathbb{R}$ is (internally) a countable union of countable sets. This has nothing to do with Skolem's paradox.
– Jason DeVito Apr 3 at 2:26

A C++ program is a finite sequence of characters in a specified finite alphabet. The set of all finite sequences of characters in that alphabet is countably infinite. The set of all valid C++ programs is a subset of the set of all finite sequences of characters in that alphabet. An infinite subset of a countably infinite set is countably infinite. (It's infinite because there is no finite upper bound on the lengths of C++ programs.)

-

I propose the following:

1. Each natural number is a program (a file is nothing but a very large number).
2. Some of these programs are valid C++ programs.

If we now show that for every valid C++ program n there exists a program n + m that is also a valid C++ program, then the number of C++ programs is countably infinite.

1. Let n_0 be a classical hello world program.
2. For every n, there is an m that adds a trivial line to n (cout << "Hello!";).
3. Proved.

-

As several posters have already pointed out, the set of valid C++ programs is countably infinite. The OP's concern has some merit though. On an actual computer, the memory is finite, so a valid program is not just a certain finite string, but a finite string of bounded length, and thus the set of valid, parsable programs on a specific computer is finite (but extremely large).

- While on any given computer the memory is finite, you can still just build a bigger computer. Also, there is nothing in the C++ standard demanding that a valid C++ program must at some instance of time be stored completely on the computer. – celtschk Sep 30 '12 at 12:11
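The counting argument in these answers can be made completely explicit: enumerate all finite strings over a finite alphabet in order of length, then lexicographically. A minimal Python sketch of my own of that enumeration (which is a bijection between $\mathbb{N}$ and $\Sigma^*$):

```python
from itertools import count, product

def all_strings(alphabet):
    """Yield every finite string over `alphabet`, shortest first:
    this lists the countable union S = U_n S_n, where S_n is the
    (finite) set of strings of length n."""
    for n in count(0):
        for chars in product(alphabet, repeat=n):
            yield "".join(chars)

# Every C++ source file appears somewhere in this enumeration of
# strings over (a superset of) the C++ character set; the valid
# programs form an infinite subset, hence are countably infinite.
gen = all_strings("ab")
print([next(gen) for _ in range(7)])  # ['', 'a', 'b', 'aa', 'ab', 'ba', 'bb']
```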
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9278780221939087, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-algebra/151204-matrix-transformations.html
# Thread:

1. ## Matrix Transformations

Hello everyone, can anybody tell me if there exists any transformation between matrix addition and matrix multiplication (in any domain)? E.g., for the Fourier transform, a convolution in one domain is a multiplication in the other. I actually want to multiply a random matrix instead of adding it in my equation. What changes/effects do I have to make? regards aliya

2. Well, one option would be exponentiation. You can exponentiate diagonalizable matrices in a rather straight-forward manner: if $A=PDP^{-1}$, where $D$ is diagonal, then $e^{A}=Pe^{D}P^{-1}$, and $e^{D}$ you can compute by simply exponentiating each number on the main diagonal. Because matrix multiplication is not, in general, commutative, you might also need to have some condition on the commutator in order actually to set $e^{A+B}=e^{A}e^{B}.$

3. ## thanx

Thank you, Ackbeet, for your reply. It indeed was useful to some extent. But it can make my solution complex, as this method requires three assumptions: 1. matrices A and B be commutative; 2. matrix A be diagonalizable; 3. matrix B be nilpotent. Is there any solution which is more general than this (i.e., without assumptions)?

4. Well, taking a step back, what's preventing you from simply multiplying in your equation? Multiplication is not commutative, it is true. So you do have to be careful. But if you have control over the way your equations look, then it seems to me you can just change to multiplication by fiat. I guess what I'm getting at is that a little more context for the problem would be helpful.

5. Thank you, Ackbeet. I think I am much clearer on this now.

6. You're welcome. Good luck!

7. Hello, I need a bit more help please. By what factor is e^A.e^B different from A.B? Does any close approximation exist?

8. I'm not sure what you're asking. Can you provide more context?

9. Suppose two matrices A and B. Now simple matrix multiplication (i.e., A.B) is not equal to the product of matrix exponentials (i.e., e^A.e^B), right? I was just curious whether these two products have any relationship. For example, does there exist any quantity 'X' for which I can say A.B = X(e^A.e^B)? Or vice versa.

10. I'm not aware of any X that will do that for you. On the RHS of that equation, you have two infinite series being multiplied together. On the LHS, you have simple matrix multiplication. There might conceivably be some sort of operation you could do, something like Fourier analysis, on the RHS in order to get you the LHS. But Fourier analysis on matrices is not something I've ever seen. There is such a thing as the Fourier matrix (you can google it), but I've never studied it. I think you've definitely reached the end of my knowledge here, I'm afraid.
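A quick numerical check of the diagonalization recipe from post 2 and its commutator caveat, as a Python/NumPy sketch of my own (all four matrices are chosen to be diagonalizable so the recipe applies):

```python
import numpy as np

def expm_diag(A):
    """Matrix exponential via diagonalization, e^A = P e^D P^{-1},
    valid when A is diagonalizable (as described in post 2)."""
    evals, P = np.linalg.eig(A)
    return P @ np.diag(np.exp(evals)) @ np.linalg.inv(P)

# Two commuting (simultaneously diagonalizable) matrices:
A = np.array([[1.0, 2.0], [2.0, 1.0]])
B = np.array([[3.0, 1.0], [1.0, 3.0]])
assert np.allclose(A @ B, B @ A)

# For commuting A and B, e^(A+B) = e^A e^B holds ...
assert np.allclose(expm_diag(A + B), expm_diag(A) @ expm_diag(B))

# ... but for non-commuting matrices it generally fails:
C = np.array([[1.0, 1.0], [0.0, 2.0]])
D = np.array([[1.0, 0.0], [1.0, 2.0]])
print(np.allclose(expm_diag(C + D), expm_diag(C) @ expm_diag(D)))  # False
```

The failure in the last line is exactly the commutator condition Ackbeet mentions: the correction terms are governed by the Baker–Campbell–Hausdorff series, which vanish when [A, B] = 0.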
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9475001096725464, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/67645/finitely-presented-groups-which-are-not-residually-amenable/67651
## Finitely presented groups which are not residually amenable

What are examples of finitely presented but not residually amenable groups? Well, the examples that I want are simple f.p. groups, as well as examples of non-residually-amenable groups that arise from reasons other than simplicity. Thank you for all your references!

- @Kate: Take a non-residually-finite group $G$ with property $T$. Then $G$ cannot be residually amenable (since every amenable quotient of $G$ is finite). Examples of fp groups like this are given by some lattices in non-linear Lie groups, see e.g., J. Millson, "Real vector bundles with discrete structure group", Topology 18 (1979) 83–89, and M. Raghunathan, "Torsion in co-compact lattices of Spin(2,N)", Math. Ann. 266 (1984) 403–419. None of these groups is simple (they also do not contain infinite simple subgroups). This answers your 2nd question. – Misha Apr 13 2012 at 4:56 Thank you, Misha, for your answer and the citation! – Kate Juschenko Apr 24 2012 at 7:41

## 3 Answers

Let $G$ be an adjoint Kac–Moody group over a (sufficiently large) finite field $\mathbf F_q$. By results of Caprace–Rémy, $G$ is simple when its diagram is connected and has indefinite type, i.e. neither spherical nor affine, and finitely presented when the diagram does not contain an edge labelled with $\infty$. In this case, $G$ itself is not amenable, as it contains the free product of two root groups $U_\alpha * U_\beta$. Varying the ground field and the diagram then gives a two-parameter family of examples.

- thanks for your example – Kate Juschenko Jun 15 2011 at 19:35

Take any finitely presented infinite simple group $G$. It is not residually anything (well, it is residually $G$). Now take such a $G$ that contains a nonabelian free group. For example, take Elizabeth Scott's finitely presented group $G$ that contains $GL_3(\mathbb{Z})$. (See Scott, Elizabeth A. The embedding of certain linear and abelian groups in finitely presented simple groups. J. Algebra 90 (1984), no. 2, 323–332.)

- thanks for the citation and example. – Kate Juschenko Jun 15 2011 at 19:35 You're welcome. – Richard Kent Jun 15 2011 at 21:07

Cornulier has a finitely presented sofic group which is not the limit of amenable groups: http://arxiv.org/pdf/0906.3374

- Thanks, Alain, in fact this paper was my starting point... – Kate Juschenko Jun 14 2011 at 12:05 Also, I've received several very good references from Mark Sapir on finitely presented simple groups. The question I wanted to ask is whether there could be many more examples of finitely presented (non-simple) groups that are not residually amenable. – Kate Juschenko Jun 14 2011 at 14:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9170600771903992, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/57277/must-a-surjective-isometry-on-a-dual-space-have-a-pre-adjoint/57286
## Must a surjective isometry on a dual space have a pre-adjoint?

Background: Let $X$ be a Banach space. We know a linear map $h$ is a surjective isometry of $X$ if and only if its adjoint $h^*$ is a surjective isometry of $X^*$. In general, a linear map $g:X^* \to X^*$ need not have a pre-adjoint. But what if $g$ is a surjective isometry? Must there exist $f:X \to X$ such that $g=f^*$? If we identify $X$ with its embedding in $X^{**}$, this is equivalent to $g^*(X) \subset X$. Sorry if this question is trivial; it seems like this should be well-known, but I haven't been able to find a reference or an easy counter-example.

- [deleted earlier comment as it was based on a misreading] – Yemon Choi Mar 3 2011 at 22:53

## 1 Answer

Let $X$ be the space of sequences, indexed by the nonzero integers, that tend to $0$ at $-\infty$ and to an arbitrary finite limit at $\infty$, with sup norm (a direct product of $c_0$ and $c$). Then $X^*$ can be identified with $\ell^1$ of $\mathbb{Z}$. If $f$ is in $\ell^1$, then the corresponding functional on $X$ sends $x$ to $\sum_{n\neq0}f_nx_{n} + f_0\cdot\lim_{n\to\infty}x_n$. The map on $\ell^1\cong X^*$ defined by $(f_n)_{n\in\mathbb Z}\mapsto (f_{-n})_{n\in\mathbb{Z}}$ is a linear surjective isometry with no pre-adjoint. (If there were a pre-adjoint, it would have to send $(x_n)_{n\in \mathbb Z\setminus\{0\}}$ to $(x_{-n})_{n\in\mathbb Z\setminus\{0\}}$.)

- Much simpler than the example I was thinking of – Yemon Choi Mar 3 2011 at 21:02 Thanks, but I am curious about what your example is. – Jonas Meyer Mar 3 2011 at 21:06 Actually, I realize that I'd misread the question, so please ignore my comment! – Yemon Choi Mar 3 2011 at 22:53 Very nice example! Thank you very much. – edward-poon Mar 11 2011 at 4:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9271893501281738, "perplexity_flag": "head"}
http://mathoverflow.net/questions/35825?sort=newest
## Equivalent definitions of M-genericity.

I'm trying to learn about forcing, and have heard that there are several equivalent ways to define genericity. For instance, let M be a transitive model of ZFC containing a poset (P, ≤). Suppose G ⊆ P is such that q ∈ G whenever both p ∈ G and q ≥ p. Suppose also that whenever p,q ∈ G then there is r ∈ G such that r ≤ p and r ≤ q. Then the following are equivalent ways to say that G is generic:

(1) G meets every element of M dense in P. That is, for all D ∈ M, if for all p ∈ P there is q ∈ D such that q ≤ p, then G ∩ D is nonempty.

(2) G is nonempty and meets every element of M dense below some p ∈ G. That is, for all p ∈ G and all B ∈ M, if for each q ≤ p there is r ∈ B such that r ≤ q, then G ∩ B is nonempty.

Proving this equivalence seemed like it would be an easy exercise, but I think I'm missing something. Can someone point me toward a source where I can find a proof? I hope this is an acceptable question; this is my first time posting.

EDIT: Typo and omission fixed.

-

## 1 Answer

If $G$ satisfies (1), then it satisfies (2) because if $p$ is in $G$ and $D$ is dense below $p$, then let $D'$ be the set of conditions $q$ which are either in $D$ or incompatible with $p$. This is dense in $P$ since any condition that is compatible with $p$ will have elements of $D$ below it, and any condition incompatible with $p$ is already in $D'$. But $G$ cannot meet $D'$ in something incompatible with $p$, by your assumption on $G$, and so it must meet it in $D$, as desired. Conversely, if $G$ satisfies (2), then it will satisfy (1) because if $D$ is dense, then it is dense below any $p$, and so $G$ will meet it.

- Thanks. D' was exactly what I was missing. Now I see the general strategy for proving such equivalences. – unknown (google) Aug 17 2010 at 2:18 Great! There are several other equivalent characterizations of $M$-genericity: (3) $G$ meets every maximal antichain in $M$; (4) $G$ meets every pre-dense set in $M$; and provided $P$ is a complete Boolean algebra, (5) $G$ is $M$-complete, in the sense that if $M$ has a descending sequence in $G$, then it has a lower bound in $G$. – Joel David Hamkins Aug 17 2010 at 2:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 34, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9441508054733276, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-math-topics/205455-chessboard-proof.html
# Thread: Chessboard Proof

1. ## Chessboard Proof

Hey there, I also need some help on this proof. Imagine an infinite chessboard that contains a positive integer in each square. If the value in each square is equal to the average of its four neighbors to the north, east, south and west, prove the values in all the squares are equal. I understand this problem conceptually, but I can't seem to put it into words. I hope someone can guide me through the problem step by step, so I can understand how to do it. Thanks.

2. Re: Chessboard Proof

Model a solution as $\phi : \mathbb{Z} \times \mathbb{Z} \rightarrow \mathbb{Z}^+$ with $\phi(x,y) = \frac{\phi(x+1, y) + \phi(x, y+1) + \phi(x-1, y) + \phi(x, y-1)}{4}.$

Consider the set $\{\phi(x, y) \mid (x,y) \in \mathbb{Z} \times \mathbb{Z} \} \subset \mathbb{Z}^+.$ By well ordering, that set has a minimal element, call it $r$, so there exists $(x_0, y_0) \in \mathbb{Z} \times \mathbb{Z}$ such that $\phi(x_0, y_0) = r$.

Now $\phi(x_0, y_0)$ is the average of its four neighbors, and each of them has value at least $r$. Therefore each of them equals $r$, because if even one of them were greater than $r$, that would imply $\phi(x_0, y_0) > r$. Thus $\phi(x_0+1, y_0) = \phi(x_0, y_0+1) = \phi(x_0-1, y_0) = \phi(x_0, y_0-1) = r.$

Repeat that reasoning with the four neighbors of $(x_0, y_0)$, and then with their neighbors, etc., until it's spread everywhere, to prove that $\phi(x, y) = r$ for all $(x, y) \in \mathbb{Z} \times \mathbb{Z}$. Knowing the reason it's true, you can hopefully write up a technical proof. Hint: show it's true vertically through $(x_0, y_0)$ for $(x_0, y_1)$, and then horizontally for $(x_1, y_1)$.

3. Re: Chessboard Proof

Hey, thanks a lot for getting me started on this problem. Just one question: I'm not exactly sure what those symbols mean, like phi and ZxZ. I am hoping you can let me know what they mean, just so I can understand each part.

4. Re: Chessboard Proof

$\mathbb{Z}$ is the symbol used to indicate the set of integers, $\mathbb{Z} = \{\ldots, -3, -2, -1, 0, 1, 2, 3, 4, \ldots \}$, and $\mathbb{Z}^+$ is the set of positive integers, $\mathbb{Z}^+ = \{ 1, 2, 3, 4, \ldots \}$.

$\phi$ is just the name of a function. You could replace $\phi$ by $f$ everywhere if you like.

$\mathbb{Z} \times \mathbb{Z}$ is the "Cartesian product" of those two sets: the set of ordered pairs whose first and second coordinates each come from $\mathbb{Z}$. So ordered pairs like $(-5, 71)$ and $(8, 13)$ are in $\mathbb{Z} \times \mathbb{Z}$, since for each of them the first coordinate (like $-5$) is in $\mathbb{Z}$ and the second coordinate (like $71$) is in $\mathbb{Z}$.

This is function notation: $\phi : \mathbb{Z} \times \mathbb{Z} \rightarrow \mathbb{Z}^+$, which means that $\phi$ is a function with domain $\mathbb{Z} \times \mathbb{Z}$ and codomain $\mathbb{Z}^+$. So $\phi(x, y)$ is the value in the square labelled $(x, y)$, where $x$ and $y$ are integers. Since each labelled square $(x, y)$ has a value that's a positive integer, the values of $\phi$ are positive integers, so we write $\phi(x, y) \in \mathbb{Z}^+$.

5. Re: Chessboard Proof

Oh I see now, thanks. I understand it better now.
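The infinite board can't be checked by machine, but here is a small, hedged Python sketch of the same averaging condition on a 3×3 board with wrap-around (a torus, which is my own assumption, not part of the original problem); a brute-force search confirms that only constant grids satisfy it, matching the minimum argument above.

```python
from itertools import product

def satisfies_average(grid):
    """True if every cell equals the average of its 4 neighbours (torus wrap)."""
    rows, cols = len(grid), len(grid[0])
    return all(
        4 * grid[i][j] == (grid[(i + 1) % rows][j] + grid[(i - 1) % rows][j]
                           + grid[i][(j + 1) % cols] + grid[i][(j - 1) % cols])
        for i in range(rows) for j in range(cols)
    )

# brute force: all 3x3 grids with entries in {1, 2, 3}
solutions = [g for g in product((1, 2, 3), repeat=9)
             if satisfies_average([g[0:3], g[3:6], g[6:9]])]
print(solutions)   # only the three constant grids survive
```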
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 24, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9391047954559326, "perplexity_flag": "middle"}
http://mathhelpforum.com/geometry/126755-coordinate-geometry-about-parabolas-print.html
# coordinate geometry about parabolas

• February 2nd 2010, 02:46 AM fishlord40

A chord of the parabola that is perpendicular to the axis and 1 unit from the vertex has a length of 1 unit. How far is the vertex from the focus?

***By the way, the answer to this problem is 1/16. I just want to know how to get that answer. Thanks!!

• February 2nd 2010, 03:51 AM earboth

Quote: Originally Posted by fishlord40

1. Let the equation of the parabola be $4py = x^2$, where $p$ is the distance of the focus from the vertex.

2. You know the coordinates of 2 points of the parabola: $P(-\tfrac12, 1)$ and $Q(\tfrac12, 1)$.

3. Plug the coordinates of Q into the equation of the parabola and solve for $p$: $4p \cdot 1 = \left(\frac12\right)^2$
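For completeness, here is the final step (my own addition; the thread stops at the substitution):

$4p = \tfrac14 \;\Longrightarrow\; p = \tfrac{1}{16},$

so the vertex is $1/16$ of a unit from the focus, matching the answer quoted in the question.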
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9381921887397766, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2009/01/02/calculating-the-determinant/?like=1&source=post_flair&_wpnonce=f636fc8316
# The Unapologetic Mathematician

## Calculating the Determinant

Today we’ll actually calculate the determinant representation of an element of $\mathrm{GL}(V)$ for a finite-dimensional vector space $V$. That is, what transformation does an element of the general linear group induce on the one-dimensional space of antisymmetric tensors of maximal degree?

First off, what will the determinant of a linear transformation be? It will be an invertible linear transformation from a one-dimensional vector space to itself. That is, if we have a basis (any single nonzero vector) for the one-dimensional space, the determinant can be described by an invertible $1\times1$ matrix — a single nonzero field element. The action is just to multiply every vector by this field element.

So all we have to do is find a vector in our space and see what the representation does to it. But finding a vector is easy. We just pick a basis of $V$ — any basis of $V$ — and antisymmetrize a $d$-tuple of basis elements. The obvious one to pick is $e_1\otimes...\otimes e_d$. Then we have an antisymmetric tensor

$\displaystyle A\left(\bigotimes\limits_{k=1}^d e_k\right)$

Now, we could calculate this right away, but let’s not do that. What we’re really interested in is how a group element $T\in\mathrm{GL}(V)$ acts on this tensor

$\displaystyle T^{\otimes d}\left(A\left(\bigotimes\limits_{k=1}^d e_k\right)\right)$

But remember that the actions of the symmetric and general linear groups commute

$\displaystyle\begin{aligned}A\left(T^{\otimes d}\left(\bigotimes\limits_{k=1}^de_k\right)\right)=A\left(\bigotimes\limits_{k=1}^dT(e_k)\right)\\=A\left(\bigotimes\limits_{k=1}^d\left(\sum\limits_{j=1}^dt_k^je_j\right)\right)\end{aligned}$

where we’re not using the summation convention for the moment.

The next step is the tricky part. What we have is a product of sums, which we want to turn into a sum of products. We walk through the factors, picking one summand from each as we go along. That is, for every $k\in\{1,...,d\}$ we pick some $\pi(k)\in\{1,...,d\}$ to get the term

$\displaystyle\bigotimes\limits_{k=1}^dt_k^{\pi(k)}e_{\pi(k)}$

And we can factor out all these constants to make our term look like

$\displaystyle\prod\limits_{k=1}^dt_k^{\pi(k)}\bigotimes\limits_{k=1}^de_{\pi(k)}$

We want to sum up over all possible such terms. This is really just a big application of the distributive property — the linearity of tensor multiplication. At the end we have

$\displaystyle A\left(\sum\limits_{\pi:\{1,...,d\}\rightarrow\{1,...,d\}}\left(\prod\limits_{k=1}^dt_k^{\pi(k)}\bigotimes\limits_{k=1}^de_{\pi(k)}\right)\right)$

But since antisymmetrization is linear we get

$\displaystyle\sum\limits_{\pi:\{1,...,d\}\rightarrow\{1,...,d\}}\left(\prod\limits_{k=1}^dt_k^{\pi(k)}A\left(\bigotimes\limits_{k=1}^de_{\pi(k)}\right)\right)$

And here’s where the properties of the antisymmetrizer come in. First off, if $\pi(j)=\pi(k)$ for any two distinct indices $j$ and $k$, then the antisymmetrization of the term will vanish. Thus our sum is really only over those $\pi\in S_d$

$\displaystyle\sum\limits_{\pi\in S_d}\left(\prod\limits_{k=1}^dt_k^{\pi(k)}A\left(\bigotimes\limits_{k=1}^de_{\pi(k)}\right)\right)$

But now each term involves antisymmetrizing the same collection of basis vectors, but in different orders.
So for each one we rearrange the tensorands at the possible cost of picking up a negative sign

$\displaystyle\sum\limits_{\pi\in S_d}\left(\left(\prod\limits_{k=1}^dt_k^{\pi(k)}\right)\mathrm{sgn}(\pi)A\left(\bigotimes\limits_{k=1}^de_k\right)\right)$

And now the antisymmetrizer part has nothing to do with the summation over $\pi$. We factor it out to find

$\displaystyle\left(\sum\limits_{\pi\in S_d}\mathrm{sgn}(\pi)\left(\prod\limits_{k=1}^dt_k^{\pi(k)}\right)\right)A\left(\bigotimes\limits_{k=1}^de_k\right)$

So in the end we’ve multiplied our antisymmetric tensor by the factor

$\displaystyle\sum\limits_{\pi\in S_d}\mathrm{sgn}(\pi)\prod\limits_{k=1}^dt_k^{\pi(k)}$

which is our determinant. For each permutation $\pi$ we take our matrix and walk down the rows. At the $k$th row we multiply by the element in the $\pi(k)$th column, and we sum up these products over all $\pi\in S_d$.
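The closing formula is exactly the Leibniz expansion, and it is short enough to execute directly. Here is a small, hedged Python sketch (my own illustration, not part of the original post) that sums $\mathrm{sgn}(\pi)\prod_k t_k^{\pi(k)}$ over all permutations and checks it on two matrices whose determinants are easy to see by hand.

```python
from itertools import permutations

def sign(perm):
    """Sign of a permutation given as a tuple of indices, via inversion count."""
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def det(t):
    """Leibniz expansion: sum over pi in S_d of sgn(pi) * prod_k t[k][pi(k)]."""
    d = len(t)
    total = 0
    for pi in permutations(range(d)):
        prod = 1
        for k in range(d):
            prod *= t[k][pi[k]]
        total += sign(pi) * prod
    return total

print(det([[2, 0, 0], [0, 3, 0], [0, 0, 4]]))  # 24, only the identity survives
print(det([[1, 2], [3, 4]]))                   # 1*4 - 2*3 = -2
```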
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 30, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9199315905570984, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/7911/why-can-different-batteries-with-the-same-voltage-send-different-currents-throu
# Why can different batteries with the same voltage send different currents through the same object?

According to an answer in this thread on Skeptics:

If you take one of the little 12V garage door opener batteries and short out (directly connect) the two terminals with a piece of wire or something else, you'll get a light current flow through the wire or metal. It may get a little warm. This battery is only capable of supplying a small amount of current. If you take a 12V car battery and short out the two terminals (don't do it, it's not fun), you will be met with a huge current arc that will likely leave a burn mark on whatever was used to short it. This is because the car battery is capable of discharging a large amount of current in a very short period of time.

I'm not sure how this could work given Ohm's law V=IR. If we assume the resistance of the toucher is constant, then we'd expect the current to be the same as well.

1. Could it be that a car battery has less internal resistance than a garage battery?

2. Does it have anything to do with contact area? If it does, then how would you model it? Generally resistances add in series, but if I only half touch a contact, then neither the battery nor I change, so you'd expect our resistances to stay the same, but somehow our total resistance changes. So which object's resistance would change – mine or the battery's?

– anna v Apr 2 '11 at 7:30

And beyond what everyone else said, it should be explicitly stated that Ohm's law isn't a universal law of nature, like Newton's law of gravity, or the Maxwell equations. It is an empirical observation about how some materials behave in some situations. – Jerry Schirmer Apr 2 '11 at 14:14

And you can always decrease the effective internal resistance by ganging several similar batteries together in parallel. Put twenty car batteries together in parallel, and the short circuit current can become frighteningly high. – Omega Centauri Apr 2 '11 at 14:45

– David Cary Apr 2 '11 at 14:53

There's a usually unspoken rule of thumb when designing stuff with batteries: The people designing the battery promise they will keep the voltage within a certain range as long as the current pulled from the battery is less than some maximum current. The people designing other stuff promise to pull less than some maximum current from the battery as long as the voltage applied is in the normal operating range. When these promises are violated, the convenient simple rules of thumb we use ("batteries put out a constant voltage") don't work anymore. – David Cary Apr 2 '11 at 17:15

## 4 Answers

The answers of Martin and Edward are quite right, but they lack a component which plays a role in the case of that "garage opener battery". If the current is not limited in the circuit outside the battery, the internal resistance can be the first limiting factor. But often in such small batteries (the measure for "small" is the area/volume of active material at the battery poles) the current is limited by the speed of the chemical reaction at the electrodes. This is not a linear function of current, thus it cannot be viewed as part of the internal resistance.

Does it have anything to do with contact area? If it does, then how would you model it?

Yes, contact area is important, but this is a problem at high current densities mostly.
Depending on the voltage in such a circuit, a small contact area (= high current density) can result in contact welding, sparking, or ignition of an arc, or in some strange semiconductor effects of the contact point (metal surfaces are nearly always coated by some oxide). Did you ever watch an electric arc being started by a welder? This is the main business when designing contacts in relays, switches in homes, or the switches in 230 kV lines.

Ah, so it does have something to do with area — not the contact area between the wire and the battery terminal, but the internal "wetted area" at the boundary of the solid plates and the liquid electrolyte. I guess this is why "deep cycle" batteries have relatively smooth solid plates (wetted area approximately LxW of the plate), while "starter" batteries have plates "like a sponge" (wetted area many times the LxW of the plate). – David Cary Apr 2 '11 at 13:57

Okay, so what you are saying is similar to David's comment. Once the current gets above a certain range, the chemical reaction can't maintain the voltage. – Casebash Apr 3 '11 at 1:02

@Casebash, No, I see a lot of difference, especially because David's comments are somewhat erratic. The one above, where he jumps from my explanation of small contact areas to the inside of batteries, is very strange. This was the reason for me not to respond. – Georg Apr 3 '11 at 11:44

First, an aside, before getting directly to the question. There are many standard battery types that output 1.5V, some common examples being AA, AAA, C, and D. You can even buy one of each type with the same exact chemistry inside. So what is the difference between these, and why would a designer require a couple of big D cell batteries for a flashlight when a couple of AAA have the same voltage rating? There are two main reasons. One is the total energy contained (a D cell is larger in volume, and thus has more room to store chemical energy, and thus can run a device longer than a AAA could). The second is what you are getting at here, and that is the short circuit current. A D cell battery should be able to output more current than a AAA.

This leads us to your question of how this can possibly fit with Ohm's law, since the battery voltage rating is the same. From an electrical engineer's standpoint, the open circuit voltage of a battery is only half the story. One needs to know the open circuit voltage as well as the internal resistance rating to be able to predict the actual output of a battery into some device. You could consider the "equivalent circuit" of a real voltage source as an ideal voltage source in series with a resistor.

For example, you can look at the technical documents for typical batteries here: http://www1.duracell.com/oem/productdata/default.asp and here's one for a D cell alkaline http://www1.duracell.com/oem/Pdf/new/MN1300_US_CT.pdf — notice it lists the "nominal voltage" and the "impedance". For a D cell the impedance is listed as 136 mOhm. If we look at a AAA using the same chemistry http://www1.duracell.com/oem/Pdf/new/MN2400_US_CT.pdf the impedance is about 250 mOhm.

While this two-parameter description of power sources is intuitive and the first-order approach to describing a device, of course real-life devices can be a bit trickier (the impedance can depend on frequency as batteries can only respond so quickly, draining the battery quickly is usually less efficient, etc.). So the technical documents often provide real test curves if you want to see even more detail.
So a little 12V battery in a remote control will not be designed to have the low impedance required to start a car engine. The 12V car battery should definitely be able to put out more current when shorted.

"Could it be that a car battery has less internal resistance than a garage battery?" Yes! Let's denote the car battery's internal resistance by $r_c$, the garage battery's internal resistance by $r_g$, the resistance of the toucher by $R$, and the voltage of a battery by $U$. Then the current through the toucher is:

$$I=\dfrac{U}{R+r}$$

The power dissipated in the toucher is:

$$P=I^2R= \dfrac{U^2R}{(R+r)^2}$$

Let's introduce the ratio:

$$\mu=\dfrac{P_c}{P_g}= \left(\dfrac{R+r_g}{R+r_c}\right)^2$$

$P_c$ is the power the toucher dissipates on a car battery and $P_g$ is the power the toucher dissipates on a garage battery. The internal resistance of a car battery is estimated at about $r_c=0.001$ ohms and the internal resistance of a garage battery at around $r_g=0.1$ ohms. For simplicity, let's suppose that $R \ll r_c$. Then:

$$\mu=\dfrac{P_c}{P_g}= \left(\dfrac{r_g}{r_c}\right)^2=100^2$$

It's all about capacity, not internal resistance. Compare a shotgun shell to a .22.
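As a quick numerical check of the internal-resistance picture, here is a small Python sketch (my own illustration, using the round-number resistances quoted above) comparing short-circuit current and dissipated power for the two batteries.

```python
def short_circuit(U, r_internal, R_load):
    """Current through, and power dissipated in, a load R_load across a source
    with open-circuit voltage U and internal resistance r_internal."""
    I = U / (R_load + r_internal)
    P = I**2 * R_load
    return I, P

U = 12.0          # volts, both batteries
R = 0.0001        # ohms, a very good short (R << r_c)
car    = short_circuit(U, 0.001, R)   # car battery,    r_c ~ 1 mOhm
garage = short_circuit(U, 0.1,   R)   # garage remote,  r_g ~ 100 mOhm

print(f"car:    I = {car[0]:8.1f} A, P = {car[1]:.2f} W")
print(f"garage: I = {garage[0]:8.1f} A, P = {garage[1]:.2f} W")
print(f"power ratio ~ {car[1] / garage[1]:.0f}")  # approaches (r_g/r_c)^2 = 10^4 as R -> 0
```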
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9370360970497131, "perplexity_flag": "middle"}
http://nrich.maths.org/4754
# Wobbler

##### Stage: 5 Challenge Level

A solid hemisphere of radius $a$ and a solid right circular cone of height $h$ and base radius $a$ are made from the same uniform material and joined together with their circular faces in contact. Find the position of the centre of gravity of the whole body.

If the body is placed on a horizontal table with its hemispherical surface in contact with the table, in what position will it come to rest if (i) $h=a$ and (ii) $h=2a$?

(iii) If the body always rests in equilibrium when it is placed on a horizontal table with a point of the hemispherical surface in contact with the table, find $h$ in terms of $a$ and the angle of the cone in this case.

(iv) In what position does the body rest in equilibrium for other angles of the cone (other values of $h/a$) and when will the equilibrium be stable?
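A hedged sketch of the first part (standard centroid facts, not NRICH's own solution): the cone's centroid lies at height $h/4$ above the joint plane and the hemisphere's at $3a/8$ below it, with volumes $\tfrac13\pi a^2 h$ and $\tfrac23\pi a^3$. Measuring $z$ upward from the centre of the joint face,

$\bar{z} = \frac{\tfrac13\pi a^2 h\,(h/4) - \tfrac23\pi a^3\,(3a/8)}{\tfrac13\pi a^2 h + \tfrac23\pi a^3} = \frac{h^2-3a^2}{4(h+2a)}.$

For part (iii), resting in equilibrium at every point of contact forces the centre of gravity to sit at the centre of the hemisphere, i.e. $\bar{z}=0$, which gives $h=\sqrt{3}\,a$ and a cone semi-vertical angle of $\arctan(a/h)=30^\circ$.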
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9069400429725647, "perplexity_flag": "middle"}
http://www.encyclopediaofmath.org/index.php/Carmichael_number
# Carmichael number

From Encyclopedia of Mathematics

A composite natural number $n$ for which $a^{n-1} \equiv 1$ modulo $n$ whenever $a$ is relatively prime to $n$. Thus, Carmichael numbers are pseudo-primes (cf. Pseudo-prime) for every such base $a$. These numbers play a role in the theory of probabilistic primality tests (cf. Probabilistic primality test), as they show that Fermat's theorem, to wit $a^p \equiv a$ modulo $p$ whenever $p$ is prime and $a \not\equiv 0$ modulo $p$, is not a sufficient criterion for primality (cf. also Fermat little theorem). The first five Carmichael numbers are $561,\ 1105,\ 1729,\ 2465,\ 2821$.

R.D. Carmichael [a2] characterized them as follows. Let $\lambda(n)$ be the exponent of the multiplicative group of integers modulo $n$, that is, the least positive $\lambda$ making all $\lambda$-th powers in the group equal to $1$. (This is readily computed from the prime factorization of $n$.) Then a composite natural number $n$ is Carmichael if and only if $\lambda(n) \mid n-1$. From this it follows that every Carmichael number is odd, square-free, and has at least $3$ distinct prime factors.

Let $C(x)$ denote the number of Carmichael numbers $\le x$. W.R. Alford, A. Granville and C. Pomerance [a1] proved that $C(x) > x^{2/7}$ for sufficiently large $x$. This settled a long-standing conjecture that there are infinitely many Carmichael numbers. It is believed on probabilistic grounds that $\log C(x) \sim \log x$ [a4]. P. Erdős proved in 1956 that $C(X) < X\exp(- k \log X \log\log\log X / \log\log X)$ for some constant $k$; he also gave a heuristic suggesting that his upper bound should be close to the true rate of growth of $C(X)$ [a5].

There is apparently no better way to compute $C(x)$ than to make a list of the Carmichael numbers up to $x$. The most exhaustive computation to date (1996) is that of R.G.E. Pinch, who used the methods of [a3] to determine that $C\left({10^{16}}\right) = 246,683$.

#### References

[a1] W.R. Alford, A. Granville, C. Pomerance, "There are infinitely many Carmichael numbers" Ann. of Math. 140 (1994) pp. 703–722

[a2] R.D. Carmichael, "Note on a new number theory function" Bull. Amer. Math. Soc. 16 (1910) pp. 232–238 (See also: Amer. Math. Monthly 19 (1912), 22–27)

[a3] R.G.E. Pinch, "The Carmichael numbers up to " Math. Comp. 61 (1993) pp. 381–391

[a4] C. Pomerance, J.L. Selfridge, S.S. Wagstaff, Jr., "The pseudoprimes to " Math. Comp. 35 (1980) pp. 1003–1026

[a5] P. Erdős, "On pseudoprimes and Carmichael numbers" Publ. Math. Debrecen 4 (1956) pp. 201–206

How to Cite This Entry: Carmichael number. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Carmichael_number&oldid=29473

This article was adapted from an original article by E. Bach (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
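Carmichael's characterization is easy to test by machine via Korselt's criterion (equivalent to the condition $\lambda(n)\mid n-1$ above): $n$ is Carmichael iff it is composite, squarefree, and $p-1 \mid n-1$ for every prime $p \mid n$. A short Python sketch of my own:

```python
def prime_factors(n):
    """Return the multiset of prime factors of n by trial division."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def is_carmichael(n):
    """Korselt: n composite, squarefree, and (p - 1) | (n - 1) for all p | n."""
    fs = prime_factors(n)
    if len(fs) < 2 or len(fs) != len(set(fs)):   # prime, or not squarefree
        return False
    return all((n - 1) % (p - 1) == 0 for p in fs)

print([n for n in range(2, 3000) if is_carmichael(n)])
# [561, 1105, 1729, 2465, 2821] -- the first five listed above
```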
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 32, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8775172829627991, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/164826/how-to-prove-that-the-space-omega-1-times-r-has-countable-extent
# How to prove that the space $\omega_1\times R$ has countable extent?

How can one prove that the space $\omega_1\times R$ has countable extent? The topological space $\omega_1$ is the first uncountable ordinal with the order topology. A space $X$ has countable extent if every uncountable subset of $X$ has a limit point in $X$. Thanks for any help :)

Do you possibly mean $\omega_1$ is the first uncountable ordinal? $\omega + 1$ is certainly countable. – Ben Millwood Jun 30 '12 at 11:49

Maybe I have made a silly mistake. Yes, because I couldn't type $\omega_1$ (now I can); so the proof from Arthur Fischer can't answer the question. – John Jul 1 '12 at 1:52

Please indicate such a substantial edit clearly as such in the main body of your question. Now it looks as if Arthur made this rather silly mistake. (Why didn't you just ask a new question with $\omega+1$ replaced by $\omega_1$? Also: I find it a bit bizarre that you even accepted his answer and noticed only much later that you had a completely different ordinal in mind...) – t.b. Jul 1 '12 at 2:44

## 1 Answer

By $R$ I assume you are denoting the real line. (Oh, dear; the question seems to have been substantially altered. Please ignore the now silly-sounding struck-out paragraph.)

Suppose that $A \subseteq (\omega + 1 ) \times R$ is uncountable. Note that there must be an $i \leq \omega$ such that $A_i = \{ x \in R : (i,x) \in A \}$ is uncountable. But as $R$ has countable extent it follows that $A_i$ has a limit point $x$ in $R$. It easily follows that $(i,x)$ is a limit point of $A$ in $(\omega +1 ) \times R$.

Let $A \subseteq \omega_1 \times R$ be uncountable. If there is an $\alpha < \omega_1$ such that $A_\alpha = \{ x \in R : (\alpha , x ) \in A \}$ is uncountable, then $A_\alpha$ has a limit point $x$ (as $R$ has countable extent), and it is easy to show that $(\alpha , x )$ is a limit point of $A$. So assume that $A_\alpha$ is countable for each $\alpha < \omega_1$. We may then recursively construct a sequence $\langle (\alpha_i , x_i ) \rangle_{i \in \omega}$ in $A$ such that:

• $\alpha _i < \alpha_{i+1}$ for all $i \in \omega$; and
• $\langle x_i \rangle_{i \in \omega}$ is a convergent sequence in $R$.

Let $\alpha = \sup_{i \in \omega} \alpha_i < \omega_1$ (and note that $\alpha$ is a limit ordinal). Let $x = \lim_{i \in \omega} x_i$. It is easy to show that $( \alpha , x )$ is a limit point of $A$ in $\omega_1 \times R$.

Maybe I have made a silly mistake. Because I couldn't type $\omega_1$ (now I can); in your proof, $\omega+1$ is taken to be the successor of $\omega$; however, I meant the first uncountable ordinal $\omega_1$. – John Jul 1 '12 at 1:53

@John: Well, I thought that it was too easy.... (gosh darned it) – Arthur Fischer Jul 1 '12 at 2:46

Now it's clear for me :) – John Jul 1 '12 at 10:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 48, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9689289927482605, "perplexity_flag": "head"}
http://mathoverflow.net/questions/115289/conformal-blocks-for-beginners
## conformal blocks for beginners

I have now given a couple of talks that involve conformal blocks bundles on the moduli stack $\overline{\mathcal{M}}_{g,n}$, in front of an audience of algebraic geometers who are not specialists in the field. I have always encountered the same difficulty: the definition itself of the bundle of conformal blocks is pretty elaborate, and I think that it frightens the audience. What can one do to give the flavour of it without killing the talk? Maybe just introduce the parabolic theta functions on the smooth locus and say that the associated bundle degenerates? Or any other brilliant idea coming from physics?

Conformal blocks can of course be approached in lots of different ways which are more or less intelligible to different groups of people. Could you tell us more about your background or the background of the audience you'd like to present to? – stankewicz Dec 3 at 15:44

I approached CB mainly as vector bundles on the moduli of curves, and as spaces of generalized theta functions on the moduli of bundles on curves, but the only definition working also on the boundary that I know is the one as sheaves of covacua. – IMeasy Dec 3 at 15:49

Ah, read Beauville! I found this paper math.unice.fr/~beauvill/pubs/Hirz65.pdf to be most helpful, but this may just be my own preference towards Lie algebras. He also has several papers which give a "generalized theta function" approach. – stankewicz Dec 3 at 16:50

I am sorry I did not mention: the hypothesis is "an audience of algebraic geometers, with some knowledge of moduli spaces, but zero knowledge of CB". This is the environment I find the most difficult. – IMeasy Dec 3 at 17:13

Have you tried starting with the "obvious" introduction/motivation: the moduli of vector bundles of fixed det, coprime rank & deg (and n=0), when the coarse moduli space is smooth, with $Pic\simeq \mathbb{Z}$? You can then discuss the Verlinde bundle over Teichmueller space. If you want to be really concrete, you can mention e.g. rank 2 bundles of degree zero and fixed determinant for $g=2$, when the coarse moduli space is $\mathbb{P}^3$. And then you can say that you want to upgrade this to a fancier version, living on $\overline{\mathcal{M}}_{g,n}$? Or is this too trivial for your audiences? – Peter Dalakov Dec 4 at 11:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.916831374168396, "perplexity_flag": "middle"}
http://mathhelpforum.com/algebra/158903-help-me-find-k.html
# Thread: Help me find k

1. ## Help me find k

Find $k$ such that:

a. the equation $3x^2 + 9x = 17 + 6kx$ has roots that are equal in magnitude but opposite in sign.

b. the graph of $y = x^2 + kx + k + 8$ intersects the x-axis at two distinct points.

2. Originally Posted by hirano

Use the quadratic formula.

(a) $3x^2+9x=17+6kx\Rightarrow\ 3x^2+9x-6kx-17=0$

$\displaystyle\ ax^2+bx+c=0\Rightarrow\ x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}$

$3x^2+(9-6k)x+(-17)=0$

For part (a), you require roots of the form $\pm(\mathrm{value})$, so the coefficient $b$ needs to be zero.

(b) $x^2+kx+(k+8)=0$

The two intersection points coincide if $b^2-4ac=0$, so for two distinct points you need to discover how to make $b^2-4ac\ \ne\ 0$ — in the right direction.

3. Dude, I'm confused. Sorry.

a. Why does the coefficient $b$ need to be zero?

b. So will I use > or < zero?

4. Originally Posted by hirano

a. Is it possible to get "roots that have equal magnitude but opposite in signs" if $b \neq 0$....? Think about it.

b. Obviously you use '> 0' (do you understand how the value of the discriminant relates to the number of solutions of a quadratic equation?)

5. a. I got it.

b. So do I need to solve $(k-8)(k+4)>0$?

6. Originally Posted by hirano

Yes, that's it. The discriminant must be positive.
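For completeness (my own closing step, not in the thread): part (a) gives $9-6k=0$, i.e. $k=\tfrac32$, and part (b) unwinds as

$k^2-4(k+8)>0 \;\Longleftrightarrow\; (k-8)(k+4)>0 \;\Longleftrightarrow\; k<-4 \ \text{or}\ k>8.$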
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8620160222053528, "perplexity_flag": "middle"}
http://www-fusion.ciemat.es/fusionwiki/index.php/Flux_coordinates
# Flux coordinates

### From FusionWiki

Flux coordinates in the context of magnetic confinement fusion (MCF) are a set of coordinate functions adapted to the shape of the flux surfaces of the confining magnetic trap. They consist of one flux label, often termed ψ, and two angle-like variables θ,φ whose constant contours on the flux $(\psi({\mathbf x}) = {\textrm constant})$ surfaces close either poloidally (φ) or toroidally (θ). In these coordinates, equilibrium vector fields like the magnetic field ${\mathbf B}$ or current density ${\mathbf j}$ have simplified expressions. A particular kind of flux coordinates, generally called magnetic coordinates, simplifies the ${\mathbf B}$-field expression further by making field lines look straight in the (θ,φ) plane of that family of coordinates. Some popular choices of magnetic coordinate systems are Boozer coordinates and Hamada coordinates.

(Figure: sample flux surface of the TJ-II stellarator with a θ-curve (yellow) and a φ-curve (red).)

## General curvilinear coordinates

Here we briefly review the basic definitions of a general curvilinear coordinate system for later convenience when discussing toroidal flux coordinates and magnetic coordinates.

### Coordinates and basis vectors

Let ${\mathbf x}$ be a set of Euclidean coordinates on ${\mathbb R}^3$ and let $(\psi(\mathbf{x}),\theta(\mathbf{x}),\phi(\mathbf{x}))$ define a change of coordinates, arbitrary for the time being. We can calculate the contravariant basis vectors as

$\mathbf{e}^i = \{\nabla\psi, \nabla\theta, \nabla\phi\}$

and the dual covariant basis defined as

$\mathbf{e}_i= \frac{\partial\mathbf{x}}{\partial{u^i}} \to \mathbf{e}_i\cdot\mathbf{e}^j = \delta_{i}^{j}~,$

which therefore relates to the contravariant vectors as

$\mathbf{e}_i = \frac{\mathbf{e}^j\times\mathbf{e}^k}{\mathbf{e}^i\cdot\mathbf{e}^j\times\mathbf{e}^k} = \sqrt{g}\;\mathbf{e}^j\times\mathbf{e}^k ~,$

where (i,j,k) are cyclic permutations of (1,2,3) and we have used the notation (u1,u2,u3) = (ψ,θ,φ). The Jacobian $\sqrt{g}$ is defined below. Similarly

$\mathbf{e}^i = \frac{\mathbf{e}_j\times\mathbf{e}_k}{\sqrt{g}} ~.$

Any vector field $\mathbf{B}$ can be represented as

$\mathbf{B} = (\mathbf{B}\cdot\mathbf{e}^i)\mathbf{e}_i = B^i\mathbf{e}_i$

or

$\mathbf{B} = (\mathbf{B}\cdot\mathbf{e}_i)\mathbf{e}^i = B_i\mathbf{e}^i ~.$

In particular any basis vector $\mathbf{e}_i = (\mathbf{e}_i\cdot\mathbf{e}_j)\mathbf{e}^j$. The metric tensor is defined as

$g_{ij} = \mathbf{e}_i\cdot\mathbf{e}_j \; ; \; g^{ij} = \mathbf{e}^i\cdot\mathbf{e}^j \; ; \; g^j_i = \mathbf{e}_i\cdot\mathbf{e}^j = \delta_i^j ~.$

The metric tensors can be used to raise or lower indices. Take

$\mathbf{B} = B_i\mathbf{e}^i = B_i g^{ij}\mathbf{e}_j = B^j\mathbf{e}_j~,$

so that

$B^j = g^{ij} B_i~.$

### Jacobian

The Jacobian of the coordinate transformation $\mathbf{x}(\psi, \theta, \phi)$ is defined as

$J = \det\left(\frac{\partial(x,y,z)}{\partial(\psi,\theta,\phi)}\right) = \frac{\partial\mathbf{x}}{\partial{\psi}}\cdot\frac{\partial\mathbf{x}}{\partial{\theta}} \times \frac{\partial\mathbf{x}}{\partial{\phi}}$

and that of the inverse transformation

$J^{-1} = \det\left(\frac{\partial(\psi,\theta,\phi)}{\partial(x,y,z)}\right) = \nabla{\psi}\cdot\nabla{\theta} \times \nabla{\phi}$

It can be seen that [1]

$g \equiv \det(g_{ij}) = J^2 \Rightarrow J = \sqrt{g}$

### Some surface elements

Consider a surface defined by a constant value of φ.
Then the surface element is

$d{\mathbf S}_\phi = \mathbf{e}_\psi\times\mathbf{e}_\theta d\psi d\theta = \sqrt{g}\, \nabla\phi d\psi d\theta .$

As for a surface defined by a constant value of θ:

$d{\mathbf S}_\theta = \mathbf{e}_\phi\times\mathbf{e}_\psi d\psi d\phi = \sqrt{g}\, \nabla\theta d\psi d\phi ,$

or a constant ψ surface:

$d{\mathbf S}_\psi = \mathbf{e}_\theta\times\mathbf{e}_\phi d\theta d\phi = \sqrt{g}\, \nabla\psi d\theta d\phi .$

### Gradient, Divergence and Curl in curvilinear coordinates

The gradient of a function f is naturally given in the contravariant basis vectors:

$\nabla f = \frac{\partial f}{\partial u^i}\nabla u^i = \frac{\partial f}{\partial u^i}\mathbf{e}^i~.$

The divergence of a vector $\mathbf{A}$ is best expressed in terms of its contravariant components

$\nabla\cdot\mathbf{A} = \frac{1}{\sqrt{g}}\frac{\partial}{\partial u^i}(\sqrt{g}A^i)~,$

while the curl is

$\nabla\times\mathbf{A} = \frac{\varepsilon^{ijk}}{\sqrt{g}}\frac{\partial A_j}{\partial u^i}\mathbf{e}_k \Rightarrow \left(\nabla\times\mathbf{A}\right)^k = \frac{\varepsilon^{ijk}}{\sqrt{g}}\frac{\partial A_j}{\partial u^i}$

given in terms of the covariant base vectors, where $\varepsilon^{ijk}$ is the Levi-Civita symbol.

## Flux coordinates

A flux coordinate set is one that includes a flux surface label as a coordinate. A flux surface label is a function that is constant and single-valued on each flux surface. In our naming of the general curvilinear coordinates we have already adopted the usual flux coordinate convention for toroidal equilibrium with nested flux surfaces, where ψ is the flux surface label and θ,φ are 2π-periodic poloidal and toroidal-like angles. Different flux surface labels can be chosen, like the toroidal (Ψtor) or poloidal (Ψpol) magnetic fluxes or the volume contained within the flux surface, V. By single-valued we mean to ensure that any flux label ψ1 = f(ψ2) is a monotonic function of any other flux label ψ2, so that the function f is invertible at least in a volume containing the region of interest. We will denote a generic flux surface label by ψ. To avoid ambiguity in the sign of line and surface integrals we impose dψ(V) / dV > 0; the toroidal angle increases in the clockwise direction when seen from above, and the poloidal angle increases such that $\nabla\psi\cdot\nabla\theta\times\nabla\phi > 0$.

### Flux Surface Average

The Flux Surface Average (FSA) of a function Φ is defined as the limit

$\langle\Phi\rangle = \lim_{\delta \mathcal{V} \to 0}\frac{1}{\delta \mathcal{V}}\int_{\delta \mathcal{V}} \Phi\; d\mathcal{V}$

where $\delta \mathcal{V}$ is the volume confined between two flux surfaces. It is therefore a volume average over an infinitesimal spatial region rather than a surface average. To avoid confusion, we denote volume elements or domains with the calligraphic $\mathcal{V}$. Capital V is reserved for the flux label (coordinate) defined as the volume within a flux surface.
Introducing the differential volume element $d\mathcal{V} = \sqrt{g} d\psi d\theta d\phi$,

$\langle\Phi\rangle = \lim_{\delta \mathcal{V} \to 0} \frac{1}{\delta \mathcal{V}}\int_{\delta \mathcal{V}} \Phi\; \sqrt{g} d\psi d\theta d\phi = \frac{d\psi}{d V}\int_0^{2\pi}\int_0^{2\pi}\Phi\; \sqrt{g} d\theta d\phi$

or, noting that $\langle 1\rangle = 1$, we have

$\frac{dV}{d\psi} = \int_0^{2\pi}\int_0^{2\pi} \sqrt{g} d\theta d\phi$

and we get to a more practical form of the Flux Surface Average

$\langle\Phi\rangle = \frac{\int_0^{2\pi}\int_0^{2\pi}\Phi\; \sqrt{g} d\theta d\phi} {\int_0^{2\pi}\int_0^{2\pi} \sqrt{g} d\theta d\phi}$

Note that $dS = |\nabla\psi|\sqrt{g}d\theta d\phi$, so the FSA is a surface integral weighted by $|\nabla V|^{-1}$:

$\langle\Phi\rangle = \frac{d\psi}{d V}\int_0^{2\pi}\int_0^{2\pi}\Phi\; \sqrt{g} d\theta d\phi = \frac{d\psi}{d V}\int_{S(\psi)}\frac{\Phi}{|\nabla\psi|}\; dS = \int_{S(\psi)}\frac{\Phi}{|\nabla V|}\; dS$

Applying Gauss' theorem to the definition of FSA we get to the identity

$\langle\nabla\cdot\Gamma\rangle = \lim_{\delta \mathcal{V} \to 0}\frac{1}{\delta \mathcal{V}}\int_{\delta \mathcal{V}} \nabla\cdot\Gamma\; d\mathcal{V} = \lim_{\delta \mathcal{V} \to 0}\frac{1}{\delta \mathcal{V}}\int_{S(\delta \mathcal{V})} \Gamma\cdot \frac{\nabla V}{|\nabla V|}dS = \lim_{\delta \mathcal{V} \to 0}\frac{1}{\delta \mathcal{V}}\left(\langle\Gamma\cdot\nabla V\rangle_{S(V+\delta \mathcal{V})} - \langle\Gamma\cdot\nabla V\rangle_{S(V)} \right) = \frac{d}{dV}\langle\Gamma\cdot\nabla V\rangle~.$

#### Useful properties of FSA

Some useful properties of the FSA are

• $\langle \mathbf{B}\cdot\nabla f \rangle = \langle \nabla\cdot(\mathbf{B} f) \rangle = 0~,\qquad \forall~ \mathrm{single~valued~} f(\mathbf{x}), ~ \mathrm{if}~ \nabla\cdot\mathbf{B} = 0 ~\mathrm{and}~ \nabla \psi\cdot\mathbf{B} = 0$

• $\langle\nabla\cdot\Gamma\rangle = \frac{d}{dV}\langle\Gamma\cdot\nabla V\rangle = \frac{1}{V'}\frac{d}{d\psi}V'\langle\Gamma\cdot\nabla \psi\rangle$

The two identities above are the basic simplifying properties of the FSA: the first cancels the contribution of 'conservative forces' like the pressure gradient or electrostatic electric fields; the second reduces the number of spatial variables to only the radial one. Further, it is possible to show that, if $\nabla\cdot\Gamma = 0$, then $\langle\Gamma\cdot\nabla V\rangle = 0$ and not simply constant as the above suggests.
This can be seen by simply using Gauss' theorem:

• $\int_{\mathcal{V}}\nabla\cdot\Gamma\; d\mathcal{V} = \langle\Gamma\cdot\nabla V\rangle \qquad \mathrm{where~} \mathcal{V} \mathrm{~is~the~volume~enclosed~by~a~flux~surface.}$

The FSA relates to the conventional volume integral between two surfaces labelled by their volumes V1 and V2 as

• $\int_{\mathcal{V}(V_1<V<V_2)} f\; d\mathcal{V} = \int_{V_1}^{V_2} \langle f \rangle\; dV$

whereas the conventional surface integral over a ψ = constant surface is

• $\int_{S(\psi)} f\; dS = \langle f |\nabla V| \rangle$

Other useful properties are

• $\langle \nabla \psi\cdot\nabla\times \mathbf{A} \rangle = -\langle \nabla\cdot( \nabla\psi\times\mathbf{A}) \rangle = 0~.$

• $\langle \mathbf{B}\cdot\nabla \theta\rangle =2\pi\frac{d\Psi_{pol}}{dV} \qquad \mathrm{for~any~poloidal~ angle~} \theta ~ (\mathrm{Note:}~ \theta(\mathbf{x})~\mathrm{is~not~single~valued})$

• $\langle \mathbf{B}\cdot\nabla \phi\rangle =2\pi\frac{d\Psi_{tor}}{dV} \qquad \mathrm{for~any~toroidal~ angle~} \phi ~ (\mathrm{Note:}~ \phi(\mathbf{x})~\mathrm{is~not~single~valued})$

• $\langle \sqrt{g}^{-1}\rangle = \frac{4\pi^2}{V'}$

In the above $V' = \frac{dV}{d\psi}$. Some vector identities are useful to derive the above identities.

### Magnetic field representation in flux coordinates

#### Contravariant Form

Any solenoidal vector field $\mathbf{B}$ can be written as

$\mathbf{B} = \nabla\alpha\times\nabla\nu$

called its Clebsch representation. For a magnetic field with flux surfaces $(\psi = \mathrm{const}\; , \; \nabla\psi\cdot\mathbf{B} = 0)$ we can choose, say, α to be the flux surface label ψ

$\mathbf{B} = \nabla\psi\times\nabla\nu$

Field lines are then given as the intersection of the constant-ψ and constant-ν surfaces. This form provides a general expression for $\mathbf{B}$ in terms of the covariant basis vectors of a flux coordinate system

$\mathbf{B} = \frac{\partial\nu}{\partial\theta}\nabla\psi\times\nabla\theta + \frac{\partial\nu}{\partial\phi}\nabla\psi\times\nabla\phi = \frac{1}{\sqrt{g}}\frac{\partial\nu}{\partial\theta}\mathbf{e}_\phi -\frac{1}{\sqrt{g}}\frac{\partial\nu}{\partial\phi}\mathbf{e}_\theta = B^\phi\mathbf{e}_\phi + B^\theta\mathbf{e}_\theta~.$

in terms of the function ν, sometimes referred to as the magnetic field's stream function. It is worthwhile to note that the Clebsch form of $\mathbf{B}$ corresponds to a magnetic vector potential $\mathbf{A} = \nu\nabla\psi$ (or $\mathbf{A} = \psi\nabla\nu$, as they differ only by the gauge transformation $\mathbf{A} \to \mathbf{A} - \nabla (\psi\nu)$). The general form of the stream function is

$\nu(\psi,\theta,\phi) = \frac{1}{2\pi}(\Psi_{tor}'\theta - \Psi_{pol}'\phi) + \tilde{\nu}(\psi,\theta,\phi)$

where $\tilde{\nu}$ is a differentiable function periodic in the two angles. This general form can be derived by using the fact that $\mathbf{B}$ is a physical function (hence single-valued). The specific form of the coefficients in front of the secular terms (i.e. the non-periodic terms) can be obtained from the FSA properties.

#### Covariant Form

If we consider an equilibrium magnetic field such that $\mathbf{j}\times\mathbf{B} \propto \nabla\psi$, where $\mathbf{j}$ is the current density, then both $\mathbf{B}\cdot\nabla\psi = 0$ and $\nabla\times\mathbf{B}\cdot\nabla\psi = 0$, and the magnetic field can be written as

$\mathbf{B} = \nabla\chi -\eta\nabla\psi$

where χ is identified as the magnetic scalar potential.
Its general form is

$\chi(\psi, \theta, \phi) = \frac{I_{tor}}{2\pi}\theta + \frac{I_{pol}^d}{2\pi}\phi + \tilde\chi(\psi, \theta, \phi)$

(Figures: sample integration circuits for the current definitions, and a sample surface for the definition of the current through a disc. Note that only the current of more external surfaces, those enclosing the one drawn here, contributes to the flux of charge through the surface.) Note that I is not the current but μ0 times the current.

The functional dependence on the angular variables is again motivated by the single-valuedness of the magnetic field. The particular form of the coefficients can be obtained noting that

$\int_S \mu_0\mathbf{j}\cdot d\mathbf{S} = \int_{\partial S}\mathbf{B}\cdot d\mathbf{l} = \oint(\nabla\chi-\eta\nabla\psi)\cdot d\mathbf{l} = \oint(d\chi-\eta d\psi )$

and choosing an integration circuit contained within a flux surface (dψ = 0). Then we get

$\int_S \mu_0\mathbf{j}\cdot d\mathbf{S} = \Delta \chi = \frac{I_{tor}}{2\pi}\Delta\theta + \frac{I_{pol}^d}{2\pi}\Delta\phi~.$

If we now choose a toroidal circuit (Δθ = 0, Δφ = 2π) we get

$I_{pol}^d = \int_S \mu_0\mathbf{j}\cdot d\mathbf{S}\; ; ~\mathrm{with}~ \partial S ~\mathrm{such~that}~ (\Delta\theta = 0, \Delta\phi = 2\pi)~.$

Here the superscript d is meant to indicate that the flux is computed through a disc limited by the integration line, as opposed to the ribbon limited by the integration line on one side and the magnetic axis on the other that was used for the definition of the poloidal magnetic flux Ψpol earlier in this article. Similarly

$I_{tor} = \int_S \mu_0\mathbf{j}\cdot d\mathbf{S}\; ; ~\mathrm{with}~ \partial S ~\mathrm{such~that}~ (\Delta\theta = 2\pi, \Delta\phi = 0)~.$

##### Contravariant Form of the current density

Taking the curl of the covariant form of $\mathbf{B}$, the equilibrium current density $\mathbf{j}$ can be written as

$\mu_0\mathbf{j} = \nabla\psi\times\nabla\eta~.$

By very similar arguments as those used for $\mathbf{B}$ (note that both $\mathbf{B}$ and $\mathbf{j}$ are solenoidal fields tangent to the flux surfaces) it can be shown that the general expression for η is

$\eta(\psi,\theta,\phi) = \frac{1}{2\pi}({I}_{tor}'\theta - {I}_{pol}'\phi) + \tilde{\eta}(\psi,\theta,\phi)~.$

Note that the poloidal current is now defined through a ribbon and not a disc. The two currents are related, as $\nabla\cdot\mathbf{j} = 0$ implies

$I_{pol} + I_{pol}^d = \oint_{\psi=0}\mathbf{B}\cdot d\mathbf{l} \Rightarrow I_{pol}' + (I_{pol}^d)' = 0 ~,$

where the integral is performed along the magnetic axis and therefore does not depend on ψ. This can be used to show that an expanded version of $\mathbf{B}$ is given as

$\mathbf{B} = -\tilde\eta\nabla\psi + \frac{I_{tor}}{2\pi}\nabla\theta + \frac{I_{pol}^d}{2\pi}\nabla\phi + \nabla\tilde\chi~.$

## Magnetic coordinates

Magnetic coordinates are a particular type of flux coordinates in which the magnetic field lines are straight lines. In mathematical terms this implies that the periodic part of the magnetic field's stream function is zero in these coordinates, so the magnetic field reads

$\mathbf{B} = \nabla\psi\times \nabla\left( \frac{\Psi_{tor}'}{2\pi}\theta_f - \frac{\Psi_{pol}'}{2\pi}\phi_f \right) = \frac{\Psi_{pol}'}{2\pi\sqrt{g}}\mathbf{e}_\theta + \frac{\Psi_{tor}'}{2\pi\sqrt{g}}\mathbf{e}_\phi~.$

Now a field line is given by ψ = ψ0 and Ψtor'θf − Ψpol'φf = 2πν0.
Note that, in general, the contravariant components of the magnetic field in a magnetic coordinate system,

$B^{\theta_f} = \frac{\Psi_{pol}'}{2\pi\sqrt{g}}\; ;\quad B^{\phi_f} = \frac{\Psi_{tor}'}{2\pi\sqrt{g}}~,$

are not flux functions, but their quotient is:

$\frac{B^{\theta_f}}{B^{\phi_f}} = \frac{\Psi_{pol}'}{\Psi_{tor}'} \equiv \frac{\iota}{2\pi}~,$

ι being the rotational transform. In a magnetic coordinate system the poloidal $\mathbf{B}_P = B^\theta\mathbf{e}_\theta$ and toroidal $\mathbf{B}_T = B^\phi\mathbf{e}_\phi$ components of the magnetic field are individually divergence-free. From the above general form of $\mathbf{B}$ in magnetic coordinates it is easy to obtain the following identities, valid for any magnetic coordinate system:

$\mathbf{e}_\theta\times\mathbf{B} =\frac{1}{2\pi}\nabla\Psi_{tor}~,$

$\mathbf{e}_\phi\times\mathbf{B} = -\frac{1}{2\pi}\nabla\Psi_{pol} ~.$

### Transforming between magnetic coordinate systems

There are infinitely many systems of magnetic coordinates. Any transformation of the angles of the form

$\theta_F = \theta_f +\Psi_{pol}' G(\psi, \theta_f, \phi_f)\; ;\quad \phi_F = \phi_f +\Psi_{tor}' G(\psi, \theta_f, \phi_f)~,$

where G is periodic in the angles, preserves the straightness of the field lines (as can easily be checked by direct substitution). The spatial function G(ψ, θf, φf) is called the generating function. It can be obtained from a magnetic differential equation if we know the Jacobians of the two magnetic coordinate systems, $\sqrt{g_f}$ and $\sqrt{g_F}$. In fact, taking $\mathbf{B}\cdot\nabla$ on either of the angle transformations and using the known expressions for the contravariant components of $\mathbf{B}$ in magnetic coordinates, we get

$2\pi\mathbf{B}\cdot\nabla G = \frac{1}{\sqrt{g_F}} - \frac{1}{\sqrt{g_f}}~.$

The LHS of this equation has a particularly simple form when one uses a magnetic coordinate system. For instance, if we write $\mathbf{B}$ in terms of the original magnetic coordinate system, we get

$(\Psi_{pol}'\partial_{\theta_f} + \Psi_{tor}'\partial_{\phi_f}) G = \frac{\sqrt{g_f}}{\sqrt{g_F}} - 1~,$

which can be turned into an algebraic equation on the Fourier components of G (a numerical sketch of this step is given after the references below):

$G_{mn} = \frac{-i}{\Psi_{pol}'m + \Psi_{tor}'n}\left(\frac{\sqrt{g_f}}{\sqrt{g_F}}\right)_{mn}~,$

where

$G(\psi, \theta_f, \phi_f) = \sum_{m,n} G_{mn}(\psi) e^{i(m\theta_f + n\phi_f)}$

and G00 = 0 guarantees that periodicity is preserved. Particular choices of G can be made so as to simplify the description of other fields. The most commonly used magnetic coordinate systems are: [1]

• Hamada coordinates. [2][3] In these coordinates, both the magnetic field lines and the current lines corresponding to the MHD equilibrium are straight. Referring to the definitions above, both $\tilde\nu$ and $\tilde\eta$ are zero in Hamada coordinates.

• Boozer coordinates. [4][5] In these coordinates, the magnetic field lines corresponding to the MHD equilibrium are straight and so are the diamagnetic lines, i.e. the integral lines of $\nabla\psi\times\mathbf{B}$. Referring to the definitions above, both $\tilde\nu$ and $\tilde\chi$ are zero in Boozer coordinates.

## References

1. W.D. D'haeseleer, Flux coordinates and magnetic field structure: a guide to a fundamental tool of plasma theory, Springer series in computational physics, Springer-Verlag (1991) ISBN 3540524193
2. S. Hamada, Nucl. Fusion 2 (1962) 23
3. J.M. Greene and J.L. Johnson, Stability Criterion for Arbitrary Hydromagnetic Equilibria, Phys. Fluids 5 (1962) 510
4. A.H. Boozer, Plasma equilibrium with rational magnetic surfaces, Phys. Fluids 24 (1981) 1999
5. A.H. Boozer, Establishment of magnetic coordinates for a given magnetic field, Phys. Fluids 25 (1982) 520
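Returning to the generating-function equation above: the following is a minimal Python sketch of the mode-by-mode solution for $G_{mn}$. Everything numerical here is an invented stand-in (the flux derivatives, grid size, and Jacobian ratio are not from any real equilibrium), and an irrational rotational transform is assumed so that $\Psi_{pol}'m + \Psi_{tor}'n$ vanishes only for the (0,0) mode.

```python
import numpy as np

# Solve (Psi_pol' d_theta + Psi_tor' d_phi) G = sqrt(g_f)/sqrt(g_F) - 1
# spectrally, i.e. the algebraic G_mn formula above, via 2-D FFTs.
Ppol, Ptor = 1.0, 0.618            # invented Psi_pol', Psi_tor' (irrational
                                   # iota: no resonant modes besides (0,0))
N = 64                             # grid points in each angle
theta = phi = 2 * np.pi * np.arange(N) / N
th, ph = np.meshgrid(theta, phi, indexing="ij")

rhs = 0.1 * np.cos(th - 2 * ph)    # invented stand-in for sqrt(g_f)/sqrt(g_F) - 1
m = np.fft.fftfreq(N, d=1.0 / N)   # integer mode numbers
M, Nmode = np.meshgrid(m, m, indexing="ij")
denom = 1j * (Ppol * M + Ptor * Nmode)
denom[0, 0] = 1.0                  # placeholder to avoid 0/0 at (m, n) = (0, 0)

G_hat = np.fft.fft2(rhs) / denom
G_hat[0, 0] = 0.0                  # G_00 = 0: the periodicity condition above
G = np.fft.ifft2(G_hat).real       # the generating function on the grid

# Verify that G satisfies the magnetic differential equation.
check = np.fft.ifft2(denom * np.fft.fft2(G)).real
print(np.allclose(check, rhs))     # True
```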
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 113, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8518873453140259, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/207980/global-optimum-of-sum-of-convex-functions
# Global optimum of sum of convex functions

Take two real differentiable convex functions, $f_1$ and $f_2$, defined on the unit interval $[0; 1]$. I want to find the global optimum of: $\min_{x \in [0;1]} af_1(x)+bf_2(x)$, for given $a, b \in \mathbb{R}$. Is there a simple solution to this?

-

It's probably easier to answer if a and b are nonnegative, since $af_1(x)+bf_2(x)$ is convex when $a>0$ and $b>0$. – Snowball Oct 5 '12 at 22:39

Just to add to Snowball's answer more explicitly, if a or b are negative then convexity is not guaranteed, so in general it will be difficult to optimize. – Bitwise Oct 5 '12 at 22:56

Moreover, if $a$ and $b$ are nonnegative and $f_1$ and $f_2$ have their global minima at $s$ and $t$, then $f = a f_1 + b f_2$ has a global minimum somewhere in the interval $[\min(s,t),\max(s,t)]$. – Robert Israel Oct 5 '12 at 23:12

To expand on @Bitwise's comment: any $C^2$ function on $[0,1]$ can be written as the difference of two $C^2$ convex functions. – Robert Israel Oct 5 '12 at 23:17

@RobertIsrael thanks Robert, I wasn't aware of that. Can you provide a reference? – Bitwise Oct 6 '12 at 0:13
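A practical footnote to the comments above: when $a, b \ge 0$ the objective stays convex, so a bounded scalar minimizer finds the global optimum. A minimal sketch with two invented convex functions:

```python
import numpy as np
from scipy.optimize import minimize_scalar

f1 = lambda x: (x - 0.3) ** 2     # invented convex example
f2 = lambda x: np.exp(x)          # invented convex example

def global_min(a, b):
    # For a, b >= 0 the combination a*f1 + b*f2 is convex on [0, 1],
    # so the local minimum the bounded solver finds is the global one
    # (cf. Snowball's comment; for negative weights all bets are off).
    res = minimize_scalar(lambda x: a * f1(x) + b * f2(x),
                          bounds=(0.0, 1.0), method="bounded")
    return res.x, res.fun

print(global_min(1.0, 0.5))       # minimizer and minimum value
```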
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9003918170928955, "perplexity_flag": "head"}
http://mathoverflow.net/questions/110211?sort=votes
# How many binary operations are associative?

Let $X$ be a finite set of $n$ elements, and consider a binary operation $\odot: X \times X \rightarrow X$. There are $n^{n^2}$ such binary operations, as the $n \times n$ table entries can each be filled with one of $n$ elements of $X$.

My question is: How many of the $n^{n^2}$ binary operations are associative, i.e., $(x \odot y) \odot z = x \odot (y \odot z)$?

Unless I miscomputed this, for $n=2$, exactly half of the $2^4=16$ binary operations are associative. But for $n=3$, only $113$ of the $3^9=19,683$ binary operations are associative, a count I do not trust, because it seems so much smaller than I anticipated. (It is difficult to count among the four billion ($4,294,967,296$) binary operations for $n=4$.) I would be interested in the asymptotic growth rate. Surely this is all well known...? Thanks for pointers!

Update. Following the MSE link provided by Darij, I reached (via Gerry Myerson's pointer) the OEIS sequence A023814. The $n=4$ number I couldn't easily compute is $3492$.

-

2 math.stackexchange.com/questions/45648/… confirms your 113. – darij grinberg Oct 21 at 3:57

2 I should be able to recall this, but my memory is rusty. Two papers come to mind: Ralph Freese's Probability in Algebra, circa 1990, where general algebras besides those with one binary operation are considered (as it turns out, once you go beyond binary, there's not much difference numerically), and work of V.L. Murskii from 1975, showing almost all algebras have a finite basis of identities. Essentially, many properties of finite algebras obey a 0-1 law, and I think associativity tends toward 0 proportionately as n grows. Gerhard "Ask Someone Else About Associativity" Paseman, 2012.10.20 – Gerhard Paseman Oct 21 at 3:58

Thanks, Gerhard! Perhaps this? Ralph Freese, "On the two kinds of probability in algebra." Algebra Universalis 27 (1990), no. 1, 70--79. math.hawaii.edu/~ralph/papers.html – Joseph O'Rourke Oct 21 at 4:04

Darij's link leads to "Associative Operations on a Three-Element Set," by Friðrik Diego and Kristín Halla Jónsdóttir, which indeed confirms my count of $113$ for $n=3$, via an inclusion-exclusion argument. – Joseph O'Rourke Oct 21 at 4:08

2 Also in the line is oeis.org/A023814, which gives values up to $n = 7$. – Michael Biro Oct 21 at 4:10

## 5 Answers

Here is a guide to the intuition. I will not swear that the numerics are exact, but I will bet that the numerical truth is not far off. Look at the diagonal of the multiplication table of a (labeled) groupoid on $n>3$ elements. Of the n^n possibilities, only one of them is idempotent, so with one exception aa=b will happen for some a and some b different from a. Now all we need for associativity to fail in this case is that ab and ba are different, which will happen for all but n of the n^2 possibilities. So we are already looking at associativity happening only on a small fraction of all (non-idempotent) tables, especially as there are often several candidates for a, and only one is needed. Even for idempotent groupoids, one finds a,b,c distinct and needs to consider only d=ab, g=bc, and the ways in which dc and ag can fail to be equal. Again in rough terms we are talking about n^(-2), and this is just by fixing a,b, and c in advance, and that for the 1 out of n^n tables that are idempotent. I'll let someone else tighten up the numerics.
For strengthening Joseph's intuition, I hope this will suffice.

-

@Gerhard: Examining the diagonal is insightful! Just to spell it out, in your scenario, $aaa$ has two possible values: $(aa)a = ba$ and $a(aa) = ab$. This dispels my perplexity---Thanks! – Joseph O'Rourke Oct 22 at 0:54

For questions like these you can try out alg. It is a program which takes some axioms (it works best for equations) and outputs, or just counts, non-isomorphic models of a given size. It also provides a link to OEIS for you to check the sequence it got. The theory of an associative operation looks like this:

````
Theory associative.
Binary *.
Axiom: (x * y) * z = x * (y * z).
````

The output says:

````
./alg.native --size 1-4 --count theories/associative.th
# Theory associative

Theory associative.
Binary *.
Axiom: (x * y) * z = x * (y * z).

size | count
-----|------
   1 | 1
   2 | 5
   3 | 24
   4 | 188

Check the numbers [5, 24, 188](http://oeis.org/search?q=5,24,188) on-line at oeis.org
````

The point is, you can easily experiment (of course someone has counted these things before me).

-

I did not know about alg. Thanks! – Joseph O'Rourke Oct 21 at 14:10

If you get stuck compiling and using it, let me know. – Andrej Bauer Oct 21 at 23:25

There are bounds known for the number of semigroups on $\{1,2,3,\dots,n\}$. This is one reference I found (from 1976); no doubt there are better bounds known by now. The Number of Semigroups of Order $n$

-

Thanks, Michael! – Joseph O'Rourke Oct 21 at 4:10

Semigroups form a bigger chunk than you might think. Basically you call a symbol 0 and declare xyz=0 for all elements (making associativity trivial). You still have a huge flexibility on how to define the remaining products. This is the content of the paper Michael links. In fact 99% of all semigroups up to isomorphism and anti-isomorphism satisfy xyz=0. A recent paper of Distler and Mitchell counts the exact number of these guys up to isomorphism http://www.combinatorics.org/ojs/index.php/eljc/article/view/v19i2p51. I think they also count the number of such multiplication tables.

-

Thanks, especially, for the remark on "huge flexibility." I remain a bit puzzled at the rarity of associative binary relations, but your remarks are a start at lifting the veil of confusion. – Joseph O'Rourke Oct 21 at 23:37

More specifically, for a large set S, define a small subset Z with z in Z, and let any member of Z with any member of S have product z. Now in (S − Z)^2, let any two members have product in Z. One can get a lot of semigroups this way, but I am surprised the percentage is so high. Gerhard "I Will Take Ben's Word" Paseman, 2012.10.21 – Gerhard Paseman Oct 21 at 23:47

Gerhard, in the paper Michael links they prove that the number of semigroup multiplication tables is asymptotic with the number of 3-nilpotent semigroup tables. It is a long-standing conjecture, which I guess has never been rigorously proven, that this remains true up to isomorphism. – Benjamin Steinberg Oct 22 at 1:46

@Benjamin: Maybe I am missing something, but it seems obvious to me that the conjecture trivially follows from the result you claim to have been proved in Michael's link. In other words, if most tables are 3-nilpotent this is also true up to isomorphism. What am I missing? – boumol Oct 22 at 13:19

2 Boumol, my guess is that it is not known how rigid 3-nilpotent semigroups are.
If enough of them had trivial automorphism group, then it would follow. Freese's paper above suggests this, but he is talking relative to all tables, not comparing one subset to another, so it does not imply the desired result. Gerhard "Ask Me About Hyperassociative Semigroups" Paseman, 2012.10.22 – Gerhard Paseman Oct 22 at 13:33

A few curious observations from a very small case: Define the associativity of a binary operation to be the number of triples $a,b,c$ with $(ab)c=a(bc).$ The counts in the case of $n=3$ elements are

$52, 12, 96, 276, 504, 468, 628, 936, 966, 1456, 1290, 1266, 1208, 1350, 1212, 1296, 1008, 1212, 840, 939, 732, 596, 432, 369, 168, 198, 60, 113$

So there are, as noted, $113$ with associativity $27$ but only $60$ with associativity $26.$ Also, there are $52$ with associativity $0$ but only $12$ with associativity $1$. If we count only up to isomorphism/anti-isomorphism (permute $1,2,3$ and/or take the transpose of the table giving the operation) then the counts are

$5, 1, 8, 23, 42, 39, 53, 79, 81, 130, 108, 113, 103, 121, 101, 121, 84, 112, 70, 89, 61, 56, 36, 40, 14, 21, 5, 18$

-

If you did even partial stats for n=4, I would find that of interest. I am most intrigued by the columns that have equal counts (1212 and 1212). It makes me wonder if there is a special morphism at work. Gerhard "Ask Me About Numerical Coincidence" Paseman, 2012.10.22 – Gerhard Paseman Oct 22 at 17:31

Some extended stats are 14, 1212, 101, [[12, 101]] AND 17, 1212, 112, [[6, 22], [12, 90]] Meaning that the associativity 14 laws are in 101 orbits each of size 12 while the associativity 17 laws are in 112 orbits, 22 of size 6 and the rest of size 12. – Aaron Meyerowitz Oct 22 at 19:13
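Aaron's spectrum for $n=3$ is small enough to reproduce by exhaustive search. Here is a minimal Python sketch (brute force over all $3^9$ tables, not the inclusion-exclusion argument of the linked paper):

```python
from itertools import product

def associativity_spectrum(n):
    # spectrum[k] = number of tables with exactly k associative triples.
    spectrum = [0] * (n ** 3 + 1)
    triples = list(product(range(n), repeat=3))
    # A table is a flat tuple: the entry for (a, b) lives at index a * n + b.
    for t in product(range(n), repeat=n * n):
        ok = sum(1 for a, b, c in triples
                 if t[t[a * n + b] * n + c] == t[a * n + t[b * n + c]])
        spectrum[ok] += 1
    return spectrum

spec = associativity_spectrum(3)   # 19,683 tables; runs in a few seconds
print(spec[27])                    # 113 fully associative tables
print(spec)                        # the full list quoted in the answer
```

The same function with `associativity_spectrum(2)` confirms the question's count of 8 associative tables out of 16.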
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 43, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9220972657203674, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/1176/given-a-sequence-defined-on-the-positive-integers-how-should-it-be-extended-to-b/1180
## Given a sequence defined on the positive integers, how should it be extended to be defined at zero?

This question is inspired by a lecture Bjorn Poonen gave at MIT last year. I have ideas of my own, but I'm interested in what other people have to say, so I'll make this community wiki and post my own thoughts later. Here are some examples of what I'm talking about:

• Why does a^0 = 1?
• Why does 0! = 1?
• If the Fibonacci number Fn+1 counts the number of ways to tile a board of length n with tiles of length 1 and 2, why does F1 = 1?
• What is the determinant of a 0x0 matrix?
• What is the degree of the zero polynomial?
• What is the direct product of zero groups?
• What is the zeroth homotopy group of a space?

I want to be very precise about exactly what I'm asking for here.

Question 1: What general principles do you apply in a situation like this? Can they be stated as theorems, or do they only exist at the level of intuition?

Question 2: Do you know of any examples where there are two different ways to extend a sequence to zero, both of which are reasonable from the perspective of some principle?

Feel free to answer at any level of sophistication.

-

## 7 Answers

My own thoughts tend to revolve around some subset of the following:

--Find a combinatorial definition for the sequence, and see if it makes sense when you extend slightly further.

--If you are trying to perform a vacuous task (e.g. tiling an empty board, or counting functions defined on the empty set), you can do it in exactly one way. Most of your examples fall under this class, including a^0 (functions defined on the empty set), 0! (bijections on the empty set), F_1 (tiling an empty board), and the cardinality of the direct product of no groups (choosing one object from each class, so the direct product should be the identity).

--An empty sum is equal to 0, an empty product is equal to 1. (Again, the cardinality of the direct product of 0 groups should be 1.)

What about the determinant of a 0x0 matrix? Well, it's a sum over all permutations from a 0-element set to itself of an empty product. There's one element in the sum (vacuous task), and it's an empty product, so the determinant should be 1. I don't really know if there's a rigorous statement of this, or if there's not some way it can come into self-contradiction if there are two combinatorial ways of defining a sequence, but it's what seems natural to go by.

-

Poonen claimed, and I agree, that the determinant of a 0x0 matrix should be equal to 1. Consider what happens when you try to expand the determinant of a 1x1 matrix by minors. – Qiaochu Yuan Oct 19 2009 at 7:38

Yes, I was just rethinking that one as well...see the re-explanation above. – Kevin P. Costello Oct 19 2009 at 7:41

Given your examples, you don't seem to be asking for a canonical way to extend arbitrary functions defined on positive integers to zero. Instead, you're taking functions whose inputs are sets and asking if they can be defined when some input is the empty set. As long as your sequence defined on positive integers comes equipped with this extra structure, you shouldn't have too much trouble extending it naturally.
If you start with an unstructured sequence, the reasons for favoring one extension over another become rather weak (e.g., Kolmogorov complexity). Here's the standard example of a sequence that extends to zero in different ways: the sequence that is identically zero on the positive integers. One extension is the zero function. Other extensions interpret the sequence as n ↦ k·0^n for some nonzero k.

Incidentally, you need to choose a base point on your space to define π0. Once you have that, it is the set of homotopy classes of pointed maps from S0 to your space. Equivalently, it is the (pointed) set of path components. It does not have a natural group structure (although it may if your space comes with some kind of composition law).

-

The determinant of an endomorphism f of a free R-module of dimension n (R commutative) is the $d \in R$ such that $\bigwedge^n f$ is the homothety of ratio d. Our case corresponds to $n=0$, and $\bigwedge^0 f$ is the identity of R, so d=1. The reasons, already given, why 0^0=1 (m^n is the number of functions from a set of cardinality n to a set of cardinality m) and 0!=1 (n! is the number of bijections of a set of cardinality n), are illustrations of Baez's ideas on counting as decategorification.

-

For the first three, you can define a recurrence. Run the recurrences backward. Also, $0! = \Gamma(1) = \int_0^\infty e^{-t}\,dt = 1$; here there's nothing special about 0. (But Γ isn't defined for nonpositive integers.)

-

By considering a^0 and 0^b, it seems reasonable to me to define 0^0 to be 0 or 1 depending on what you're up to. Of course you could argue that you just shouldn't define 0^0 for this reason. This might be considered cheating as an answer to question 2, though, because I'm really extending a map on N^2 to (0,0) in two different ways.

-

1 I would argue as follows. If you're a combinatorialist who accepts that 0! = 1, you accept that there is one bijection from the empty set to itself, so you accept that there is one function from the empty set to itself, so you should accept that 0^0 = 1. – Qiaochu Yuan Oct 19 2009 at 7:12

When is it actually useful, in practice, to set 0^0 not equal to 1? – Alex Fink Oct 19 2009 at 21:37

This may sound lame, but I'd say you just look at the properties of the sequence you care about, and if you can define it so those properties still hold (exponent rules, recursion, universal properties...), then you do. At least I can't imagine there being a more general answer than this. Regarding 0^0, I'd say 0^0=1 works better "algebraically", since then you can still write 0^0=0^(-0)=1/(0^0), and 0^0=0^(0+0)=(0^0)*(0^0).

-

This may also sound lame, but how do you know you're looking at the right properties? – Qiaochu Yuan Oct 19 2009 at 7:18

1 I have a utilitarian view on definitions: they're meant to shorten arguments. So whichever properties allow you to shorten your arguments are the "right" ones. This obviously depends on the kind of math you're doing, and how you've been doing it, but I don't think this dependency is meant to be avoided. – Andrew Critch Oct 19 2009 at 14:48

Fair enough. I do like that you mentioned universal properties, since my own response to this question is basically "categorify until it becomes obvious what to do." For example, the product of zero things in a category is a terminal object and the coproduct of zero things is an initial object.
– Qiaochu Yuan Oct 19 2009 at 16:48

For a pointed space (X,p), the nth homotopy group πn(X,p) is usually defined as the group of maps of the n-sphere which take (1,0,...,0) to p, modulo homotopy-rel-basepoint. What's potentially weird is that S0 is disconnected, whereas Sn is connected for n>0. But then π0(X) just counts the number of path components of X. Of course, it doesn't have a group structure because S0 isn't a cube with its boundary identified; this is anomalous.

On the other hand, this corresponds perfectly with the other characterization of homotopy groups I've seen, where π0(X,p) is defined to be the set of path components of X, and then πn(X,p) is inductively defined as the "loop space" of πn-1(X,p), i.e. the group of homotopy classes of loops starting and ending at the basepoint (rel basepoint, of course), with composition defined simply as composition of loops. So, while in neither setup is π0(X,p) a group, I think this is as well-defined as it's going to get.

As far as I know, only in the setting of Lie groups is there a natural way to put a group structure on the path components (just take G/G0, where G is the Lie group and G0 is the path component of the identity).

-
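A small aside connecting several answers above: the vacuous-task and empty-sum/product conventions are baked into most programming languages. A minimal Python illustration:

```python
import math
from itertools import permutations

print(math.factorial(0))            # 1: there is one bijection of the empty set
print(len(list(permutations([]))))  # 1: namely the empty permutation
print(math.prod([]))                # 1: the empty product convention
print(sum([]))                      # 0: the empty sum convention
print(0 ** 0)                       # 1: the one function from the empty set to itself
```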
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9406653642654419, "perplexity_flag": "head"}
http://mathhelpforum.com/number-theory/185913-exponential-cipher.html
# Thread:

1. ## Exponential cipher

I tried to work through this exercise in my book, which is not typical of the exercises I have done, and I am confused after the first part. For part (a) I worked out that the inverse of 5 in Z36 is 29; I hope this is correct. But I have no clue as to where and how to start part (b), which is attached here. Can someone please guide me in detail, as I am studying this for the first time.

2. ## Re: Exponential cipher

Originally Posted by vidhi96
I tried to work through this exercise in my book, which is not typical of the exercises I have done, and I am confused after the first part. For part (a) I worked out that the inverse of 5 in Z36 is 29; I hope this is correct.

This is correct: $5\times 29 = 145 = 4\times 36 + 1$

Originally Posted by vidhi96
But I have no clue as to where and how to start part (b), which is attached here. Can someone please guide me in detail, as I am studying this for the first time.

The table gives you a correspondence $C : \{A,\dots,Z\} \rightarrow \{1,\dots,26\}$. So for a letter $\alpha$, the cipher letter is $C^{-1}(E(C(\alpha)))$ (I didn't check, but in order to apply $C^{-1}$, one must have $m^5 (\mathrm{mod}\, 37) \in \{1,\dots,26\},\ \forall m\in\{1,\dots,26\}$). When you have a word, it's a sequence of letters: you encrypt every letter of the sequence and in this way get a sequence of cipher letters. That is your cipher text.

3. ## Re: Exponential cipher

I am more confused. What I have figured out by now is that the message text is <23 8 5 14>, spelling WHEN. So now I need to find c, which is the cipher text. Is that correct?

Originally Posted by pece
This is correct: $5\times 29 = 145 = 4\times 36 + 1$ ...

4. ## Re: Exponential cipher

Originally Posted by vidhi96
I am more confused. What I have figured out by now is that the message text is <23 8 5 14>, spelling WHEN.

Now you have to apply $E$ to each of these numbers and then retranslate the result into letters.

5. ## Re: Exponential cipher

Originally Posted by pece
Now you have to apply $E$ to each of these numbers and then retranslate the result into letters.

Ok, I'll try and type it and send it to you tomorrow.
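For completeness, here is a minimal Python sketch of the computation the thread is heading toward, under the setup described above (assumed: letters A–Z numbered 1–26, encryption $E(m) = m^5 \bmod 37$, and the decryption exponent 29 from part (a), since $5 \times 29 = 145 = 4 \times 36 + 1$):

```python
def encrypt(nums, e=5, p=37):
    return [pow(m, e, p) for m in nums]

def decrypt(nums, d=29, p=37):
    return [pow(c, d, p) for c in nums]

message = [23, 8, 5, 14]       # W H E N
cipher = encrypt(message)
print(cipher)                  # [8, 23, 17, 29]
print(decrypt(cipher))         # [23, 8, 5, 14] -> WHEN again
# Note 29 > 26: as pece warned, not every cipher value maps back to a
# letter, so the ciphertext is kept as a list of numbers here.
```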
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9621368646621704, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/267779/proving-equivalence-partial-order-on-a-binary-relation?answertab=votes
# Proving equivalence, partial order on a binary relation

Let $R$ be a relation on the set $\Bbb R$ of real numbers where real numbers $x,y$ satisfy $xRy$ if and only if $e^{x-y}$ is an integer. Is $R$ an equivalence relation on $\Bbb R$? Is it a partial order?

I have proved it is reflexive: let $x=1$. For reflexivity $xRx$, so $e^{x-x}= e^{1-1}=e^0 =1$. This also satisfies the result being an integer. Therefore the relation is reflexive. To check if the relation is symmetric: $xRy$ implies $yRx$. The problem I am facing is I can't seem to find an $x$ and $y$ that satisfy $e^{x-y}$ being an integer.

-

3 Maybe you mean $e^{x-y}$ is an integer, maybe you mean $e^x-y$ is an integer. Which is it? – André Nicolas Dec 30 '12 at 18:40

1 I think it is about time you try to seriously enhance the way you write your posts: use LaTeX for mathematics (in the FAQ section you can find some directions), use question marks, periods where required, etc. – DonAntonio Dec 30 '12 at 18:42

Based on the partial solution I guess that it is $e^{x-y}$. – Sigur Dec 30 '12 at 18:42

Do you really think that to show reflexivity it suffices to consider merely the case $x=1$? – Hagen von Eitzen Dec 30 '12 at 18:43

@HagenvonEitzen That's my own understanding. What way would you have done it? – Jack welch Dec 30 '12 at 18:44

## 2 Answers

I will assume that "$e^{x-y}$ is an integer" is intended.

Reflexivity is obvious: $e^{x-x}=e^0=1$ for all $x$.

About symmetry, note that $e^{y-x}=\dfrac{1}{e^{x-y}}$. So presumably most of the time, when $e^{x-y}$ is an integer, $e^{y-x}$ is not an integer. In fact, if $e^{x-y}$ is an integer, then $e^{y-x}$ is also an integer precisely if $x=y$. You will find this useful in dealing with the question about partial order.

For an explicit example, let $x=\ln 2$ and $y=0$. Then $e^{x-y}=e^{\ln 2}=2$, an integer, while $e^{y-x}=e^{-\ln 2}=\dfrac{1}{2}$ is not an integer.

Now to deal with the question about order, suppose that $e^{x-y}$ is an integer, and $e^{y-z}$ is an integer. Can we conclude that $e^{x-z}$ is an integer?

-

How do you pick explicit examples like $\ln 2$? – Jack welch Dec 30 '12 at 19:05

We want to find $u$ such that $e^u$ is an integer. Recall that $e^{\ln t}=t$ for all (positive) $t$. So $\ln 2$ is a good choice, or $\ln(17)$. – André Nicolas Dec 30 '12 at 19:08

@ Andre Thank you – Jack welch Dec 30 '12 at 19:14

Your relation is:

• reflexive: $\forall x \in \mathbb{R}, e^{x-x} = e^0 = 1$ so $xRx$
• transitive: $\forall x, y, z \in \mathbb{R}$, if you have $xRy$ and $yRz$, you have $e^{x-y}\in \mathbb{N}$ and $e^{y-z}\in \mathbb{N}$ so $e^{x-z} = e^{x-y+y-z} = e^{x-y}e^{y-z}\in \mathbb{N}$, so you also have $xRz$
• not symmetric: $e^{\ln(2)-0} = e^{\ln(2)}=2\in \mathbb{N}$ so $\ln(2)R0$, but $e^{0-\ln(2)} = e^{-\ln(2)} = \cfrac{1}{e^{\ln(2)}} = \cfrac{1}{2} \not\in \mathbb{N}$ so you don't have $0 R \ln(2)$
• antisymmetric: $\forall x, y \in \mathbb{R}$, if you have both $xRy$ and $yRx$, it means that $e^{x-y}\in\mathbb{N}$ and $\cfrac{1}{e^{x-y}}\in\mathbb{N}$. If $e^{x-y}>1$, then $0<\cfrac{1}{e^{x-y}}<1$ so $\cfrac{1}{e^{x-y}}\not\in\mathbb{N}$, so $e^{x-y}=1$, so $x-y=0$, i.e. $x=y$

-

If someone knows how to render `\not R` $\not{R}$ properly, please edit. – xavierm02 Dec 30 '12 at 18:55

1 (+1) although it is "symmetric". – TMM Dec 30 '12 at 19:10
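A small symbolic check of the counterexample to symmetry, using sympy (the specific pair $x = \ln 2$, $y = 0$ is the one from the answers above):

```python
from sympy import exp, log

x, y = log(2), 0
forward = exp(x - y)    # exp(log 2) simplifies to 2: an integer, so x R y
backward = exp(y - x)   # exp(-log 2) simplifies to 1/2: not an integer
print(forward, forward.is_integer)    # 2 True
print(backward, backward.is_integer)  # 1/2 False
```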
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 66, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9181755185127258, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/68238/list
5 Found fault, edited to reflect that.

[Edit: Attempted to clarify arguments, and to facilitate comparison with Tamas' enlightening answer. explain why this doesn't answer the question!] Now the question is how does this compare (in the case of a Riemann surface X - again considered just as a topological space, in fact a $K(\pi,1)$) to the Quillen desired algebraic construction with curvature the symplectic form (which by Tamas' answer doesn't exist)? The answer is it doesn't at all. So where is the contradiction with Tamas' very clear counterexample? line bundle, which is pulled back from the determinant line bundle (ie determinant of Dolbeault cohomology) on the moduli of G-bundles (the de Rham space is a twisted cotangent bundle of the latter moduli, twisted exactly by this bundle). So maybe that However this line bundle with desired curvature on the de Rham space is NOT the same as the determinant line of de Rham cohomology line bundle, as I always naively assumed. Otherwise it seems we've constructed an algebraic line bundle on and in particular doesn't pass over to the Betti space.

EXAMPLE (following Tamas): For X = curve of genus one, the category of local systems is equivalent to that of coherent sheaves on $C^\times \times C^\times$. The functor of Ext from the trivial local system becomes Ext from the skyscraper at the identity. In particular if we look at rank one local systems (which are identified by Riemann-Hilbert with points in $C^\times \times C^\times$) then we see that the algebraic determinant of de Rham cohomology is canonically trivialized on the complement of the identity - ie off of codimension two! So the corresponding line bundle on is trivial, and certainly not identified with the de Rham space (analytic) Deligne line bundle (whose curvature is the canonical holomorphic symplectic form... very confusing. Any comments appreciated two-form). OK sorry for the wild goose chase!

4 Rewrote answer to try to be clearer for easier fault finding.

For [Edit: Attempted to clarify arguments to facilitate comparison with Tamas' enlightening answer.] First I would like to claim that for any proper smooth variety $X$ there's a tautological construction of group, or more generally topological space/homotopy type X, we can make a "Quillen line bundle $L_V$ on the moduli $M$ of G-local systems on over the character variety, attached to a choice of representation $V$ of G" in the following sense. Namely take Let's consider all representations of the universal G-bundle group (or local systems on X) with the product $M\times X$, which has a flat connection along condition that their Ext from the fibers of $$\pi:M\times X\to M,$$ and let $E_V$ be trivial representation (ie global cohomology/Ext from the corresponding vector bundle associated trivial local system) is a perfect complex. This certainly applies to finite rank local systems on a compact manifold, as in the principal G-bundle case in question. More generally given a group G and a representation $V$. We may now V we can consider all G-local systems with the line bundle $$L_V=\det(R\pi_\ast E_V)$$ same condition on $M$, the determinant of the relative de Rham cohomology associated local system of $E_V$. Concretely, for any fixed point vector spaces in $M$ the representation V. (ie G-local system on X) we calculate In our case we'll take the complex of cohomologies adjoint representation of $X$ with coefficients in an algebraic group and the corresponding local systems on Riemann surfaces).

Now for each such local system in representation $V$, and take we get a canonically attached line, the determinant line of this perfect complex (alternating product of top exterior products of the cohomology groups) (Ext) complex in question. For example it's Next, there's a natural to take $V$ to be moduli stack of local systems (or G-local systems) of the adjoint representation above form - though any other representation gives - for a line bundle which is some rational power of compact manifold this (is just the key word being "Dynkin index associated "of character variety", or rather the representation) underlying derived stack. In the case other words, there's a natural notion of curves you recover the Quillen line bundle (algebraic family of local systems on X or as Kevin points out representations of our group. (Concretely it's characterized I believe by asking traces of monodromies to be algebraic functions). Abstractly, in the pullback G-case it's the stack $BG^X$, the derived mapping stack from X to the classifying space of stable G-bundles = compact group local systems to complex group local systems of what's usually known $G$ --- i.e., families over a base $S$ are given by maps from $$S\times X\to BG$$ where it's crucial that we consider X as a homotopy type (simplicial set), not a variety!! This is the Quillen line bundle) Betti space of X.

The claim is the above lines naturally form an issue of stack vs GIT algebraic line bundle on this moduli space or of algebraic vs analytic (or me being careless. [There's a claim I'm not too comfortable with stacky versions that the choice of things). In any case here's what I thought I understood: • The moduli stack representaiton of flat G-bundles is a twisted cotangent our group $G$ doesn't affect the line bundle of except up to a rational power, related to the moduli stack ratio of G-bundle Dynkin indices, twisted by though I don't think it's necessary for this discussion.] Now the determinant line bundle on question is how does this compare (in the latter. • This identification holds case of a Riemann surface X - again considered just as algebraic symplectic spaces a topological space, where in fact a $K(\pi,1)$) to the former has Quillen construction? By the Goldman form and Riemann-Hilbert correspondence the latter above moduli stack is identified with the tautological form moduli of flat G-bundles on the algebraic curve X (as a twisted cotangent bundle derived ANALYTIC stacks), though not algebraically. (Here I'm claiming also However it seems clear that the analytic identification line bundle we defined (hopefully) above agrees with the determinant line of Betti and de Rham spaces is symplectic wrt their natural algebraic symplectic forms) cohomology of the universal connection on the moduli space times X. • The (i.e. Riemann-Hilbert preserves Ext from the trivial local system). Moreover the de Rham space carries a holomorphic symplectic form which agrees with the one defined on a twisted cotangent bundle the Betti space by the intersection pairing on the curve. So where is the curvature contradiction with Tamas' very clear counterexample? The symplectic form of the tautological connection on the pullback de Rham space can be described as the curvature of the twisting line bundle to, which is pulled back from the total space. • Finally determinant line bundle (and maybe here's the rub?) the pullback of the ie determinant of Dolbeault cohomology line bundle) on the moduli of G-bundles (the de Rham space is a twisted cotangent bundle of the latter moduli, twisted exactly by this bundle). So maybe that line bundle is NOT the same as the determinant of de Rham cohomology line bundle, as I always naively assumed. Otherwise it seems we've constructed an algebraic line bundle on the moduli of flat G-bundles Betti space, identified by Riemann-Hilbert with the algebraic line bundle on the de Rham space whose curvature is the symplectic form... very confusing.

3 added 1197 characters in body

EDIT: I'd appreciate it if someone could help resolve the apparent contradiction between my answer and Tamas'. I can imagine there's an issue of stack vs GIT moduli space or of algebraic vs analytic (or me being careless with stacky versions of things). In any case here's what I thought I understood:

• The moduli stack of flat G-bundles is a twisted cotangent bundle of the moduli stack of G-bundles, twisted by the determinant line bundle on the latter.
• This identification holds as algebraic symplectic spaces, where the former has the Goldman form and the latter the tautological form as a twisted cotangent bundle. (Here I'm claiming also that the analytic identification of Betti and de Rham spaces is symplectic wrt their natural algebraic symplectic forms.)
• The symplectic form on a twisted cotangent bundle is the curvature form of the tautological connection on the pullback of the twisting line bundle to the total space.
• Finally (and maybe here's the rub?) the pullback of the determinant of Dolbeault cohomology line bundle on the moduli of G-bundles is the determinant of de Rham cohomology on the moduli of flat G-bundles.
• Any comments appreciated.

2 fixed tex

We may now consider the line bundle $$L_V=\det(R\pi_\ast E_V)$$ on $M$, the determinant of the relative de Rham cohomology of $E_V$.

1

For any proper smooth variety $X$ there's a tautological construction of a line bundle $L_V$ on the moduli $M$ of G-local systems on the variety, attached to a choice of representation $V$ of G. Namely take the universal G-bundle on the product $M\times X$, which has a flat connection along the fibers of $$\pi:M\times X\to M,$$ and let $E_V$ be the corresponding vector bundle associated to the principal G-bundle and representation $V$. We may now consider the line bundle $L_V=det(R\pi_* E_V)$ on $M$, the determinant of the relative de Rham cohomology of $E_V$. Concretely, for any fixed point in $M$ (ie G-local system on X) we calculate the complex of cohomologies of $X$ with coefficients in the corresponding local system in representation $V$, and take the determinant line of this perfect complex (alternating product of top exterior products of the cohomology groups). For example it's natural to take $V$ to be the adjoint representation - though any other representation gives a line bundle which is some rational power of this (the key word being "Dynkin index" of the representation). In the case of curves you recover the Quillen line bundle (or as Kevin points out, the pullback from the space of stable G-bundles = compact group local systems to complex group local systems of what's usually known as the Quillen line bundle).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 48, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9062734246253967, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/270597/mobius-inversion
# Möbius inversion

The Möbius inversion formula says given two arithmetic functions $\hat{g}(k)$ and $g(k)$ related by $$\sum_{d\mid k}\hat{g}(d)=g(k)$$ Then $$\sum_{d\mid k}\mu(d)g\left(\frac{k}{d}\right)=\hat{g}(k)$$

Can someone give me a very elementary proof of this? I don't know anything about analytic number theory, though I know the definition of the Möbius function, and have used it before without ever reading too deeply into it. For example: I know that $\frac{x}{1-x}=x+x^2+x^3+x^4+\cdots$

And that if I subtract the even powers I get $\frac{x}{1-x}-\frac{x^2}{1-x^2}=x+x^3+x^5+x^7+x^9+\cdots$

And then if I subtract the powers that are multiples of 3 I get $\frac{x}{1-x}-\frac{x^2}{1-x^2}-(\frac{x^3}{1-x^3}-\frac{x^6}{1-x^6})=x+x^5+x^7+x^{11}+\cdots$

Continuing in this manner, one sees we are essentially producing combinations of the original sum where the argument is a product of distinct primes, and the coefficients are determined by whether the number of primes is even or odd. So by the definition of the Möbius function I can easily see $\sum_{k=1}^\infty\frac{\mu(k)x^k}{1-x^k}=x$, although the first theorem I mentioned doesn't seem so obvious to me, and so I would appreciate a simple proof.

-

– Daniel Pietrobon Jan 4 at 23:11

## 2 Answers

If you want a proof in the spirit of generating functions, we may use (formal) Dirichlet series. We have the Euler product factorization $$\zeta(s)=\prod_p\left(1-\frac{1}{p^s}\right)^{-1}$$ and hence $$\frac{1}{\zeta(s)}=\prod_p\left(1-\frac{1}{p^s}\right)=\sum_{n=1}^\infty\frac{\mu(n)}{n^s}.$$

Set $g(n):=\sum\limits_{d\mid n}\hat{g}(d)$ and use the substitution $n=dm$ to obtain $$G(s)=\sum_{n=1}^\infty \frac{g(n)}{n^s}=\sum_{n=1}^\infty\sum_{d\mid n}\hat{g}(d)n^{-s}=\left(\sum_{d=1}^\infty\frac{\hat{g}(d)}{d^s}\right)\left(\sum_{m=1}^\infty\frac{1}{m^s}\right)=\hat{G}(s)\zeta(s).$$

Multiply both sides by $\zeta(s)^{-1}$ to obtain $$\hat{G}(s)=\sum_{n=1}^\infty\frac{\hat{g}(n)}{n^s}=\frac{1}{\zeta(s)}G(s)=\left(\sum_{d=1}^\infty\frac{\mu(d)}{d^s}\right)\left(\sum_{m=1}^\infty\frac{g(m)}{m^s}\right)=\sum_{n=1}^\infty\frac{1}{n^s}\sum_{dm=n}\mu(d)g(m).$$

Comparing coefficients gives the result.

Remark 1. Convergence is not an issue since we are working with formal Dirichlet series. We say a sequence $(\sum_{n=1}^\infty a_{n,m}n^{-s})_{m=1}^\infty$ of series converges to $\sum_{n=1}^\infty c_nn^{-s}$ as $m\to\infty$ if each sequence $(a_{n,m})_{m=1}^\infty$ converges to $c_n$ and (a much stronger condition) is eventually constant. This allows us to interpret many of the operations used above rigorously.

Remark 2. This is more or less the same proof as the usual, functional one (involving Dirichlet convolution), only that sequences are encoded as analytic (really, algebraic-number-theoretic) gadgets. Indeed, if $A(s)$ and $B(s)$ are Dirichlet series with coefficients $(a_n)_{n=1}^\infty$ and $(b_m)_{m=1}^\infty$ resp. then $A(s)B(s)$ has coefficients $(a\star b)_n$, the convolution of the two sequences. The utility here is that it is especially easy to see $(\mu\star1)=\delta$ and the associativity of $\star$ follows from associativity of the usual multiplication in the ring of formal Dirichlet series.

-
If $a(n)$ and $b(n)$ are two sequences, then define the convolution as $$(a \star b)(n) = \sum_{d \vert n} a(d) b(n/d)$$

First note that $$\sum_{d \vert n} \mu(d) = \sum_{d \vert n} \mu(n/d) = (1 \star \mu)(n) = \delta(n) = \begin{cases}1 & n=1\\ 0 & \text{otherwise} \end{cases} \,\,\,\,\,\,\,\, \text{(Why?)}$$

We have that $$\left(\hat{g} \star 1 \right)(n) = g(n)$$

Hence, we have $$\hat{g}(n) = (\hat{g} \star \delta)(n) = (\hat{g} \star(1 \star \mu))(n) = ((\hat{g} \star 1) \star \mu)(n) = (g \star \mu)(n)$$

EDIT Let $n=p_1^{a_1} p_2^{a_2} \cdots p_k^{a_k}$, where $a_l \in \mathbb{Z}^+$. The only terms that contribute to $\sum_{d \vert n} \mu(d)$ are $d$'s of the form $d = p_1^{b_1} p_2^{b_2} \cdots p_k^{b_k}$ where $b_l \in \{0,1\}$. Half of these $d$'s will have $\mu(d) = +1$ while the other half will have $\mu(d) = -1$. Proving the associativity of $\star$ is a nice little exercise, which I will let you work out.

-

1 (One also needs to check associativity of Dirichlet convolution.) – anon Jan 4 at 23:10

I'm not understanding. – Ethan Jan 4 at 23:10

@Ethan Which step is unclear? There are two places where you might find difficulty. The first one is $$\sum_{d \vert n} \mu(d) = \sum_{d \vert n} \mu(n/d) = (1 \star \mu)(n) = \delta(n) = \begin{cases}1 & n=1\\ 0 & \text{otherwise} \end{cases}$$ while the second place where you might find difficulty is to prove that $\star$ is associative (another fact you need to check, as anon has rightly pointed out). – user17762 Jan 4 at 23:23

The first one and the second. – Ethan Jan 4 at 23:29

@Ethan Have updated the solution. Is it clear now? – user17762 Jan 4 at 23:36
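A quick numerical sanity check of the inversion formula, independent of both proofs above; the test sequence $\hat g$ below is arbitrary and invented:

```python
def mobius(n):
    # mu(n): 0 if n is divisible by a square, else (-1)^(#prime factors).
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # p^2 divides the original n
            result = -result
        p += 1
    return -result if n > 1 else result

def divisors(k):
    return [d for d in range(1, k + 1) if k % d == 0]

g_hat = lambda k: k * k + 1                       # arbitrary test sequence
g = lambda k: sum(g_hat(d) for d in divisors(k))  # its divisor-sum transform

# Check: sum_{d|k} mu(d) g(k/d) recovers g_hat(k).
assert all(sum(mobius(d) * g(k // d) for d in divisors(k)) == g_hat(k)
           for k in range(1, 200))
print("inversion holds for k = 1..199")
```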
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 36, "mathjax_display_tex": 11, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9445611834526062, "perplexity_flag": "head"}
http://en.wikipedia.org/wiki/Indicator_variable
# Dummy variable (statistics)

In statistics and econometrics, particularly in regression analysis, a dummy variable (also known as an indicator variable, design variable, Boolean indicator, categorical variable, binary variable, or qualitative variable[1][2]) is one that takes the value 0 or 1 to indicate the absence or presence of some categorical effect that may be expected to shift the outcome.[3][4] Dummy variables are used as devices to sort data into mutually exclusive categories (such as smoker/non-smoker, etc.).[2] For example, in econometric time series analysis, dummy variables may be used to indicate the occurrence of wars or major strikes. A dummy variable can thus be thought of as a truth value represented as a numerical value 0 or 1 (as is sometimes done in computer programming).

Dummy variables are "proxy" variables or numeric stand-ins for qualitative facts in a regression model. In regression analysis, the dependent variables may be influenced not only by quantitative variables (income, output, prices, etc.), but also by qualitative variables (gender, religion, geographic region, etc.). A dummy independent variable (also called a dummy explanatory variable) which for some observation has a value of 0 will cause that variable's coefficient to have no role in influencing the dependent variable, while when the dummy takes on a value 1 its coefficient acts to alter the intercept. For example, suppose Gender is one of the qualitative variables relevant to a regression. Then, female and male would be the categories included under the Gender variable. If female is arbitrarily assigned the value of 1, then male would get the value 0.[1] Then the intercept (the value of the dependent variable if all other explanatory variables hypothetically took on the value zero) would be the constant term for males but would be the constant term plus the coefficient of the gender dummy in the case of females.

Dummy variables are used frequently in time series analysis with regime switching, seasonal analysis and qualitative data applications. Dummy variables are involved in studies for economic forecasting, bio-medical studies, credit scoring, response modelling, etc. Dummy variables may be incorporated in traditional regression methods or newly developed modeling paradigms.[1]

## Incorporating a dummy independent variable

Figure 1: Graph showing wage = α0 + δ0female + α1education + U, δ0 < 0.

Dummy variables are incorporated in the same way as quantitative variables are included (as explanatory variables) in regression models. For example, if we consider a regression model of wage determination, wherein wages are dependent on gender (qualitative) and years of education (quantitative):

Wage = α0 + δ0female + α1education + U

In the model, female = 1 when the person is a female and female = 0 when the person is male. δ0 can be interpreted as the difference in wages between females and males, holding education and the error term U constant. Thus, δ0 helps to determine whether there is discrimination in wages between men and women. If δ0 < 0 (negative coefficient), then for the same level of education (and other factors influencing wages), women earn a lower wage than men. On the other hand, if δ0 > 0 (positive coefficient), then women earn a higher wage than men (keeping other factors constant). Note that the coefficients attached to the dummy variables are called differential intercept coefficients.
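A minimal simulated illustration of this wage equation (all coefficients, sample size, and data below are invented for the sketch, not taken from any real dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
female = rng.integers(0, 2, n)        # dummy: 1 = female, 0 = male
education = rng.integers(8, 21, n)    # years of education
u = rng.normal(0.0, 1.0, n)
wage = 5.0 - 1.5 * female + 0.8 * education + u   # true delta_0 = -1.5

# OLS via least squares: intercept, female dummy, education.
X = np.column_stack([np.ones(n), female, education])
beta, *_ = np.linalg.lstsq(X, wage, rcond=None)
print(beta)   # roughly [5.0, -1.5, 0.8]; the dummy only shifts the intercept
```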
The model can be depicted graphically as an intercept shift between females and males. In the figure, the case δ0 < 0 is shown (wherein men earn a higher wage than women).[5]

Dummy variables may be extended to more complex cases. For example, seasonal effects may be captured by creating dummy variables for each of the seasons: D1 = 1 if the observation is for summer, and equals zero otherwise; D2 = 1 if and only if autumn, otherwise equals zero; D3 = 1 if and only if winter, otherwise equals zero; and D4 = 1 if and only if spring, otherwise equals zero.

In the panel data fixed effects estimator, dummies are created for each of the units in cross-sectional data (e.g. firms or countries) or periods in a pooled time-series. However in such regressions either the constant term has to be removed, or one of the dummies has to be removed, making this the base category against which the others are assessed, for the following reason: If dummy variables for all categories were included, their sum would equal 1 for all observations, which is identical to and hence perfectly correlated with the vector-of-ones variable whose coefficient is the constant term; if the vector-of-ones variable were also present, this would result in perfect multicollinearity,[6] so that the matrix inversion in the estimation algorithm would be impossible. This is referred to as the dummy variable trap (a short numerical sketch of the trap follows Figure 2 below).

## ANOVA models

Main article: Analysis of variance

A regression model in which the dependent variable is quantitative in nature but all the explanatory variables are dummies (qualitative in nature) is called an Analysis of Variance (ANOVA) model.[2]

### ANOVA model with one qualitative variable

Suppose we want to run a regression to find out if the average annual salary of public school teachers differs among three geographical regions in Country A with 51 states: (1) North (21 states), (2) South (17 states), (3) West (13 states). Say that the simple arithmetic average salaries are as follows: \$24,424.14 (North), \$22,894 (South), \$26,158.62 (West). The arithmetic averages are different, but are they statistically different from each other? To compare the mean values, Analysis of Variance techniques can be used. The regression model can be defined as:

Yi = α1 + α2D2i + α3D3i + Ui,

where

Yi = average annual salary of public school teachers in state i
D2i = 1 if the state i is in the North Region, D2i = 0 otherwise (any region other than North)
D3i = 1 if the state i is in the South Region, D3i = 0 otherwise

In this model, we have only qualitative regressors, taking the value of 1 if the observation belongs to a specific category and 0 if it belongs to any other category. This makes it an ANOVA model.

Figure 2: Graph showing the regression results of the ANOVA model example: Average annual salaries of public school teachers in 3 regions of Country A.
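Before working through the expected values, here is the promised numerical aside on the dummy variable trap: with an intercept plus a dummy for every category, the design matrix is rank-deficient. A minimal sketch with invented season data:

```python
import numpy as np

rng = np.random.default_rng(1)
season = rng.integers(0, 4, 200)              # invented: 4 seasons coded 0..3
D = np.eye(4)[season]                         # one dummy column per season

X_trap = np.column_stack([np.ones(200), D])        # intercept + ALL four dummies
X_ok = np.column_stack([np.ones(200), D[:, 1:]])   # drop one season as the base

print(np.linalg.matrix_rank(X_trap), "of", X_trap.shape[1])  # 4 of 5: singular
print(np.linalg.matrix_rank(X_ok), "of", X_ok.shape[1])      # 4 of 4: estimable
```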
Now, taking the expectation of both sides, we obtain the following:

Mean salary of public school teachers in the North Region: E(Yi|D2i = 1, D3i = 0) = α1 + α2

Mean salary of public school teachers in the South Region: E(Yi|D2i = 0, D3i = 1) = α1 + α3

Mean salary of public school teachers in the West Region: E(Yi|D2i = 0, D3i = 0) = α1

(The error term does not get included in the expectation values as it is assumed that it satisfies the usual OLS conditions, i.e., E(Ui) = 0.)

The expected values can be interpreted as follows: The mean salary of public school teachers in the West is equal to the intercept term α1 in the multiple regression equation, and the differential intercept coefficients, α2 and α3, explain by how much the mean salaries of teachers in the North and South Regions vary from that of the teachers in the West. Thus, the mean salaries of teachers in the North and South are compared against the mean salary of the teachers in the West. Hence, the West Region becomes the base group or the benchmark group, i.e., the group against which the comparisons are made. The omitted category, i.e., the category to which no dummy is assigned, is taken as the base group category.

Using the given data, the result of the regression would be:

Ŷi = 26,158.62 − 1734.473D2i − 3264.615D3i

se = (1128.523) (1435.953) (1499.615)
t = (23.1759) (−1.2078) (−2.1776)
p = (0.0000) (0.2330) (0.0349)
R2 = 0.0901

where se = standard error, t = t-statistic, p = p-value.

The regression result can be interpreted as: The mean salary of the teachers in the West (base group) is about \$26,158; the salary of the teachers in the North is lower by about \$1734 (\$26,158.62 − \$1734.473 = \$24,424.14, which is the average salary of the teachers in the North); and that of the teachers in the South is lower by about \$3265 (\$26,158.62 − \$3264.615 = \$22,894, which is the average salary of the teachers in the South).

To find out if the mean salaries of the teachers in the North and South are statistically different from that of the teachers in the West (the comparison category), we have to find out if the slope coefficients of the regression result are statistically significant. For this, we need to consider the p values. The estimated slope coefficient for the North is not statistically significant as its p value is 23 percent; however, that of the South is statistically significant at the 5% level as its p value is only around 3.5 percent. Thus the overall result is that the mean salaries of the teachers in the West and North are not statistically different from each other, but the mean salary of the teachers in the South is statistically lower than that in the West by around \$3265. The model is diagrammatically shown in Figure 2. This model is an ANOVA model with one qualitative variable having 3 categories.[2]

### ANOVA model with two qualitative variables

Suppose we consider an ANOVA model having two qualitative variables, each with two categories: Hourly Wages are to be explained in terms of the qualitative variables Marital Status (Married / Unmarried) and Geographical Region (North / Non-North).
Here, Marital Status and Geographical Region are the two explanatory dummy variables.[2] Say the regression output on the basis of some given data appears as follows:

$$\hat{Y}_i = 8.8148 + 1.0997\, D_2 - 1.6729\, D_3$$

where

Y = hourly wage (in \$)
D2 = marital status; 1 = married, 0 = otherwise
D3 = geographical region; 1 = North, 0 = otherwise

In this model, a single dummy is assigned to each qualitative variable, one less than the number of categories included in each. Here, the base group is the omitted category: Unmarried, Non-North region (unmarried people who do not live in the North region). All comparisons are made in relation to this base group or omitted category. The mean hourly wage in the base category is about \$8.81 (the intercept term). In comparison, the mean hourly wage of those who are married is higher by about \$1.10, at about \$9.91 (\$8.81 + \$1.10). In contrast, the mean hourly wage of those who live in the North is lower by about \$1.67, at about \$7.14 (\$8.81 − \$1.67).

Thus, if more than one qualitative variable is included in the regression, the omitted category is chosen as the benchmark category and all comparisons are made in relation to that category. The intercept term shows the expectation of the benchmark category, and the slope coefficients show by how much the other categories differ from the benchmark (omitted) category.[2]

## ANCOVA models

Main article: Analysis of covariance

A regression model that contains a mixture of both quantitative and qualitative variables is called an Analysis of Covariance (ANCOVA) model. ANCOVA models are extensions of ANOVA models: they statistically control for the effects of quantitative explanatory variables (also called covariates or control variables).[2]

To illustrate how qualitative and quantitative regressors are combined to form ANCOVA models, suppose we consider the same example used in the ANOVA model with one qualitative variable: the average annual salary of public school teachers in three geographical regions of Country A. If we include a quantitative variable, state government expenditure on public schools per pupil, in this regression, we get the following model:

Figure 3: Graph showing the regression results of the ANCOVA model example: public school teachers' salary (Y) in relation to state expenditure per pupil on public schools.

$$Y_i = \alpha_1 + \alpha_2 D_{2i} + \alpha_3 D_{3i} + \alpha_4 X_i + U_i$$

where

Yi = average annual salary of public school teachers in state i
Xi = state expenditure on public schools per pupil
D2i = 1 if state i is in the North Region, 0 otherwise
D3i = 1 if state i is in the South Region, 0 otherwise

Say the regression output for this model is

$$\hat{Y}_i = 13{,}269.11 - 1673.514\, D_{2i} - 1144.157\, D_{3i} + 3.2889\, X_i$$

The result suggests that, for every \$1 increase in state expenditure per pupil on public schools, a public school teacher's average salary goes up by about \$3.29. Further, for a state in the North region, the mean salary of the teachers is lower than that of the West region by about \$1673, and for a state in the South region, the mean salary of teachers is lower than that of the West region by about \$1144. Figure 3 depicts this model diagrammatically. The average salary lines are parallel to each other by the assumption of the model that the coefficient of expenditure does not vary by state.
The trade-off shown separately in the graph for each category is between the two quantitative variables: public school teachers' salaries (Y) in relation to state expenditure per pupil on public schools (X).[2]

## Interactions among dummy variables

Quantitative regressors in regression models often have an interaction among each other. In the same way, qualitative regressors, or dummies, can also have interaction effects between each other, and these interactions can be depicted in the regression model. For example, in a regression involving the determination of wages, if two qualitative variables are considered, namely gender and marital status, there could be an interaction between marital status and gender.[5] These interactions can be shown in the regression equation, as illustrated by the example below.

With the two qualitative variables being gender and marital status and with the quantitative explanator being years of education, a regression that is purely linear in the explanators would be

$$Y_i = \beta_1 + \beta_2 D_{2,i} + \beta_3 D_{3,i} + \alpha X_i + U_i$$

where i denotes the particular individual and

Y = hourly wage (in \$)
X = years of education
D2 = 1 if female, 0 otherwise
D3 = 1 if married, 0 otherwise

This specification does not allow for the possibility of an interaction between the two qualitative variables, D2 and D3. For example, a female who is married may earn wages that differ from those of an unmarried male by an amount that is not the same as the sum of the differentials for solely being female and solely being married. Then the effect of the interacting dummies on the mean of Y is not simply additive as in the case of the above specification, but multiplicative also, and the determination of wages can be specified as:

$$Y_i = \beta_1 + \beta_2 D_{2,i} + \beta_3 D_{3,i} + \beta_4 (D_{2,i} D_{3,i}) + \alpha X_i + U_i$$

Here,

β2 = differential effect of being female
β3 = differential effect of being married
β4 = further differential effect of being both female and married

By this equation, in the absence of a non-zero error the wage of an unmarried male is $\beta_1 + \alpha X_i$, that of an unmarried female is $\beta_1 + \beta_2 + \alpha X_i$, that of a married male is $\beta_1 + \beta_3 + \alpha X_i$, and that of a married female is $\beta_1 + \beta_2 + \beta_3 + \beta_4 + \alpha X_i$ (where any of the estimates of the coefficients of the dummies could turn out to be positive, zero, or negative). Thus, an interaction dummy (the product of two dummies) can alter the dependent variable from the value it takes when the two dummies are considered individually.[2]

However, the use of products of dummy variables to capture interactions can be avoided by using a different scheme for categorizing the data, one that specifies categories in terms of combinations of characteristics. If we let

D4 = 1 if unmarried female, 0 otherwise
D5 = 1 if married male, 0 otherwise
D6 = 1 if married female, 0 otherwise

then it suffices to specify the regression

$$Y_i = \delta_1 + \delta_4 D_{4,i} + \delta_5 D_{5,i} + \delta_6 D_{6,i} + \alpha X_i + U_i.$$

Then with a zero shock term the value of the dependent variable is $\delta_1 + \alpha X_i$ for the base category unmarried males, $\delta_1 + \delta_4 + \alpha X_i$ for unmarried females, $\delta_1 + \delta_5 + \alpha X_i$ for married males, and $\delta_1 + \delta_6 + \alpha X_i$ for married females. This specification involves the same number of right-side variables as the previous specification with an interaction term, and the regression results for the predicted value of the dependent variable contingent on $X_i$, for any combination of qualitative traits, are identical between this specification and the interaction specification.
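A sketch of the interaction specification above, again with simulated data; every numeric value below is invented purely for illustration, and only the column structure (D2, D3, D2*D3, X) mirrors the text.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500

female = rng.integers(0, 2, n)    # D2
married = rng.integers(0, 2, n)   # D3
educ = rng.uniform(8, 20, n)      # X, years of education

# Made-up population coefficients, used only so the fit has a known target
wage = (5.0 - 1.2 * female + 0.8 * married
        - 0.6 * female * married + 0.9 * educ + rng.normal(0, 1.0, n))

# Regressor columns: D2, D3, D2*D3 (interaction dummy), X
X = sm.add_constant(np.column_stack([female, married, female * married, educ]))
fit = sm.OLS(wage, X).fit()
print(fit.params)  # estimates of beta1, beta2, beta3, beta4, alpha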
## Dummy dependent variables

### What happens if the dependent variable is a dummy?

A model with a dummy dependent variable (also known as a qualitative dependent variable) is one in which the dependent variable, as influenced by the explanatory variables, is qualitative in nature. Some decisions regarding "how much" of an act must be performed involve a prior decision on whether to perform the act or not. For example, the amount of output to produce, the cost to be incurred, etc., involve prior decisions on whether to produce or not, whether to spend or not, etc. Such "prior decisions" become dependent dummies in the regression model.[7]

For example, the decision of a worker to be part of the labour force becomes a dummy dependent variable. The decision is dichotomous, i.e., it has two possible outcomes: yes and no. So the dependent dummy variable Participation would take the value 1 if participating, 0 if not participating.[2] Some other examples of dichotomous dependent dummies are cited below:

Decision: Choice of occupation. Dependent dummy: Supervisory = 1 if supervisor, 0 if not supervisor.
Decision: Affiliation to a political party. Dependent dummy: Affiliation = 1 if affiliated to the party, 0 if not affiliated.
Decision: Retirement. Dependent dummy: Retired = 1 if retired, 0 if not retired.

When the qualitative dependent dummy variable has more than two values (such as affiliation to many political parties), it becomes a multiresponse or multinomial or polychotomous model.[7]

### Dependent dummy variable models

Analysis of dependent dummy variable models can be done through different methods. One such method is the usual OLS method, which in this context is called the linear probability model. An alternative method is to assume that there is an unobservable continuous latent variable Y* and that the observed dichotomous variable Y = 1 if Y* > 0, 0 otherwise. This is the underlying concept of the logit and probit models. These models are discussed briefly below.[8]

#### Linear probability model

Main article: Linear probability model

An ordinary least squares model in which the dependent variable Y is a dichotomous dummy, taking the values 0 and 1, is the linear probability model (LPM).[8] Suppose we consider the following regression:

$$Y_i = \alpha_1 + \alpha_2 X_i + U_i$$

where

X = family income
Y = 1 if a house is owned by the family, 0 if a house is not owned by the family

The model is called the linear probability model because the regression is linear. The conditional mean of $Y_i$ given $X_i$, written as $E(Y_i \mid X_i)$, is interpreted as the conditional probability that the event will occur for that value of $X_i$, that is, $\Pr(Y_i = 1 \mid X_i)$. In this example, $E(Y_i \mid X_i)$ gives the probability of a house being owned by a family whose income is given by $X_i$. Now, using the OLS assumption $E(U_i) = 0$, we get

$$E(Y_i \mid X_i) = \alpha_1 + \alpha_2 X_i$$

Some problems are inherent in the LPM model:

1. The regression line will not be a well-fitted one, and hence measures of significance, such as R², will not be reliable.
2. Models that are analyzed using the LPM approach will have heteroscedastic disturbances.
3. The error term will have a non-normal distribution.
4. The LPM may give predicted values of the dependent variable that are greater than 1 or less than 0. These are difficult to interpret, as the predicted values are intended to be probabilities, which must lie between 0 and 1.
5. There might exist a non-linear relationship between the variables of the LPM model, in which case the linear regression will not fit the data accurately.[2][9]

#### Alternatives to LPM

Figure 4: A cumulative distribution function.

To avoid the limitations of the LPM, what is needed is a model with the feature that, as the explanatory variable $X_i$ increases, $P_i = E(Y_i = 1 \mid X_i)$ remains within the range between 0 and 1. Thus the relationship between the independent and dependent variables is necessarily non-linear. For this purpose, a cumulative distribution function (CDF) can be used to estimate the dependent dummy variable regression. Figure 4 shows an S-shaped curve, which resembles the CDF of a random variable. In this model, the probability is between 0 and 1 and the non-linearity has been captured. The question is now which CDF to use. Two alternatives are common: the logistic CDF gives rise to the logit model, and the normal CDF gives rise to the probit model.[2]

#### Logit model

Main article: Logistic regression

The shortcomings of the LPM led to the development of a more refined and improved model called the logit model. In the logit model, the cumulative distribution of the error term in the regression equation is logistic.[8] The regression is more realistic in that it is non-linear. The logit model is estimated using the maximum likelihood approach. In this model, $P(Y = 1 \mid X)$, the probability of the dependent variable taking the value 1 given the independent variable, is

$$P_i = \frac{1}{1 + e^{-z_i}} = \frac{e^{z_i}}{1 + e^{z_i}}$$

where $z_i = \alpha_1 + \alpha_2 X_i$. The model is then expressed in the form of the odds ratio: what is modeled in logistic regression is the natural logarithm of the odds, the odds being defined as $P/(1-P)$. Taking the natural log of the odds, the logit ($L_i$) is expressed as

$$L_i = \ln\left(\frac{P_i}{1 - P_i}\right) = z_i = \alpha_1 + \alpha_2 X_i.$$

This relationship shows that $L_i$ is linear in relation to $X_i$, but the probabilities are not linear in terms of $X_i$.[9] (A minimal fitting sketch is given after the reference list below.)

#### Probit model

Main article: Probit model

Another model that was developed to offset the disadvantages of the LPM is the probit model. The probit model uses the same approach to non-linearity as the logit model; however, it uses the normal CDF instead of the logistic CDF.[8]

## References

1. ^ a b c
2. Gujarati, Damodar N. (2003). Basic Econometrics. McGraw Hill. p. 1002. ISBN 0-07-233542-4.
3. Draper, N. R.; Smith, H. (1998). Applied Regression Analysis. Wiley. ISBN 0-471-17082-8 (Chapter 14).
4.
5. ^ a b Wooldridge, Jeffrey M. (2009). Introductory Econometrics: A Modern Approach. Cengage Learning. p. 865. ISBN 0-324-58162-9.
6. Suits, Daniel B. (1957). "Use of Dummy Variables in Regression Equations". Journal of the American Statistical Association 52 (280): 548–551. JSTOR 2281705.
7. ^ a b Barreto, Humberto; Howland, Frank (2005). "Chapter 22: Dummy Dependent Variable Models". Introductory Econometrics: Using Monte Carlo Simulation with Microsoft Excel. Cambridge University Press. ISBN 0-521-84319-7.
8. ^ a b c d Maddala, G. S. (1992). Introduction to Econometrics. Macmillan. p. 631. ISBN 0-02-374545-2.
9. ^ a b Adnan Kasman, Lecture Notes.
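The fitting sketch promised in the logit section above. The home-ownership data are simulated and the "true" coefficients are invented; the point is only that sm.Logit keeps the fitted probabilities inside (0, 1), while the LPM (plain OLS on the same data) need not.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 500

# Simulated home-ownership data: X = family income, Y = 1 if a house is owned
income = rng.uniform(10, 100, n)
z = -4.0 + 0.08 * income                      # z_i = alpha1 + alpha2*X_i (made up)
own = (rng.random(n) < 1 / (1 + np.exp(-z))).astype(int)

X = sm.add_constant(income)

logit_fit = sm.Logit(own, X).fit()            # maximum likelihood, as in the text
print(logit_fit.params)                       # estimates of alpha1, alpha2
print(logit_fit.predict(X).min(), logit_fit.predict(X).max())  # stays in (0, 1)

lpm_fit = sm.OLS(own, X).fit()                # linear probability model, for contrast
print(lpm_fit.predict(X).min(), lpm_fit.predict(X).max())      # may leave [0, 1]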
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9033282399177551, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2011/02/09/independence-of-intertwinors-from-semistandard-tableaux/?like=1&source=post_flair&_wpnonce=39170bc4ce
# The Unapologetic Mathematician

## Independence of Intertwinors from Semistandard Tableaux

Let’s start with the semistandard generalized tableaux $T\in T_{\lambda\mu}^0$ and use them to construct intertwinors $\bar{\theta}_T\in\hom(S^\lambda,M^\mu)$. I say that this collection is linearly independent.

Indeed, let’s index the semistandard generalized tableaux as $T_1,\dots,T_m$. We will take our reference tableau $t$ and show that the vectors $\bar{\theta}_{T_i}(e_t)\in M^\mu$ are independent. This will show that the $\bar{\theta}_{T_i}$ are independent, since any linear dependence between the operators would immediately give a linear dependence between the $\bar{\theta}_{T_i}(v)$ for all $v\in S^\lambda$.

Anyway, we have

$\displaystyle\bar{\theta}_{T_i}(e_t)=\theta_{T_i}\left(\kappa_t\{t\}\right)=\kappa_t\theta_{T_i}(\{t\})$

Since we assumed $T_i$ to be semistandard, we know that $[T_i]\triangleright[S]$ for all summands $S\in\theta_{T_i}(\{t\})$. Now the permutations in $\kappa_t$ do not change column equivalence classes, so this still holds: $[T_i]\triangleright[S]$ for all summands $S\in\kappa_t\theta_{T_i}(\{t\})$. And further, all the $[T_i]$ are distinct, since no column equivalence class can contain more than one semistandard tableau.

But now we can go back to the lemma we used when showing that the standard polytabloids were independent! The $\kappa_t\theta_{T_i}(\{t\})=\bar{\theta}_{T_i}(e_t)$ are a collection of vectors in $M^\mu$. For each one, we can pick a basis vector $[T_i]$ which is maximum among all those having a nonzero coefficient in the vector, and these selected maximum basis elements are all distinct. We conclude that our collection of vectors is independent, and then it follows that the intertwinors $\bar{\theta}_{T_i}$ are independent.

## 2 Comments

1. [...] that we’ve shown the intertwinors that come from semistandard tableaux are independent, we want to show that they span the space. This is a bit fidgety, but should somewhat resemble the [...] Pingback | February 11, 2011

2. [...] the space of all intertwinors from the Specht module to the Young tabloid module. We also know that they’re linearly independent, and so they form a basis of the space of intertwinors [...] Pingback | February 17, 2011
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 20, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9054103493690491, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?t=44747
Physics Forums

derivation of the electric field from the potential

I am studying for a test and I can't figure out for the life of me how my book derived the solution for this problem. I know it has to be basic, I just don't see it...

An electric dipole consists of two charges of equal magnitude and opposite sign separated by a distance 2a... The dipole is along the x axis and is centered at the origin. Calculate V and $E_x$ if point P is located anywhere between the two charges.

I understand the concept of this, and have calculated V, which is

$$V = \frac{2 k_e q x}{a^2 - x^2}$$

and I know how to start the problem of $E_x$:

$$E_x = -\frac{dV}{dx} = -\frac{d}{dx}\left[\frac{2 k_e q x}{a^2 - x^2}\right]$$

But I can't remember or figure out for the life of me how they got

$$E_x = -2 k_e q \left[\frac{a^2 + x^2}{(a^2 - x^2)^2}\right]$$

Recognitions: Homework Help Science Advisor

Your potential should look something like this
$$q \left( \frac {1}{a-x} - \frac {1}{a+x}\right)$$
so just combine the fractions.

Yes, that is correct :D... but I already got past that point and found the answer for the electric potential, V, which was the sum of that equation, and got $\frac{2 k_e q x}{a^2 - x^2}$. But that just gives me the potential... I needed to take it a step further and find the magnitude of the electric field from that, which is $E_x = -dV/dx$... but I don't see how they derived the answer $-2 k_e q \left[\frac{a^2 + x^2}{(a^2 - x^2)^2}\right]$. THANK YOU VERY MUCH THOUGH! :D

Mentor Blog Entries: 1

Quote by stargirl22: "and got (2KeqX) / (a^2 - x^2)... But that just gives me the potential... I needed to take it a step further and find the magnitude of the electric field from that... which is Ex = -dV/dx... but I don't see how they derived the answer..."

Is your problem that you don't know how to take the derivative?

You have to your the quotient rule!!!! Sorry, I mean: You have to use the quotient rule!!!

Please see the attached file. You will see how the quotient rule is required to get that answer.
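For anyone who wants to check the quotient-rule computation symbolically, here is a small sketch; it assumes SymPy is available, and the symbol names simply mirror the thread's notation.

import sympy as sp

x, a, ke, q = sp.symbols("x a k_e q", positive=True)

V = 2 * ke * q * x / (a**2 - x**2)   # the potential found above
Ex = -sp.diff(V, x)                  # E_x = -dV/dx (quotient rule, done symbolically)

print(sp.simplify(Ex))
# prints an expression equivalent to -2*k_e*q*(a**2 + x**2)/(a**2 - x**2)**2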
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9317722320556641, "perplexity_flag": "middle"}
http://scicomp.stackexchange.com/questions/5384/averaging-scattered-data
# Averaging scattered data

I have multiple sets of measured data that can easily be visualized using a scatter plot (red and black points in the figure). If my measurements were perfect, the red and black points would lie on a single curve. However, they do not, so I would like to "average"/fit the data points in such a way that I obtain the black line in the figure. The black line could be a line or a set of discrete points with minimum distance to all the measurement data. Can someone help me get started? Ideally there is already a Python library that can handle this for me, but I'm also happy with algorithm descriptions.

-

do I understand it correctly that you want to have all red dots on the "same side" of the fit and the black dots on the other side? – GertVdE Feb 26 at 18:57
@GertVdE In case of two measurement sets, yes. For three measurement sets, imagine an extra set of green points with a similar topology. Then some of the points could "jump" sides. Is that clear? – akid Feb 26 at 19:52
@GertVdE I removed my answer because it was only valid if one could assume the same number of points on both sides, which is not the case. – Dr_Sam Feb 27 at 13:44
@Dr_Sam: ok, but removing it was a bit drastic. Your approach was valid for the case where you would have a certain number of measurements at the same $x$ values. You wouldn't necessarily need the same number of points above and below... – GertVdE Feb 27 at 13:47
@GertVdE Ok, it was maybe rough to delete the post, but I did not want to mislead future readers of this question. I revived my answer just for the sake of helping this beta :-) – Dr_Sam Feb 27 at 14:03

## 3 Answers

Based on your last comment (the fact that you might have multiple measurements and you don't care whether a certain set of measurements is above or below the fit), I think what you are looking for is a spline fit. You can do this using the scipy.interpolate B-spline routines.

The script below generates three sets of data based on a function (that you can consider to model your system). For the first set, it adds some random error to the function "above" (red dots); for the second, below (blue dots); and for the third, just above and below (green dots). The black line is a plot of the exact function (without errors). You then use the scipy.interpolate.splrep function to generate a B-spline representation for this data. A B-spline is a piecewise combination of polynomial functions, typically cubic polynomials (cubic B-splines), which have some nice properties. If you want to know more about them, I would strongly suggest reading works by C. de Boor and P. Dierckx. The scipy.interpolate.splrep function returns a tuple of the knots, the coefficients, and the order. Using the scipy.interpolate.splev function, you can then evaluate the B-spline representation at a number of points in the interval of the fit. The script below plots an evaluation of the B-spline in magenta.
import numpy as np
import matplotlib.pyplot as plt
import scipy.interpolate as ip

def f(x):
    return np.sin(np.pi*2*x)*np.exp(-2*x)

N = 20
err = 0.15

data = np.zeros([3*N, 2])

# red dots above curve
data[:N, 0] = np.sort(np.random.rand(N))
data[:N, 1] = f(data[:N, 0]) + err*np.random.rand(N)

# blue dots below curve
data[N:2*N, 0] = np.sort(np.random.rand(N))
data[N:2*N, 1] = f(data[N:2*N, 0]) - err*np.random.rand(N)

# green dots above and below curve
data[2*N:, 0] = np.sort(np.random.rand(N))
data[2*N:, 1] = f(data[2*N:, 0]) - 2*err*(np.random.rand(N) - 0.5)

plt.plot(data[:N, 0], data[:N, 1], 'ro')
plt.plot(data[N:2*N, 0], data[N:2*N, 1], 'bo')
plt.plot(data[2*N:, 0], data[2*N:, 1], 'go')

# sort all samples by x so they can be fed to splrep
data = data[data[:, 0].argsort()]
x = data[:, 0]
y = data[:, 1]

y_exact = f(x)
plt.plot(x, y_exact, 'k')

# weights must be a 1-D array with one entry per sample; supplying weights
# makes splrep compute a smoothing (rather than interpolating) spline
w = np.ones(len(x))
spl = ip.splrep(x, y, w)

xn = np.linspace(0, 1.0, 100)
sple = ip.splev(xn, spl)
plt.plot(xn, sple, 'm')

plt.show()

-

I'm missing SciPy on my workstation (which surprised me, because I do have NumPy), so I need to wait for IT to install it before I can test your solution. – akid Feb 27 at 19:38
@akid: ok, let me know if this is what you expected... – GertVdE Feb 27 at 20:16
Thanks, this seems to be what I need! – akid Mar 9 at 15:22

To reproduce your particular figure, given the red and black labeling of the points, you could use a piece of a Voronoi diagram. In other words, your black line will be the set of points that is equidistant between the nearest red and the nearest black point. But assuming that you want to do something less contrived, you could use some kind of nonlinear SVM instead. For example, as shown in figures here.

-

SVM looks interesting, but @GertVdE's answer is simpler and also does the job, I think. – akid Mar 9 at 15:20

A simple idea to obtain a continuous piecewise linear approximation would be to take the average of each pair of points (a red one and the corresponding black one) and make a piecewise linear interpolation of these averages. That would produce something close to what you show. (However, it does not make use of regression.)
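For completeness, a minimal sketch of the last answer's idea, assuming the red and black series are paired measurements of the same underlying curve (equal lengths); the function name is made up for the example.

import numpy as np

def pairwise_average_curve(x_red, y_red, x_black, y_black, xn):
    # Sort each series by x, average the paired points, then evaluate
    # the piecewise linear interpolant of the averages on the grid xn.
    r, b = np.argsort(x_red), np.argsort(x_black)
    x_mid = 0.5 * (np.asarray(x_red)[r] + np.asarray(x_black)[b])
    y_mid = 0.5 * (np.asarray(y_red)[r] + np.asarray(y_black)[b])
    return np.interp(xn, x_mid, y_mid)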
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8947080969810486, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/160563/sanity-check-is-9-3-2-1-7-7-1-1-a-function
# Sanity check, is $\{(-9,-3),(2,-1),(7,7),(-1,-1)\}$ a function?

EDIT #2: Yes, I'm crazy! This IS a function. Thanks for beating the correct logic into me, everyone!

I'm using a website provided by my algebra textbook that has questions and answers. It has the following question:

Determine whether the following relation represents a function: $$\{(-9,-3),(2,-1),(7,7),(-1,-1)\}$$

I answered NO, it is not a function, but the website says it is. Am I wrong? If so, what am I missing?

EDIT: I was given the following definition in class:

Function: A function is a rule which assigns to each X, called the domain, a unique y, called the range.

My instructor also said that if you plot the points, you can tell it is not a function if it fails the vertical line test. Here is the graph of the above points, and for example it would fail the vertical line test if I drew one at x = 1, right?

Thanks!
Jason

-

Why do you think it is not a function? – Chris Eagle Jun 19 '12 at 23:44
Is a domain specified in an earlier part of the question? Some (many?) definitions of function require that the relation be defined on the entirety of the domain in question. – Austin Mohr Jun 19 '12 at 23:47
@Chris Eagle, I think it is not a function because 1) I have repeating y values (the -1s) and 2) I plotted the points on a graph and it fails the vertical line test. – Jason Jun 20 '12 at 0:04
The relation (whether or not it is a function) is defined as those particular four points, not as line segments connecting them. Your graph, therefore, should consist of exactly four dots. Does the resulting figure pass the vertical line test? (That said, you are correct that your green graph is NOT a function. That said, if you'd chosen to connect the dots in order of $x$ value, you'd see a function. But, again, the relation described in the problem only involves the dots.) – Blue Jun 20 '12 at 0:14
@Jason: Uh, you should get rid of the "NOT" from edit #2. – Zev Chonoles♦ Jun 20 '12 at 5:01

## 10 Answers

All first coordinates are distinct. It's the graph of a function.

-

I thought it had to be that all second coordinates were distinct, not first. Maybe I'm going insane. :) – Jason Jun 20 '12 at 0:01
Do not connect the points in the order they are given. Go from left to right. If the second coordinates were distinct as well, the function would be 1-1. – ncmathsadist Jun 20 '12 at 0:16
Ah, I forgot that! Thanks for clearing it up! I'm such a newb! – Jason Jun 20 '12 at 0:22
@Jason: there is a special class of functions where the second coordinates are distinct; these are called "injective". You might have come across the term elsewhere. (Formally: a function $f$ is injective if $f(x)=f(y)$ implies $x=y$.) – Alex Chan Jun 20 '12 at 9:30

It is a function from the set $\{-9,-1,2,7\}$ into a set containing $\{-3,-1,7\}$. As long as each element of the domain, $\{-9,-1,2,7\}$, gets mapped unambiguously to a value (not necessarily distinct), this is a well-defined function.

-

You only think it fails the vertical line test at $x=1$ because you drew the graph incorrectly. You plotted the points you were given, but you also plotted many points that you were not given. You drew a bunch of lines, but there was nothing in the question about lines. The correct graph has four isolated points (the four that were given to you) with nothing in between. Your graph includes points at $(1,1)$ and $(1,-\frac{13}{11})$. But there is nothing in the definition of this function that says it has any values at $x=1$.
-

You also plotted many points that you were not given; that's true, I didn't think of it that way. – Jason Jun 20 '12 at 13:23

A function doesn't have to be from $\mathbb R$ to $\mathbb R$. The domain of a function can be as simple a set as $\{-9,-1,2,7\}$.

-

Why not plot the points and see how the graph looks?

-

A function cannot have two points that share the same $x$ value. Your $x$ values are -9, -1, 2 and 7. All your $x$ values are unique (i.e. no repetition), and thus we may conclude that this is indeed a function.

-

One way to precisely define a function is as follows: a function is a collection of ordered pairs, no two of which have the same first term. From this definition, it is immediate that your collection of ordered pairs is a function.

-

The domain of your function is just 4 discrete numbers. What you do when you draw lines between them is extend the domain to the interval [-9, 7], where every point between the interval's limits would be a member of the set. Like 5, but also 1.25, $\sqrt{2}$ and $\pi$. BTW, notice that connecting the dots does result in a function if you first order your set: $$\{(−9,−3),(−1,−1),(2,−1),(7,7)\}$$ and that can't be right; the set {−9, 2, 7, −1} is the same as {−9, −1, 2, 7}.

-

As long as every point of the domain is related to exactly one point of the co-domain (the converse need not be true), the relation is a function. Two or more points of the domain can be related to a single point in the co-domain. You can remember it like this: one can have exactly one father, but a father can have one or more children. Here a child is a point of the domain and a father a point of the co-domain. In the given problem, each point of the domain has a unique image, and hence it is a function.

-

It is a function, because every value in the domain has one value in the codomain. You may be confused because you are plotting the points and linking them with lines in the order they are given. A simple test: are there two $y$ values that have the same $x$ value? If the answer is yes, then the relation is not a function; if the answer is no, then the relation IS a function.
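A two-line Python check of the "all first coordinates distinct" criterion from the accepted answer:

# A finite relation is a function exactly when no first coordinate repeats
relation = [(-9, -3), (2, -1), (7, 7), (-1, -1)]
xs = [x for x, y in relation]
print(len(xs) == len(set(xs)))               # True -> it is a function

# Equivalently, a dict keeps one value per key, so no pairs collapse
print(len(dict(relation)) == len(relation))  # True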
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9520800709724426, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/106710/does-int-1-infty-sinx-log-xdx-converge/106728
# Does $\int_{1}^{\infty}\sin(x\log x)dx$ converge?

I'm trying to find out whether $\int_{1}^{\infty}\sin(x\log x)dx$ converges. I know that $\int_{1}^{\infty}\sin(x)dx$ diverges but $\int_{1}^{\infty}\sin(x^2)dx$ converges; more than that, $\int_{1}^{\infty}\sin(x^p)dx$ converges for every $p>1$, so this one should converge at infinity as well. I'd really love your help with this. Thanks!

## 3 Answers

Since $x\log(x)$ is monotonic on $[1,\infty)$, let $f(x)$ be its inverse. That is, for $x\in[0,\infty)$ $$f(x)\log(f(x))=x\tag{1}$$ Differentiating implicitly, we get $$f'(x)=\frac{1}{\log(f(x))+1}\tag{2}$$ Then $$\begin{align} \int_1^\infty\sin(x\log(x))\;\mathrm{d}x &=\int_0^\infty\sin(x)\;\mathrm{d}f(x)\\ &=\int_0^\infty\frac{\sin(x)}{\log(f(x))+1}\,\mathrm{d}x\tag{3} \end{align}$$ Since $\left|\int_0^M\sin(x)\;\mathrm{d}x\right|\le2$ and $\frac{1}{\log(f(x))+1}$ monotonically decreases to $0$, Dirichlet's test (Theorem 17.5) says that $(3)$ converges.

-

This is a version of the Van der Corput lemma, basically. Note that it's enough to find some $n$ for which $\int_n^{\infty} \sin(x\log(x))\,dx$ converges. The key facts about $f(x) = x\log(x)$ that allow this are a) $\lim_{x \rightarrow \infty} f'(x) = \infty$ and b) $f''(x) > 0$ for $x$ large enough. Specifically, we write $$\int_n^{\infty} \sin(f(x))\,dx = \int_n^{\infty} f'(x) {\sin(f(x)) \over f'(x)}\,dx = \lim_{N \rightarrow \infty} \int_n^{N} f'(x) \sin(f(x)) {1 \over f'(x)}\,dx$$ Integrating the integral on the right by parts, you get $$\int_n^{N} f'(x) \sin(f(x)) {1 \over f'(x)}\,dx = -{\cos(f(N)) \over f'(N)} + {\cos(f(n)) \over f'(n)} + \int_n^N \cos(f(x)){d \over dx}{1 \over f'(x)}\,dx$$ $$= -{\cos(f(N)) \over f'(N)} + {\cos(f(n)) \over f'(n)} - \int_n^N \cos(f(x)) {f''(x) \over (f'(x))^2}\,dx$$ As $N$ goes to infinity the first term goes to zero, since $f'(x)$ goes to $\infty$ as $x$ goes to $\infty$ and $|\cos(f(N))| \leq 1$. The third term is bounded in absolute value by $$\int_n^N\bigg|{f''(x) \over (f'(x))^2}\bigg|\,dx$$ Since $f''(x) > 0$ we can just take off the absolute values to get $$\int_n^N{f''(x) \over (f'(x))^2}\,dx$$ Integrating, this becomes $${1 \over f'(n)} - {1 \over f'(N)}$$ Since $f'(N) \rightarrow \infty$ as $N$ goes to $\infty$, this converges as $N$ goes to infinity. Hence the integral $\int_n^{\infty} \cos(f(x)) {f''(x) \over (f'(x))^2}\,dx$ converges absolutely, and thus converges. Hence we have shown the original integral converges.

-

+1. This kind of answer reminds all of us why we fell in love with calculus after mastering it either as honors undergraduates or after relearning it in graduate school. – Mathemagician1234 Feb 7 '12 at 17:21

The graph of the integrand consists of a series of positive and negative humps, each one narrower than the last but with the same height. Therefore, if you define the infinite series consisting of the positive and negative areas, it's an alternating series in which the terms decrease in magnitude and also approach zero, so it converges.

-

+1 for the outline. Of course you need to show that the size of the humps is approaching zero, and not merely getting smaller with each step. – alex.jordan Feb 7 '12 at 16:51
Also you need to know that the height of the function doesn't ever grow on average, so as to compensate for the decreased width... otherwise the integrals over the humps may not decrease. – Zarrax Feb 7 '12 at 16:53
You might try $\int_0^\infty f(\sin(x \ln x))\ dx$ where $f(t) = t^3$ for $t\ge 0$ and $t$ for $t \le 0$.
This also has humps of the same height and width, but it won't give you an alternating series. – Robert Israel Feb 7 '12 at 17:41
Thanks for the corrections. I've made the appropriate edits. – Ben Crowell Feb 7 '12 at 18:20
@Robert: Furthermore, it diverges since the average of $|\sin^3(x)|$ is $2/3$ the average of $|\sin(x)|$. – robjohn♦ Feb 7 '12 at 18:44
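A numerical sanity check (not a proof): the partial integrals over [1, M] should settle toward a limit if the improper integral converges. The upper limits and the limit= subdivision cap below are arbitrary choices, and quad may warn for large M because of the rapid oscillation.

import numpy as np
from scipy.integrate import quad

# Partial integrals of sin(x log x) over [1, M]; if the improper integral
# converges, these values should approach a limit as M grows
for M in [10, 100, 1000]:
    val, _ = quad(lambda x: np.sin(x * np.log(x)), 1, M, limit=5000)
    print(M, val)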
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 40, "mathjax_display_tex": 10, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9344269037246704, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/tagged/proof-theory+induction
# Tagged Questions

### Induction proof [little-o notation]

I have to prove that $2^n = o(n!)$, that is, $\forall c \gt 0\ \exists\, n_0 \in \mathbb N$ such that $\forall n \ge n_0$ we have $2^n \lt c \cdot n!$. Well, this is what I did so far: First I ...

### Induction Proof: $\sum_{i=1}^{n+1} i \cdot 2^i = n \cdot 2^{n+2}+2$

Prove by Mathematical Induction . . . $$\sum_{i=1}^{n+1} i \cdot 2^i = n \cdot 2^{n+2}+2$$ for all $n \geq 0$. I tried solving it, but I got stuck near the end . . . a. Basis Step: $1\cdot 2^1$ ...

### Strong induction proofs

I'm having trouble understanding strong induction proofs. I understand how to do ordinary induction proofs, and I understand that strong induction proofs are the same as ordinary with the exception ...

### Connected Components Graph proof

I am trying to do this one problem for a homework set, and am not entirely sure how I would even start this proof. Here is the question: Prove, by induction on k, that a connected component of k nodes ...

### Is it possible to prove the method of mathematical induction itself?

Since the method of mathematical induction follows some sort of 'algorithm', would the method itself be provable? Namely, given that the method of mathematical induction is as follows: if S is a ...

### Statements true for all integers but not provable by induction

Are there any examples of statements P(n) such that "for all $n>1$, P(n)" is provable, but P(n)=>P(n+1) is not provable? (without using some mild deformation of "for all $n>1$, ...

### How does adding the full second order induction scheme affect the consistency strength of subsystems of second order arithmetic?

Following on from my question about $\omega$-models, I'm interested in the interaction between subsystems of second order arithmetic with restricted induction such as $\mathsf{RCA}_0$ and those which ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9437761306762695, "perplexity_flag": "head"}
http://gilkalai.wordpress.com/2008/12/26/lior-aryeh-and-michael/?like=1&source=post_flair&_wpnonce=c589d5bcbb
Gil Kalai’s blog

## Lior, Aryeh, and Michael

Posted on December 26, 2008 by

Three dear friends, colleagues, and teachers, Lior Tzafriri, Aryeh Dvoretzky and Michael Maschler, passed away last year. I want to tell you a little about their mathematics.

Lior Tzafriri (1936-2008)

Lior Tzafriri worked in functional analysis. One of the crowning achievements of Banach space theory is the Lindenstrauss-Tzafriri theorem: a Banach space, each of whose closed subspaces is complemented (that is, is the range of a bounded linear projection), must be isomorphic to a Hilbert space. This was a long-standing conjecture and, as the authors wrote in their paper published in the Israel J. of Mathematics, the proof is surprisingly simple. One of the tools they used was Dvoretzky's theorem, which we will mention below.

Next I want to tell you about a theorem of Bourgain and Tzafriri related to the famous Kadison-Singer conjecture. Jean Bourgain and Lior Tzafriri considered the following scenario: Let $a < 1$ be a positive real number. Let $A$ be an $n \times n$ matrix with norm 1 and with zeroes on the diagonal. A principal minor, given by a set of indices $\sigma \subset \{1,2,\dots,n\}$, is "good" if the norm of the submatrix $A_\sigma$ is less than $a$. Consider the following hypergraph $H = H(A)$: the vertices correspond to the indices $1,2,\dots,n$, and a set $\sigma$ belongs to $H$ if the submatrix $A_\sigma$ of $A$ is good. Bourgain and Tzafriri showed that for every $a$ there is $C(a)$ so that for every matrix $A$ we can find $\sigma \in H$ with $|\sigma| \ge n/C(a)$. Moreover, they showed that for every choice of nonnegative weights on the indices there is $\sigma \in H$ so that the sum of the weights in $\sigma$ is at least $1/C(a)$ times the total weight. In other words (by LP duality), the vertices of the hypergraph can be fractionally covered by $C(a)$ edges. The "big question" is whether there is a real number $K(a)$ so that for every matrix $A$ the set of indices can be covered by $K(a)$ good sets; or, in other words, whether the vertices of $H$ can be covered by $K(a)$ edges. This question is known to be equivalent to an old conjecture by Kadison and Singer (it is also known as the "paving conjecture"). In view of what was already proved by Bourgain and Tzafriri, what is needed is to show that the covering number is bounded from above by a function of the fractional covering number.

Michael Maschler (1927-2008)

Michael Maschler was a game theorist. (I mentioned Michael in a post about Rationality.) A famous theorem by Aumann and Maschler deals with repeated games with incomplete information. I will describe the simplest possible case. Start with a two-player game. Each player has several strategies, and any pair of strategies leads to payoffs for the two players. Such a game can be described by a payoff matrix, where for each pair of strategies we have a pair of payoffs for the two players. Now consider several twists to this story.

1) The payoffs are unknown: they can be one out of two possible payoff matrices. A priori, each of these matrices is equally likely.

2) The game is repeated: it is played infinitely many times (with the same payoff matrix). The overall payoff for a player is his expected payoff, and he gets it only "at the end". The players do not see the payoffs; they only see how the other players play.

Now suppose one player has secret information and knows which of the two payoff matrices is being played. Aumann and Maschler described situations where the player is better off ignoring his secret information (in order not to expose it), situations where he is better off using his secret information, and also intermediate scenarios. But a major insight from their theory is that whatever part of your secret information you use is eventually revealed!

Michael Maschler and Micha A.
Perles discovered a very interesting solution concept, the super-additive solution, to the Nash bargaining problem. The Nash bargaining problem can be described as follows: given a compact convex set $S$ in the positive orthant, consider the following scenario: if the two players can agree on a point $(x,y) \in S$, then player I will get $x$ dollars and player II will get $y$ dollars. If they fail to agree, they both get 0. John Nash posed several axioms for a solution concept which lead to a unique solution: the point in $S$ with the maximum value of $x \cdot y$. Another set of axioms, leading to another solution concept, was proposed by Ehud Kalai and Meir Smorodinsky. Maschler and Perles introduced the following axiom: the solution for $K+L$ dominates the solution for $K$ plus the solution for $L$. They identified a new and beautiful solution concept.

Aryeh Dvoretzky (1916-2008)

Aryeh Dvoretzky, my academic great-grandfather, worked in analysis and probability theory. The famous and beautiful Dvoretzky theorem asserts that every centrally symmetric convex body in sufficiently high dimension has an almost spherical section. This was a conjecture of Grothendieck (and there is some evidence that the proof by Dvoretzky was one of the reasons Grothendieck moved to other areas). It is one of the most fundamental and useful results in Banach space theory and convexity. More precisely, Dvoretzky's theorem asserts that every $n$-dimensional centrally symmetric convex body has a $k$-dimensional section whose "distance" from a ball is at most $\epsilon$, with $k \ge C_{\epsilon} \log n$. As a matter of fact, a random section will have this property with high probability.

Dvoretzky, Erdos, and Kakutani discovered several seminal properties of Brownian motion. A famous result they proved in 1961 asserts that Brownian motions never increase (or, more precisely, almost surely have no point of increase). In 1996 Yuval Peres found a simple proof for this theorem on Brownian motions, and Aryeh was very happy about it.

Some more material: Aryeh Dvoretzky obituary in Isr. J. Math. and in Haaretz (Hebrew). Lior Tzafriri obituary by Prof. M. Zippin. More can be found in the "in memoriam" page of the HU Mathematics Institute. A drawing of Lior by a student. Michael Maschler in memoriam.

This entry was posted in Obituary and tagged Aryeh Dvoretzky, Lior Tzafriri, Michael Maschler.

### 7 Responses to Lior, Aryeh, and Michael

1. Andy D says: Thanks for the personal glimpse into these mathematicians' lives… and thanks for the blog that, in addition to its mathematical interest, is always warm-hearted (and well-illustrated).

2. aravind says: Dear Gil, I heartily second Andy D.'s comment above, and wish you and the blog a happy 2009!

3. Mathematics, I thought, was the hardest subject, and this is the reason why I have really revered those who are masters of this subject. Reading about the mastery of these friends has really caught my attention. This online memorial website gave me a great insight into the lives of these great friends.

4. Gil Kalai says: Aravind and Andy, thanks a lot. Indeed, Happy New 2009, everybody! (But a few 2008 posts are still coming.)

5. Gil Kalai says: Yuval Peres told me yesterday a couple of wonderful stories about Shizuo Kakutani, and in particular one story about the theorem of Dvoretzky, Erdos and Kakutani. At that time Dvoretzky visited NYC and had an apartment there; Erdos stayed with him, and Kakutani drove down from Yale.
They spent all day working on the problem, and by late evening they found a wonderful proof, based on a subtle generalization of the "reflection principle", that Brownian motions almost surely do not have points of increase. Then Kakutani went down to his car, but the car did not start. It was too late to do anything about it, so Kakutani stayed at the apartment, and then the three of them talked about some subtle points in the proof that needed fuller clarification, and by 4 a.m. they had a complete proof that Brownian motions almost surely do not have points of increase! Next morning Kakutani went to the car, and it did start.

6. Pingback: Vitali Fest « Combinatorics and more

7. elemer elad rosinger says: I was a colleague of Lior during 1955-58 at the Faculty of Mathematics and Mechanics of the University of Bucharest in Romania. Certainly, Lior was very bright and gifted, awfully hard working, incredibly ambitious and competitive. Also, he was rather abrasive and cynical in his social contacts. Later, I met him twice for short moments at the Hebrew University in Jerusalem in Israel, in 1974 and 1977, and I noted that his features mentioned earlier were unchanged. What I am writing about him is by no means to denigrate him in the least. Instead, it is rather meant for outstanding young mathematicians, like Lior himself had been in the 1950s and 1960s, who, just like Lior at the time, may feel nowadays that life is very harsh and treats them in a totally unfair manner. Needless to say, in most cases such a feeling may be perfectly justified. And clearly, so it was in the case of Lior, or for that matter, of myself as well. What is wrong, however, is to allow oneself to react improperly to such a situation, let alone to over-react. And intelligent, high-energy people can hardly, if at all, avoid reacting strongly to bad circumstances, even though strong reactions can so often be quite wrong. It may be quite instructive to note that Lior, Michael and Aryeh, who passed away in 2008, were respectively 72, 81 and 92. In this respect, Lior was indeed far too young to pass away. But then, it may be that, when compared with the other two, he had been the one who had most strongly reacted to the harsh circumstances of life, and did not know well enough that such a reaction may be quite dangerous to oneself. Indeed, here is precisely one of the more important paradoxes in the life of exceptional people: how to survive in hard times, and to do so in a way that brings the least damage to oneself through one's reactions to the general situation. In this regard, one can recall the recent story of the relatively young Grisha Perelman, who solved the Poincare Conjecture. Are the conditions of his life very hard? Yes indeed, they are. And is he reacting to them properly? Well, I am very much afraid not … His, and many far less well known similar cases, are precisely the issue which I tried to address, and which we, all of us, should try to address … And let us hope that it is not so impossible to do so …
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 9, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9604997038841248, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/107929/entropy-of-perfect-cryptosystems
# Entropy of perfect cryptosystems

I am working on the product of two perfect cryptosystems and I need to prove that the product is secure.

$$a -- [\text{system}\ 1] -- b -- [\text{system}\ 2] -- c$$

How can I prove that $H(a) = H(a|c)$, knowing that $H(a|b) = H(a)$ and $H(b|c) = H(b)$?

-

You need something on system 1 and system 2 being independent, because if you compose (e.g.) 2 one-time pads, the result is not so secure.... – Henno Brandsma Feb 10 '12 at 21:57
Isn't this a Markov chain? $p(c|a,b) = p(c|b)$ by Markovity, and $p(c|b) = p(c)$ by independence of $b$ and $c$. $p(c|a,b) = p(c)$ also implies $a$ and $c$ are independent. – VSJ Feb 10 '12 at 22:09
The two systems are independent, I think. – Romain Pignard Feb 11 '12 at 1:08
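A small numerical check of the claim, using two independent one-bit one-time pads as the two perfect systems; the whole setup is an assumption made for the example.

import itertools
from math import log2

def H(dist):
    # Shannon entropy (in bits) of a distribution given as {outcome: probability}
    return -sum(p * log2(p) for p in dist.values() if p > 0)

# b = a XOR k1 (system 1), c = b XOR k2 (system 2); a, k1, k2 are
# independent fair bits, so each triple (a, k1, k2) has probability 1/8
joint = {}  # joint distribution of (a, c)
for a, k1, k2 in itertools.product([0, 1], repeat=3):
    c = (a ^ k1) ^ k2
    joint[(a, c)] = joint.get((a, c), 0) + 1 / 8

pa = {v: sum(p for (a, c), p in joint.items() if a == v) for v in [0, 1]}
pc = {v: sum(p for (a, c), p in joint.items() if c == v) for v in [0, 1]}

# H(a|c) = H(a,c) - H(c); both printed values are 1.0, so H(a) = H(a|c)
print(H(pa), H(joint) - H(pc))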
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9365531802177429, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?t=242613
Physics Forums

Derivative of 10^x using limit definition

1. The problem statement, all variables and given/known data

Obtain the first derivative of $10^x$ by the limit definition.

2. Relevant equations

$$f'(x)=\lim_{h\to 0} \frac{f(x+h)-f(x)}{h}$$

3. The attempt at a solution

$$f'(x)=\lim_{h\to 0} \frac{10^{x+h}-10^x}{h}$$

I also know that $10^h = 1$ as $h$ approaches 0. Now, how do I make it so that you aren't dividing by h = 0?

Recognitions: Gold Member Science Advisor Staff Emeritus

$10^{x+h} = 10^x \, 10^h$, so
$$\frac{10^{x+h}- 10^x}{h}= 10^x\frac{10^h- 1}{h}$$
Now the question is, what is
$$\lim_{h\rightarrow 0}\frac{10^h- 1}{h}?$$

To find $\lim_{h\rightarrow 0}\frac{10^h- 1}{h}$ I can't plug h = 0 in, because I would be dividing by 0. Do I plug in 1, since the limit as h approaches 0 is 1?

Recognitions: Homework Help Science Advisor

Quote by cmajor47: "To find $\lim_{h\rightarrow 0}\frac{10^h- 1}{h}$ I can't plug h = 0 in, because I would be dividing by 0. Do I plug in 1, since the limit as h approaches 0 is 1?"

If you plug h = 0, then 10^h = 1. But you want the next order correction, which you expect should be proportional to h. So you expect something of the form
$$10^h = 1 + a h + \ldots \quad (h \to 0)$$
where a is some numerical value and the dots represent terms of higher powers in h. The problem is to find the value of a. Here is the trick: use the fact that any number x may be written as $e^{\ln x}$. Then use what you know about rules for logs, and then Taylor expand the exponential.

I don't understand what this means. What is "Taylor expanding" and the "next order correction"?

Recognitions: Homework Help Science Advisor

Quote by cmajor47: "I don't understand what this means. What is 'Taylor expanding' and the 'next order correction'?"

Have you ever seen the relation
$$e^\epsilon = 1 + \epsilon + \frac{\epsilon^2}{2} + \ldots$$ ?
This is a Taylor expansion. Have you ever proved, using the limit definition, that the derivative of e^x is e^x? Then you must have used something similar to the above. If you haven't proved the e^x case and the expansion I wrote above is not familiar to you, then I will let someone else help you, because I don't see at first sight any other approach.

EDIT: do you know what the limit as h goes to zero of
$$\frac{e^h -1}{h}$$
gives? Maybe you have been told this without proving it. If you know the result of this limit and are allowed to use it, then I can show you how to finish your problem. If not, I don't see how to help, unfortunately. Best luck!

I've never proved e^x. Thank you for trying to help though.

Recognitions: Homework Help Science Advisor

Quote by cmajor47: "I've never proved e^x. Thank you for trying to help though."

Sorry. I can tell you that the limit as h goes to zero of 10^h is
$$\lim_{h \rightarrow 0} 10^h = \lim_{h \rightarrow 0} e^{\ln 10^h} = \lim_{h \rightarrow 0} e^{h \ln 10} \approx 1 + h \ln 10$$
where I used an identity for logs and then the expansion of the exponential I mentioned earlier. From this you can get the final answer to your question. Hopefully someone else will be able to find a way to show this result in some other way, but I can't think of any! Best luck

Mentor Blog Entries: 10

Quote by nrqed: "... Taylor expand the exponential."

Taylor expansion requires knowledge of what the derivative is.
But we don't know the derivative; that is what we are supposed to find.

I don't know if this will be useful, but one might try using the definition of $e$:
$$e = \lim_{N \rightarrow \infty} \left(1+\frac{1}{N}\right)^N = \lim_{a \rightarrow 0} (1+a)^{1/a}$$
or
$$e^A = \lim_{N \rightarrow \infty} \left(1+\frac{1}{N}\right)^{NA} = \lim_{a \rightarrow 0} (1+a)^{A/a}$$
Also, the fact that $10^h = e^{h \ln(10)}$.

I suspect this was given as a preliminary to the derivative of $e^x$, so the derivative of $e^x$ cannot be used. It is easy to see that the derivative of $a^x$, for $a$ any positive number, is a constant times $a^x$. It is much harder to determine what that constant is! It's not too difficult to show that for some values of $a$ that constant is less than $1$ and for some values of $a$ larger than $1$. Define $e$ to be the number such that that constant is $1$. That is, define $e$ by
$$\lim_{h\rightarrow 0}\frac{e^h- 1}{h}= 1.$$
As Redbelly98 said, $10^h= e^{h \ln(10)}$, so
$$\frac{10^h- 1}{h}= \frac{e^{h \ln(10)}- 1}{h}.$$
If we multiply both numerator and denominator of that by $\ln(10)$ we get
$$\ln(10)\left(\frac{e^{h \ln(10)}-1}{h \ln(10)}\right).$$
Clearly, as $h$ goes to $0$ so does $h \ln(10)$, so if we let $k= h \ln(10)$ we have
$$\ln(10)\left(\lim_{h\rightarrow 0}\frac{e^{h \ln(10)}-1}{h \ln(10)}\right)= \ln(10)\left(\lim_{k\rightarrow 0}\frac{e^k- 1}{k}\right),$$
so the limit is $\ln(10)$ and the derivative of $10^x$ is $\ln(10)\,10^x$. That is NOT something I would expect a first-year calculus student to find for himself!

Quote by Redbelly98:
Taylor expansion requires knowledge of what the derivative is. But we don't know the derivative; that is what we are supposed to find.

Moreover, Taylor series are generally taught in second-semester calculus, while covering infinite sequences and series, while the limit
$$\lim_{h\rightarrow 0}\frac{e^h- 1}{h}= 1$$
is often demonstrated or proven (if it is not simply stated without proof) in the first-semester course, shortly after having covered limits and while developing the rules of differentiation. I hardly expected that the OP would have seen Taylor series yet... I believe the proof given in textbooks usually revolves around the limit definition of $e$, which Redbelly98 gives in post #10.

Quote by dynamicsolo:
Moreover, Taylor series are generally taught in second-semester calculus...

Agreed. I should not have mentioned Taylor series. They seem so natural to me now that I tend to use them without even thinking about it. This is why I then asked the OP if he/she had seen the formula you wrote just above. I hope he/she has. Because if he/she has to go back to the limit definition and prove the above identity in order to solve the original question, this problem seems much more challenging than I would expect for an assignment at that level!
I suspect that the OP's textbook presents the limit
$$\lim_{h\rightarrow 0}\frac{e^h- 1}{h}= 1$$
somewhere in the chapter, and that a student is just asked to recognize that they could apply it, in something like the manner Halls suggests in post #11...

$10^x=e^{x\ln 10}$, so we are essentially trying to find the derivative of $e^x$ by the limit definition. The Math Forum has a solution at: http://mathforum.org/library/drmath/view/60705.html

Sorry, the derivative of $10^x$ is of course $\ln 10\,(10^x)$, but the Math Forum derivation of the derivative of $e^x$ is still helpful.

More thoughts on this problem:
$$f(x)=10^x=e^{x\ln 10}, \qquad f(x+h)= e^{(x+h)\ln 10}=e^{x\ln 10}\, e^{h\ln 10}.$$
Plugging into the definition of the derivative and simplifying gives
$$f'(x)= \lim_{h\rightarrow 0} 10^x\,\frac{10^h-1}{h},$$
and tabulating the limit as $h$ goes to $0$ of $(10^h-1)/h$ gives $\ln 10$.
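That last step is easy to sanity-check numerically. Here is a minimal Python sketch (not from the thread) that tabulates the difference quotient $(10^h-1)/h$ for shrinking $h$ and compares it with $\ln 10$:

```python
import math

# Tabulate (10**h - 1)/h for shrinking h; the thread's claim is that
# this difference quotient approaches ln(10) = 2.302585...
for k in range(1, 8):
    h = 10.0 ** (-k)
    print(f"h = {h:.0e}   (10^h - 1)/h = {(10.0**h - 1.0) / h:.10f}")

print("ln(10) =", math.log(10))
```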
http://mathhelpforum.com/differential-equations/211229-inverse-laplace-transform.html
# Thread:

1. ## Inverse Laplace Transform

I know the inverse Laplace transform of $F(s)$ would be a delta (unit impulse); however, what is the inverse transform of the value $10F(s)$?

Thank you in advance,
Matthew

2. ## Re: Inverse Laplace Transform

Here is my answer to B from the three questions I have. I just want to make sure I'm going in the correct direction. Thanks.

3. ## Re: Inverse Laplace Transform

Hello Matthew,

If $f_1(t) \leftrightarrow F_1(s)$ and $f_2(t) \leftrightarrow F_2(s)$, then
$a f_1(t) + b f_2(t) \leftrightarrow a F_1(s) + b F_2(s)$
by superposition. Use this to find the inverse Laplace transform of $10 F(s)$.

The differential equation is correct, except that on the RHS it has to be $\delta(t)$, not $\delta(x)$, since the independent variable is $t$.

Kalyan.
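The superposition rule is easy to sanity-check with a computer algebra system. Below is a minimal SymPy sketch; the concrete transform $F(s)=1/(s+2)$ is an assumed stand-in, since the thread never pins down $F$:

```python
from sympy import symbols, inverse_laplace_transform, simplify

s, t = symbols('s t', positive=True)
F = 1 / (s + 2)   # assumed example transform, not from the thread

f = inverse_laplace_transform(F, s, t)        # exp(-2*t), up to a unit-step factor
g = inverse_laplace_transform(10 * F, s, t)   # ten times the same function

print(f)
print(g)
print(simplify(g - 10 * f))                   # 0, i.e. linearity holds
```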
http://www.physicsforums.com/showthread.php?t=334870
Physics Forums

## RMS and the Pythagorean Theorem

Today I was thinking about the root mean square, and I figured out a definite relationship with the Pythagorean theorem. Specifically, the root mean square of the legs of a right triangle is equal to the "average leg," i.e. the leg of a square with the hypotenuse as its diagonal. It appears to me that this is a fairly interesting and important connection, certainly applying to distance in Cartesian coordinates and maybe explaining the usefulness of the RMS. However, when I googled it, nothing came up! I'm trying to see if this relationship has any meaning whatsoever, which I believe it should, and if so, what it means. I'm pretty sure my math isn't wrong.

Hi kotreny,
It's a nice observation. The geometric meaning is illustrated in the figure (I modified the figure found at http://mathworld.wolfram.com/PythagoreanMeans.html).
Best, Mathador

Thanks for the attachment, but there seems to be something wrong. I tried substituting $a=4$ and $b=3$. The RMS is then $5/\sqrt{2}$. But the line labeled RMS in the figure should be of length $7/\sqrt{2}$, right? It looks like the diagonal of a square with side $A$ (the arithmetic mean, which would be $7/2$), and the diagonal is always equal to the side times $\sqrt{2}$. Correct me if I'm wrong. Thanks again!

## RMS and the Pythagorean Theorem

Quote by kotreny:
Today I was thinking about the root mean square, and I figured out a definite relationship with the Pythagorean theorem. Specifically, the root mean square of the legs of a right triangle is equal to the "average leg," i.e. the leg of a square with the hypotenuse as its diagonal.

I should clarify exactly what I mean. Let's say you have a right triangle with legs $a$ and $b$ and hypotenuse $c$. The Pythagorean theorem says that $a^2 + b^2 = c^2$. Now, the root mean square of the two legs is $\sqrt{(a^2 + b^2)/2}$. But wait! Combine the two equations to get
$$\text{RMS of } a \text{ and } b = \sqrt{c^2/2} = c/\sqrt{2}.$$
Now imagine a 45-45-90 triangle with legs equal to $c/\sqrt{2}$. The length of the hypotenuse would then be $c/\sqrt{2} \cdot \sqrt{2}$, which is equal to $c$. The conclusion is that the RMS of legs $a$ and $b$ gives you the leg of a 45-45-90 triangle with the same hypotenuse $c$. A little extra work shows that it applies to $3$ or more dimensions too. I'll bet this is used to find average vector components, or something, though they probably don't take the time to mention the connection with the RMS. I dunno. Does standard deviation have something to do with this?

http://en.wikipedia.org/wiki/Standar...interpretation
This is essentially what I'm talking about, although worded differently. It seems strange that nowhere else mentions it; you'd think this is an important fact! So can standard deviation really, formally be visualized like this? If you take a data set, can each data point's deviation be considered as inhabiting its own "dimension"? Does it have mathematical significance at all? I'd love to get some answers, opinions, and especially corrections! Any feedback would be appreciated. Please comment, and thanks very much!
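The claimed identity is a one-liner to check numerically; here is a minimal Python sketch (not part of the thread):

```python
import math

a, b = 3.0, 4.0
c = math.hypot(a, b)                # hypotenuse of the right triangle, = 5
rms = math.sqrt((a**2 + b**2) / 2)  # root mean square of the two legs
iso_leg = c / math.sqrt(2)          # leg of the 45-45-90 triangle with hypotenuse c

print(rms, iso_leg)                 # both 3.5355..., as the post claims
```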
http://math.stackexchange.com/questions/tagged/proof-strategy+propositional-calculus
# Tagged Questions

### is this argument true? (2 answers, 64 views)
I had a puzzle and used a logical argument to show a point, but some say that my argument is wrong, and I can't understand the reason they provide. The puzzle says: Given four cards laid out on a ...

### How to prove this with induction (2 answers, 89 views)
$$(P_0 \lor P_1 \lor P_2\lor\ldots\lor P_n) \rightarrow Q$$ is the same as $$(P_0 \rightarrow Q) \land (P_1 \rightarrow Q) \land (P_2 \rightarrow Q) \land\ldots\land(P_n \rightarrow Q).$$ Do I ...

### Prove that a formal system is absolutely inconsistent (1 answer, 92 views)
I'm using the book An Introduction to Mathematical Logic and Type Theory: To Truth Through Proof, and it does not have any solutions and barely any examples. I want to understand how to prove that all ...

### Prove equivalence $(P \Rightarrow Q) \land (P \Rightarrow R) \Leftrightarrow P\Rightarrow(Q\land R)$ (4 answers, 192 views)
Prove the equivalence $$(P \Rightarrow Q) \land (P \Rightarrow R) \Leftrightarrow P\Rightarrow(Q\land R).$$ What are the steps for showing the equivalence of these statements? I can first break down the ...

### Providing a counterexample for a logic statement (2 answers, 138 views)
How do I give a counterexample to the following statement (I think the statement is false): There exists $x \geq 0$ s.t. (for all real $y$, $x = y^2$). Since the statement has a "There ...

### Is the set of self-dual connectives incomplete? (2 answers, 75 views)
An $n$-ary connective $\$$ is called self-dual if $f_{\$}(x_1^*, \ldots , x_n^*) = (f_{\$}(x_1, \ldots , x_n))^*$, where $0^* = 1$ and $1^* = 0$. How to show that the set of such self-dual connectives ...

### propositional logic - substitution (0 answers, 156 views)
Prove: $\varphi_1 =\!\mathrel|\mathrel|\!= \varphi_2 \implies \varphi_1[\psi/p] =\!\mathrel|\mathrel|\!= \varphi_2[\psi/p]$. We've proven that $\varphi_1 =\!\mathrel|\mathrel|\!= \varphi_2 \implies$ ...

### I want a clear explanation for the Principle of Strong Mathematical Induction (2 answers, 509 views)
I understood the Principle of Mathematical Induction. I know how to make a recursive definition. But I am stuck on how the "Principle of Strong Mathematical Induction (the Alternative Form)" ...

### Predicate Logic Argument Validity (2 answers, 128 views)
My question is whether or not the following is a validly structured argument: (P→T)→Q; Q→¬Q; ∴ P. I'm getting hung up on the Q→¬Q part by itself as a premise; it doesn't seem like that is ...

### Logical Equivalence (3 answers, 123 views)
Determine whether the following pairs of statements are logically equivalent or not. Give a reason. (i) $p \to (q \to r)$ and $(p \to q) \to r$; (ii) $p \to (q \to r)$ and $q \to (p \to$ ...

### inference rules application (introduction / elimination): two examples (3 answers, 369 views)
Got stuck while trying out how to apply inference rules (introduction and elimination) for the following examples: From $\lnot(P\land Q)$ and $P$ infer $\lnot Q$. From $P\lor Q$ and $Q$ infer $\lnot$ ...

### Understanding this proof by contradiction (3 answers, 322 views)
Let $c$ be a positive integer that is not prime. Show that there is some positive integer $b$ such that $b \mid c$ and $b \leq \sqrt{c}$. I know this can be proved by contradiction, but I'm not ...
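Equivalences like the one in the fourth question above can always be settled mechanically by checking all valuations. A small Python truth-table sketch (not taken from any of the listed questions):

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# Brute-force check of (P -> Q) and (P -> R)  <=>  P -> (Q and R)
for P, Q, R in product([False, True], repeat=3):
    lhs = implies(P, Q) and implies(P, R)
    rhs = implies(P, Q and R)
    assert lhs == rhs
print("equivalent on all 8 valuations")
```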
http://mathoverflow.net/questions/41041/if-two-homogeneous-algebraic-varieties-are-isomorphic-are-they-necessarily-rel/41094
## If two “homogeneous” algebraic varieties are isomorphic, are they necessarily related by a linear map?

Let $I,J$ be homogeneous ideals in the algebra of polynomials in $n$ variables over the complex numbers. Let $V(I)$ and $V(J)$ be the affine algebraic varieties that are determined by $I$ and $J$ (not the projective varieties). Suppose that $V(I)$ and $V(J)$ are isomorphic as algebraic varieties. By this I mean that there are polynomial maps $f$ and $g$ from $\mathbb{C}^n$ to itself, such that $f$ restricted to $V(I)$ is a bijection onto $V(J)$, and such that $g$ restricted to $V(J)$ is its inverse. The question is this: does it follow that there exists a linear map on $\mathbb{C}^n$ that maps $V(I)$ onto $V(J)$?

Thanks to discussions with colleagues (thank you David and Mike), I am quite convinced that if we assume that the origin is the only singular point in $V(I)$ then the answer is yes. Is this true in general? I think this question is equivalent to the following (see my partial answer below): Is it true that whenever there is an isomorphism between $V(I)$ and $V(J)$, there is also an isomorphism that fixes $0$?

- 1 The body of your question is not a question, nor is it related to the title as far as I can see! – Mariano Suárez-Alvarez Oct 4 2010 at 17:59
- I apologize, the question was saved before I completed typing. It is now a complete question. – Orr Shalit Oct 4 2010 at 18:03

## 4 Answers

Trying to generalize Torsten's answer: It seems that if the cones are isomorphic then the isomorphism can indeed be chosen to preserve the origin. For a projective variety $V$ let's denote the affine cone by $C(V)$. Torsten says that if $V$ is not a (projective) cone then in $C(V)$ the "cone point" $0$ is the unique point of maximum multiplicity. $V$ is a projective cone if it is the join of a point $\mathbb P^0$ in the ambient projective space with a variety $W$ in a hyperplane (a hyperplane not containing that point). In this case $C(V)$ is the product of $C(\mathbb P^0)=\mathbb A^1$ and $C(W)$. In the general case $V$ is the join of a linear $\mathbb P^{d-1}$ with some $W$ which is not itself a projective cone, and then $C(V)=C(\mathbb P^{d-1})\times C(W)=\mathbb A^d\times C(W)$. Surely Torsten's statement generalizes to say that the points of maximal multiplicity in $C(V)$ are now those in $\mathbb A^d\times 0$. So, given $V_1$ and $V_2$ such that $C(V_1)$ and $C(V_2)$ are isomorphic, the two numbers $d_1$ and $d_2$ must be equal, and if the isomorphism does not carry $0$ to $0$ then it can be adjusted to do so using translations in $\mathbb A^d$.

EDIT in response to Orr's comment: Here's what I mean, in your notation. Let $I$ be a homogeneous ideal in $k[x_1,\dots ,x_n]$ corresponding to some "homogeneous" affine variety $V(I)\subset\mathbb A^n$ and let $P(I)\subset \mathbb P^{n-1}$ be the variety that the same ideal defines. (I believe $V(I)$ is called the affine cone on $P(I)$.) It might happen that after some linear change of coordinates $I$ becomes an ideal generated by a homogeneous ideal $I_1$ in $k[x_1,\dots ,x_p]$ and a homogeneous ideal $I_2$ in $k[x_{p+1},\dots ,x_n]$. If so, then $V(I)$ is the product of $V(I_1)\subset \mathbb A^p$ and $V(I_2)\subset \mathbb A^{n-p}$, and I believe that $P(I)$ would be called the join of the projective varieties $P(I_1)\subset \mathbb P^{p-1}$ and $P(I_2)\subset \mathbb P^{n-p-1}$.
In particular if $p=n-1$ and $I_2=0$ then $V(I)=V(I_1)\times \mathbb A^1$ and $P(I)$ is called the projective cone on $P(I_1)$. Torsten is arguing that if $P(I)$ is not a cone, i.e. if there is no linear change of variable such that $I$ is generated by polynomials not involving the last coordinate, then the origin is intrinsically characterized as the unique point in $V(I)$ of maximal multiplicity.

I am saying that one can treat the general case in the same way, as follows: Suppose that $P(I)$ is a cone, or a cone on a cone, or ... as far as you can go. That is, make a linear change of variables so that $I$ is generated by polynomials in the first $p$ coordinates with $p$ as small as possible. Thus $V(I)$ is the product of some $V(I_1)$ with $\mathbb A^{n-p}$ and $P(I)$ is the join of the corresponding $P(I_1)$ with $\mathbb P^{n-p-1}$. Now in $V(I)=V(I_1)\times \mathbb A^{n-p}$ the points of $0\times \mathbb A^{n-p}$ are the points of maximum multiplicity, and furthermore any one of them is carried to $0=(0,0)$ by some automorphism of $V(I)$, since $\mathbb A^{n-p}$ has an automorphism group that acts transitively.

The idea of iterated singular locus is not quite so successful. In most cases if the projective variety $S(P(I))$ is $P(J)$ then the homogeneous affine variety $S(V(I))$ will be $V(J)$. In the extreme case when $P(I)$ is smooth, so that $S(P(I))$ is empty, $S(V(I))$ will be $0$, with the exception that if $P(I)$ is a projective space (linearly embedded in $\mathbb P^{n-1}$) then $V(I)$ will be an affine space (linearly embedded and containing the origin) whose singular locus is empty rather than $0$. Thus in the sequence $V(I)$, $S(V(I))$, ... the last nonempty thing will be an affine space, possibly $0$ or possibly bigger. But it will not always be the same thing as before (the maximal affine space such that $V(I)$ is in a linear fashion the product of it with something). For example, if $n=3$ and $P(I)$ is a projective curve which has exactly one singular point but which is not simply the union of lines through that point, then the singular locus of the homogeneous surface $V(I)\subset \mathbb A^3$ will be a line through the origin, but there will be no automorphism of $V(I)$ moving $0$ to another point in that line (or to anything other than $0$).

- Thanks! Let me see if I get this right. You are saying that an affine variety $V = V(I)$ ($I$ homogeneous) is always of the form $\mathbb{A}^d \times C(W)$, where $W$ is a projective variety, and that this $\mathbb{A}^d$ is a geometric invariant. Thus, translations along this space leave $V$ invariant, so we get back the case $0$ goes to $0$. I still do not understand why every $V$ has this form. Regarding this, another question: one can look at the singular locus of $V$, $S(V)$, and then look at $S(S(V))$ and so on, until you get a subspace. Is this subspace the $\mathbb{A}^d$ from above? – Orr Shalit Oct 9 2010 at 12:51

Yes, assuming at least that the cone point is the only singular point. Hence any isomorphism will preserve the ideal of that point, which is the ideal of elements of positive degree. (I think that the cone point should in some sense be the most singular point in general, and hence would still be preserved.)
You can think of the isomorphism and its inverse as a graded isomorphism where the coordinate rings are graded by the powers of the ideal of the cone point. They then give graded isomorphisms of the associated graded rings. These associated graded rings are however the original coordinate rings. Hence we get a graded isomorphism of the coordinate rings, and these isomorphisms are equal to those induced by the linear maps on the degree $1$ part.

Addendum: You can considerably weaken the condition that the cone point is the only singular point. Assume that the associated projective schemes of the ideals are varieties which are not cones. Then the multiplicity of any point outside of the cone point is equal to the multiplicity of the image point on the projective variety, and that multiplicity is smaller than the degree of the variety. That degree however is just the multiplicity of the cone point. Hence, the cone points are the points of maximum multiplicity on the respective varieties and are therefore taken to each other by an isomorphism. It seems likely that the case of the projective varieties being cones can be further analysed.

- Thank you for the answer. I understand the first part. It would be interesting to know if the requirement that the cone point be the only singular point can be removed completely. I do not understand the addendum - is there perhaps an elementary explanation? – Orr Shalit Oct 4 2010 at 22:55

A (projective) line is isomorphic to a conic, but not via a linear map. Do you mean to ask whether two projective varieties whose degrees are equal would be isomorphic via a linear map? That still does not seem to be true: take a variety with two non-linearly equivalent very ample divisors of the same degree. The embeddings by the complete linear systems of these two divisors give isomorphic varieties (you can easily make them live in the same projective space), but they cannot be mapped into one another by a linear map, because that would imply that the two original divisors are linearly equivalent. The above condition should be easy to satisfy as soon as the Picard number is larger than $1$.

- This answer was posted before the correction to the question... – Sándor Kovács Oct 4 2010 at 18:17

I'll share here two proofs of the fact I am interested in, under the assumption that the origin is the only singular point.

Proof 1: (This is equivalent, I think, to Torsten's answer, and was explained to me by colleagues - thank you David and Mike.) Let $F: V(I) \rightarrow V(J)$ be an isomorphism. Since $0$ is the only singular point in $V(I)$, it is mapped to $0 \in V(J)$. Now the derivative of $F$ at $0$, call it $DF$, is a linear map on $\mathbb{C}^n$ that takes the tangent cone of $V(I)$ at $0 \in V(I)$ to the tangent cone of $V(J)$ at $0 \in V(J)$. But these tangent cones are $V(I)$ and $V(J)$, respectively. Thus, $DF$ is the required linear map taking $V(I)$ onto $V(J)$.

Proof 2: (There are some technical details missing here.) Let $F: V(I) \rightarrow V(J)$ be an isomorphism. Again, $F$ must take $0$ to $0$, under the assumption that $0$ is the only singular point. Thus, $F$ has the form
$$F(z) = Az + \textrm{higher order terms}.$$
Now define $F_t$ by
$$F_t(z) = tF(z/t).$$
Since $I$ and $J$ are homogeneous, $V(I)$ and $V(J)$ are invariant under scalings, so $F_t$ is again an isomorphism of $V(I)$ and $V(J)$.
But $F_t$ has the form
$$F_t(z) = Az + \frac{1}{t}(\textrm{higher order terms}).$$
Taking $t \rightarrow \infty$, we converge to the isomorphism (hopefully) $z \mapsto Az$.

That's for the case when $0$ is the only singular point. In fact, all that is used is that there is an isomorphism taking $0$ to $0$. Is it true that whenever there is an isomorphism between $V(I)$ and $V(J)$, there is also an isomorphism that fixes $0$? (Here $I$ and $J$ are assumed homogeneous, of course.)
http://math.stackexchange.com/questions/tagged/differential-forms+riemannian-geometry
# Tagged Questions

### A $k$-form is thought of as measuring the flux through an infinitesimal $k$-parallelepiped (0 answers, 35 views)
On Wikipedia it is written that "a $k$-form is thought of as measuring the flux through an infinitesimal $k$-parallelepiped." How does a $k$-form do this? If this sentence is right, then the flux of which ...

### Relating volume elements and metrics. Does a volume element + uniform structure induce a metric? (1 answer, 90 views)
AFAIK a metric uniquely determines the volume element up to sign, since a metric will determine the length of supplied vectors and the angle between them, but I do not see a way ...

### Diffeomorphic Riemannian manifolds and volume forms (1 answer, 85 views)
Maybe the question will be stupid, but I'm a beginner in Riemannian geometry... We have two Riemannian manifolds $(M,g)$, $(\overline M,\overline g)$ and a diffeomorphism $F:M\rightarrow\overline M$ ...

### Existence of Spin Group (0 answers, 76 views)
"In mathematics the spin group Spin(n) is the double cover of the special orthogonal group SO(n), such that there exists a short exact sequence of Lie groups. As a Lie group Spin(n) therefore ..."

### Differential forms and a chain rule (0 answers, 201 views)
Let $U$ be a Riemann surface and let $z:U\longrightarrow B(0,1)$ be a diffeomorphism, where $B(0,1)$ is the open unit disc in $\mathbf{C}$. So $z$ is a coordinate around $P=z^{-1}(0)$. Let $Q\in U$ ...
http://cms.math.ca/10.4153/CJM-2012-037-2
Canadian Mathematical Society, Canadian Journal of Mathematics (CJM)

# Explicit models for threefolds fibred by K3 surfaces of degree two

Alan Thompson, Department of Mathematical and Statistical Sciences, University of Alberta, Edmonton, AB, Canada, T6G 2G1

## Abstract

We consider threefolds that admit a fibration by K3 surfaces over a nonsingular curve, equipped with a divisorial sheaf that defines a polarisation of degree two on the general fibre. Under certain assumptions on the threefold we show that its relative log canonical model exists and can be explicitly reconstructed from a small set of data determined by the original fibration. Finally we prove a converse to the above statement: under certain assumptions, any such set of data determines a threefold that arises as the relative log canonical model of a threefold admitting a fibration by K3 surfaces of degree two.

Keywords: threefold, fibration, K3 surface

MSC Classifications:
14J30 - $3$-folds [See also 32Q25]
14D06 - Fibrations, degenerations
14E30 - Minimal model program (Mori theory, extremal rays)
14J28 - $K3$ surfaces and Enriques surfaces
http://mathhelpforum.com/calculus/176738-center-mass-solid.html
# Thread:

1. ## Center of mass of a solid

Hi, I'm confused about how to calculate the center of mass of a solid that lies outside one sphere and inside another. The problem goes like this:

Determine the center of mass of a solid located outside a sphere of radius 1 centered at the origin, and inside a sphere of radius 1 centered at the point $\left(0,0,1\right)$.

All I get from my textbook is the mass formula:
$m=\int\int\int_{E} \rho(x,y,z)\,\mathrm{d}V$
And then, if the center of mass is actually the same thing as a center of inertia, I would have to calculate the moments related to each coordinate plane. How can I do that when I've got two spheres, one above the other? I believe their equations are respectively:
$x^2+y^2+z^2=1$ and $x^2+y^2+(z-1)^2=1$
as plotted below. Can you please give me a hint? I'm blocked there.

Thanks a lot,
Bazinga

2. Hi,

The center of mass $G$ of a solid is given by the formula
$\int\int\int_{E} \rho(x,y,z)\, \vec{OM}\,\mathrm{d}V = \left(\int\int\int_{E} \rho(x,y,z)\, \mathrm{d}V\right) \vec{OG}$
If the solid is homogeneous, $\rho(x,y,z)$ is constant (independent of $x$, $y$ and $z$) and can therefore be pulled out of the integral:
$\int\int\int_{E}\vec{OM}\,\mathrm{d}V = \left(\int\int\int_{E} \mathrm{d}V\right) \vec{OG}$
In your example, as far as I have understood, the solid is axisymmetric with respect to the $z$ axis, therefore $G$ is on the $z$ axis.

The projection of the formula onto the $z$ axis gives:
$\int\int\int_{E} z\, r\, \mathrm{d}r\, \mathrm{d}\theta\, \mathrm{d}z = \left(\int\int\int_{E} r\, \mathrm{d}r\, \mathrm{d}\theta\, \mathrm{d}z\right) z_G$
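Following the hint, the region can be sliced by height: the two spheres intersect in the plane $z=\tfrac12$, and the cross-section at height $z$ is bounded by $r^2 = 1-z^2$ (staying outside the sphere at the origin) and $r^2 = 2z-z^2$ (staying inside the sphere at $(0,0,1)$). Here is a SymPy sketch of the resulting one-variable integrals, assuming constant density; the exact values in the comments are what this computation yields:

```python
from sympy import symbols, integrate, pi, Rational

z = symbols('z')
# Horizontal cross-section at height z is an annulus of area pi*(r_out^2 - r_in^2):
#   1/2 <= z <= 1 : r^2 runs from 1 - z^2 (outside the sphere at the origin)
#                   up to 2z - z^2 (inside the sphere at (0,0,1))
#   1   <= z <= 2 : r^2 runs from 0 up to 2z - z^2
vol = pi * (integrate((2*z - z**2) - (1 - z**2), (z, Rational(1, 2), 1))
            + integrate(2*z - z**2, (z, 1, 2)))
mom = pi * (integrate(z*((2*z - z**2) - (1 - z**2)), (z, Rational(1, 2), 1))
            + integrate(z*(2*z - z**2), (z, 1, 2)))

print(vol)        # 11*pi/12
print(mom / vol)  # 27/22, the z-coordinate of the center of mass
```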
http://mathoverflow.net/revisions/115138/list
## Return to Question

# Particles chasing one another around a circle

Two particles start out at random positions on a unit-circumference circle. Each has a random speed (distance per unit time), moving counterclockwise, uniformly distributed within $[0,1]$. How long until they occupy the same position? In the example below, the red particle catches the green particle at $t=5.9$, i.e., nearly six times around the circle:

The distribution of overtake-times is quite skewed, indicating perhaps the mean could be $\infty$. For example, in one simulation run, it took more than $3$ million times around the circle before one particle finally caught the other. So I don't trust the means I am seeing (about $25$). What is the distribution of overtake-times?

I was initially studying $n$ particles on a circle, but $n=2$ seems already somewhat interesting...

Update (2Dec12). Alexandre Eremenko concisely established that the expected overtake-time (the mean) is indeed $\infty$. But I wonder what is the median, or the mode? Simulations suggest the median is about $1.58$ and the mode of rounded overtake-times is $1$, reflecting a distribution highly skewed toward rapid overtake. (The median is suspiciously close to $\pi/2$ ...)

Update (3Dec12). Fully answered now with Vaughn Climenhaga's derivation of the distribution, which shows that the median is $1 + \frac{1}{\sqrt{3}} \approx 1.577$.
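For two particles the overtake time has a closed form that makes the quoted statistics easy to reproduce: if the faster particle trails the slower one by a counterclockwise gap $g$ (measured mod $1$), it catches up at $t = g/(v_1-v_2)$. A minimal Monte Carlo sketch (not from the question):

```python
import random
import statistics

def overtake_time():
    x1, x2 = random.random(), random.random()   # positions on the unit circle
    v1, v2 = random.random(), random.random()   # counterclockwise speeds in [0,1]
    while v1 == v2:                             # probability-zero tie; guard anyway
        v2 = random.random()
    if v1 < v2:                                 # relabel so particle 1 is faster
        x1, x2, v1, v2 = x2, x1, v2, v1
    gap = (x2 - x1) % 1.0                       # arc particle 1 must make up
    return gap / (v1 - v2)

times = [overtake_time() for _ in range(200_000)]
print(statistics.median(times))   # about 1.577, matching 1 + 1/sqrt(3)
```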
http://math.stackexchange.com/questions/150480/bounding-int-01-fx-dx-given-int-01-fx2-dx-leq-1-and-f0-0/150481
# Bounding $\int_0^1 f(x) dx$, given $\int_0^1 f'(x)^2 dx \leq 1$ and $f(0) = 0$.

Let $S$ be the set of all differentiable functions $f \colon [0,1] \rightarrow \mathbb{R}$ such that $\int_0^1 f'(x)^2 dx \leq 1$ and $f(0) = 0$. Define $J(f) := \int_0^1 f(x) dx$. Show that $J$ is bounded on $S$, find its supremum, and see if there is a function $f_0$ in $S$ at which $J$ attains its maximum value.

- Looks a lot like a calculus of variations problem. – toypajme May 27 '12 at 19:14

## 1 Answer

To get you started:

1. Integration by parts yields $J(f)=\int_0^1 (1-x)f\,'(x)\,dx$.
2. Cauchy-Schwarz will appear sooner or later.
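For completeness, here is one way the hint can be finished (a sketch, not part of the original answer). By Cauchy-Schwarz,
$$J(f)=\int_0^1 (1-x)f'(x)\,dx \le \left(\int_0^1 (1-x)^2\,dx\right)^{1/2}\left(\int_0^1 f'(x)^2\,dx\right)^{1/2} \le \frac{1}{\sqrt{3}},$$
since $\int_0^1 (1-x)^2\,dx = \frac{1}{3}$. Equality in Cauchy-Schwarz forces $f'(x)=c\,(1-x)$, and the constraint $\int_0^1 f'(x)^2\,dx = 1$ gives $c=\sqrt{3}$, so the supremum $1/\sqrt{3}$ is attained at $f_0(x)=\sqrt{3}\left(x-\frac{x^2}{2}\right)$.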
http://mathhelpforum.com/advanced-algebra/87291-find-characteristic-roots-characteristic-vector-print.html
# Find the characteristic roots and characteristic vector

• May 3rd 2009, 10:27 PM zorro

Find the characteristic roots and characteristic vectors of the matrix $A$ given by
$A= \begin{bmatrix} 1 & 2 & 3 \\ 0 & 2 & 3 \\ 0 & 0 & 2 \end{bmatrix}$

• May 4th 2009, 01:10 AM Gamma

This matrix is already in diagonal form. You find the roots of the characteristic polynomial $\det(A-xI)$. In this case, they are just the entries on the diagonal. To find the "characteristic vectors" I assume you are talking about the eigenvectors. Simply put the eigenvalue in for $x$ in $A-xI$ (just subtract the value from the diagonal). Then you set that equal to $0$ and find a basis for this solution space (the null space). This is a basis for the kernel of $A-\lambda I$.

• May 4th 2009, 01:28 AM mr fantastic

Quote: Originally Posted by Gamma
This matrix is already in diagonal form. [snip]

I'm sure you meant to say upper triangular form. (By the way, welcome and thanks for all your help so far at MHF (Clapping))

• May 4th 2009, 01:43 AM Gamma

Whoa! Whoops. Yeah, exactly. It is 4:42 am here, I think it is bedtime, lol. Take care.

• December 19th 2009, 10:23 PM zorro

I am stuck here

Quote: Originally Posted by mr fantastic
I'm sure you meant to say upper triangular form. ...

I have got $\lambda = 1,2$. Taking $\lambda = 1$:
$\begin{bmatrix} 0 & 2 & 3\\ 0 & 1 & 3 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix}0 \\ 0 \\ 0 \end{bmatrix}$
$\Rightarrow \begin{cases} 2y + 3z = 0 \\ y + 3z =0 \\ z = 0 \end{cases}$
I am stuck here. What should I do from here onwards?

• December 19th 2009, 10:42 PM CaptainBlack

That should tell you that $y=0$, $z=0$ and $x$ is arbitrary. So take:
$\bold{x}_{(\lambda=1)}=\left[\begin{array}{c}1\\0\\0 \end{array}\right]$
as the eigenvector corresponding to the eigenvalue $\lambda=1$.

When you work this for the other eigenvalue (which has multiplicity 2) you will find the corresponding eigenvector (defined up to an arbitrary constant).

CB

• December 20th 2009, 12:39 AM zorro

Is this what you are talking about?

Quote: Originally Posted by CaptainBlack
That should tell you that $y=0$, $z=0$ and $x$ is arbitrary. ...

$\lambda = 2$:
$\begin{bmatrix} -1 & 2 & 3 \\ 0 & 0 & 3 \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \Rightarrow \begin{cases} -x + 2y + 3z = 0 \\ 3z = 0 \end{cases}$
$\therefore z = 0$ and $-x + 2y = 0$.
You are correct, so what should I do now?
• December 20th 2009, 01:49 AM dedust

Quote: Originally Posted by zorro
$\lambda = 2$: ... $\therefore z = 0$ and $-x + 2y = 0$. You are correct, so what should I do now?

Let $y=t$, $t \in \mathbb{R}$; then $x=2t$, and the solution is $(x,y,z) = (2t, t, 0) = t(2, 1, 0)$, so the eigenvector is
$\bold{x}_{(\lambda=2)}=\left[\begin{array}{c}2\\1\\0 \end{array}\right]$
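The whole computation is easy to verify by machine. A SymPy sketch (not part of the original discussion); note that the eigenvalue $2$ has algebraic multiplicity $2$ but only a one-dimensional eigenspace, so the single vector $(2,1,0)$ found above is all there is:

```python
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [0, 2, 3],
            [0, 0, 2]])

# eigenvects() yields triples (eigenvalue, algebraic multiplicity, eigenspace basis)
for val, mult, basis in A.eigenvects():
    print(val, mult, [list(v) for v in basis])
# 1  1  [[1, 0, 0]]
# 2  2  [[2, 1, 0]]   <- a single basis vector: A is not diagonalizable
```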
http://mathoverflow.net/questions/62859/simpler-statements-equivalent-to-conpa-or-conzfc/63031
## “Simpler” statements equivalent to Con(PA) or Con(ZFC)?

Given any reasonable formal system F (e.g., Peano Arithmetic or ZFC), we all know that one can construct a Turing machine that runs forever iff F is consistent, by enumerating the theorems of F and halting if it ever proves 0=1. However, what interests me here is that the "obvious" such Turing machine will be an extremely complicated one. Besides the axioms of F, it will need to encode the symbols and inference rules of first-order logic, which (among other things) presumably requires writing a parser for context-free expressions. If you actually wrote the Turing machine out, it might have millions of states! Even in a high-level programming language, the task of writing a program that enumerates all the theorems of ZFC is not one that I'd casually give as homework.

Notice that this situation stands in striking contrast to that of universal Turing machines, which we've known since the 1960s how to construct with an extremely small number of states (albeit usually at the price of a complicated input encoding). It also contrasts with the observation that very small Turing machines can already exhibit "complicated, unpredictable" behavior: for example, the 5th Busy Beaver number is still unknown, and it seems like a plausible guess that the values of (say) BB(10) or BB(20) are independent of ZFC.

Thus my question: Is any "qualitatively simpler" class of computer programs known, which can be proved to run forever iff ZFC is consistent? Here, by "qualitatively simpler," I mean doing something that looks much more straightforward than enumerating all the first-order consequences of the ZFC axioms, but that can nevertheless be proved by some nontrivial theorem to be equivalent to such an enumeration. Feel free to replace ZFC by ZF, PA, or any other system to which Gödel's Theorem applies if it makes a difference.

This question is clearly related to the well-known goal of finding "natural" or "combinatorial" statements that are provably independent of PA or ZFC, but it's not identical. For one thing, I don't demand that your statement have any independent mathematical interest---just that the computer program corresponding to your statement be easier to write than a program that enumerates all ZFC-theorems! One concrete goal would be to find the smallest n for which you can prove that the value of BB(n) (the nth Busy Beaver number) is independent of ZFC. (It's clear that BB(n) is independent of ZFC for all n≥n0, where n0 is the number of states in a Turing machine that enumerates all ZFC-proofs and halts if it proves 0=1.) As a first step, though, I'll be delighted to learn of any theorem that simplifies the task of writing proof-enumerating programs. (Even if the programs are still expressed in a high-level formalism, and are still horrendously complicated when compiled down to Turing machines.)

- 1 Scott, isn't the enumeration of all theorems of ZFC almost surely non-computable, though definable? I.e., the hard-to-program Turing machine you are suggesting would be of infinite length? – Halfdan Faber Apr 24 2011 at 23:44
- 4 No. There's certainly a finite TM that enumerates all the theorems of ZFC (for example, in increasing order of their Godel numbers). It's just a large finite TM. – Scott Aaronson Apr 24 2011 at 23:51
- 1 Scott: Ok, I figured you were after $\Pi^0_1$ statements.
That has actually been the subject of recent work (by Friedman and others). The link I posted is then precisely in this direction (though perhaps still unsatisfactory). I'll try to expand this into an answer a bit later. – Andres Caicedo Apr 25 2011 at 0:49

- 1 Scott: RCA0 is weaker than PA. "SRP" stands for "stationary Ramsey property" and is a large cardinal axiom, not provable in ZFC. – Timothy Chow Apr 25 2011 at 14:33
- 2 @Scott: In your first comment (replying to Halfdan Faber), the part in parentheses, about enumerating the theorems in order of Gödel numbers, is wrong. If you could do that, you'd have a decision procedure; to tell whether a given sentence S is a theorem of ZFC, start the enumeration, wait until something with larger Gödel number than S is enumerated, and see whether S has been enumerated by then. What can be done is to enumerate the theorems of ZFC in order of the (first) Gödel numbers of their proofs. – Andreas Blass Apr 26 2011 at 21:49

## 3 Answers

The discussion in the comments has helped clarify your question for me. I believe that it is closely related to the following remark by Harvey Friedman:

I am convinced that trying to take consistency statements like Con(ZFC + measurable cardinals) or Con(ZFC + rank into itself), Con(ZF + inaccessible rank into itself), etc., and force them into smaller and smaller Turing machines not halting, with demonstrable equivalence in an extremely weak system, is an open ended project, in the practical sense, that will create a virtually unlimited opportunity, in the practical sense, for a stream of ever and ever deeper mathematical ideas. Ideas that could come from unexpected sources, ideas that could have independent deep ramifications, ideas that -- well who knows what to expect. The benchmark is completely clear - how many quadruples? At the very least, deep ideas about set theory and large cardinals, but probably much more diverse deep ideas about, well, the unexpected. Any branch of mathematics whatsoever might prove useful, or even crucial, here. Whereas, we don't think that "any branch of mathematics" might be useful in logic problems, normally. This is different.

Other relevant postings from the FOM archives may be found here and here. So Friedman's work, which Andres Caicedo alluded to in the comments, is probably the closest thing to what you want. I don't know of any other people who are working actively on this kind of project. As a side remark, I believe that your intuition is correct that there is some kind of intrinsic "complexity" to the statement "ZFC is consistent," and roughly speaking it is because the totality of mathematical knowledge is a "complex" entity.

- Btw, are we all certain that the required state count of a theorem-enumerating TM is essentially that of any context-free grammar generator? Might the first-order logic inference rules be easier/harder to apply? – Liron Apr 26 2011 at 14:21
- Timothy, thanks so much for digging up that quote! Friedman beautifully expresses precisely what I was getting at with this question. – Scott Aaronson Apr 26 2011 at 16:15
- As a side comment, I believe that Friedman's intuition is that the "complexity" here lies with the axioms, rather than with the rules of inference of classical first-order logic. It might be interesting to see how small a Turing machine is needed to check that there aren't two validities of first-order logic that are negations of each other.
(Though thinking about it now, I guess we might want to pick a base theory that doesn't prove the consistency of first-order logic, and such a theory might be so weak that we can't prove much at all in it. I'm not sure.) – Timothy Chow Apr 27 2011 at 14:13

http://mathoverflow.net/questions/32892/does-anyone-know-a-polynomial-whose-lack-of-roots-cant-be-proved

There is a polynomial in several variables that has an integer root iff ZFC is inconsistent. In other words, if the polynomial has $n$ variables, you can enumerate all $n$-tuples of integers and plug them into the polynomial until the result is 0. Since ZFC is consistent (as we all know), the computer program will run forever.

- 1 Ah, yes -- but as the author points out in the conclusion, that polynomial is extremely complicated (just like the Turing machine)! I'm looking for ideas/results from logic or other areas that could be used to cut down the complexity. – Scott Aaronson Apr 24 2011 at 21:52
- 2 To clarify: I don't see any reason to believe that writing a program to search for solutions to the polynomial would be simpler than the original task, of writing a program to search for contradictions in ZFC. (Maybe it will be, but I don't see why!) I know that there are many ways to transfer the complexity of the ZFC axioms + first-order logic from one setting to another. What interests me is that I've never seen a clear-cut way to reduce the complexity (but also know no reason why there shouldn't be one). – Scott Aaronson Apr 24 2011 at 22:01
- 2 I agree. The program is conceptually simple, just find the root of a given polynomial, but the complexity has been shifted into the list of coefficients. – Stefan Geschke Apr 24 2011 at 22:34
- 1 Closely related to Stefan's answer: mathoverflow.net/questions/51987/… – Andres Caicedo Apr 25 2011 at 0:19
- 3 Andrey Bovykin has a research program involving short Diophantine expressions equiconsistent with various theories, you might want to check it out. – Emil Jeřábek Apr 26 2011 at 14:01

Maybe I understand the question wrong, but I think you should specify the logic in which the equivalence is proven. This is a little bit similar to relative consistency. Then you have three logics: the two logics that you compare and the logic in which you prove the relative consistency. If the logic that proves the equivalence can prove Con(PA), then the equivalent Turing machine is quite simple: just a Turing machine that runs forever.

About the question itself. One reason why making a program that enumerates the theorems is hard is that a logic is non-deterministic (you have choices in the axioms and inference rules you use), while a programming language (and a Turing machine) is deterministic. The task becomes much easier if you take a non-deterministic language. To show that with a simple example, take Fermat's Last Theorem. To encode it as a halting problem, you have to diagonalize over a, b, c and n. This requires some amount of coding. However, in a non-deterministic language, you just choose a, b, c and n non-deterministically: a much simpler program.

Finally, the tricky part of programming the axioms and inference rules of a logic is the variable substitution and the problem that arises when variables are used multiple times.
I think it should be possible to get rid of variables entirely, which would make it easier to program, but less readable for humans. Lucas - Lucas, in order to talk about Turing machines we need something like PA or even a weak fragment of it. Most results in mathematics (like the incompleteness theorems, for example) can actually be proved within such a framework. That would be a reasonable starting point if you want to formalize things. – Stefan Geschke Apr 28 2011 at 20:02 Stefan, you write "want to formalize things". The point I tried to make is that the question requires the formalization; without it, it is not a proper question. – Lucas K. Apr 28 2011 at 20:11 I was imagining the equivalence would be provable in a weak fragment of PA, like most other "ordinary" mathematical statements. – Scott Aaronson Apr 29 2011 at 4:29
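To make the polynomial answer above concrete, here is a toy R sketch of the search loop Stefan Geschke describes. The stand-in polynomial is purely illustrative (my own choice): the actual polynomial whose integer roots encode an inconsistency of ZFC has many variables and enormous coefficients, and is not reproduced here.

````r
# Toy stand-in for the "root exists iff ZFC is inconsistent" polynomial.
# (x^2 - 2y^2 - 1)^2 vanishes exactly at integer solutions of a Pell equation.
toy_poly <- function(x, y) (x^2 - 2*y^2 - 1)^2

# Enumerate integer pairs by increasing sup-norm "height" and stop at a root;
# for the real ZFC polynomial this loop would (as we all believe) run forever.
search <- function(max_height) {
  for (h in 0:max_height)
    for (x in -h:h) for (y in -h:h)
      if (max(abs(x), abs(y)) == h && toy_poly(x, y) == 0)
        return(c(x, y))
  NULL  # no root found up to this height
}
search(5)  # finds a root, e.g. (-1, 0)
````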
http://math.stackexchange.com/questions/208436/finding-an-inverse-injection-for-a-surjection?answertab=votes
# Finding an inverse injection for a surjection

Question: Let $X$ and $Y$ be sets, and let $f\colon X\to Y$ be a surjection. Prove that there is an injection $g\colon Y\to X$ such that $f(g(y)) = y$ for every $y\in Y$. I do not have any idea how to prove this theorem; I could not even find a starting point. Could you please give me a hint? Regards - ## 2 Answers This cannot be proved "naively". Indeed, this question is an equivalent formulation of an axiom known as the Axiom of Choice. The axiom of choice says that given a family of non-empty sets, we can choose one element from each member of the family. Using the axiom of choice, note that for every $y\in Y$ the set $f^{-1}(y)$ is non-empty. We therefore have a function which chooses one element from each preimage; call it $g$. This function is as wanted: since $g(y)\in f^{-1}(y)$, we have $f(g(y))=y$. This also implies that $g$ is injective, because if $y_1\neq y_2$ then $g(y_1)$ and $g(y_2)$ are taken from disjoint sets. - thank you once more for your reply. I am wondering if there exists a book which includes theorems and proofs about sets & functions that I can check for help, because, for example, I spent too much time trying to prove this question naively; if I had a book like that I would just look it up and find the answer. – Amadeus Bachmann Oct 6 '12 at 23:09 @Zxy: Well, that depends on what you are looking to do with set theory. I think that you would do fine with an introductory chapter in calculus/algebra books or so. If you wish to study more set theory, there are some questions on the site about references for introductory set theory. – Asaf Karagila Oct 6 '12 at 23:16 Just pick any $g(y)\in f^{-1}\{y\}$ for every $y$. - 1 But why can you pick such $g(y)$? – Asaf Karagila Oct 6 '12 at 22:58 @AsafKaragila I was going to ask the same question. – Amadeus Bachmann Oct 6 '12 at 23:05
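A concrete illustration of when choice is not needed (my example, not from the thread): if every fibre has a canonical element, a section can be written down explicitly. Take $$f\colon \mathbb{Z}\to\mathbb{N},\qquad f(n)=|n|,\qquad f^{-1}(m)=\{m,-m\},$$ and define $g(m)=m$, the least non-negative element of the fibre; then $f(g(m))=m$ and $g$ is injective. The axiom of choice is needed precisely when the fibres carry no such uniform rule for singling out an element, which is why the general statement cannot be proved "naively".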
http://mathoverflow.net/questions/14944/have-people-successfully-worked-with-the-full-ring-of-diferential-operators-in-ch/14947
## Have people successfully worked with the full ring of differential operators in characteristic p?

This question is inspired by an earlier one about the possibility of using the full ring of differential operators on a flag variety to develop a theory of localization in characteristic $p$. (Here by the full ring of differential operators I mean the same thing as the ring of divided-power differential operators, which is the terminology used in the cited question.) My question is: Do people have experience using the full ring of differential operators successfully in characteristic $p$ (for localization, or other purposes)? I always found this ring somewhat unpleasant (its sections over affines are not Noetherian, and, if I recall correctly a computation I made a long time ago, the structure sheaf ${\mathcal O}_X$ is not perfect over ${\mathcal D}_X$). Are there ways to get around these technical defects? (Or am I wrong in thinking of them as technical defects, or am I even just wrong about them full stop?) EDIT: Let me add a little more motivation for my question, inspired in part by Hailong's answer and associated comments. A general feature of local cohomology in char. p is that you don't have the subtle theory of Bernstein polynomials that you have in char. 0. See e.g. the paper of Alvarez-Montaner, Blickle, and Lyubeznik cited by Hailong in his answer. What I don't understand is whether this means that (for example) localization with the full ring of differential ops is hopeless (because the answers would be too simple), or a wonderful prospect (because the answers would be so simple). - You probably knew this already, but this [conference](sites.google.com/site/frobeniussplitting) sounds like a cool place to be for this question. – Hailong Dao Feb 12 2010 at 20:20 Can you give a sense what kind of simplicity results you get for (presumably pushforwards of) D-modules from these results? – David Ben-Zvi Feb 19 2010 at 15:00 ## 4 Answers Smith and Van den Bergh worked with it, and got some lovely results, in this paper. For example, they show that direct summands of polynomial rings have simple rings of differential operators in positive characteristic. This is still open (as far as I know) in characteristic 0. It's a particularly interesting paper for the connections it makes with representation types. - This might not be what you are looking for, as they use the actual full ring of differential operators (in Berthelot's theory, your "full ring" would be $D^{(0)}$, if I understand correctly), but the following papers are very beautiful in my opinion: Gieseker, D. - Flat vector bundles and the fundamental group in non-zero characteristics. dos Santos, João Pedro Pinto - Fundamental group schemes for stratified sheaves. J. Algebra 317 (2007), no. 2, 691--713. Hélène Esnault, Vikram Mehta - Simply connected projective manifolds in characteristic $p>0$ have no nontrivial stratified bundles You'll of course notice very quickly that in all cases, the D-module flavor is lost, as an ${\mathcal O}_X$-coherent D-module can be translated into the world of vector bundles thanks to Frobenius descent. - By the full ring I mean Berthelot's ${\mathcal D}^{\infty}$, if I remember the notation correctly.
Thank you for the references! – Emerton Feb 12 2010 at 13:11 Oh, good, then we are talking about the same thing. I was confused because you wrote "divided power differential operators", which are D^(0) in Berthelot's language (and PD-Diff in the language of his older writings on crystalline cohomology), if I understand correctly. The higher D^(n) are constructed via "partially-divided powers". – Lars Feb 12 2010 at 14:27 Yes, sorry about this; I was copying the terminology of the earlier question that I linked to. I'm never sure what notation/terminology to use when discussing these various rings, since many non-arithmetic geometers don't know Berthelot's notation. – Emerton Feb 12 2010 at 21:09 Dear Matt: The people who are actively working with this whom I know are Genady Lyubeznik and Manuel Blickle. The key point seems to be that certain $R[F]$-modules become simpler when viewed as $D_R$-modules (here $F$ is the Frobenius). It has been applied to show that certain local cohomology modules over regular local rings in positive characteristic have finitely many associated primes. Examples are in the following papers: D-module structure of R[F]-modules and Generators of D-modules in positive characteristic and the references therein. There is also this new preprint which might be of interest. - Thanks! I guess this is the aspect of the theory that I know best, but a lot of weight is carried by the unit Frobenius action. I wonder if this is expected to be true in other contexts, like localization? – Emerton Feb 11 2010 at 0:42 Dear Matt: I don't know much about other contexts. Are there some specific properties of $D_R$ you want to be true? – Hailong Dao Feb 11 2010 at 0:58 Dear Hailong, I'm mostly just curious to hear what people have to say. My impression is that there are at least three different groups of mathematicians thinking about $\mathcal D_R$ in some form: $p$-adic cohomology people (of which I am, or at least once was, one); commutative algebraists; and representation-theorists. There is some interaction between the first two groups, but I don't know of as much interaction between either of them and the third group. I'm hoping this question might draw input from all three groups. – Emerton Feb 12 2010 at 19:15 Certainly the fact that the ring of differential operators is non-Noetherian is an inconvenience, but it is not clear if it is more than that. For instance one can define the notion of holonomic module. It is not a direct translation of the characteristic zero definition (and this is certainly related to this inconvenience), but once given it seems to work as well as in characteristic zero: MR1918185 (2003h:14030) Bögvad, Rikard(S-STOC) An analogue of holonomic D-modules on smooth varieties in positive characteristics. (English summary) The Roos Festschrift volume, 1. Homology Homotopy Appl. 4 (2002), no. 2, part 1, 83–116. 14F10 (16S32 32C38) -
http://math.stackexchange.com/questions/236318/question-about-a-stochastic-process
# Question about a stochastic process

Let $X_n$ denote the population size in a branching process $\{X_n, n\geq 0\}$. Assume that $X_1=Y_1$ has the distribution $P(Y_1=0)=P(Y_1=1)=1/2$. Answer the following: a. Find the probability generating function of $Y_1$. b. Show that the probability generating function of $X_n$ is $G_n(s)=1-1/2^n +s/2^n$, $s\in\mathbb{R}$. c. Compute the extinction probability of the population starting from one individual at time $0$.
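A sketch of the standard computation (my worked example, under the usual branching-process assumption that individuals reproduce independently, each with offspring law $Y_1$): the probability generating function of $Y_1$ is $$G_1(s) = E[s^{Y_1}] = \tfrac{1}{2} + \tfrac{1}{2}s,$$ and since $G_{n+1}(s) = G_n(G_1(s))$, induction gives $$G_{n+1}(s) = 1 - \tfrac{1}{2^n} + \tfrac{1}{2^n}\left(\tfrac{1}{2} + \tfrac{1}{2}s\right) = 1 - \tfrac{1}{2^{n+1}} + \tfrac{s}{2^{n+1}},$$ which is the claimed formula. The extinction probability starting from one individual is then $\lim_{n\to\infty} P(X_n = 0) = \lim_{n\to\infty} G_n(0) = \lim_{n\to\infty} \left(1 - \tfrac{1}{2^n}\right) = 1$, as expected for a subcritical process with mean offspring number $\tfrac{1}{2} < 1$.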
http://physics.stackexchange.com/questions/36160/tracking-down-the-locality-assumption-in-a-chsh-inequality-derivation?answertab=active
# Tracking down the locality assumption in a CHSH inequality derivation

The CHSH inequality requires both locality and realism. I will equate here realism with counterfactual definiteness. Now counterfactual definiteness tells us that given two different measurements on the same object, described by random variables $C_1$ and $C_2$, there exists a joint probability distribution for $C_1$ and $C_2$ (this is not always the case; search for the marginal problem, and indeed we know that outcomes of measurements of noncommuting observables do not possess a joint probability distribution). Now if we can assume the existence of a joint probability distribution, then the expectation values $E(C_1) + E(C_2)$ may be combined to give $E(C_1 + C_2)$. So suppose that we now have four random variables $A_1$ and $B_1$ local to Alice and $A_2$ and $B_2$ for Bob, which can take values $\pm 1$. The expression in the CHSH inequality is $$|E(A_1 A_2) + E(A_1 B_2) + E(B_1 A_2) - E(B_1 B_2)|$$ Now if we can assume realism (counterfactual definiteness), there exists a joint probability distribution for the outcomes of all four random variables, and we can join the expectation values in the above together to get $$\Bigl\lvert E\bigl(A_1 (A_2 + B_2) + B_1 (A_2 -B_2)\bigr)\Bigr\rvert$$ So now we do the standard trick: either $A_2$ and $B_2$ are equal or they are opposite, so that one of $A_2 + B_2$ and $A_2 - B_2$ equals $\pm 2$ while the other vanishes. Now to my question. I obviously used the realism assumption in the above. I assume locality means that the marginal distributions for $A_1$ and $B_1$ are independent of the choice of random variables Bob makes, $A_2$ or $B_2$ (but I otherwise allow for the correlation of outcomes, as long as it's independent of measurement choice, as there may be hidden variables that were encoded at the source of the state that produce correlations). Where did I use the locality assumption in this derivation? I would like you to either point out precisely where this assumption is needed in this calculation or argue that it is not needed, with a convincing justification. Edit: Given the answer below, it seems likely that the locality assumption may have been used at the point where we take $E(C_1) + E(C_2) = E(C_1 + C_2)$. However, it is still not understood why realism is not sufficient to reach this conclusion without the assumption of locality. As stated above, the question is why the above reasoning is not correct and the locality assumption is needed; alternatively, I am also open to the idea that it might not be needed. Edit 2: I have received a satisfactory answer to this question over at MathOverflow from @Steven Landsburg. The existence of a joint probability distribution for all four variables is definitely sufficient to get the inequality. The existence of joint probability distributions for (A1,B1) and (A2,B2) (i.e. "realism") is definitely not sufficient to imply a joint probability distribution for all four. Locality is a sufficient additional assumption. He also shows there that counterfactual definiteness for local outcomes is not a sufficient condition to obtain the CHSH inequality. - ## 1 Answer I didn't think about it long, but surely the assumption $E(C_1)+E(C_2)=E(C_1+C_2)$ is the culprit, and CHSH is just the same as von Neumann's argument against hidden variables. The failure is analyzed by Bell, in relation to Bohm's theory, in the book Speakable and Unspeakable in Quantum Mechanics.
The reason $E(C_1+C_2) = E(C_1) + E(C_2)$ is local is that you are assuming the measurements of 1 and 2 are independent in order to conclude that the expected values add up like this. Otherwise the measurement of $C_1$ could affect $C_2$. Bohm's theory is enough to see there is no argument independent of locality. I don't like the CHSH inequality, because it is a rehash of von Neumann: it's not original, and it fails in the same way, as was explained in detail by Bell using Bohm. - But if $C_1$ and $C_2$ are counterfactually definite random variables with some joint distribution $p(c_1, c_2)$ then $E(C_1 + C_2) = E(C_1) + E(C_2)$ is a basic mathematical fact, following from the linearity of the expectation operator. You can apply this here because there exists a single distribution governing both observables. – SMeznaric Sep 11 '12 at 16:56 Also, local measurements need not be independent - this is a very strong assumption, stronger than local realism. Bell's integral expression $\int d\lambda\, p(x_1|\lambda) p(x_2|\lambda) \rho(\lambda)$ for example allows for classical correlations between measurement outcomes. States such as $\rho = \sum_k p_k |k\rangle\langle k| \otimes |k\rangle\langle k|$ for example do not violate the CHSH inequality, but do possess non-independence between some measurement outcomes of Alice and Bob. – SMeznaric Sep 11 '12 at 17:07 @SMeznaric: This is not so: you need to consider measuring $C_1$, measuring $C_2$, and measuring $C_1+C_2$; these are three different measurements. This is discussed at length by Bell. When I said "independent", I didn't mean "uncorrelated", I just meant "causally independent". – Ron Maimon Sep 11 '12 at 19:39 – SMeznaric Sep 11 '12 at 20:01 @SMeznaric: The assumption is not justified because measuring the sum of quantum operators is not measuring the sum of classical observables representing hidden variables. The correspondence does not respect addition of operators. This is discussed by Bell at length in Speakable and Unspeakable; it's the same assumption that fails in von Neumann. – Ron Maimon Sep 11 '12 at 20:22
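To see concretely why a single joint distribution for all four variables forces the bound (the "standard trick" in the question), one can enumerate all $2^4$ deterministic assignments of $\pm 1$; any joint distribution is a mixture of these, so its expectation inherits the bound. A small R sketch of the check (my illustration):

````r
vals <- c(-1, 1)
g <- expand.grid(A1 = vals, B1 = vals, A2 = vals, B2 = vals)  # all 16 assignments
S <- with(g, A1*A2 + A1*B2 + B1*A2 - B1*B2)  # CHSH combination per assignment
range(S)  # -2 2: so |E(S)| <= 2 under any joint distribution
````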
http://mathoverflow.net/questions/58963?sort=votes
Is the pre-image of a regular subscheme with respect to a universal homeomorphism of regular schemes regular? Let $f:X\to Y$ be a universal homeomorphism of regular (excellent finite-dimensional) schemes, $Z\subset Y$ be a regular subscheme. Is $f^{-1}(Z)$ necessarily regular? - 2 Answers Well, $f^{-1}Z$ could easily be non-reduced (for example, take the relative Frobenius morphism $\mathbb A^1_k \to \mathbb A^1_k$, defined by the embedding $k[y] = k[x^p] \subseteq k[x]$, where $k$ is a field of characteristic $p > 0$, and let $Z \subseteq \mathbb A^1$ be defined by $y = 0$), so I would guess that the question should be interpreted as asking whether $f^{-1}Z$ with its reduced structure is regular. But the answer is negative even in this case. For example, take the morphism $\mathbb A^2_k \to \mathbb A^2_k$ defined by the embedding $k[y,t] = k[x^p, t] \subseteq k[x, t]$, and let $Z \subseteq \mathbb A^2_k$ be defined by $y + t^{p+1} = 0$. - Can't we base-change $f : X \to Y$ with $Z$ and obtain $g : f^{-1}(Z) \to Z$? This also is a universal homeomorphism by construction, right? So now we have a universal homeomorphism to a regular scheme, but a regular scheme is weakly normal; see A. Andreotti and E. Bombieri, "Sugli omeomorfismi delle varietà algebriche". Therefore $g$ is an isomorphism at least as long as $f^{-1}(Z)$ is reduced and the map $g$ is birational. EDIT: My argument that $f^{-1}(Z)$ was reduced was junk. I shouldn't have tried to do math while on the run. But as long as $f^{-1}(Z)$ is reduced and $g$ is birational, then I think things are ok. - Sorry, why does $f$ being a universal homeomorphism contradict its generic inseparability? – Mikhail Bondarko Mar 20 2011 at 13:09 I was being dumb. Never mind. – Karl Schwede Mar 20 2011 at 18:57
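To spell out why the second example in the first answer works (a routine verification, added for completeness): the preimage of $Z$ is cut out by substituting $y = x^p$, giving the curve $$C:\ x^p + t^{p+1} = 0\ \subset\ \mathbb A^2_k.$$ This curve is reduced, since $x^p + t^{p+1}$ is irreducible: $-t^{p+1}$ is not a $p$-th power in $k(t)$, its $t$-adic valuation $p+1$ being prime to $p$. By the Jacobian criterion, $C$ is singular exactly where both partials vanish: $\partial/\partial x = p\,x^{p-1} = 0$ identically in characteristic $p$, and $\partial/\partial t = (p+1)t^p = t^p$ vanishes only at $t = 0$, which together with the equation forces $x = 0$. So $C$ is a reduced but non-regular preimage of the regular subscheme $Z$.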
http://physics.stackexchange.com/questions/39287/particle-sources-and-particle-detectors-in-quantum-field-theory
# Particle sources and particle detectors in quantum field theory

I am looking for a resource that clearly exposes the concepts of a particle source and a particle detector in the context of quantum field theory. I want to understand irreversibility in this context. - 1 Why close this? This is a perfectly reasonable and clear question. – Arnold Neumaier Oct 7 '12 at 17:31 There is a study by J. Schwinger and the corresponding books called "Particles, Sources, and Fields". – Vladimir Kalitvianski Oct 8 '12 at 15:54 Do you know of any specific papers of Schwinger that discuss this? I don't have access to these books currently. – Prathyush Oct 9 '12 at 7:05 ## 1 Answer Typically one thinks of the sources as being at the infinite past, and the detection at the infinite future; then a reversible S-matrix description applies. For photons, a corresponding treatment of sources and detectors can be found in Mandel & Wolf's treatise on quantum optics. But their treatment doesn't give any hint on irreversibility. Detection is always irreversible; nothing counts as detected unless there is an irreversible record of it. There is no really good account from first principles of how an irreversible detection event is achieved. From the 1999 article "Some problems in statistical mechanics that I would like to see solved" by Elliott Lieb http://www.sciencedirect.com/science/article/pii/S0378437198005172: The measurement process in quantum mechanics is not totally understood, even after three quarters of a century of thought by the deepest thinkers. At some level, the problems of quantum mechanical measurement are related, distantly perhaps, to the problems of non-equilibrium statistical mechanics. Several models (e.g. the laser) indicate this, but the connection, if any, is unclear and I would like to see more light on the subject. But see http://arxiv.org/pdf/quant-ph/0702135 http://arxiv.org/pdf/1107.2138 A field theoretic discussion of irreversibility necessitates a statistical mechanics treatment. This more detailed modeling is done in practice in a hydrodynamic or kinetic approximation. They treat sources as generators of beams with an extended distribution in space or phase space, respectively. The dynamics of both descriptions is irreversible, and may be computed in terms of $k$-particle irreducible ($k$PI) Feynman diagrams for $k=1$ and $k=2$, respectively. The kinetic description is based on the Kadanoff-Baym equations in the 2PI Schwinger-Keldysh (CTP) formalism. The Kadanoff-Baym equations are dynamical equations for the 2-particle Wightman functions and their ordered analogues, and are used in practice to model high energy heavy ion collision experiments. See, e.g., http://arxiv.org/abs/hep-th/9605024 and the discussions in Good reading on the Keldysh formalism What is known about quantum electrodynamics at finite times? The hydrodynamic description is based on the simpler 1PI approach, but it is (to my knowledge) used mainly theoretically; see, e.g., Reviews of Modern Physics 49, 435 (1977) and the papers http://arxiv.org/pdf/hep-ph/9910334 http://arxiv.org/pdf/hep-ph/0101178 http://arxiv.org/abs/gr-qc/9805074 - Can you briefly explain what goes into these two approaches? I thought there would be a much simpler explanation, where one can understand sources as excitations of atomic substances, and detectors as reactions of light-sensitive atoms.
– Prathyush Oct 9 '12 at 7:01 Oh, the latter can be found in Mandel & Wolf's treatise on quantum optics, but their treatment doesn't give any hint on irreversibility. I thought you were mainly interested in the latter. – Arnold Neumaier Oct 9 '12 at 7:22 Detection (in this context a chemical reaction) is an irreversible process, right? I will look into both resources. I found this statement very interesting: "They treat sources as generators of beams with an extended distribution in space or phase space". How do they treat detectors? The papers you linked to are a little technical for me right now, but let me try working towards them. – Prathyush Oct 9 '12 at 7:55 2 Please upvote this comment to encourage Arnold Neumaier to link to abstract pages rather than pdf files in his (in all other respects) nice answer. – Qmechanic♦ Oct 9 '12 at 20:20 1 @Qmechanic: My time is limited, and I often don't have the time to beautify my answers. Writing them is time-consuming enough. But you are welcome to improve the links. (Though I prefer to find the pdf directly, as it saves a scan on my part to where the pdf link is, and one fetching process through the web.) – Arnold Neumaier Oct 10 '12 at 6:53
http://mathoverflow.net/questions/32964/quaternary-quadratic-forms-and-elliptic-curves-via-langlands
## Quaternary quadratic forms and Elliptic curves via Langlands?

The content of this note was the topic of a lecture by Günter Harder at the School on Automorphic Forms, Trieste 2000. The actual problem comes from the article A little bit of number theory by Langlands. The problem is about a connection between two quite different objects. The first object is the following pair of positive definite quadratic forms: $$P(x,y,u,v) = x^2 + xy + 3y^2 + u^2 + uv + 3v^2$$ $$Q(x,y,u,v) = 2(x^2 + y^2 + u^2 + v^2) + 2xu + xv + yu - 2yv$$ The second object is the elliptic curve $$E: y^2 + y = x^3 - x^2 - 10x - 20.$$ To each of our objects we now associate a series of integers. For each integer $k \ge 0$ define $$n(P,k) = | \{ (a,b,c,d) \in {\mathbb Z}^4 : P(a,b,c,d) = k \} |,$$ $$n(Q,k) = | \{ (a,b,c,d) \in {\mathbb Z}^4 : Q(a,b,c,d) = k \} |.$$ As a matter of fact, these integers are divisible by $4$ for any $k \ge 1$ because of the transformations $(a,b,c,d) \to (-a,-b,-c,-d)$ and $(a,b,c,d) \to (c,d,a,b)$. For any prime $p \ne 11$ we now put $$a_p = (p+1) - |E({\mathbb F}_p)|,$$ where $E({\mathbb F}_p)$ denotes the set of ${\mathbb F}_p$-points of the elliptic curve defined above (this is the sign convention under which the Eichler-Shimura relation below holds). Then Langlands claims For any prime $p \ne 11$, we have $4a_p = n(P,p) - n(Q,p).$ The "classical" explanation proceeds as follows: Given the series of integers $n(P,k)$ and $n(Q,k)$, we form the generating series $$\Theta_P(q) = \sum \limits_{k=0}^\infty n(P,k) q^k = 1 + 4q + 4q^2 + 8q^3 + \ldots,$$ $$\Theta_Q(q) = \sum \limits_{k=0}^\infty n(Q,k) q^k = 1 + 12q^2 + 12q^3 + \ldots.$$ If we put $q = e^{2\pi i z}$ for $z$ in the upper half plane, then $\Theta_P$ and $\Theta_Q$ become ${\mathbb Z}$-periodic holomorphic functions on the upper half plane. As a matter of fact, the classical theory of modular forms shows that the function $$f(z) = \frac14 \bigl(\Theta_P(q) - \Theta_Q(q)\bigr) = q - 2q^2 - q^3 + 2q^4 + q^5 + 2q^6 - 2q^7 - 2q^9 - 2q^{10} + q^{11} - 2q^{12} + \ldots$$ is a modular form (in fact a cusp form, since it vanishes at $\infty$) of weight $2$ for $\Gamma_0(11)$. More precisely, we have $f(z) = \eta(z)^2 \eta(11z)^2,$ where $\eta(z)$ is Dedekind's eta function, a modular form of weight $\frac12$. Now we have connected the quadratic forms to a cusp form for $\Gamma_0(11)$. This group has two orbits on the projective line over the rationals, which means that the associated Riemann surface can be compactified by adding two cusps: the result is a compact Riemann surface $X_0(11)$ of genus $1$. Fricke had already given a model for this Riemann surface: he found that $X_0(11) \simeq E$ for the elliptic curve defined above. Now consider the space of cusp forms for $\Gamma_0(11)$. There are Hecke operators $T_p$ acting on it, and since it has dimension $1$, we must have $T_p f = \lambda_p f$ for certain eigenvalues $\lambda_p \in {\mathbb Z}$. A classical result due to Hecke then predicts that the eigenvalue $\lambda_p$ is the $p$-th coefficient in the $q$-expansion of $f(z)$. Eichler-Shimura finally tells us that $\lambda_p = a_p$. Putting everything together gives Langlands' claim. Way back then I asked Harder how all this follows from the general Langlands conjecture, and he replied that he did not know. Langlands himself said his examples came "from 16 of Jacquet-Langlands". So here's my question: Does anyone here know how to dream up concrete results like the one above from Langlands' conjectures, or from "16 of Jacquet-Langlands"?
- This is probably not what you're looking for, but: a generalization of this result is true for all elliptic curves of prime conductor, if I understood Emerton's paper "Supersingular Elliptic Curves, Theta Series and Weight Two Modular Forms" correctly. Eichler proved that (special) theta series span the space of modular forms with level equal to the conductor. How does one dream of this... Is there a freudoverflow? – Dror Speiser Jul 22 2010 at 17:38 1 @Franz: Your curve $E:y^2+y=x^3−x^2−10x−20$ is isogenous to the curve $y^2+y=x^3-x^2$, for which see mathoverflow.net/questions/11747/… – Chandan Singh Dalawat Jul 23 2010 at 2:52 Perhaps the right question is: How to dream up Langlands' programme from results like the one above? – Chandan Singh Dalawat Jul 23 2010 at 3:01 ## 2 Answers Chapter 16 of Jacquet--Langlands is about the Jacquet--Langlands correspondence, which concerns the transfer of automorphic forms from quaternion algebras to the group $GL_2$. The modularity of the theta series that you write down is a (very) special case of this correspondence. But probably Langlands more had in mind going the other way, in the following sense: in Chapter 16, Jacquet and Langlands not only show the existence of transfer, they characterize its image. In particular, their results show that the modular form $f$ is in the image of transfer from the definite quaternion algebra $D_{11}$ ramified at $\infty$ and 11. Thus one knows *a priori* that there has to be a formula relating $f$ to the $\Theta$-series of some rank four quadratic forms associated to $D_{11}$; it is then a simple matter to compute them precisely (here one uses the fact that $f$ is a Hecke eigenform, and the compatibility of transfer with the Hecke action), and hence obtain the formula $f = (\Theta_P - \Theta_Q)/4.$ The question of characterizing the image of transfer is an automorphic interpretation of what is classically sometimes called the Eichler basis problem: the problem of computing the span of the theta series arising from definite quaternion algebras. The name comes from the fact that the particular case considered here (but with 11 replaced by an arbitrary prime $p$), namely the fact that modular forms of weight two and prime conductor $p$ are spanned by theta series coming from $D_p$, was I think first proved by Eichler in 1955. - Ah! So there are such $\Theta$-formulas for every weight-$2$ cuspidal eigenform of prime level? – Chandan Singh Dalawat Jul 23 2010 at 2:14 1 There is also a $\Theta$-formula for the weight-$1$ level-$23$ modular form $\eta_{1,23}=q\prod_{n>0}(1-q^n)(1-q^{23n})$ which appears in your answer mathoverflow.net/questions/11747/… Indeed, $\eta_{1,23}=(\Theta_P-\Theta_Q)/2$, where $P=x^2+xy+6y^2$ and $Q=2x^2+xy+3y^2$, as I learnt from Serre (BAMS, 2003). – Chandan Singh Dalawat Jul 23 2010 at 3:55 I would like to add to Emerton's answer that Eichler gave a proof for square-free level in Modular functions of one variable I, Springer Lecture Notes 320. Further classical work on the basis problem is in: MR0960090 (90d:11056) Hijikata, Hiroaki; Pizer, Arnold K.; Shemanske, Thomas R. The basis problem for modular forms on $\Gamma_0(N)$. Mem. Amer. Math. Soc. 82 (1989), no. 418 Yet another reference: MR0333081 (48 #11406) Shimizu, Hideo Theta series and automorphic forms on ${\rm GL}_{2}$. J. Math. Soc.
Japan 24 (1972), 638–683.
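Langlands' identity is easy to test numerically. Here is a small R sketch (my own illustration): brute-force representation counts $n(P,k)$ and $n(Q,k)$ over a box comfortably large enough for small $k$, and an affine point count plus the point at infinity for $E({\mathbb F}_p)$, using $a_p = (p+1) - |E({\mathbb F}_p)|$ as above. For each prime tested, the last two printed columns agree.

````r
P <- function(x, y, u, v) x^2 + x*y + 3*y^2 + u^2 + u*v + 3*v^2
Q <- function(x, y, u, v) 2*(x^2 + y^2 + u^2 + v^2) + 2*x*u + x*v + y*u - 2*y*v

n_rep <- function(f, k, R = 8) {  # number of integer representations of k
  n <- 0
  for (x in -R:R) for (y in -R:R) for (u in -R:R) for (v in -R:R)
    if (f(x, y, u, v) == k) n <- n + 1
  n
}

a_p <- function(p) {  # a_p = p + 1 - |E(F_p)| for E: y^2 + y = x^3 - x^2 - 10x - 20
  pts <- 1            # the point at infinity
  for (x in 0:(p-1)) for (y in 0:(p-1))
    if ((y^2 + y) %% p == (x^3 - x^2 - 10*x - 20) %% p) pts <- pts + 1
  (p + 1) - pts
}

for (p in c(2, 3, 5, 7)) cat(p, 4*a_p(p), n_rep(P, p) - n_rep(Q, p), "\n")
````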
http://quant.stackexchange.com/questions/4503/how-do-you-explain-the-volatility-smile-in-the-black-scholes-framework/4505
# How do you explain the volatility smile in the Black-Scholes framework?

Does anyone have an explanation for the volatility smile that forms naturally in market option prices (and its variations)? - The part of this question appropriate to this site is whether the assumptions of the Black-Scholes equation lead to the solution they reach. That question has been vetted by many and the conclusion is that the analysis is correct. Whether these assumptions are correct is not a mathematical question. – Ross Millikan Nov 9 '12 at 4:09 After the 1987 crash people realized that extreme events were more likely than the log normal distribution suggests. They developed better option models, leading to out-of-the-money options being priced more expensively to account for the greater risk. People still talk and think in terms of BS implied vol because 1) it is convenient, 2) many other models can be considered extensions of Black-Scholes, and 3) they can use the volatility surface from the market to price exotic options. – John Nov 9 '12 at 4:32 @JoeCoderGuy, yes I think you make it a lot harder than it is. Trading volatility is not a whole lot different from trading other asset classes. The crash of 1987 simply showed market participants that the IVs far away from the money (particularly downside puts) were not fairly valued (not in terms of the pricing model, because the pricing model just translates IVs -> currency denominated prices, but in terms of absolute level), but please see my answer for details. – Freddy Nov 12 '12 at 2:52 ## 5 Answers The Black-Scholes model is based on the assumption of lognormal returns of the underlying asset. There is much evidence and argument that stock market returns are not normal on a logarithmic basis, and there is no particular reason to assume a normal distribution, either. In particular an implied volatility smile is evidence of "fat tails" in the returns expected by market participants: excess kurtosis, if you will. It's been pointed out by Taleb that with over 100 years of history, we simply don't have enough data to estimate the tails or the higher moments of the distribution of stock market returns. Edit: in R:

````r
> library("moments")
> SPX <- read.csv("http://ichart.finance.yahoo.com/table.csv?s=%5EGSPC&d=10&e=10&f=2012&g=d&a=10&b=10&c=1992&ignore=.csv")
> kurtosis(diff(log(SPX$Adj.Close)))
[1] 11.36604
````

That's for the last twenty years less a day of data, and does not include the crash of 1987. For a normal distribution, we would expect the kurtosis to be 3. The excess kurtosis in this case is $11.36604-3=8.36604$, which is still quite significantly not normal. Excess kurtosis means that a distribution has a more peaked center and fatter tails than a normal distribution, which means in the case of options a higher probability that the underlying will take on values far away from its current value come expiration time. This is why, when excess kurtosis is not taken into account, options further from the money appear to imply a higher volatility for the underlying asset. - 1 Just did what you said (I was curious myself): the returns are quite leptokurtic, not normal. – justin-- Nov 10 '12 at 1:05 The double-sided exponential distribution happens to be the fattest-tailed distribution that has a moment-generating function (but just barely: its mgf has vertical asymptotes), which may impair one's ability to derive an equivalent martingale distribution (of the same exponential family) suitable for pricing options as in the Black-Scholes model.
– justin-- Nov 10 '12 at 1:40 1 @JoeCoderGuy You might want to check this article out: Shalom Benaim and Peter Friz. Smile Asymptotics II: Models with Known Moment Generating Functions. Journal of Applied Probability, Vol. 45, No. 1 (Mar., 2008), pp. 16-32 – justin-- Nov 13 '12 at 5:49 Consider a more financially plausible model than Black-Scholes: one where the stock can suddenly go bankrupt due to fraud, and the volatility varies over time. Neither model is perfect, but the new one (call it SVJ) will be "less wrong". Mathematically, we no longer have the Black-Scholes SDE based on a single stochastic generator $W$ $$\frac{dS}{S} = \mu dt + \sigma dW$$ but rather an SDE with 3 generators: $W, Z$ and a jump process $J$ $$\frac{dS}{S} = \mu dt + \sigma dW - dJ \\ d\sigma^2= \kappa(\bar{\sigma}^2-\sigma^2) dt + \eta \sigma dZ$$ It is possible (though not particularly easy) to fit this more complicated, realistic model to the market. Big banks do it all the time. Any model, including both BS and SVJ, can be run "backwards", by which I mean that it can start with an option price and derive an implied parameter. If the model has $M$ parameters $p_1, p_2, \dots, p_M$ that are normally used to find a model price $V$, then we can also choose any one of the parameters, call it $p_n$, to derive from an observed price $P$ (normally by root-finding techniques). Let's say we run backwards from market prices to get implied values of $\sigma$ for both Black-Scholes and SVJ. You will observe a far flatter skew for the SVJ. This is true even if we remove either the jumps $J$ or stochastic volatility $Z$. Here, for example, is a skew in Black-Scholes volatilities arising from pricing an array of options in a proprietary jump-diffusion model with flat (by strike) volatility of 20% (the plot itself is not reproduced here). We see that a constant volatility parameter in the jump-diffusion is equivalent to a skew of Black-Scholes volatilities. Conclusion: the smile comes from the model being too strong a simplification of reality. - 1 Oh thank God you agree with me. – SRKX♦ Nov 12 '12 at 20:30 The volatility smile is made out of implied volatilities. This means that you take as input $K,S,r,T$ and the price of the option $p$ and you use it to find $\sigma$ such that $$p = BS(K,S,r,T,\sigma)$$ But $p$ is defined by the market, so the $\sigma$ you find are the volatilities estimated by market participants, if they believe in the Black-Scholes framework. The shape of the graph (a smile) shows that, in the BS framework, market participants estimate different volatilities for the same asset depending on the moneyness of the option. This means that the BS framework does not hold in reality (at least market participants don't believe in it), since it assumes that $\sigma$ is constant for a given asset. - I don't think it's only missing a variable; there are various weak points to the theory (normality of returns, for example). Your question was about interpreting implied volatilities, and I'm answering that IV can't really be interpreted because they rely on a "wrong" model yielding contradictory results. – SRKX♦ Nov 10 '12 at 11:11 I am not sure I agree with you saying "IV can't really be interpreted because they rely on a "wrong" model...". IV are not underlying any model at all. IVs across the smile are how market consensus prices risk. It has nothing to do with any model, in a very similar way that absolute stock prices have everything to do with market consensus and very little to do with any pricing model.
Please see my answer for what I mean by that in more detail. – Freddy Nov 10 '12 at 11:22 2 @Freddy you compute IV using the BS formula.... if that's not assuming a model then I don't know what it is. It means "what would be the volatility taken by market participants if they used the BS formula to price their option?". Well that's pretty much depending on BS to me. – SRKX♦ Nov 10 '12 at 12:00 @SRKX, sorry but I strongly disagree. Pretty much all direct market participants who manage risk on the volatility side trade volatility, not option prices. The translated prices are an agreed, faulty way of paying for the bought or sold volatility. The asset that is traded and priced is volatility. Nobody cares about BS when considering whether to take risk in buying or selling an option; BS comes into play when looking to calculate the price that is paid/received for the volatility that is traded. – Freddy Nov 10 '12 at 12:14 just in case the following causes confusion: I did not want to say "IV are not underlying any model at all" but wanted to say "IV are not relying on any model at all". Sorry for the confusion but I could not edit the original comment anymore – Freddy Nov 10 '12 at 12:23 Implied volatility has very little to do with any particular pricing model, and especially not much with BS. BS is a translation tool between prices and volatility, with its own many model deficiencies. I won't get into such model assumptions because my point is an entirely different one. Even the smile/smirk is entirely unrelated to the Black-Scholes model, and I read your question as asking why IVs far from the money are higher than close-to-the-money ones. My point is that IVs are entirely supply and demand driven. Whether it's the IVs of an option that expires a year from now or tomorrow, whether it's the IV of a 90 put (equity option), a 30 delta strike (fx), or any other rates option, commodity option, what have you... Someone correctly pointed out that most of the origin of the smile/smirk can be traced back to the October 1987 shock ( http://en.wikipedia.org/wiki/Black_Monday_(1987) ). Before that, pretty much all IVs, regardless of moneyness, were priced equally. However, after the stock market crash the market found that downside protection in particular had been priced way too cheaply. Now I guess the core of the question is why: most of it can be attributed to those who generally wrote such options, such as sell-side trading desks, but especially floor traders who made markets in such options. They suffered steep losses and found out that they were not compensated enough for writing options so deep out of the money. It has to do with the fact that the market under-estimated the probability of such extreme events happening. However, another important finding that causes many market makers to trade IVs far away from the money higher than at-the-money ones has to do with the belief that return volatility far away from current price levels tends to be a lot higher than current return volatility. For example, return volatility is expected to be higher at 20% lower market levels than current return volatility. I guess the rationale behind that is in a sense a paradox: nobody generally would need to care about an instantaneous move 10% or so lower from current levels, so in a sense it does not matter which implied volatilities are attached to a 90 put. However, should the unexpected happen, then the expectation is for sharply higher return volatility 10% below current levels.
Keep in mind the smile/smirk is dynamic and only reflects current expectations. I have seen on numerous occasions "inverted smirks" in equity index and stock options, in that out of the money calls exhibited higher IVs than out of the money puts. But generally in this asset class space, out of the money puts exhibit higher IVs than equally spaced out of the money calls. Another point that supports this general shape of the "smirk" is that empirically stock markets exhibit much higher down-side return volatility than up-side return volatility. Panic and fear in a sense cause more irrational behavior in people than exuberance. So, in summary, it all has to do with how the market prices the probability of shocks (to the downside and upside) occurring and their severity. EDIT: Please keep in mind that almost all IVs away from current spot/forward levels exhibit the property of being more richly priced than ATM IVs. On the call side this is the case because nowadays options are often utilized as leveraged bets instead of buying the underlying outright. Put IVs that are struck below current spots/forwards are bought more aggressively because of the desire to protect long positions in the underlying (buy-side fund industry especially). IVs away from the money are simply in greater demand than ATM ones for the above reasons. I recommend you do not read a whole lot more into it than this, because trading volatility at the end of the day is nothing other than trading any other asset: prices are set as a pure function of supply and demand. The only difference is that most underlying assets exhibit linear payoff functions, whereas volatility is non-linear in nature, and market participants can define exactly how this non-linearity is structured by trading different strikes or option combinations. Research suggests that there is a relationship between how pronounced the smirk is and future return expectations in the underlying. This hardly surprises me, because in a sense it reinforces what I explained above: people either protect with downside puts if they expect the underlying to exhibit future negative returns, or buy upside calls (often as a leveraged bet instead of the underlying) if they expect future returns in the underlying to be positive; hence the smile = higher IVs away from the money. - I have to disagree that down-side return volatility is much higher than up-side return volatility: historical returns are in fact negatively skewed, but only slightly. If you look at actual data, you will find many extreme upside moves to balance the downside. – justin-- Nov 10 '12 at 17:52 @Freddy do you agree that IV are computed using market prices and some formula $f(...,\sigma)=p$? – SRKX♦ Nov 10 '12 at 18:43 @justin, feel free to run your own analysis over a time series that comprises a longer period of time. You will find that realized vol was highest when markets crashed, not when markets topped out (on average). This is pretty much an established fact, and you can feel free to google the numerous academic papers that investigated this. – Freddy Nov 11 '12 at 1:39 @SRKX, absolutely disagree, and I disagreed in my comment to your answer already. Implied vols are not the outcome of any pricing model; implied vols are a market consensus and, for example, the BS model is a mere translation tool from implied vols -> currency denominated prices. Listed options prices are not a result of supply and demand of an option but supply and demand for implied volatility.
If you insist IVs are the result of some formula then I highly doubt you ever worked as a volatility trader. – Freddy Nov 11 '12 at 1:43 I agree with some of the arguments above. One can find several explanations for the volatility smile: • Against the BS framework assumption, volatility is not constant, and traders don't expect it to be constant • This is a matter of option supply and demand • The volatility smile incorporates the kurtosis seen in the underlying One can find hints on this issue in P. Wilmott's Frequently Asked Questions in Quantitative Finance -
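To make the price-to-volatility inversion discussed above concrete, here is a minimal R sketch (my illustration; the numbers are made up): a Black-Scholes call pricer and an implied-volatility solver via `uniroot` from base R. Repeating the last line across strikes with market prices is exactly how a smile plot is produced.

````r
bs_call <- function(S, K, r, T, sigma) {  # Black-Scholes call price
  d1 <- (log(S/K) + (r + sigma^2/2)*T) / (sigma*sqrt(T))
  d2 <- d1 - sigma*sqrt(T)
  S*pnorm(d1) - K*exp(-r*T)*pnorm(d2)
}

implied_vol <- function(p, S, K, r, T)    # invert market price -> sigma
  uniroot(function(s) bs_call(S, K, r, T, s) - p, c(1e-6, 5))$root

implied_vol(p = 10.45, S = 100, K = 100, r = 0.05, T = 1)  # roughly 0.20
````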
http://mathoverflow.net/questions/48618/how-many-finite-simple-groups-of-order-p1/48646
## How many finite simple groups of order $p+1$?

I'm looking at finite simple groups of order $p+1$ where $p$ is a prime number. But they don't seem to fall into any classification - have these all been determined? Is the number of them even finite? - 1 $n!/2-1$ is prime for $n=5, 6, 9, 31, 41, 373, \ldots$ (sequence A082671 on the OEIS). Is it known if this sequence is finite? – Alon Amit Dec 8 2010 at 9:54 3 If it were known that the sequence was finite, it would surely say so in the OEIS. Given that you're asking, I'm guessing it doesn't say so. So I am guessing it's not known, based purely on the information you have given me :-) – Kevin Buzzard Dec 8 2010 at 9:57 1 As suggested by Jason and Wikipedia, the list of finite simple groups is probably (though not certainly) known. But the question being asked here should first be investigated for the well-known infinite families, starting with alternating groups and then groups of Lie type. The completeness of the classification is not an immediate issue. – Jim Humphreys Dec 8 2010 at 13:01 6 Is there any particular reason why you are looking at finite simple groups of order $p+1$? – Derek Holt Dec 8 2010 at 13:04 1 Funnily enough, I recently ran across another context where this same question was asked: cameroncounts.wordpress.com/2010/03/17/… . Such groups provide counterexamples to a "theorem" of Cauchy. – Harry Altman Dec 8 2010 at 18:01 ## 4 Answers The philosophical point here is that if all you know about a group $G$ is its order $\lvert G \rvert$, then by far the most relevant information is the prime factorization of that order. (Back when sporadic groups were still being discovered, there are anecdotes about phoning John Thompson with the order of your hypothetical new group; after some calculations he would tell you whether it 'checked out' or not - and of course he would just be using the knowledge of which primes divided the order and to which exponents.) So questions about the prime factorization of $\lvert G \rvert - 1$ are going to be dominated by the (generally unsolved) number-theoretical problems that relate the prime factorizations of $n$ and $n+1$, e.g. the existence of infinitely many Sophie Germain primes, Mersenne primes, or primes $p$ for which $\frac{p^2+1}{2}$ is also prime, etc. - Ah, I see. So any potential answer here would just be a purely number-theoretic statement; the group wouldn't really enter into it. Sounds like this question is very unfeasible then. Oh well. Thanks for the insight. – Dr Shello Dec 8 2010 at 17:11 As suggested by the other answers and comments, this is unknown (and a hard arithmetic question). Here's another example that might help indicate why: The order of the finite simple group $PSL_2(\mathbb{F}_q)$ is (for $q$ not a power of $2$) $q(q^2-1)/2$. You'd therefore like to know when $q(q^2-1)/2-1$ is a prime, for $q$ a prime power. The question is (at least superficially, I hope we can agree) similar to that of when the Mersenne number $2^n-1$ is prime. For $q=2^n$ a power of $2$ the question boils down to asking when $2^n(2^{2n}-1)-1$ is prime.
There are similar formulas for the other simple groups of Lie type, and I'll bet money no one in the world knows whether infinitely many of the relevant numbers are prime. - 1 I'd just add that there are infinitely many known infinite families of simple groups of Lie type, indexed by prime powers and having explicit order formulas. Each such family yields an infinite sequence of group orders; not random, but not easily analyzed for patterns. Moreover, the question asked has (as far as I can see) no connection with finite group theory. It's not a nonsensical question, but it seems almost impossibly difficult to answer one way or the other. – Jim Humphreys Dec 8 2010 at 12:57 Standard heuristics (together with orders from the list of finite simple groups) suggest that by far the most common orders of the form $p+1$ for $p$ prime will come from $A_1(q) = PSL_2(\mathbb{F}_q)$, of order $\frac{q^3-q}{2}$, as $q$ ranges over odd primes (or prime powers, if you want an additional small contribution). In particular, for large $N$, one should expect roughly $\frac{\sqrt[3]{N}}{(\log N)^\alpha}$ satisfactory numbers $p+1$ less than $N$, for some fixed positive number $\alpha$, and this sequence of numbers certainly grows without bound. As others have remarked, the question of proving that the set of suitable primes satisfies the rough asymptotics I gave above, or even proving that the set is infinite, seems to be beyond current technology. For example, we still don't know if there are infinitely many primes of the form $n^2+1$ for $n$ an integer. - 4 The following sporadic simple groups have order $p+1$ for $p$ a prime: $M_{11}$, $HS$, $M_{23}$, $O'N$, $Fi_{22}$, $J_4$. – S. Carnahan♦ Dec 8 2010 at 15:59 As Alon remarks, it is extremely hard to find such groups even among the groups of a single (best-known) series $A_n$. -
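The $PSL_2$ heuristic is easy to play with numerically. A small R sketch (mine; trial-division primality is plenty at this scale) that scans odd prime powers $q$ and reports when $|PSL_2(\mathbb{F}_q)| - 1$ is prime:

````r
is_prime <- function(n) {  # simple trial division, adequate for small n
  if (n < 2) return(FALSE)
  if (n < 4) return(TRUE)
  !any(n %% 2:floor(sqrt(n)) == 0)
}

for (q in c(5, 7, 9, 11, 13, 17, 19, 23, 25, 27, 29, 31)) {  # odd prime powers
  ord <- q * (q^2 - 1) / 2  # |PSL_2(F_q)| for odd q
  if (is_prime(ord - 1)) cat("q =", q, " |G| =", ord, " |G| - 1 is prime\n")
}
````

For example $q = 5, 7, 9, 11, 13$ all hit ($59$, $167$, $359$, $659$, $1091$ are prime), consistent with the expectation that such orders are not rare.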
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 42, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9579936265945435, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/70081-series-functions.html
# Thread:

1. ## Series of Functions

This is number 2 of the problem set for real analysis problems: Prove that if $\sum_{n=1}^{\infty} g_n$ converges uniformly, then $(g_n)$ converges uniformly to zero.

2. ## Ideas

Intuitively this makes sense to me. If the series converges, then the terms must eventually get arbitrarily small. I just can't put it into formal terms through a proof. Any ideas?

3. Originally Posted by ajj86: Prove that if $\sum_{n=1}^{\infty} g_n$ converges uniformly, then $(g_n)$ converges uniformly to zero.

If $\sum_{n=1}^{\infty} g_n$ converges uniformly, then it is uniformly Cauchy. Let $\epsilon > 0$. Then there is $N$ so that if $n, m > N$, then $\left| \sum_{k=1}^n g_k(x) - \sum_{k=1}^m g_k(x) \right| < \epsilon$ for all $x \in S$. In particular, taking $n = m+1$ gives $|g_n(x)| < \epsilon$ for all $x \in S$. Thus $g_n \to 0$ uniformly on $S$.
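The following numerical check is my own addition, not part of the thread: for a concrete uniformly convergent series, the sup-norms of the terms shrink to zero exactly as the Cauchy argument predicts. The example series $\sum x^n/n^2$ on $[0,1]$ is an assumption made for illustration (it converges uniformly by the Weierstrass M-test with $M_n = 1/n^2$).

```python
# For g_n(x) = x**n / n**2 on [0, 1], sup |g_n| = 1/n**2 -> 0,
# matching the n = m+1 step in the uniform Cauchy argument above.
import numpy as np

xs = np.linspace(0.0, 1.0, 1001)

def g(n, x):
    return x**n / n**2

for n in (1, 2, 5, 10, 50, 100):
    sup_norm = np.max(np.abs(g(n, xs)))
    print(f"n = {n:3d}   sup |g_n| = {sup_norm:.6f}   1/n^2 = {1/n**2:.6f}")
```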
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8894844651222229, "perplexity_flag": "middle"}
http://mathhelpforum.com/pre-calculus/68585-solved-find-values-b-c-such-limit-finite.html
# Thread:

1. ## [SOLVED] Find values of a, b and c such that limit ... is finite?

Find values of a, b and c such that:
lim (cos 4x + a cos 2x + b)/x^4 = finite
x --> 0

2. Originally Posted by fardeen_gen: [quoted above]

You have missed a 'c' somewhere; that's why we will not be getting the answer correctly. But I will tell you the general idea... Let $L$ be the limit. Observe that if $g(x) \to 0$ as $x \to 0$ and $\lim_{x \to 0} \frac{f(x)}{g(x)} = L$ is finite, then $\lim_{x \to 0} f(x) = 0$. Applying it here, we get $1 + a + b = 0$. Now apply L'Hospital's rule to get another form for $L$, and do the same process again. An alternative trick is to substitute the power series for cos and choose the coefficients so that the limit exists.

3. No missing 'c' according to the text.

4. Originally Posted by fardeen_gen: "No missing 'c' according to the text." - Then why does the question say find a, b and c?

5. You can however use power series or L'Hospital's rule to get the following equations: $a + b + 1 = 0$ and $8 + 2a = 0$, and thus $a = -4$ and $b = 3$. So the limit is $L = 8$. To do this using power series, write $\cos t = 1 - \frac{t^2}{2!} + \frac{t^4}{4!} + O(t^6)$. Then group terms with the same powers of $x$ in the numerator. The constant and $x^2$ terms must all vanish; that gives the two equations above.

6. The question was a general one, under which the specific problem appears (there are some parts which involve a, b and c, some a and b, some b and c, and so on). Thanks for the help!! I got the same answer just now!
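As an aside (my own check, not from the thread), the power-series recipe in reply 5 can be verified with sympy: expand the numerator, force the constant and $x^2$ coefficients to vanish, and take the limit.

```python
# Verify a = -4, b = 3, L = 8 for lim_{x->0} (cos 4x + a cos 2x + b) / x^4.
import sympy as sp

x, a, b = sp.symbols('x a b')
num = sp.cos(4*x) + a*sp.cos(2*x) + b

# Taylor-expand the numerator to order x^4.
series = sp.series(num, x, 0, 6).removeO()

# The constant and x^2 coefficients must vanish for the limit to be finite.
eqs = [sp.Eq(series.coeff(x, 0), 0), sp.Eq(series.coeff(x, 2), 0)]
sol = sp.solve(eqs, (a, b))
print(sol)  # {a: -4, b: 3}

L = sp.limit(num.subs(sol) / x**4, x, 0)
print(L)    # 8
```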
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8993200659751892, "perplexity_flag": "middle"}
http://crypto.stackexchange.com/questions/5932/where-can-i-learn-basic-cryptography-to-know-more-about-passwords-and-bitcoin/5943
# Where can I learn basic cryptography to know more about passwords and Bitcoin?

Basically my knowledge of passwords consists of setting up a Diceware master password a while back, and I know hashes are not convertible back to the original password. A basic question I want answered is: how much about a password can be predicted from its hash? For example, can its exact length be recovered, or at least estimated roughly? A newbie-friendly crypto guide would be helpful. A beginner's guide was requested on the Bitcoin Stack Exchange board as well.

## 2 Answers

A good hash doesn't give you any information about password length or anything else. The only attack against such a hash is guessing the password, and then using the hash to verify if the guess was correct. Depending on the hashing scheme, the cost per guess can vary widely. For example, with plain MD5 a single graphics card can try several billion guesses per second. That's why we use slow schemes, like scrypt, bcrypt or PBKDF2.

A good hash. But I guess most sites use a bad one, like 95% of them, right? And from a bad hash can you guess the length? – superuser Jan 7 at 15:23

@superuser In this respect pretty much any hash used in practice is good (even MD5). Only really bad homebrew hashes would be vulnerable to that. Using fast hashes or not salting are the two common issues. – CodesInChaos Jan 7 at 15:29

At the core of your question is a concept called entropy, which is the amount of uncertainty or unpredictability in a set of data. In cryptography, entropy is related to probabilities, expressed in terms of powers of 2 (bits). For example, a fair coin flip has one bit of entropy: it can be either heads (1) or tails (0). Flipping four coins gives you four bits. Rolling one six-sided die gives you about 2.6 bits of entropy, and so on. Ultimately, the number of bits of entropy represents the number of tries required to test every possible input (also known as a brute-force attack).

Be careful not to confuse a password with a hash of a password. A SHA-1 hash is 160 bits of data that looks very random, but isn't. If I were to tell my computer to guess every possible value for a SHA-1 hash ($2^{160}$), it would take longer than forever. But that doesn't mean the hash value has 160 bits of entropy. If I know the hash value is the result of running a user's password through a SHA-1 algorithm, as an attacker I don't have to try all $2^{160}$ possible values. I just have to figure out all the values you might have chosen for a password, and run them through the hash algorithm myself.

Because users are humans, the passwords they choose are frequently based in their native language. If you're guessing the password of an English speaker, it's common to start with a list of frequently chosen passwords, such as "god", "admin", "root", "password", "abc123", etc., and then move on to testing all the rest of the words in an English dictionary. If a dictionary has 200,000 words in it, the entropy is only about 18 bits, and it takes only a second or two for a computer to test all 200,000 possible dictionary words.

Note that password restrictions reduce the number of words I would have to test. If a password policy says "passwords must be 6 letters long", then I would first test all six-letter dictionary words, which is faster than testing all 200,000 dictionary words.
One thing we assume in all cryptosystems is that the attacker knows everything about the system in use, just not the values of the secrets involved. Check out the wiki page I linked above for a fairly readable introduction to the concepts of entropy.
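The entropy arithmetic above is easy to reproduce. The following Python sketch is my own illustration, not part of the original answer; the Diceware figure assumes the standard 7776-word list ($6^5$ outcomes per word, i.e. five dice rolls).

```python
# Entropy in bits of a uniform choice among n equally likely options
# is log2(n): the exponent in the size of a brute-force search.
from math import log2

print(f"coin flip:         {log2(2):.3f} bits")
print(f"six-sided die:     {log2(6):.3f} bits")        # ~2.585
print(f"200,000-word dict: {log2(200_000):.3f} bits")  # ~17.6

# Diceware: each word drawn from a 7776-word list (6**5 = 7776),
# so a k-word passphrase has k * log2(7776) bits of entropy.
for k in (4, 5, 6):
    print(f"{k}-word Diceware:   {k * log2(6**5):.1f} bits")
```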
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9432998895645142, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?t=273550
## some concrete limits

1. The problem statement, all variables and given/known data

Hello! I've got a few questions about limits.

º $$\lim_{x \rightarrow 0}\frac{\sin x}{x}=1$$

If I take values for x close to zero I get:
f(x) = sin(x)/x
f(0.1) = 0.017453283
f(0.01) = 0.017453292

As I can see, it is not even close to 1. What is the problem? Where am I going wrong?

º $$\lim_{x \rightarrow \infty}\frac{1}{x}=0$$

Now, for all integers I agree that $$\lim_{x \rightarrow \infty}\frac{1}{x}=0$$ (thanks to HallsofIvy for $\infty$), but what about x = 1/2, 1/3, 1/4? Then $\frac{1}{1/4}=4$ and not 0?

º $$\lim_{x \rightarrow 0^-}e^{\frac{1}{x}}$$

How will I find the value of the expression above?

3. The attempt at a solution

$$\lim_{x \rightarrow 0^-}e^{\frac{1}{x}}$$

I understand that x < 0 (so the values for x are tending to 0 from the left side), and $$\lim_{n \rightarrow \infty}x_n=0.$$

For example, I know how to find the value of $$\lim_{x \rightarrow 2^+}\frac{x}{x-2}.$$ Here D = R\{2}, $x_n > 2$, $$\lim_{n \rightarrow \infty}x_n=2, \qquad x_n-2>0, \qquad \lim_{n \rightarrow \infty}(x_n-2)=0,$$ so that $$\lim_{x \rightarrow 2^+}\frac{x}{x-2}=\lim_{n \rightarrow \infty}\frac{x_n}{x_n-2}=+\infty,$$ since the numerator tends to $2$ while the denominator $x_n-2$ tends to $0$ through positive values.

Thanks in advance.

Mentor's reply (Mark44):

Are you using a calculator to do these? If so, I think your calculator is in degree mode. It needs to be in radian mode. As the values of x get smaller, the value of your expression will get closer to 1.

The limit is for x growing very large, so you shouldn't concern yourself with small values of x. On the other hand, $$\lim_{x \rightarrow 0^+}\frac{1}{x}=\infty,$$ which is more closely related to what you're doing with 1/2, 1/3, and so on.

As x approaches 0 from the negative side, 1/x approaches negative infinity, so e^(1/x) approaches 0. Do you need more explanation than that?

Thanks for the post, Mark44.
Yes, I was using the calculator in degree mode. Now with radian mode everything is all right.

For the second one, sorry, I wasn't so clear. I was learning about the number "e". For one task (example):

$$\lim_{x \rightarrow \infty}\left(1+\frac{1}{x-3}\right)=\lim_{x \rightarrow \infty}\left(1+\frac{1}{x-3}\right)^{(x-3)+3}=\lim_{t \rightarrow \infty}\left(1+\frac{1}{t}\right)^t\cdot\lim_{t \rightarrow \infty}\left(1+\frac{1}{t}\right)^3=e\cdot(1+0)^3=e$$

As we can see, they put $$\lim_{t \rightarrow \infty}\frac{1}{t}=0.$$ How is this possible? What about t = 1/2, 1/3, 1/4? It wouldn't be zero in that case.

Mark44, sorry for the misunderstanding again. Yes, I understand all of that, but how will I "show" or "prove" that? Aren't there any calculations? Thanks in advance.

Mentor: For your first limit above, I think you are missing an exponent of x. In other words, I think it should be $$\lim_{x \rightarrow \infty}\left(1+\frac{1}{x-3}\right)^x.$$

Now I think I understand what you're saying. The first limit was as x approached infinity and involved an expression with (x - 3). They substituted t = x - 3 and changed the limit variable from x to t (as x gets very large, so does t). In the third limit expression (before they took the limit), there are two factors: $\left(1+\frac{1}{t}\right)^t$ and $\left(1+\frac{1}{t}\right)^3$. The second one is straightforward to evaluate in the limit, and turns out to be just 1. If you multiply it out before taking the limit, you have $1 + 3/t + 3/t^2 + 1/t^3$, which approaches 1 as t gets large. The first factor is trickier, and you can't just say that 1/t approaches 0 as t gets large. There are two competing effects going on: the base, 1 + 1/t, is getting closer to 1, but the exponent t is getting larger. The net effect is that (1 + 1/t)^t approaches the number e as t gets large. If I recall correctly, one of the definitions of e is precisely this limit.

Now I understand. I misjudged $\infty$. But for $$\lim_{x \rightarrow 0^-}e^{1/x},$$ I need to "show" or "explain" the result of the limit. How will I do that? Regards.

For x very close to 0, but negative, say -0.00001, 1/x is -100000, a very large negative number. What is $e^{-100000}$? What is $e$ raised to any very large negative number?

Thanks for the post. It definitely will tend to zero. And what about $$\lim_{x \rightarrow 0^+}\frac{1}{1+e^{1/x}}?$$ Is $$\lim_{x \rightarrow 0^+}e^{1/x}=\infty?$$
Mentor: For the first one, as x approaches 0 (from the right), 1/x grows without bound (approaches infinity), so 1 + 1/x also grows without bound, which makes the fraction approach 0. For the second, that's the right value.
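A quick numerical illustration of the two main points in this thread (my own addition, not from the forum): in degree mode a calculator effectively computes $\sin(\pi x/180)/x \to \pi/180 \approx 0.01745$, matching the values the original poster saw, and $e^{1/x} \to 0$ as $x \to 0^-$ while it blows up from the right.

```python
# sin(x)/x near 0: radians converge to 1; degree mode effectively
# computes sin(pi*x/180)/x, which converges to pi/180 ~ 0.0174533.
import math

for x in (0.1, 0.01, 0.001):
    radians = math.sin(x) / x
    degrees = math.sin(math.radians(x)) / x
    print(f"x = {x:6}: radians -> {radians:.9f}, degrees -> {degrees:.9f}")

# e^(1/x) from the left tends to 0; from the right it blows up.
for x in (-0.1, -0.01, -0.001):
    print(f"x = {x:7}: exp(1/x) = {math.exp(1/x):.3e}")
print("x = 0.01 (right side): exp(1/x) =", math.exp(1/0.01))  # ~2.7e43
```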
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 35, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9282182455062866, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/92589/what-does-it-mean-to-sample-a-value-x-from-fx/92625
## What does it mean to sample a value x* from f(x)?

This might be a really elementary question, but I'm not sure what it means. I have a density function f(x). How do I sample a value from f? For known distributions there are functions in R which do it for you (e.g. runif, rnorm, etc.), but how do I generate a random number using my own density?

Why the down votes? This is highly nontrivial (see my answer) – Igor Rivin Mar 29 2012 at 17:43

I agree with that; it had also intrigued me, and I was a bit confused to ask ;) – Amin Mar 29 2012 at 21:49

Read chapter 3.4.1 from Knuth's The Art of Computer Programming. – Zsbán Ambrus Mar 30 2012 at 6:28

## 3 Answers

In the simple case that $X$ is a real-valued random variable, the first thing I would reach for is the inverse-CDF method, especially since you have mentioned "runif", which gives draws from a uniform distribution. There is a pretty extensive literature on ways to sample from a variety of distributions, with names like Gibbs sampling, Metropolis-Hastings, slice samplers, perfect samplers, etc. A Google search of any of these should bring up a wealth of info. Did you want something more specific?

I am not going to answer the philosophical question of "what does it mean", but for the practical question, there is the Ziggurat method of Marsaglia to generate a sample from your favorite distribution. Read all about it.

I suppose a definition of a random sample would be a sequence of numbers $\{a_{n}\}$ such that, for any measurable set $S$, we have $\frac{1}{n}\sum_{i=1}^{n}\chi_{S}(a_{i})\rightarrow\mu(S)$ as $n\rightarrow\infty$, where $\mu(S):=\int_{S}f(y)\,dy$, with $f(y)$ your density. How to generate such a sequence is another, much more involved question; there's the inverse-CDF technique, various algorithms like Metropolis-Hastings or Gibbs sampling (for higher-dimensional densities), etc.
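Since the first answer recommends the inverse-CDF method, here is a minimal Python sketch of it. The density $f(x) = 2x$ on $[0,1]$ is an assumed example, chosen because its CDF $F(x) = x^2$ inverts in closed form as $F^{-1}(u) = \sqrt{u}$; the code is my own illustration, not from the thread.

```python
# Inverse-CDF sampling: push uniform draws u ~ U(0,1) through the
# inverse CDF, x = F^{-1}(u), to obtain samples from density f.
import numpy as np

rng = np.random.default_rng(0)

def sample_inverse_cdf(n):
    u = rng.uniform(0.0, 1.0, size=n)
    return np.sqrt(u)          # F^{-1}(u) for f(x) = 2x on [0, 1]

xs = sample_inverse_cdf(100_000)
# Moment check: E[X] = integral of x * 2x over [0,1] = 2/3.
print("sample mean:", xs.mean(), "  (theory: 2/3 =", 2/3, ")")
```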
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9440841674804688, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/119980?sort=oldest
## Concise model of modern fiat money and its non-conservation

A confession: I have never really understood the basic model of fiat money and central banking, by which a central bank controls the money supply. By the standards of someone trained in mathematics, all of the explanations that I have ever seen are either too short or too long. My impression is that the way that a central bank controls the money supply in a modern economy can be taken on faith (if you want a short explanation), or is hard to understand (if you want a long one), but I am suspicious of both of these extremes. I have also seen explanations that describe what happens "in effect" without clearly explaining the underlying rules. I would be interested in a concise mathematical summary of how a currency such as the US dollar is controlled. (I hope that it can be taken as an MO-appropriate question in mathematical economics.)

Here is a model that I understand, but that isn't true: A game such as Monopoly has a central bank that simply grants fiat money from time to time to private parties. I'm sure that this is the wrong way to run a real economy, but at the serious level I don't know why. In any case this is not how the Fed works, because it mostly lends money rather than simply granting it.

Here is a failed improvement of the model: Suppose that the bank in Monopoly only lent money to the players instead of granting it. Then the players would have no way to pay back the loans with interest! Maybe it could work if the players were allowed to accumulate debt --- but what would prevent unlimited borrowing?

I can believe in multiplier effects (although actually I don't know a rigorous definition). If transactions occur more and more quickly, or if assets get more and more leveraged, that could be equivalent to an increase in money. I have trouble believing that the central bank does not need to create money, and that we see inflation (except in depression circumstances) solely because money keeps travelling faster and faster and because the economy gets more and more leveraged.

An abstracted economy has the following actors, each operating according to certain financial rules: a central bank, a government budget, regulated private banks, and the rest of the private sector. (And foreign actors, who I suppose are an extension of the private sector.) I think that I know the basic financial rules for the last one, but not for the others.

To rephrase the question, I am hoping that there is a concise mathematical model that makes clear when money is created, and that looks dynamically stable with some controllable rate of inflation. A reference could be okay, but only if it has a good, specific explanation.

I don't think this is a math question. This is what my wife said when I passed the question on to her: I'm guessing you'd have to ask the economists at the Fed (or those who study Fed operations closely) to get a detailed mathematical model of how they decide exactly how much to expand the money supply in any given quarter. And bear in mind that the Fed has been winging it since the financial crisis, using unconventional (i.e., little studied) criteria for expanding that supply.
(To be continued) – Felipe Voloch Jan 27 at 3:08

Since Greg is looking for an explicit, albeit simple, mathematical model, and such a model is not immediately found by a few web searches, I think it is a reasonable question from the standpoint of applied mathematics. Regarding the question, I am speaking from a standpoint of ignorance, but I thought central banks expand the money supply by exchanging interest-free currency notes for assets. Such assets are often government bonds (in which case the profits from interest are returned as seigniorage), but also can be drawn from the productive capacity of the private sector. – S. Carnahan♦ Jan 27 at 3:31

I hope that it can be taken as a valid question in applied mathematics, although I realize that it does not work well as a pure mathematics question. For example, the rules of Monopoly are in fact mathematically rigorous (if not very interesting as mathematics) and are even worth discussing as an inaccurate toy model. – Greg Kuperberg Jan 27 at 3:31

That said, Scott's answer is the core story. Money in circulation can exist in the form of either paper currency or balances held at the Fed, and the Fed holds assets equal in value to that. If it prints up money, it uses that money to buy assets. If it takes money out of circulation, it sells assets. If you bury or burn money, then from this point of view it's still "in circulation". – arsmath Jan 27 at 9:28

@quid I'm sorry for the hints of controversy, but the fact is that I'm learning from the earnest answers by Greinecker and Landsburg. I don't think that it's fair to close a question just because there are some ineffectual answers that I didn't want either. – Greg Kuperberg Jan 27 at 16:33

## 5 Answers

I think an answer that discusses the actual institutional details of how the Fed controls the money supply would be off-topic here. Also, the Fed works slightly differently from the ECB in that regard, and there is more than one method of influencing the money supply (take a look at the wikipedia page on money creation). So I will try in this answer to demystify how a central bank can create money without literally sending out helicopters that drop fiat money on people.

First, one has to get right what money is. In explicit formal models, money is an asset that never pays out. If it has value, it is because there is a bubble in this asset. The first such model of money can probably be found in the 1958 paper An Exact Consumption-Loan Model of Interest with or without the Social Contrivance of Money by Paul Samuelson. It is worth pointing out that bubbles are not inherently bad, and that paper constructs a toy economy in which everyone profits from the money bubble.

Now how can one increase the supply of an asset that never has to pay out anyway? It sells the asset in exchange for other assets. Since money never has to pay out, the central bank will not face a solvency constraint in the process. Selling money is not that different from selling milk, but since there are no cows involved, central banks are not constrained by cost.

Btw: A paper that models open market policies explicitly can be found here: artsci.wustl.edu/~swilliam/papers/… I cannot give any guarantees as to the quality of the paper. – Michael Greinecker Jan 27 at 10:55

Let me see if I understand this. The Fed, I believe, operates independently of the US Government. Its assets and liabilities are separate from the US Government, right?
Moreover, when it "sells money", it is just buying assets, which I believe are usually investment-grade securities, most often Treasury but also agency bonds as well as agency-backed mortgage-backed securities. If it wants to reduce the money supply, it just sells the securities, right? And if it holds the securities to maturity, then that also reduces the money supply due to the interest paid. – Deane Yang Jan 27 at 15:58

The Fed is essentially independent of the US government, but there is some influence. It is not completely independent, but it is relatively safe from politicians trying to influence it for short-term gains. You have got the mechanism essentially right. For securities, things get a bit complicated because nominal payments are denoted in units of money. But the principle works with real assets that are not given in nominal terms. Then the price of the asset is essentially independent of the corresponding payment stream. In practice, the Fed does not try to reduce the money supply. – Michael Greinecker Jan 27 at 16:17

Thanks! I never understood this before. – Deane Yang Jan 27 at 16:50

The Fed holds Treasury bonds, which pay interest. The Fed also makes money for services it provides to the banking sector, such as processing checks. It also makes loans. All told, it takes in more money than it spends on wages and expenses. This difference it rebates to the Federal government. – arsmath Jan 27 at 21:06

For a mathematical model see Hayashi and Matsui, 1994. For an in-depth discussion without too many (actually, any) equations, see many books by Murray Rothbard (all available on Amazon.com).

I'm sure that Hayashi and Matsui wrote an interesting paper, but they seem more interested in establishing a result about fiat money than describing what it is --- after all, this paper was written in 1994 and it does not look like it is meant just as an exposition. It looks more technical than what I had in mind. As for Murray Rothbard, I would much prefer a short explanation with equations to entire books with no equations. – Greg Kuperberg Jan 27 at 2:14

I want to point out that Murray Rothbard is seen as a crank by most economists and has essentially no influence in academic economics. – Michael Greinecker Jan 27 at 10:06

@Michael: Economics is not really a science, and its social dynamics are tribal. Rothbard is a leading proponent of (ironically) the Austrian School of Economics (in honor of von Mises and Hayek), which is not the politically dominant school. I personally find the Austrian school extremely compelling, and find what "most economists" think (or claim to think) of little interest. I could expound at length on this, but at this point we are veering quite far from mathematics. – Igor Rivin Jan 27 at 15:03

I had voted to close the question as off topic, but you guys have convinced me that I should have voted it subjective and argumentative. – Felipe Voloch Jan 27 at 16:36

@Felipe - I could have voiced my own opinion of Murray Rothbard, but I chose not to. Not fair to vote to close my entire question because of this side discussion. – Greg Kuperberg Jan 27 at 21:51

I think economics is far more closely connected with the body politic than it is with mathematics.
Applying mathematics to economics is also a political act and a signification. (Mathematics has an association with permanence which can be used symbolically to shore up a certain contingent political/economic order.) Physics examines the world by supposing the physical world follows a rational order, and that by dint of effort this order is discoverable. I can't see how this applies to the social order of societies; how does one measure wealth, imagination, violence, ethics, power, desire, criminality? Whereas mathematics applied to physics captures something of its fundamental relationships, it appears to me that a mathematical model of a social order can only capture superficial and contingent things.

How is this answering the question? – Michael Greinecker Jan 27 at 14:33

By denying that it's answerable in its own terms? When Newton wrote about his law of gravitation, he made certain he could model both time and space mathematically, the background to his physics. I'm denying that this background is available in economics. Theories don't exist in a vacuum, they exist in a larger theoretical space. – Mozibur Ullah Jan 27 at 15:28

The question was for a formal model of how the Fed influences the money supply. Whether such a model is useful or not is not part of the question. – Michael Greinecker Jan 27 at 16:05

-1 for off-topic. – quid Jan 27 at 16:13

Turning my last comment into an answer: The simplest model of money demand is $M=M(P,Y,i)$ where $P$ is the price level (if all prices rise, you'll probably want more money in your pocket), $Y$ is real income (if you're richer, you might want more money in your pocket) and $i$ is the nominal interest rate (if the interest rate rises, you'll want to hold more bonds and consequently less money). In the simplest models, $Y$ is determined by non-monetary factors, and (thinking now of everything as a function of time) $i=r+P'(t)$ (where $r$ is determined by non-monetary factors). This follows from the assumption that prices are perfectly flexible, so that $Y$ has to be determined by supply and demand in the markets for goods and labor.

At time $t$, the money supply is $M_0(t)$, where $M_0$ is a function chosen (in the simplest models) by the Fed. Equilibrium requires $M=M_0$. (If, for example, $M$ is less than $M_0$, so that people are unwilling to hold $M_0$ dollars, they will attempt to dispose of dollars by exchanging them for goods, which bids up $P$ and causes $M$ to rise. Likewise in the opposite direction.) So the key equation is $M(P(t),Y(t),r+P'(t))=M_0(t)$ with $Y(t)$, $r$ and $M_0$ determined outside the model.

A more sophisticated model would make $M(t)$ dependent on expected future values of $P$ and $i$, and include an account of how those expectations are formed. So you should view this as the freshman version of the story, not the grad school version.

Thanks for this basic review. Your summary of the money demand equation was helpful and got me thinking along the right lines, but in the end Michael's answer looks a little closer to what I was missing in my thinking. In looking at your equation, I was stuck on where Y really comes from, i.e., how income is possible if all money is lent from the central bank. However, not all money is lent from the central bank, due to seigniorage. (And some of it is lent by the central government and then spent, but I didn't think that that was the only non-conservation term.) – Greg Kuperberg Jan 27 at 21:34

I don't get where you get this "money is lent from the central bank" idea.
If you have money, it's your money -- you're not supposed to give it back to the central bank some day. In the Monopoly money analogy, the bank doesn't lend you money, it gives you money, and you give it a hotel you built on Park Place. – arsmath Jan 27 at 22:42

@arsmath Yes, but a Monopoly board is not an accurate model. One of the Fed's activities is to lend money to commercial banks, which raises the question of how they might ever pay it back. If all money could be traced back to the Fed lending money to commercial banks, they wouldn't be able to. – Greg Kuperberg Jan 27 at 23:04

@Steve - Sorry, I should say it this way: In the more restricted formula $M = P \cdot L(Y,i)$, I knew where $Y$ comes from, but I didn't know where $P$ comes from, and I also didn't know the real-life mechanism of $M_0$. But now I see how your equation could work as an answer to the second part --- $M_0$ is set by policy (and has no conservation property due to seigniorage) and your equation becomes a differential equation for $P$. Anyway I wish I could accept more than one answer, yours and Michael's, but MO only lets me choose one. – Greg Kuperberg Jan 27 at 23:46

@Greg Okay, I see. I hadn't heard of the idea that all money could be traced back to Fed lending. – arsmath Jan 28 at 7:11

Thanks to the comments and answers from Scott Carnahan and Michael Greinecker, I think that I understand it better now. I'm going to write this as a CW summary answer and also accept one of the other answers.

People often talk as if all currency is borrowed from the central bank, but that is not really true. If it were literally true, it wouldn't make sense, because there would be no way to pay the principal and interest back to the central bank. Or otherwise, if all money is borrowed, then an economy's total monetary assets stay at zero, which is not strictly impossible, but doesn't sound right.

What I guess actually happens is that the central bank both buys and sells treasury bonds. Even though this is done for interest rate stability, the central bank is perfectly happy to sell high and buy low, thereby violating conservation of money. It is also counterintuitive in the following respect: Although in the short term a high interest rate contracts the money supply, in the long term the interest paid expands it again. Nonetheless, I guess that the demand to have money to trade sustains the value of the money and keeps everyone from just buying treasury bonds at high interest. I guess here you would point to the money supply equation that Steven Landsburg posted. (It does not leap out at me that it really leads to currency stability, but I can believe it.)

Also, to get a currency started, the central bank can first buy or sell other commodities, for instance gold, so that the private sector then has money to buy treasury bonds. Another counterintuitive point (but one that doesn't bother me) is that if the central bank trades commodities at a monetary "loss", then actually it has gained those commodities. This inverted mode of gain by a bank seems to be one meaning of "seigniorage". Another meaning is any increase of the money supply from the central bank's trades, so at some level seigniorage is the main answer to my question.

Another player is the national government. Unlike the private sector, it is allowed an unlimited amount of debt. So, a second non-conservation of money is deficit spending, if in tandem the central bank keeps lowering the interest rate.
Unlike seigniorage, this may be de facto non-conservation of money, but it is not de jure non-conservation of money, if the government keeps an honest account of how much it borrows. (As Deane and Michael discuss, this honesty is only really possible if the central bank is politically independent from the government budget.) A third type of non-conservation of money is a default by a commercial bank that owes money to the central bank. But this does not look like a natural way to increase the money supply, and I don't think that it is.

Greg: I'm having trouble seeing why you view the "non-conservation of money" as somehow more in need of explanation than, say, the non-conservation of refrigerators. The number of refrigerators can change because there are companies that produce refrigerators. The amount of money can change because there are companies (called "banks") that produce money. Why is one more mysterious than the other? – Steven Landsburg Jan 27 at 21:36

Because I didn't know which financial rules make de jure non-conservation of an official currency possible. That was my real question. (De facto non-conservation of money is less surprising, but the two are related.) – Greg Kuperberg Jan 27 at 21:42

Besides, your analogy with manufacturing refrigerators is an explanation of money, but not fiat money. A central bank does not accumulate a hoard of refrigerators in exchange for issuing cash. Instead, it does something more circular which is still (usually) stable. – Greg Kuperberg Jan 27 at 21:49

I'm with Steve -- I don't see what's so mysterious. There's no theoretical reason why a central bank couldn't operate with a hoard of refrigerators. The Fed could print up money, and use it to buy baseball cards, or gourmet yogurt stands, or slivers of the True Cross. The mystery is not what the Fed does, but what everyone else does. What determines the value of the money that the Fed prints, and why is that value nonzero? – arsmath Jan 27 at 22:38

@arsmath - If the Federal Reserve could hoard widgets, that's beside the point, because it doesn't. Every other party in the economy operates by certain accounting rules and practices. The rules and practices for a central bank with fiat money are different, and I asked because I didn't have a complete view of them. If you think that the complementary question would have been a better question, you could be right, but my thinking got stuck where it did and I wanted help from MO. – Greg Kuperberg Jan 27 at 22:53
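To connect Landsburg's key equation with Greg's remark that it "becomes a differential equation for $P$", here is a hedged sympy sketch of my own. The functional form $M(P,Y,i) = P \cdot Y / i$ is an assumption made purely for illustration, and I use the standard inflation term $P'/P$ where the thread writes $P'$ (the two agree if $P$ is read as a log price level). The check confirms that with the money supply growing at rate $g$, a price level proportional to $M_0$ solves the equation, so inflation tracks money growth.

```python
# Steady-state check of M(P, Y, i) = M_0(t) with the assumed form
# M(P, Y, i) = P * Y / i and nominal rate i = r + P'/P.
# Candidate path: P(t) = M_0(t) * (r + g) / Y with M_0(t) = M * exp(g*t).
import sympy as sp

t, M, g, r, Y = sp.symbols('t M g r Y', positive=True)
M0 = M * sp.exp(g * t)          # money supply chosen by the central bank
P = M0 * (r + g) / Y            # candidate price-level path

i = r + sp.diff(P, t) / P       # nominal rate = real rate + inflation
money_demand = P * Y / i        # assumed illustrative demand function

print(sp.simplify(money_demand - M0))  # 0: the path solves the equation
print(sp.simplify(sp.diff(P, t) / P))  # g: inflation = money growth rate
```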
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 28, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9636937975883484, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/hamiltonian+energy-conservation
# Tagged Questions

### The relation between Hamiltonian and Energy (1 answer)

I know the Hamiltonian can be the energy and a constant of motion if and only if the Lagrangian is time-independent, the potential is independent of velocity, and the coordinates are time-independent. Otherwise ...

### Canonical transformations and conservation of energy (2 answers)

I have an important doubt about the nature of canonical transformations in Hamiltonian mechanics. Suppose I have a one-degree-of-freedom Lagrangian system, whose Hamiltonian depends explicitly on ...

### Equation $H(q,p)=E$ is the equation of motion or energy-conservation law? (3 answers)

I do not completely understand why we consider the Hamilton–Jacobi equation $H(q,p)=E$ as an equation of motion, whereas it looks like an energy-conservation law.

### Is there a valid Lagrangian formulation for all classical systems? (3 answers)

Can one use the Lagrangian formalism for all classical systems, i.e. systems with a set of trajectories $\vec{x}_i(t)$ describing paths? On the Wikipedia page of Lagrangian mechanics, there is an ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8664502501487732, "perplexity_flag": "middle"}
http://physics.stackexchange.com/tags/vortex/hot
# Tag Info

## Hot answers tagged vortex

### Frequency of Vortex Shedding

I think @Killercam is right; I'll try to explain the same thing a little more elaborately. Firstly, in the case considered, since the fluid and the cylinder are chosen, an increase in velocity directly translates to an increase in the Reynolds number, as $R_e = \frac{\rho V D}{\mu}$. Before considering flow in the range $250 < R_e < 2\times 10^5$, let's ...

### Can a vortex be self-sustaining? [closed]

Self-sustaining vortices without dissipation (energy loss) are possible in superfluids (like, e.g., liquid helium) because there is no internal friction (viscosity) for the superfluid component. Rotation goes on by inertia. This is as close as I can imagine to a "self-sustaining vortex", although admittedly it has little to do with space-time.

### Frequency of Vortex Shedding

The velocity of the flow divided by the diameter of the cylinder sets the inverse of the typical crossing time of the fluid, hence it is directly related to the frequency of the observed oscillations for a specific Reynolds number. It is as simple as that. Clearly, this time scale is then correlated with observation to provide the Strouhal number for this particular phenomenon. ...

### Why does water in the sink follow a curved path?

The difference between rain and water in the sink is that rain is simply falling, while water in the sink is being drawn into a center from a distance away, and the water in the sink is not perfectly still. It is rotating, if only a little bit. As it is drawn to the center, the rotation becomes more rapid. The principle is Conservation of Angular Momentum. ...

### Why does water in the sink follow a curved path?

In basic principle, both could do the same thing. Pragmatically, water in a drain has the resistance of the sink/drain walls to influence the effect. (This is a hairpin vortex regime.) Basically, vortices differ per sink. Surface tension of a rain drop exceeds wind friction. Coriolis forces still exist within the rain drop, and could produce a ...

### Explanation for the next steps of chaplygin dipole

Equation (2.5) expresses the velocity field in terms of the stream function. It's not clear to me that it really should be presented at this stage in the process; I guess it's useful to impose the conditions at infinity. Equation (2.6) expresses that the two pieces of the total solution, the one inside the disk $\psi$ and the one outside $\psi_1$, have to ...

### how to determine if a vortex is laminar or turbulent

Firstly, let us define what is meant by turbulent and laminar in a case such as the one you describe... The Reynolds number of a flow gives a measure of the relative importance of inertial forces (associated with convective flow) and viscous forces. From experimental observations it is seen that for values of Re below the so-called critical Reynolds number ...

### Explanation for the next steps of lamb-chaplygin dipole

When introducing the stream function, the steps that you usually take are as follows. Replace $u$ and $v$ by the stream function. Differentiate the horizontal momentum equation (for $u$) with respect to $y$ and the other with respect to $x$. Eliminate the pressure term, to end up with a single equation in $\psi$.

### Vortex street and Reynolds number

Turbulence isn't the same as unsteadiness - a vortex street is not necessarily a turbulent phenomenon. As an analogy that (for some reason) I find easier to understand, consider a convection experiment where we heat a fluid at the bottom and cool it at the top.
Below a certain threshold value for the temperature difference, the heat is transferred only by ...

### Vortex street and Reynolds number

The point at which the flow becomes turbulent is very sensitive to the flow geometry. The value of 2,300 you quote applies to flow in smooth pipes. To take an example, the flow round a sphere ceases to be laminar at an Re of about 1. The flow becomes increasingly turbulent as you raise the Reynolds number, until vortex shedding starts around Re = 50.

### Effect of rotation on turbulence threshold for Reynolds number?

This is a reasonable question. At the scale of a waterspout, the inertial forces of fast-moving air should be large compared to the viscous forces (i.e., a very large Reynolds number). Yet the inflow along the surface of the water is laminar, where we would ordinarily expect boundary-layer vorticity (i.e., turbulence). A detailed description of the expected ...
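The dimensional relations quoted in these answers (Reynolds number, and shedding frequency via the Strouhal number $f = St \cdot V / D$) are easy to script. The sketch below is my own illustration; the air properties and the Strouhal value $St \approx 0.2$ for a circular cylinder are assumed typical values, not numbers taken from the answers.

```python
# Reynolds number Re = rho * V * D / mu, and vortex-shedding frequency
# f = St * V / D (St ~ 0.2 is a typical assumed value for a circular
# cylinder in the shedding regime).
rho = 1.225     # air density, kg/m^3 (assumed sea-level value)
mu = 1.81e-5    # air dynamic viscosity, Pa*s (assumed ~15 C)
D = 0.02        # cylinder diameter, m
St = 0.2        # assumed Strouhal number

for V in (0.5, 2.0, 10.0):           # flow speeds, m/s
    Re = rho * V * D / mu
    f = St * V / D                   # shedding frequency, Hz
    print(f"V = {V:5.1f} m/s   Re = {Re:9.0f}   f = {f:6.1f} Hz")
```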
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9239079356193542, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-algebra/58618-finding-roots-polynomials-over-polynomial-quotient-rings.html
# Thread:

1. ## Finding Roots of Polynomials over Polynomial Quotient Rings

Hi all,

Just wondering how I would go about finding roots of a polynomial, say t^3 + t^2 + 1 or t^3 + 1, over a quotient ring, say Z/2Z[x] / <x^3+x+1>. Is there a general way to do this, or do I have to plug and chug all the elements of the field? This is annoying for fields with more than 4 elements! I've programmed the computer to solve them for me (the plug-and-chug way), but I don't know how to do it without the computer.

Thanks,
Julian

2. Originally Posted by aznmaven: [quoted above]

Let $K = \mathbb{Z}_2[x]/(x^3+x+1)$. Let $\alpha = x + (x^3 + x + 1)$. Notice that $\alpha^3 + \alpha + 1 = 0$. Therefore, $\alpha^3 = \alpha + 1$ - remember $\text{char}(K)=2$. You want to determine if $t^3+t^2+1$ has any roots. A messy approach here is to notice that any element in $K$ can be written uniquely as $a+b\alpha + c\alpha^2$ where $a,b,c\in \mathbb{Z}_2$. Substitute that into the polynomial and equate coefficients to zero, remembering that $\alpha^3 = \alpha + 1$.
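Since the field here has only 8 elements, the plug-and-chug search is easy to automate. The Python sketch below is my own (the poster mentions having programmed something similar); it represents elements of $\mathbb{Z}_2[x]/(x^3+x+1)$ as 3-bit integers and finds the roots of both polynomials by exhaustion.

```python
# Brute-force root search in GF(8) = Z/2Z[x] / (x^3 + x + 1).
# Field elements are bitmasks of polynomials of degree < 3 (0..7);
# e.g. 0b011 = 3 represents x + 1.
MOD = 0b1011  # x^3 + x + 1

def gf8_mul(a, b):
    # Carry-less multiplication, then reduction mod x^3 + x + 1.
    prod = 0
    for shift in range(3):
        if (b >> shift) & 1:
            prod ^= a << shift
    for deg in range(4, 2, -1):          # reduce degrees 4 and 3
        if (prod >> deg) & 1:
            prod ^= MOD << (deg - 3)
    return prod

def gf8_pow(a, n):
    result = 1
    for _ in range(n):
        result = gf8_mul(result, a)
    return result

def roots(coeffs):
    # coeffs: c_0, ..., c_n of c_0 + c_1 t + ... + c_n t^n over GF(2)
    found = []
    for t in range(8):
        val = 0
        for i, c in enumerate(coeffs):
            if c:
                val ^= gf8_pow(t, i)
        if val == 0:
            found.append(t)
    return found

print("roots of t^3 + t^2 + 1:", roots([1, 0, 1, 1]))  # [3, 5, 7]
print("roots of t^3 + 1:      ", roots([1, 0, 0, 1]))  # [1]
```

The first polynomial is irreducible over $\mathbb{Z}_2$, so it splits completely in its splitting field $GF(8)$: the three bitmask roots 3, 5, 7 are $\alpha+1$, $\alpha^2+1$ and $\alpha^2+\alpha+1$ in the notation of the answer above, while $t^3+1$ has only the root $1$ (the multiplicative group has order 7, coprime to 3).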
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9309143424034119, "perplexity_flag": "head"}
http://mathoverflow.net/questions/12622/between-abstract-and-concrete-whats-the-right-way-to-think-of-specific-categori/12646
## Between abstract and concrete: What's the right way to think of specific categories?

At the risk of annoying some of the categorists, I feel urged to pose this beginner-ish question: If one talks about a specific category such as the category of sets with functions, the category of groups with group homomorphisms, or the category of topological spaces with continuous maps (let's restrict to these), what should I have in mind, how should I think of it?

1. a sheer structure of point-like objects and arrows which is merely isomorphic to a class of set-theoretic objects with set-theoretically definable morphisms between them (e.g. functions as sets), or

2. the class of set-theoretic objects itself (plus morphisms), or

3. what else?

In case of (1), shouldn't for example the category of sets be termed "the (abstract) category which is isomorphic to the (concrete) class (not category!) of all sets with functions" (as we would talk about "the unlabelled graph X which is isomorphic to the labelled graph Y")? And only because this is inconvenient, we talk of "the category of sets"?

[Added:] It's common talk to say "Set is the category whose objects are all sets...". This sounds like taking position (2).

Side question: There is the notion of "the category of models of a theory with elementary maps". Is the category of groups with group homomorphisms the same as the category of models of group theory with elementary maps? If not, why not? (Made a separate question out of this.)

I just changed "homeomorphism" to "continuous map". – Hans Stricker Jan 22 2010 at 12:08

Hans, it sounds to me like what you should do is curl up with a nice book like Mac Lane, which will presumably explain many of these issues and more. – Qiaochu Yuan Jan 22 2010 at 13:26

Even Mac Lane defines Set as "the category of all small sets" (p. 12), and I read this as "whose objects are the small sets". Can you point me to a location where Mac Lane faces my question, or is it just implicitly answered (like the "inner structure" of an object is just implicitly determined by its hom-sets)? – Hans Stricker Jan 22 2010 at 13:49

And then I stumbled over this (p. 10): "A category (as distinguished from a metacategory) will mean any interpretation of the category axioms within set theory." So Mac Lane is siding with position (2)? What then about your answer below? – Hans Stricker Jan 22 2010 at 13:52

My understanding is that what Mac Lane calls a metacategory is what most people are happy to call a category, for example the nLab (ncatlab.org/nlab/show/category). But I'll wait until a real category theorist weighs in on the issue. – Qiaochu Yuan Jan 22 2010 at 13:57

## 6 Answers

If one is talking about a specific group, say the group $\mathbb{Z}/3\mathbb{Z}$, should one think of it as:

1. a set of three "atoms" labeled a, b, c, together with a multiplication law (aa = a, ab = b, ...) and a zero element (a), or

2. a set {a, b, c} of three particular sets, say a = {}, b = {{}}, c = $\aleph_4$, together with an addition law...?

I'm sure everyone has their own personal preference. For me, (1) corresponds more closely to my intuition, but as long as you understand the relevance of the notion of evil concepts, there's nothing you can't do with (2) as well.
If I think of $\mathbb{Z}$'s standard model (pairs of naturals in the von Neumann standard model), I would firstly think of the group $\mathbb{Z}/3\mathbb{Z}$ as a set consisting of three equivalence classes on $\mathbb{Z}$, and after that I would abstract from this model and think of the group abstractly (like you do in 1). It would never come to my mind to think of this group as in 2. – Hans Stricker Jan 23 2010 at 14:54

There's a "really close correspondence" between quivers and categories, where quivers are directed graphs that can have multiple arrows from one vertex to another, and also loop arrows, which are arrows from a vertex to itself. Isomorphisms become undirected edges. This is a really good and precise way to think about it, because this viewpoint generalizes very nicely to some models of higher category theory, specifically A. Joyal's theory of quasicategories.

The whole beauty of category theory is that all of the information about an object is contained within its arrows, and that the underlying thing that the category represents is not actually important. That is, we have all of the information about the category by:

a.) knowing the structure of the graph of the category,

b.) knowing the structure of the hom-sets (which don't always have to be sets), and

c.) knowing the extra structure that lives over the graph (like a Grothendieck topology or a model structure; this is unrelated to the models you were talking about, it has to do with abstract homotopy theory).

The only place that it's nice to have sets is for defining the hom-sets in an unenriched setting. Without some notion of a set, it's hard to get important theorems like Yoneda's lemma.

Lawvere famously came up with two categorical foundational theories, ETCC and ETCS. At the moment, ETCC is pretty much useless. It contains ETCS as a subaxiomatization, but all of the structure axiomatized in ETCC can be constructed from ETCS (depending on whether you take the topos of sets to be boolean or not, and some other unimportant technicalities).

ETCS = Elementary Theory of the Category of Sets
ETCC = Elementary Theory of the Category of Categories

I've never heard of those "extra structures". How do they come in? – Hans Stricker Jan 22 2010 at 12:44

Regarding the vivid discussion in the comments after the question (and hopefully, also of some interest for the question itself): I think that a "metacategory" is a definition by axioms, using only first-order language, while "interpretation" means an interpretation as in logic (say, as in p. 29 of Ebbinghaus-Flum-Thomas). So such an interpretation (a category) is a set, or for convenience, several sets: a set of "objects," a set of "arrows," two functions (that is, two more sets) "dom, cod" from the set of arrows to the set of objects, a function "1" from the objects to the arrows, a function "$\circ$" on the pairs of composable arrows, etc., that satisfy the first-order axioms of a metacategory. In summary, I agree with the comment of Qiaochu Yuan: set theory is involved, but not because the objects should somehow be "sets with structure."

Position 2 is only tenable because the categories you describe automatically come with forgetful functors to $\text{Set}$. But in order to think about more general categories (say, homotopy categories), you can't and shouldn't think this way.
One way to resolve this situation is to define "concrete category" to mean a category together with a particular forgetful functor to $\text{Set}$, since a particular abstract category may be concrete in more than one way, and the functor encodes extra information. In other words, I guess I'm siding with Position 1.

Edit: With regard to your edit, as Harry says, there is some set theory necessary to set up category theory, so it all depends on your approach. But I would say that defining a category to be "the category of these kinds of sets with these kinds of functions between them" is no different from defining a group via one of its faithful actions or representations, or defining a manifold via one of its embeddings into $\mathbb{R}^n$. While we pick a particular instantiation to describe what we're talking about, we then talk about the abstract thing.

- Would you agree then that "the category of sets" is a convenient shortcut for "the (abstract) category that is isomorphic to the class of all sets with all functions between them"? Or doesn't this make sense? – Hans Stricker Jan 22 2010 at 12:01

@Hans: To my knowledge, every definition of a category uses some of the theory of sets to develop the basic results in category theory. As I said before, and as you can read in the discussion on the nLab, ETCC is not very good at all. I am not aware of a single useful result from ETCC that doesn't also appear as a consequence of ETCS. – Harry Gindi Jan 22 2010 at 12:07

@Qiaochu, we specifically want an adjoint pair of functors rather than just a particular forgetful functor. – Harry Gindi Jan 22 2010 at 12:10

Why should we have to think about everything the same way? We don't even think of all groups the same way! See mathoverflow.net/questions/2551/… – Charles Siegel Jan 22 2010 at 13:45

@Hans: trying to think about everything the same way is not "trying to be unreasonably rigorous", as there is no rigorousness in it at all, but being a bit silly :) – Mariano Suárez-Alvarez Jan 22 2010 at 16:06

Some (abstract) categories allow their objects and arrows to be "interpreted by themselves", without recourse to sets from set theory, especially poset categories via Dedekind completions.

-
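As a toy illustration of the two positions debated in this thread (an editorial sketch, not part of the original discussion; the encoding as Python dictionaries is just one arbitrary choice), here is the one-object category whose three arrows form $\mathbb{Z}/3\mathbb{Z}$, first as bare combinatorial data with a composition table (position 1), then with its arrows realized as honest functions on an underlying set (position 2):

```python
# Position (1): objects and arrows as pure symbols plus a composition
# table; the objects and arrows have no set-theoretic "inside".
objects = {"*"}
arrows = {"id", "g", "gg"}                       # id = identity arrow on *
compose = {("id", a): a for a in arrows}         # compose[(f, h)] means f . h
compose.update({(a, "id"): a for a in arrows})
compose.update({("g", "g"): "gg", ("g", "gg"): "id",
                ("gg", "g"): "id", ("gg", "gg"): "g"})

# Position (2): the "same" category concretely, with arrows being actual
# functions on the set {0, 1, 2} (rotation by 0, 1 or 2 steps mod 3).
concrete = {name: (lambda k: (lambda x: (x + k) % 3))(k)
            for name, k in [("id", 0), ("g", 1), ("gg", 2)]}

# Composing the concrete functions reproduces the abstract table,
# e.g. g . gg = id, so the two descriptions are isomorphic as categories.
assert all(concrete["g"](concrete["gg"](x)) == concrete[compose[("g", "gg")]](x)
           for x in range(3))
```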
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9426527619361877, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/23989/are-the-norms-of-graphs-dense-in-any-interval
## Are the norms of graphs dense in any interval?

It is known that there is a gap between 2 and the next largest norm of a graph. Is there an interval of the real line in which norms of graphs are dense?

- What is the norm of a graph? – Qiaochu Yuan May 9 2010 at 4:56

The norm of a graph is the largest eigenvalue of its adjacency matrix. Scott's question mathoverflow.net/questions/1822/… has some motivation for thinking about graph norms. – Anton Geraschenko♦ May 9 2010 at 5:35

## 1 Answer

I found a reference that seems to answer your question:

Shearer, James B., "On the distribution of the maximum eigenvalue of graphs", 1989.

The theorem in this paper is that the set of largest eigenvalues of adjacency matrices of graphs is dense in the interval $\left[\sqrt{2+\sqrt{5}},\infty\right)$. Here's an online version.

Here's a related paper:

Hoffman, Alan J., "On limit points of spectral radii of non-negative symmetric integral matrices", 1972.

In this paper, limit points less than $\sqrt{2+\sqrt{5}}$ are described. In particular, they form an increasing sequence starting at 2 and converging to $\sqrt{2+\sqrt{5}}$. Here's an online version. The author also posed the problem that led to Shearer's paper.

- These results and more are mentioned in Proposition 1.1.5 of Coxeter Graphs and Towers of Algebras by Goodman, de la Harpe, and Jones, 1989. – Jonas Meyer May 9 2010 at 9:05

Wait, did you just tell Vaughan Jones about his own paper? MO has jumped the shark. – Ben Webster♦ May 9 2010 at 14:26

Perhaps I should end up with a negative score because of this... However, I assure you that one does not have to be gaga to have forgotten what was in a book one wrote with 2 coauthors over 20 years ago. Looking at it, it seems that the Shearer result was not yet published when our book was finished; it does not seem to be mentioned in appendix I, which is the only place in the book I looked when looking for the answer to the question. Apologies, especially to Pierre if he reads this. – Vaughan Jones May 9 2010 at 15:52

The set of limit points was vaguely reminiscent of the set of possible Jones indices, which is why I looked. I found the proposition using the listing of $(2+5^{1/2})^{1/2}$ in the index. I then hesitated to mention it, not wanting the comment to be seen as criticism, but I had to. We haven't met, but I imagined it would be worth a laugh. No need to apologize. – Jonas Meyer May 9 2010 at 17:34

@Everybody: No worries. It's a good thing that this tidbit is sitting somewhere on the internet, waiting to be found by google (try googling some combination of title key words; you'll see this page is quite high). I, at least, am of the opinion that anything that gets more good content on the site is a good thing. It's also good to see you here, Vaughan. – Ben Webster♦ May 9 2010 at 18:15
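To get a numerical feel for these statements, here is a small editorial sketch (the helper names are made up; only standard numpy calls are used). Path graphs $P_n$ have norm $2\cos(\pi/(n+1))$, approaching 2 from below, while a star with five edges already has norm $\sqrt{5} \approx 2.236$, above Shearer's threshold $\sqrt{2+\sqrt{5}} \approx 2.058$:

```python
import numpy as np

def graph_norm(edges, n):
    """Norm of a graph on vertices 0..n-1: the largest eigenvalue
    of its (symmetric) adjacency matrix."""
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    return np.linalg.eigvalsh(A)[-1]   # eigvalsh sorts eigenvalues ascending

path = lambda n: [(i, i + 1) for i in range(n - 1)]   # path graph P_n
star = lambda n: [(0, i) for i in range(1, n)]        # star K_{1,n-1}

threshold = np.sqrt(2 + np.sqrt(5))
print(f"sqrt(2+sqrt(5)) = {threshold:.4f}")
for n in (4, 10, 100):
    print(f"norm of P_{n} = {graph_norm(path(n), n):.4f}")   # -> 2 from below
print(f"norm of K_1,5 = {graph_norm(star(6), 6):.4f}")       # sqrt(5), above threshold
```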
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9584304690361023, "perplexity_flag": "middle"}
http://www.exampleproblems.com/wiki/index.php/Geo5.1.27
# Geo5.1.27

Find the locus of the midpoints of chords of the parabola $y^2=4ax$ which subtend a right angle at the vertex of the parabola.

Let $P(x_1,y_1)$ be the midpoint of a chord BC of the parabola $y^2=4ax$. By the midpoint form of a chord ($T=S_1$), the equation of BC is

$yy_1-2a(x+x_1)=y_1^{2}-4ax_1$

that is,

$yy_1-2ax=y_1^{2}-2ax_1$

Join AB and AC, where A is the vertex. Since the angle BAC is a right angle, homogenise $y^2=4ax$ with the chord equation:

$y^2-4ax\left[\dfrac{yy_1-2ax}{y_1^{2}-2ax_1}\right]=0$

$(y_1^{2}-2ax_1)y^2-4ay_1xy+8a^2x^2=0$

This is the combined equation of the lines AB and AC. Since $\angle BAC=90^\circ$, the coefficient of $x^2$ plus the coefficient of $y^2$ must vanish:

$8a^2+(y_1^{2}-2ax_1)=0$

$y_1^{2}=2a(x_1-4a)$

Therefore the locus of P is $y^2=2a(x-4a)$.
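As a sanity check of the result (an editorial sketch, not part of the original page), one can verify the locus symbolically using the standard parametrization $(at^2, 2at)$ of the parabola: perpendicularity at the vertex forces $t_1 t_2 = -4$, and the midpoint then satisfies the claimed equation.

```python
import sympy as sp

a = sp.symbols('a', positive=True)
t1 = sp.symbols('t1', real=True, nonzero=True)

# B and C on y^2 = 4ax, parametrized as (a t^2, 2 a t); vertex A = (0, 0).
# Slopes of AB and AC are 2/t1 and 2/t2, so AB perpendicular to AC gives
# (2/t1)(2/t2) = -1, i.e. t2 = -4/t1.
t2 = -4 / t1
B = (a * t1**2, 2 * a * t1)
C = (a * t2**2, 2 * a * t2)

# Midpoint of BC
xp = (B[0] + C[0]) / 2
yp = (B[1] + C[1]) / 2

# Verify the locus y^2 = 2a(x - 4a): the difference simplifies to 0.
print(sp.simplify(yp**2 - 2 * a * (xp - 4 * a)))   # prints 0
```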
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 14, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.7475576400756836, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?t=130786
## Ventilation Systems

Hi... Probably a stupid question, but here it goes: I have some equations here; some of them have a G for rate of generation and others have an E for rate of evaporation (both of them have units of volume per time). My question is: if I know that I have 3 gallons of a chemical at the beginning of the day in one tank, and at the end of the day I only have 2 gallons, then I lost 1 gallon per day. But is this number E or G? I mean, I don't understand what the difference is. I know there is an equation relating them, but I don't understand the concept.

Is the problem that you don't understand the difference between generation and evaporation? What is meant by generation in this context? Evaporation is pretty self-explanatory, but generation remains ambiguous with what you have described. Generation sounds like it could possibly describe, among other things, water condensation in a system. You have a variable volume; if you know the rate of evaporation and your final volume, you can therefore calculate how much of the substance has entered the system (possibly what generation is referring to here).

Consider a sink of water. Water evaporates throughout the day, and you start with a known volume $V_0$. At the end of the day you measure the volume of water in the sink again, call it $V_1$. From a formula for the rate of evaporation (based on the conditions) you know that you lose an amount $x$ of water through evaporation during the day. The change in the volume of water is the amount gained minus the amount lost, where G is the amount gained:

$(V_1 - V_0) = G - x$

That is one possible interpretation of the meaning of generation and G in this context. I'm guessing, though.
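A minimal numeric sketch of that balance, applied to the 3-to-2-gallon example from the question (the function name and the assumed evaporation figures are made up for illustration):

```python
def generation(v_start, v_end, evaporated):
    """Volume balance over a period: (V_end - V_start) = G - E,
    so the generated (added) volume is G = (V_end - V_start) + E."""
    return (v_end - v_start) + evaporated

# Tank goes from 3 gallons to 2 gallons in a day. If nothing was added
# (G = 0), the whole 1-gallon loss is evaporation, E = 1 gal/day:
print(generation(3.0, 2.0, 1.0))   # 0.0 -> consistent with pure evaporation

# If an evaporation formula instead predicts E = 1.5 gal/day, then
# 0.5 gal/day must have been generated (added) to end up at 2 gallons:
print(generation(3.0, 2.0, 1.5))   # 0.5
```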
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.942079484462738, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/86469/number-of-affines-needed-to-cover-a-variety/86519
## Number of affines needed to cover a variety

Hello,

I am aware of the related question "Minimal size of an open affine cover", but would like to ask more specifically: do you have some elementary (i.e. not using hard things like compactification and such) proof of one of the following (here a "variety" is separated, over an algebraically closed field)?

(1) Let $X$ be a variety. Can you show that $X$ can be covered by $C \cdot \dim(X) + D$ open affines, where $C,D$ are universal constants?

(2) Let $X$ be quasi-projective. Can you show (1) for it with $C=1, D=1$?

(3) Let $X$ be smooth quasi-projective, in characteristic $0$. Can you show (2) for it?

It is easy, for a variety $X$, to find an open affine whose complement is of smaller dimension than $X$. But I don't see how, given $Y$ closed in $X$, to find an affine open $U$ in $X$ such that $Y-U$ is of smaller dimension than $Y$.

Sasha

- Since any scheme is locally affine, for your last question it suffices to take any affine neighbourhood of any point in $Y$, no? Then we obtain that one can take $C=D=1$ (at least) for irreducible varieties. – Mikhail Bondarko Jan 23 2012 at 18:55

If $Y$ is irreducible then it is OK, but otherwise it is problematic. And even if I want a result only for irreducible varieties, I can't continue by induction, since I am not sure that the complement $Y-U$ will be irreducible. – Sasha Jan 23 2012 at 18:58

Sasha, you can take an affine on each irreducible component that is disjoint from all the other components. Then the union of all of these is a disjoint union of irreducible affines, hence affine itself. This you can repeat. – Sándor Kovács Jan 23 2012 at 19:23

Maybe I don't understand, but how can I repeat this? At the second step, this procedure will find an open affine inside the closed complement of the first open, not a global open affine. How will I be sure that I can extend it to an open subset which is affine? – Sasha Jan 23 2012 at 21:42

## 2 Answers

Here is a way to do (2) (and hence (3)):

Let $X$ be a quasi-projective variety, i.e., $X=Y\setminus W$, where $Y,W\subseteq \mathbb P^n$ are (closed) projective varieties. Consider the irreducible decomposition $Y=\cup_i Y_i$ and observe that $I_W\not\subseteq \cup_i I_{Y_i}$, where $I_T\subseteq k[x_0,\dots,x_n]$ denotes the ideal of the set $T\subseteq \mathbb P^n$. Pick a homogeneous polynomial $f$ of degree $d$ such that $f\in I_W\setminus (\cup_i I_{Y_i})$. Let $H=Z(f)$. Then $\mathbb P^n\setminus H$ is affine, and hence so is $Y\setminus H$. By construction $Y\setminus H\subseteq Y\setminus W=X$, and $H\not\supseteq Y_i$ for any $i$ by the choice of $f$. Therefore $\dim (Y_i\cap H)<\dim Y_i$, so we may use induction on $\dim X$.

Notice that the affine subset is obtained as an affine subset of the ambient projective space intersected with our variety, so the affine varieties obtained subsequently are restrictions of affine subvarieties of the original $X$.

- Certainly more elementary than my answer! – Laurent Moret-Bailly Jan 24 2012 at 9:30

I am not sure what you mean by "hard", but here is an answer to (2). I claim that if $X$ is quasi-projective of dimension $d$, there is an affine morphism $X\to\mathbb{P}^d$.
This obviously implies (2): cover $X$ by the preimages of the $d+1$ standard affine charts.

Proof of claim: take a dense open immersion $j:X\hookrightarrow\overline{X}$, with $\overline{X}$ projective. Blowing up if necessary, you can assume that $\overline{X}\setminus X$ is a Cartier divisor. Then $j$ is an affine morphism (locally defined by inverting one function). On the other hand, by Noether normalization, there is a finite (hence affine) morphism $p:\overline{X}\to\mathbb{P}^d$, so the composite map $p\circ j$ is affine.

(In this argument, the only subtle point is the notion of affine morphism, in particular the fact that it is a local condition on the target.)

About the last question: for a given $X$, a positive answer is equivalent to the "Chevalley property" that every finite subset of $X$ has an affine neighborhood. This clearly holds for quasi-projective $X$. If $X$ is smooth and complete, this property is equivalent to projectivity (Kleiman).

-
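As a worked instance of this bound in the smallest nontrivial case (an editorial illustration, just specializing the argument above to $d=1$): for a smooth projective curve $X$, Noether normalization gives a finite, hence affine, morphism $p\colon X \to \mathbb{P}^1$. Writing $\mathbb{P}^1 = U_0 \cup U_1$ with $U_i = \{x_i \neq 0\} \cong \mathbb{A}^1$, we get $X = p^{-1}(U_0) \cup p^{-1}(U_1)$, and each $p^{-1}(U_i)$ is affine because $p$ is an affine morphism. This realizes the bound of $d+1 = 2$ open affines.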
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 54, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9238012433052063, "perplexity_flag": "head"}