http://math.stackexchange.com/questions/172762/can-two-collections-of-different-size-have-same-a-m-g-m-and-h-m?answertab=votes
# Can two collections of different size have same A.M., G.M. and H.M.?

After following Can two sets have same AM, GM, HM? and the sublime answer of Micah, I am tempted to ask for the solution of the same question when the sizes of these two sets are not the same. -

4 There is a trivial answer to your question, but I hope you'll do something about that zero percent accept rate. – Gerry Myerson Jul 19 '12 at 7:39

What happens if the "two sets of different size" consist of just one repeated value? – J. M. Jul 19 '12 at 7:48

What are the implications if repetition is allowed and if not? Of course, not all the elements of either set should be equal. – lab bhattacharjee Jul 19 '12 at 7:51

## 1 Answer

Sure, why not? The general statement that my answer was a special case of is: If $\{r_1,\dots,r_n\}$ are the roots of the polynomial $x^n+a_{n-1}x^{n-1}+\dots+a_1x+a_0$, then their arithmetic mean is $-a_{n-1}/n$, their geometric mean is $\sqrt[n]{(-1)^na_0}$, and their harmonic mean is $-na_0/a_1$. This can be shown by examining the Vieta formulae in exactly the same way as in the degree-$4$ case, and it gives a recipe for building sets of any size at least 3 that have whatever AM, GM, and HM you want. The only possible worry is that the roots might not all be real and positive, in which case strictly speaking the GM isn't well-defined; for example, this will happen if you try to build a set that violates the AM-GM-HM inequalities.

For a concrete-ish example, the set $\{1,2\}$ has AM $3/2$, HM $4/3$, and GM $\sqrt{2}$. By the above formulae, so do the roots of $x^3-\frac{9}{2}x^2+\frac{9\sqrt{8}}{4}x-\sqrt{8}$ (which WolframAlpha says are $\left\{\sqrt{2},\frac{1}{4}\left(9-2\sqrt{2}\pm\sqrt{57-36\sqrt{2}}\right)\right\}$). -
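A quick numerical check of this recipe (added here, not part of the original answer; it assumes NumPy is installed and uses its generic root finder):

```
# Verify that the roots of x^3 - (9/2)x^2 + (9*sqrt(8)/4)x - sqrt(8)
# share the AM, GM and HM of {1, 2}.
import numpy as np

coeffs = [1, -9/2, 9*np.sqrt(8)/4, -np.sqrt(8)]
r = np.roots(coeffs).real          # all three roots are real and positive here

am = r.mean()                      # arithmetic mean
gm = r.prod() ** (1 / len(r))      # geometric mean
hm = len(r) / np.sum(1 / r)        # harmonic mean

print(am, gm, hm)                  # ~1.5, ~1.41421 (= sqrt 2), ~1.33333 (= 4/3)
```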
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9433075189590454, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/27557/list
## Return to Answer

Revision 2 (added additional content):

This can be rephrased as follows. Let $X$ and $Y$ be independent random variables with the same, continuous, distribution. Is it true that $E(X)\le E(|X-Y|)$?

1. Is this likely?
2. Is it true for a discrete distribution?
3. If one has a discrete counterexample, can one convert it to a continuous counterexample?

Added: I now realise that there were some absolute value signs in the original integral (these come out badly on my screen) and I should have written $E(|X|)\le E(|X-Y|)$. Plus ça change...

Revision 1 (made Community Wiki):

This can be rephrased as follows. Let $X$ and $Y$ be independent random variables with the same, continuous, distribution. Is it true that $E(X)\le E(|X-Y|)$?

1. Is this likely?
2. Is it true for a discrete distribution?
3. If one has a discrete counterexample, can one convert it to a continuous counterexample?
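A quick Monte Carlo probe of the corrected question (an added sketch, not part of the original revisions; it assumes NumPy and simply estimates both sides of $E(|X|)\le E(|X-Y|)$ for two sample distributions):

```
import numpy as np

rng = np.random.default_rng(0)
n = 10**6

for name, sample in [
    ("exponential(1)", lambda size: rng.exponential(1.0, size)),
    ("uniform(10, 10.1)", lambda size: rng.uniform(10.0, 10.1, size)),
]:
    x, y = sample(n), sample(n)
    print(name, np.mean(np.abs(x)), np.mean(np.abs(x - y)))

# For the shifted uniform distribution the left-hand side comes out far
# larger than the right-hand side, which is worth keeping in mind when
# thinking about question 1.
```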
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8778024315834045, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/189649/uncountable-minus-countable-set-is-uncountable
# Uncountable minus countable set is uncountable

Problem statement: Prove that if $A$ is an uncountable set and $B$ is a countable set, then $A\setminus B$ must be uncountable.

What I think: The statement does not mention the relationship between $A$ and $B$. I think there are two possibilities:

• If $A \cap B = \emptyset$, then $A\setminus B$ is trivially uncountable.
• If $A \cap B = B$, then $B \subset A$ and, as a bijection cannot be made between $A\setminus B$ and $\mathbb{N}$, $A\setminus B$ is uncountable.

And there is where I'm stuck. How can I prove that a bijection can't be made? TIA. -

1 Do you know that $\aleph_0 + K = K$ where $K$ is an infinite cardinality? – Belgi Sep 1 '12 at 14:15

## 3 Answers

Maybe this is a way to see it. You can make it more precise yourself. Assume that $A\setminus B$ is countable. $B$ is countable, so that would mean that $(A \setminus B) \cup B$ is countable (a finite union of countable sets is clearly countable). But then $A \subseteq (A\setminus B)\cup B$, so $A$ is contained in a countable set, so it must itself be countable. -

Thank you very much. It's a nice proof by contradiction. – Pampero Sep 1 '12 at 14:43

4 @Pampero: Just fyi, this is actually a proof by contrapositive. That is, if the statement is of the form "if P, then Q," then this argument proves the equivalent statement "if not Q, then not P." A proof by contradiction would show that "if (P and not Q), then (R and not R)." – J. Loreaux Sep 1 '12 at 14:47

@J.Loreaux: I did not know that. Thank you for the clarification. :) – Pampero Sep 1 '12 at 14:51

First, the two cases you mention don't include all possibilities. For your question, note that the union of two countable sets is again countable. -

Thank you for the clue. :) – Pampero Sep 1 '12 at 14:43

Suppose that $A-B=A-A\cap B$ is countable. Then you have a bijection between $A-B$ and the set of even natural numbers. Since $B$ is countable, $A\cap B$ is also countable and you have a bijection between $A\cap B$ and the set of odd natural numbers. Now, taking the union of those bijections, you get a bijection between the set $A=(A-B)\cup (A\cap B)$ and the set of natural numbers, which is absurd because $A$ is uncountable. Thus $A-B$ is uncountable. Q.E.D. If $A\cap B$ is merely finite, then the proof goes similarly with obvious changes; see the comments. -

Thank you for the proof. It is very clear. When you say "which is absurd because $B$ is uncountable", didn't you mean "$A$ is uncountable"? Thanks. – Pampero Sep 1 '12 at 14:56

Also, why do you assume that $A = (A \setminus B) \cup B$? I think you meant $A \subseteq (A\setminus B)\cup B$. – Pampero Sep 1 '12 at 15:08

@Pampero I made little corrections and I think that now everything is clear :) Why the minus vote? – Godot Sep 1 '12 at 16:41

One more detail: if some of the mentioned sets are finite ($A\cap B$ could be...) then the argument still works with some minor changes (just make a bijection between $\mathbb{N}-F$ and $A-B$ and between $F$ and $A\cap B$, where $F$ is a finite subset of $\mathbb{N}$ of the same cardinality as $A\cap B$). – Godot Sep 1 '12 at 16:47

Thanks again for clarifying. :) It wasn't me who gave you the minus vote. To be able to vote you have to have 15 reputation points, and at the moment I only have 13. Q.E.D., it wasn't me. :) – Pampero Sep 1 '12 at 17:04

show 1 more comment
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 40, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9576089978218079, "perplexity_flag": "head"}
http://mathoverflow.net/questions/98171?sort=oldest
## What is the relation between Quasicrystals, Riemann Hypothesis, and PV numbers?

Could somebody explain to me, from a mathematical stand-point, what a quasi-crystal is, and how it relates to the set of Pisot numbers and the Riemann Hypothesis?

I've heard Freeman Dyson say that the zeros of the Riemann zeta function form a quasi-crystal. But, a priori, I do not see what kind of property of the zeros, that we currently know of, would be able to confer on them more structure than on a random set of isolated numbers. (Notwithstanding the explicit formula in prime number theory.)

To wit, my second question, possibly based on a misunderstanding: why is the set of zeros of $\zeta(s)$ a quasi-crystal, while a random sequence of isolated numbers is not? Of course, I first need to fully understand what a quasi-crystal is, because Freeman's definition left me in a fog. -

Inquiring minds want to know. – kolik May 28 at 7:08

What makes you think Pisot numbers relate to quasi-crystals and/or the Riemann Hypothesis? Did Dyson say something about those, too? – Gerry Myerson May 28 at 7:21

2 en.wikipedia.org/wiki/Quasicrystal. Please try Wikipedia before posting here. – Charles Matthews May 28 at 10:37

2 Dyson's definition of a quasicrystal is not equivalent to the one in Wikipedia. – Misha May 28 at 13:35

1 @Charles: The author of the Wikipedia article is not a mathematician, so he/she does not understand the difference between the words "define" and "construct." – Misha May 28 at 19:01

show 2 more comments

## 1 Answer

Freeman Dyson's proposal is online, based on a talk he gave at MSRI. Lillian Pierce's senior thesis gives a summary of Peter Sarnak's program to use properties of the Gaussian Unitary Ensemble to study the zeros of the Riemann zeta function. N. G. de Bruijn wrote about Penrose tilings and their Fourier transforms.

Crystalline structures on the line are pretty boring. They are just evenly spaced lattices, like $\mathbb{Z}$, which might appear on different scales.

```
--o---o---o---o---o---o---o--
---o-----o-----o-----o-----o-
```

However, there are many quasi-periodic structures on the line, for example $\lfloor n\sqrt{2}\rfloor = \{ 1, 2, 4, 5, 7, 8, 9, 11, 12, 14,\dots \}$, which we can draw on the line.

```
--o--o-----o--o-----o--o--o-----o--o-----o--
```

Many of these have special recursive properties. Consider the line $y = \frac{1 + \sqrt{5}}{2} x$, which has golden-ratio slope. Mark "0" each time it crosses a horizontal grid line and "1" each time it crosses a vertical one. You get the Fibonacci word.

Of course in 2D you get more interesting quasicrystals, which have interesting number-theoretic and recursive structures. Freeman Dyson hopes that the zeros of the Riemann zeta function have structure like these. -

3 @John: What is your definition of quasi-periodicity? As far as I understand the question, one issue is the lack of precise definitions in the quasicrystal literature, which is dominated by physics papers. Also, your 2nd link is broken. – Misha May 28 at 18:37
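An added aside (not part of the original answer): both one-dimensional constructions mentioned in the answer above are easy to generate directly, for example in Python.

```
import math

# The Beatty sequence floor(n*sqrt(2)) drawn on the line in the answer.
print([math.floor(n * math.sqrt(2)) for n in range(1, 11)])
# -> [1, 2, 4, 5, 7, 8, 9, 11, 12, 14]

# The Fibonacci word, generated here by the standard substitution
# 0 -> 01, 1 -> 0; this is the golden-ratio cut sequence the answer
# describes, up to relabelling of the two symbols.
w = "0"
for _ in range(8):
    w = w.replace("0", "a").replace("1", "0").replace("a", "01")
print(w[:34])   # 0100101001001010010100100101001001
```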
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.924979567527771, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Hilbert-style_deduction_system
# Hilbert system

(Redirected from Hilbert-style deduction system)

In mathematical physics, Hilbert system is an infrequently used term for a physical system described by a C*-algebra.

In logic, especially mathematical logic, a Hilbert system, sometimes called Hilbert calculus or Hilbert–Ackermann system, is a type of system of formal deduction attributed to Gottlob Frege[1] and David Hilbert. These deductive systems are most often studied for first-order logic, but are of interest for other logics as well.

Most variants of Hilbert systems take a characteristic tack in the way they balance a trade-off between logical axioms and rules of inference.[1] Hilbert systems can be characterised by the choice of a large number of schemes of logical axioms and a small set of rules of inference. Systems of natural deduction take the opposite tack, including many deduction rules but very few or no axiom schemes. The most commonly studied Hilbert systems have either just one rule of inference (modus ponens, for propositional logics) or two (adding generalisation, to handle predicate logics as well), together with several infinite axiom schemes. Hilbert systems for propositional modal logics, sometimes called Hilbert–Lewis systems, are generally axiomatised with two additional rules, the necessitation rule and the uniform substitution rule.

A characteristic feature of the many variants of Hilbert systems is that the context is not changed in any of their rules of inference, while both natural deduction and sequent calculus contain some context-changing rules. Thus, if we are interested only in the derivability of tautologies, not hypothetical judgments, then we can formalize the Hilbert system in such a way that its rules of inference contain only judgments of a rather simple form. The same cannot be done with the other two deduction systems: as context is changed in some of their rules of inference, they cannot be formalized so that hypothetical judgments could be avoided, not even if we want to use them just for proving derivability of tautologies.

## Formal deductions

In a Hilbert-style deduction system, a formal deduction is a finite sequence of formulas in which each formula is either an axiom or is obtained from previous formulas by a rule of inference. These formal deductions are meant to mirror natural-language proofs, although they are far more detailed.

Suppose $\Gamma$ is a set of formulas, considered as hypotheses. For example, $\Gamma$ could be a set of axioms for group theory or set theory. The notation $\Gamma \vdash \phi$ means that there is a deduction that ends with $\phi$ using as axioms only logical axioms and elements of $\Gamma$. Thus, informally, $\Gamma \vdash \phi$ means that $\phi$ is provable assuming all the formulas in $\Gamma$.

Hilbert-style deduction systems are characterized by the use of numerous schemes of logical axioms. An axiom scheme is an infinite set of axioms obtained by substituting all formulas of some form into a specific pattern. The set of logical axioms includes not only those axioms generated from this pattern, but also any generalization of one of those axioms. A generalization of a formula is obtained by prefixing zero or more universal quantifiers on the formula; thus $\forall y ( \forall x Pxy \to Pty)$ is a generalization of $\forall x Pxy \to Pty$.
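As a concrete illustration of such a deduction (an added example, using the propositional axiom schemes P2 and P3 listed in the next section together with modus ponens), here is a formal deduction of $\phi \to \phi$ from the empty set of hypotheses:

1. $\left( \phi \to \left( \left( \phi \to \phi \right) \to \phi \right) \right) \to \left( \left( \phi \to \left( \phi \to \phi \right) \right) \to \left( \phi \to \phi \right) \right)$ (instance of P3 with $\psi := \phi \to \phi$ and $\xi := \phi$)
2. $\phi \to \left( \left( \phi \to \phi \right) \to \phi \right)$ (instance of P2 with $\psi := \phi \to \phi$)
3. $\left( \phi \to \left( \phi \to \phi \right) \right) \to \left( \phi \to \phi \right)$ (modus ponens applied to 1 and 2)
4. $\phi \to \left( \phi \to \phi \right)$ (instance of P2 with $\psi := \phi$)
5. $\phi \to \phi$ (modus ponens applied to 3 and 4)

This is the standard argument behind the remark below that scheme P1 is redundant.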
### Logical axioms

There are several variant axiomatisations of predicate logic, since for any logic there is freedom in choosing axioms and rules that characterise that logic. We describe here a Hilbert system with nine axiom schemes and just the rule modus ponens, which we call the one-rule axiomatisation and which describes classical predicate logic with equality. We deal with a minimal language for this logic, where formulas use only the connectives $\lnot$ and $\to$ and only the quantifier $\forall$. Later we show how the system can be extended to include additional logical connectives, such as $\land$ and $\lor$, without enlarging the class of deducible formulas.

The first four logical axiom schemes allow (together with modus ponens) for the manipulation of logical connectives.

P1. $\phi \to \phi$
P2. $\phi \to \left( \psi \to \phi \right)$
P3. $\left( \phi \to \left( \psi \to \xi \right) \right) \to \left( \left( \phi \to \psi \right) \to \left( \phi \to \xi \right) \right)$
P4. $\left( \lnot \phi \to \lnot \psi \right) \to \left( \psi \to \phi \right)$

The axiom P1 is redundant, as it follows from P3, P2 and modus ponens (see the deduction of $\phi \to \phi$ above). These axioms describe classical propositional logic; without axiom P4 we get (minimal) intuitionistic logic. Full intuitionistic logic is achieved by adding instead the axiom P4i for ex falso quodlibet, which is also a theorem of classical propositional logic.

P4i. $\lnot\phi \to \left( \phi \to \psi \right)$

Note that these are axiom schemes, which represent infinitely many specific instances of axioms. For example, P1 might represent the particular axiom instance $p \to p$, or it might represent $\left( p \to q \right) \to \left( p \to q \right)$: the $\phi$ is a place where any formula can be placed. A variable such as this that ranges over formulae is called a 'schematic variable'.

With a second rule of uniform substitution (US), we can change each of these axiom schemes into a single axiom, replacing each schematic variable by some propositional variable that isn't mentioned in any axiom, to get what we call the substitutional axiomatisation. Both formalisations have variables, but where the one-rule axiomatisation has schematic variables that are outside the logic's language, the substitutional axiomatisation uses propositional variables that do the same work by expressing the idea of a variable ranging over formulae with a rule that uses substitution.

US. Let $\phi(p)$ be a formula with one or more instances of the propositional variable $p$, and let $\psi$ be another formula. Then from $\phi(p)$, infer $\phi(\psi)$.

The next three logical axiom schemes provide ways to add, manipulate, and remove universal quantifiers.

Q5. $\forall x \left( \phi \right) \to \phi[x:=t]$ where $t$ may be substituted for $x$ in $\phi$
Q6. $\forall x \left( \phi \to \psi \right) \to \left( \forall x \left( \phi \right) \to \forall x \left( \psi \right) \right)$
Q7. $\phi \to \forall x \left( \phi \right)$ where $x$ is not a free variable of $\phi$.

These three additional schemes extend the propositional system to axiomatise classical predicate logic. Likewise, they extend the system for intuitionistic propositional logic (with P1–3 and P4i) to intuitionistic predicate logic.

Universal quantification is often given an alternative axiomatisation using an extra rule of generalisation (see the section on Metatheorems), in which case the schemes Q5 and Q6 are redundant.

The final axiom schemes are required to work with formulas involving the equality symbol.

I8. $x = x$ for every variable $x$.
I9. $\left( x = y \right) \to \left( \phi[z:=x] \to \phi[z:=y] \right)$

## Conservative extensions

It is common to include in a Hilbert-style deduction system only axioms for implication and negation. Given these axioms, it is possible to form conservative extensions of the deduction system that permit the use of additional connectives. These extensions are called conservative because if a formula φ involving new connectives is rewritten as a logically equivalent formula θ involving only negation, implication, and universal quantification, then φ is derivable in the extended system if and only if θ is derivable in the original system. When fully extended, a Hilbert-style system will more closely resemble a system of natural deduction.

### Existential quantification

• Introduction: $\forall x(\phi \to \exists y(\phi[x:=y]))$
• Elimination: $\forall x(\phi \to \psi) \to \exists x(\phi) \to \psi$ where $x$ is not a free variable of $\psi$.

### Conjunction and Disjunction

• Conjunction introduction and elimination
introduction: $\alpha\to\beta\to\alpha\land\beta$
elimination left: $\alpha\wedge\beta\to\alpha$
elimination right: $\alpha\wedge\beta\to\beta$
• Disjunction introduction and elimination
introduction left: $\alpha\to\alpha\vee\beta$
introduction right: $\beta\to\alpha\vee\beta$
elimination: $(\alpha\to\gamma)\to (\beta\to\gamma) \to \alpha\vee\beta \to \gamma$

## Metatheorems

Because Hilbert-style systems have very few deduction rules, it is common to prove metatheorems that show that additional deduction rules add no deductive power, in the sense that a deduction using the new deduction rules can be converted into a deduction using only the original deduction rules. Some common metatheorems of this form are:

• The deduction theorem: $\Gamma;\phi \vdash \psi$ if and only if $\Gamma \vdash \phi \to \psi$.
• $\Gamma \vdash \phi \leftrightarrow \psi$ if and only if $\Gamma \vdash \phi \to \psi$ and $\Gamma \vdash \psi \to \phi$.
• Contraposition: If $\Gamma;\phi \vdash \psi$ then $\Gamma;\lnot \psi \vdash \lnot \phi$.
• Generalization: If $\Gamma \vdash \phi$ and $x$ does not occur free in any formula of $\Gamma$ then $\Gamma \vdash \forall x \phi$.

## Alternative axiomatizations

Further information: List of logic systems

Axiom scheme P3 above is credited to Łukasiewicz.[2] The original system by Frege had axioms P2 and P3 but four other axioms instead of axiom P4 (see Frege's propositional calculus). Russell and Whitehead also suggested a system with five propositional axioms.

## Further connections

Axioms P1, P2 and P3, with the deduction rule modus ponens (formalising intuitionistic propositional logic), correspond to the combinatory logic base combinators I, K and S with the application operator. Proofs in the Hilbert system then correspond to combinator terms in combinatory logic. See also the Curry–Howard correspondence.

## Notes

1. ^ a b Máté & Ruzsa 1997:129
2. A. Tarski, Logic, semantics, metamathematics, Oxford, 1956

## References

• Curry, Haskell B.; Feys, Robert (1958). Combinatory Logic, Vol. I. Amsterdam: North Holland.
• Monk, J. Donald (1976). Mathematical Logic. Graduate Texts in Mathematics. Berlin, New York: Springer-Verlag. ISBN 978-0-387-90170-1.
• Ruzsa, Imre; Máté, András (1997). Bevezetés a modern logikába [Introduction to modern logic] (in Hungarian). Budapest: Osiris Kiadó.
• Tarski, Alfred (1990). Bizonyítás és igazság [Proof and truth] (in Hungarian). Budapest: Gondolat. A Hungarian translation of Alfred Tarski's selected papers on the semantic theory of truth.
• David Hilbert (1927), "The foundations of mathematics", translated by Stefan Bauer-Mengelberg and Dagfinn Føllesdal (pp. 464–479), in:
• van Heijenoort, Jean (1967, 3rd printing 1976). From Frege to Gödel: A Source Book in Mathematical Logic, 1879–1931. Cambridge MA: Harvard University Press. ISBN 0-674-32449-8 (pbk.). Hilbert's 1927 lecture, based on an earlier 1925 "foundations" lecture (pp. 367–392), presents his 17 axioms -- axioms of implication #1-4, axioms about & and V #5-10, axioms of negation #11-12, his logical ε-axiom #13, axioms of equality #14-15, and axioms of number #16-17 -- along with the other necessary elements of his Formalist "proof theory", e.g. induction axioms, recursion axioms, etc.; he also offers up a spirited defense against L. E. J. Brouwer's Intuitionism. Also see Hermann Weyl's (1927) comments and rebuttal (pp. 480–484), Paul Bernays's (1927) appendix to Hilbert's lecture (pp. 485–489), and Luitzen Egbertus Jan Brouwer's (1927) response (pp. 490–495).
• Kleene, Stephen Cole (1952, 10th impression with 1971 corrections). Introduction to Metamathematics. Amsterdam NY: North Holland Publishing Company. ISBN 0-7204-2103-9. See in particular Chapter IV, Formal System (pp. 69–85), wherein Kleene presents subchapters §16 Formal symbols, §17 Formation rules, §18 Free and bound variables (including substitution), §19 Transformation rules (e.g. modus ponens) -- and from these he presents 21 "postulates" -- 18 axioms and 3 "immediate-consequence" relations divided as follows: postulates for the propositional calculus #1-8, additional postulates for the predicate calculus #9-12, and additional postulates for number theory #13-21.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 55, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8774700164794922, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/19063/reference-for-the-existence-of-a-shapovalov-type-form-on-the-tensor-product-of-in
## Reference for the existence of a Shapovalov-type form on the tensor product of integrable modules

Shapovalov and Jantzen showed us how to construct a nice inner product on finite dimensional representations of a semi-simple Lie algebra, by simply giving the highest weight vector inner product 1 with itself and making the upper and lower halves adjoint. The result I need is an extension of this to tensor products. Roughly, I would like a statement like:

There is a unique system of $U_q(\mathfrak{g})$-invariant Hermitian inner products on all tensor products $V_{\lambda_1}\otimes \cdots \otimes V_{\lambda_\ell}$ such that

1. On $V_\lambda$, it is the Shapovalov form.
2. The actions of $E_i$ and $F_i$ are biadjoint (up to some powers of $q$).
3. For any $j<\ell$, the natural map $V_{\lambda_1}\otimes\cdots\otimes V_{\lambda_j}\hookrightarrow V_{\lambda_1}\otimes \cdots \otimes V_{\lambda_\ell}$ is an isometric embedding.

I said to myself, "Self, it would be silly to post this question on MathOverflow. You are in a math library, just feet away from Lusztig's book. Surely it is in there." However, I've had no luck finding it in Lusztig's book, which is sadly lacking an index. Is this actually written down anywhere?

EDIT: Jim asks for more motivation. I feel like this is the sort of question where motivation will not be very helpful in actually finding an answer, but there's no harm in saying a little (and it will allow me to put off real work). One of the foundational principles of categorification is that things with nice categorifications have nice inner products (since Grothendieck groups have a nice inner product given by the Euler characteristic of the Ext's between objects). I'm working right now on categorifying tensor products of representations, so it would be rather convenient for me to find some earlier references that used this form. -

1 More context and motivation would help here. You are probably far away from the original motivation of both Shapovalov and Jantzen: find a nonzero symmetric bilinear form on a highest weight module for a semisimple Lie algebra so distinct weight spaces are orthogonal and the radical of the form is the unique maximal submodule. Thus the form is nondegenerate on the simple quotient module. Jantzen worked over the integers in order to study the behavior of f.d. "Weyl modules" mod $p$. Both of them found a miraculous determinant formula on each weight space. – Jim Humphreys Mar 22 2010 at 21:57

Edit request: "... a nice inner product on finite dimensional representations of a SEMISIMPLE Lie algebra ..." At least, I can't imagine interpreting the rest of the post without this extra word. – Theo Johnson-Freyd Mar 23 2010 at 2:12

## 2 Answers

I know a couple of ways to get a Shapovalov-type form on a tensor product. The details of what I say depend on the exact conventions you use for quantum groups. I will follow Chari and Pressley's book.

The first method is to alter the adjoint slightly. If you choose a * involution that is also a coalgebra automorphism, you can just take the form on a tensor product to be the product of the forms on each factor, and the result is contravariant with respect to *. There is a unique such involution up to some fairly trivial modifications (like multiplying $E_i$ by $z$ and $F_i$ by $z^{-1}$).
It is given by:
$$*E_i = F_i K_i, \quad *F_i=K_i^{-1}E_i, \quad *K_i=K_i.$$
The resulting forms are Hermitian if $q$ is taken to be real, and will certainly satisfy your conditions 1) and 3). Since the $K_i$'s only act on weight vectors as powers of $q$, it almost satisfies 2).

The second method is in case you really want * to interchange $E_i$ with exactly $F_i$. This is roughly contained in this http://www.ams.org/mathscinet-getitem?mr=1470857 paper by Wenzl, which I actually originally looked at when it was suggested in an answer to one of your previous questions. It is absolutely essential that a * involution be an algebra anti-automorphism. However, if it is a coalgebra anti-automorphism instead of a coalgebra automorphism, there is a workaround to get a form on a tensor product. There is again an essentially unique such involution, given by
$$*E_i=F_i, \quad *F_i=E_i, \quad *K_i=K_i^{-1}, \quad *q=q^{-1}.$$
Note that $q$ is inverted, so for this form one should think of $q$ as being a complex number on the unit circle. By the same argument as you use to get the Shapovalov form, there is a unique sesquilinear *-contravariant form on each irreducible representation $V_\lambda$, up to overall rescaling.

To get a form on $V_\lambda \otimes V_\mu$, one should define $$(v_1 \otimes w_1, v_2 \otimes w_2)$$ to be the product of the form on each factor applied to $v_1 \otimes w_1$ and $R( v_2 \otimes w_2)$, where $R$ is the universal $R$-matrix. It is then straightforward to see that the result is *-contravariant, using the fact that $R \Delta(a) R^{-1} =\Delta^{op}(a)$.

If you want to work with a larger tensor product, I believe you replace $R$ by the unique endomorphism $E$ of $\otimes_k V_{\lambda_k}$ such that $w_0 \circ E$ is the braid group element $T_{w_0}$ which reverses the order of the tensor factors, using the minimal possible number of positive crossings. Here $w_0$ is the symmetric group element that reverses the order of the tensor factors. The resulting form is *-contravariant, but is not Hermitian. In Wenzl's paper he discusses how to fix this.

Now 1) and 2) on your wish list hold. As for 3): It is clear from standard formulas for the $R$-matrix (e.g. Chari-Pressley Theorem 8.3.9) that $R$ acts on a vector of the form $b_\lambda \otimes c \in V_\lambda \otimes V_\mu$ as multiplication by $q^{(\lambda, wt(c))}$. Thus if you embed $V_\mu$ into $V_\lambda \otimes V_\mu$ as $w \rightarrow b_\lambda \otimes w$, the result is isometric up to an overall scaling by a power of $q$. This extends to the type of embedding you want (up to scaling by powers of $q$), only with the order reversed. I don't seem to understand what happens when you embed $V_\lambda$ in $V_\lambda \otimes V_\mu$, which confuses me, and I don't see your exact embeddings. -

I have come across this for the tensor products of two highest weight modules, but not more. In this case start with the tensor product of a highest weight vector with a lowest weight vector and proceed as before. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 43, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9360577464103699, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/20502/angular-velocity-omega-by-v?answertab=votes
# Angular velocity $\omega$ from $v$

We have two girls, each with mass $M$. They move toward each other with speed $V$. The distance between them is $3L$. I was asked to calculate the angular velocity $\omega$ of the two girls.

So I set the rotation axis in the middle (where the distance between each girl and the axis is $1.5L$), and I calculated the angular velocity from the formula $\omega = v/r$, where $r = 1.5L$, and I got $\omega = 2v/3L$.

As I understood it, the answer is correct, but this is not the correct way. What is my mistake? (Sorry about my English.) -

– Adam Sh Feb 3 '12 at 22:54

1 Please do not write "w" for "$\omega$" :-). You can use LaTeX math in `$...$` signs. So you can write `$\omega$`. – queueoverflow Feb 3 '12 at 23:20

## 1 Answer

They do not come closer to each other according to the picture. They always keep the distance of $3L$, since they hold onto that bar that is going to rotate counter-clockwise. I think your answer $$\omega = \frac{2}{3} \frac{v}{L}$$ is fine. This works since $v$ and $r$ are perpendicular, with a 90° angle in between. -

But I have one problem with my answer: if I were to set the rotation axis at a distance of 2/3 of the bar from one girl and 1/3 from the other - that is, $2L$ from one girl and $L$ from the other - I would get a different $\omega$. Does my way work because we are using the center of mass? – Adam Sh Feb 4 '12 at 7:57

Yes, the symmetry says that they rotate around the center. If one girl is faster than the other, they will have different $\omega$, that is true. – queueoverflow Feb 4 '12 at 14:38
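An added consistency check (not part of the original exchange): with the pivot at the midpoint of the bar, each girl moves on a circle of radius $r = \frac{3L}{2}$ at speed $v$, so $$\omega = \frac{v}{r} = \frac{v}{3L/2} = \frac{2v}{3L}.$$ Equivalently, the relative speed of the two girls is $2v$ across a separation of $3L$, which gives the same $\omega = \frac{2v}{3L}$. The midpoint is the natural pivot here because, for equal masses, it is the centre of mass and by symmetry it stays at rest.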
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9651404023170471, "perplexity_flag": "middle"}
http://programarcadegames.com/index.php?chapter=introduction_to_graphics
# Program Arcade Games With Python And Pygame

# Chapter 5: Introduction to Graphics

Now that you can create loops, it is time to move on to learning how to create graphics. This chapter covers:

• How the computer handles x, y coordinates. It isn't like the coordinate system you learned in math class.
• How to specify colors. With millions of colors to choose from, telling the computer what color to use isn't as easy as just saying “red.”
• How to open a blank window for drawing. Every artist needs a canvas.
• How to draw lines, rectangles, ellipses, and arcs.

## 5.1 Computer Coordinate Systems

The Cartesian coordinate system, shown in Figure 5.1 (Wikimedia Commons), is the system most people are used to when plotting graphics. This is the system taught in school. The computer uses a similar, but somewhat different, coordinate system. Understanding why it is different requires a quick bit of computer history.

During the early '80s, most computer systems were text-based and did not support graphics. Figure 5.2 (Wikimedia Commons) shows an early spreadsheet program run on an Apple ][ computer that was popular in the '80s. When positioning text on the screen, programmers started at the top, calling it line 1. The screen continued down for 24 lines and across for 40 characters.

Even with plain text, it was possible to make rudimentary graphics by just using characters on the keyboard. See the kitten shown in Figure 5.3 and look carefully at how it is drawn. When making this art, characters were still positioned starting with line 1 at the top.

Later the character set was expanded to include boxes and other primitive drawing shapes. Characters could be drawn in different colors. As shown in Figure 5.4, the graphics got more advanced. Search the web for “ASCII art” and many more examples can be found.

Once computers moved to being able to control individual pixels for graphics, the text-based coordinate system stuck. The $x$ coordinates work the same as in the Cartesian coordinate system. But the $y$ coordinates are reversed. Rather than the zero $y$ coordinate being at the bottom of the graph as in Cartesian graphics, the zero $y$ coordinate is at the top of the screen with the computer. As the $y$ values go up, the computer coordinate position moves down the screen, just like lines of text rather than standard Cartesian graphics. See Figure 5.5.

Also, note the screen covers the lower right quadrant, where the Cartesian coordinate system usually focuses on the upper right quadrant. It is possible to draw items at negative coordinates, but they will be drawn off-screen. This can be useful when part of a shape is off screen. The computer figures out what is off-screen and the programmer does not need to worry too much about it.

## 5.2 Pygame Library

To make graphics easier to work with, we'll use the Pygame library. Pygame is a library of code other people have written, and it makes it simple to:

• Draw graphic shapes
• Display bitmapped images
• Animate
• Interact with keyboard, mouse, and gamepad
• Play sound
• Detect when objects collide

The first thing a Pygame program needs to do is load and initialize the Pygame library. Every program that uses Pygame should start with these lines:

```
# Import a library of functions called 'pygame'
import pygame

# Initialize the game engine
pygame.init()
```

If you haven't installed Pygame yet, directions for installing Pygame are available in the before you begin section.
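To confirm the install worked, one quick check from the interactive shell (a small added aside, not from the book; pygame.version.ver is simply the version string the library exposes):

```
import pygame               # raises an ImportError if Pygame is missing
print(pygame.version.ver)   # prints the installed Pygame version
```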
If Pygame is not installed on your computer, you will get an error when trying to run import pygame.

Don't name any file “pygame.py”

Important: The import pygame line looks for a library file named pygame. If a programmer creates a new program named pygame.py, the computer will import that file instead! This will prevent any Pygame programs from working until that pygame.py file is deleted.

## 5.3 Colors

Next, we need to add variables that define our program's colors. A color is defined as a list of three color components: red, green, and blue. Have you ever heard of an RGB monitor? This is where the term comes from. Red-Green-Blue. With older monitors, you could sit really close to the monitor and make out the individual RGB colors. At least before your mom told you not to sit so close to the TV. This is hard to do with today's high resolution monitors.

Each element of the RGB triad is a number ranging from 0 to 255. Zero means there is none of the color, and 255 tells the monitor to display as much of the color as possible. The colors combine in an additive way, so if all three colors are specified, the color on the monitor appears white. (This is different than how ink and paint work.)

Lists in Python are surrounded by either square brackets or parentheses. (Chapter 7 covers lists in detail and the difference between the two types.) Individual numbers in the list are separated by commas. Below is an example that creates variables and sets them equal to lists of three numbers. These lists will be used later to specify colors.

```
# Define some colors
black = (   0,   0,   0)
white = ( 255, 255, 255)
green = (   0, 255,   0)
red   = ( 255,   0,   0)
```

Using the interactive shell in IDLE, try defining these variables and printing them out. If the colors above aren't the colors you are looking for, you can define your own. To pick a color, find an on-line “color picker” like the one shown in Figure 5.6. One such color picker is at: http://www.colorpicker.com/

Extra: Some color pickers specify colors in hexadecimal. You can enter hexadecimal numbers if you start them with 0x. For example:

```
white = (0xFF, 0xFF, 0xFF)
```

Eventually the program will need to use the value of $\pi$ when drawing arcs, so this is a good time in our program to define a variable that contains the value of $\pi$. (It is also possible to import this from the math library as math.pi.)

```
pi = 3.141592653
```

## 5.4 Open a Window

So far, the programs we have created only printed text out to the screen. Those programs did not open any windows like most modern programs do. The code to open a window is not complex. Below is the required code, which creates a window sized to a width of 700 pixels and a height of 500:

```
# Set the width and height of the screen
size = (700, 500)
screen = pygame.display.set_mode(size)
```

Why set_mode? Why not open_window? The reason is that this command can actually do a lot more than open a window. It can also create games that run in a full-screen mode. This removes the start menu, title bars, and gives the game control of everything on the screen. Because this mode is slightly more complex to use, and most people prefer windowed games anyway, we'll skip a detailed discussion on full-screen games. But if you want to find out more about full-screen games, check out the documentation on pygame's display command.

Also, why size=(700,500) and not size=700,500? The same reason why we put parentheses around the color definitions. Python can't normally store two numbers (a height and width) into one variable. The only way it can is if the numbers are stored as a list. Lists need either parentheses or square brackets. (Technically, a set of numbers surrounded by parentheses is more accurately called a tuple or an immutable list. Lists surrounded by square brackets are just called lists. An experienced Python developer would cringe at calling a list of numbers surrounded by parentheses a list rather than a tuple.) Lists are covered in detail in Chapter 7.
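A tiny added illustration (not from the book) of the two spellings; both hold a width and a height together as one value, and either form may be passed to pygame.display.set_mode() (the book itself uses both later in this chapter):

```
size_as_tuple = (700, 500)   # parentheses: a tuple (an "immutable list")
size_as_list  = [700, 500]   # square brackets: a list
# Either one can be handed to pygame.display.set_mode().
```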
To set the title of the window (which is shown in the title bar), use the following line of code:

```
pygame.display.set_caption("Professor Craven's Cool Game")
```

## 5.5 Interacting With the User

With just the code written so far, the program would create a window and immediately hang. The user can't interact with the window, even to close it. All of this needs to be programmed. Code needs to be added so that the program waits in a loop until the user clicks “exit.”

This is the most complex part of the program, and a complete understanding of it isn't needed yet. But it is necessary to have an idea of what it does, so spend some time studying it and asking questions.

```
# Loop until the user clicks the close button.
done = False

# Used to manage how fast the screen updates
clock = pygame.time.Clock()

# -------- Main Program Loop -----------
while done == False:
    # ALL EVENT PROCESSING SHOULD GO BELOW THIS COMMENT
    for event in pygame.event.get():   # User did something
        if event.type == pygame.QUIT:  # If user clicked close
            done = True                # Flag that we are done so we exit this loop
    # ALL EVENT PROCESSING SHOULD GO ABOVE THIS COMMENT

    # ALL GAME LOGIC SHOULD GO BELOW THIS COMMENT

    # ALL GAME LOGIC SHOULD GO ABOVE THIS COMMENT

    # ALL CODE TO DRAW SHOULD GO BELOW THIS COMMENT

    # ALL CODE TO DRAW SHOULD GO ABOVE THIS COMMENT

    # Limit to 20 frames per second
    clock.tick(20)
```

Eventually we will add code to handle the keyboard and mouse clicks. That code will go between the comments for event processing. Code for determining when bullets are fired and how objects move will go between the comments for game logic. We'll talk about that in later chapters. Code to draw will go in between the appropriate draw-code comments.

### 5.5.1 The Event Processing Loop

Pay Attention! Alert! One of the most frustrating problems programmers have is messing up the event processing loop. This “event processing” code handles all the keystrokes, mouse button clicks, and several other types of events. For example, your loop might look like:

```
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            print("User asked to quit.")
        if event.type == pygame.KEYDOWN:
            print("User pressed a key.")
        if event.type == pygame.KEYUP:
            print("User let go of a key.")
        if event.type == pygame.MOUSEBUTTONDOWN:
            print("User pressed a mouse button")
```

The events (like pressing keys) all go together in a list. The program uses a for loop to loop through each event. Using a chain of if statements, the code figures out what type of event occurred, and the code to handle that event goes in the if statement.

All the if statements should go together, in one for loop. A common mistake when copying and pasting code is to not merge the loops from the two programs, leaving two event loops:

```
    # Here is one event loop
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            print("User asked to quit.")
        if event.type == pygame.KEYDOWN:
            print("User pressed a key.")
        if event.type == pygame.KEYUP:
            print("User let go of a key.")

    # Here the programmer has copied another event loop
    # into the program. This is BAD. The events were already
    # processed.
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            print("User asked to quit.")
        if event.type == pygame.MOUSEBUTTONDOWN:
            print("User pressed a mouse button")
```

The for loop on line 2 grabbed all of the user events. The for loop on line 13 won't grab any events because they were already processed in the prior loop.

Another typical problem is to start drawing, and then try to finish the event loop:

```
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            print("User asked to quit.")
        if event.type == pygame.KEYDOWN:
            print("User pressed a key.")

    pygame.draw.rect(screen, green, [50, 50, 100, 100])

    # This is code that processes events. But it is not in the
    # 'for' loop that processes events. It will not act reliably.
    if event.type == pygame.KEYUP:
        print("User let go of a key.")
    if event.type == pygame.MOUSEBUTTONDOWN:
        print("User pressed a mouse button")
```

This will cause the program to ignore some keyboard and mouse commands. Why? The for loop processes all the events in a list. So if there are two keys that are hit, the for loop will process both. In the example above, the last if statements are not in the for loop. If there are multiple events, those if statements will only run for the last event, rather than all events.

### 5.5.2 Processing Each Frame

The basic logic and order for each frame of the game:

• While not done:
  • For each event (keypress, mouse click, etc.):
    • Use a chain of if statements to run code to handle each event.
  • Run calculations to determine where objects move, what happens when objects collide, etc.
  • Clear the screen
  • Draw everything

It makes the program easier to read and understand if these steps aren't mixed together. Don't do some calculations, some drawing, some more calculations, some more drawing. Also, see how this is similar to the calculator done in chapter one. Get user input, run calculations, and output the answer. That same pattern applies here.

The code for drawing the image to the screen happens inside the while loop. With the clock tick set at 10, the contents of the window will be drawn 10 times per second. If it happens too fast, the computer is sluggish because all of its time is spent updating the screen. If it isn't in the loop at all, the screen won't redraw properly. If the drawing is outside the loop, the screen may initially show the graphics, but the graphics won't reappear if the window is minimized, or if another window is placed in front.

## 5.6 Ending the Program

Right now, clicking the “close” button of a window while running this Pygame program in IDLE will still cause the program to crash. This is a hassle because it requires a lot of clicking to close a crashed program. The problem is, even though the loop has exited, the program hasn't told the computer to close the window. By calling the command below, the program will close any open windows and exit as desired.

```
pygame.quit()
```

## 5.7 Clearing the Screen

The following code clears whatever might be in the window with a white background. Remember that the variable white was defined earlier as a list of 3 RGB values.

```
# Clear the screen and set the screen background
screen.fill(white)
```

This should be done before any drawing command is issued. Clearing the screen after the program draws graphics results in the user only seeing a blank screen. When a window is first created it has a black background.
It is still important to clear the screen because there are several things that could occur to keep this window from starting out cleared. A program should not assume it has a blank canvas to draw on.

## 5.8 Flipping the Screen

Very important! You must flip the display after you draw. The computer will not display the graphics as you draw them because that would cause the screen to flicker. Instead, it waits to display the screen until the program has finished drawing. The command below “flips” the graphics to the screen.

Failure to include this command will mean the program just shows a blank screen. Any drawing code after this flip will not display.

```
# Go ahead and update the screen with what we've drawn.
pygame.display.flip()
```

## 5.9 Open a Blank Window

Let's bring everything we've talked about into one full program. This code can be used as a base template for a Pygame program. It opens up a blank window and waits for the user to press the close button.

```
# Sample Python/Pygame Programs
# Simpson College Computer Science
# http://programarcadegames.com/
# http://simpson.edu/computer-science/

# Explanation video: http://youtu.be/vRB_983kUMc

import pygame

# Define some colors
black = (   0,   0,   0)
white = ( 255, 255, 255)
green = (   0, 255,   0)
red   = ( 255,   0,   0)

pygame.init()

# Set the width and height of the screen [width,height]
size = [700, 500]
screen = pygame.display.set_mode(size)

pygame.display.set_caption("My Game")

# Loop until the user clicks the close button.
done = False

# Used to manage how fast the screen updates
clock = pygame.time.Clock()

# -------- Main Program Loop -----------
while done == False:
    # ALL EVENT PROCESSING SHOULD GO BELOW THIS COMMENT
    for event in pygame.event.get():   # User did something
        if event.type == pygame.QUIT:  # If user clicked close
            done = True                # Flag that we are done so we exit this loop
    # ALL EVENT PROCESSING SHOULD GO ABOVE THIS COMMENT

    # ALL GAME LOGIC SHOULD GO BELOW THIS COMMENT

    # ALL GAME LOGIC SHOULD GO ABOVE THIS COMMENT

    # ALL CODE TO DRAW SHOULD GO BELOW THIS COMMENT

    # First, clear the screen to white. Don't put other drawing commands
    # above this, or they will be erased with this command.
    screen.fill(white)

    # ALL CODE TO DRAW SHOULD GO ABOVE THIS COMMENT

    # Go ahead and update the screen with what we've drawn.
    pygame.display.flip()

    # Limit to 20 frames per second
    clock.tick(20)

# Close the window and quit.
# If you forget this line, the program will 'hang'
# on exit if running from IDLE.
pygame.quit()
```

## 5.10 Drawing Introduction

Here is a list of things that you can draw: http://www.pygame.org/docs/ref/draw.html

A program can draw things like rectangles, polygons, circles, ellipses, arcs, and lines. We will also cover how to display text with graphics. Bitmapped graphics such as images are covered in Chapter 12. If you decide to look at that pygame reference, you might see a function definition like this:

```
pygame.draw.rect(Surface, color, Rect, width=0): return Rect
```

A frequent cause of confusion is the part of the line that says width=0. What this means is that if you do not supply a width, it will default to zero. Thus this function call:

```
pygame.draw.rect(screen, red, [55, 500, 10, 5])
```

is the same as this function call:

```
pygame.draw.rect(screen, red, [55, 500, 10, 5], 0)
```

The : return Rect part is telling you that the function returns a rectangle, the same one that was passed in. You can just ignore this part.

What will not work is copying that reference line and putting width=0 into the call:

```
# This fails and the error the computer gives you is
# really hard to understand.
pygame.draw.rect(screen, red, [55, 500, 10, 5], width=0)
```
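The other entries on that reference page follow the same pattern. As one more small example (added here, not part of the book's text, but reusing the screen and red variables defined above), pygame.draw.circle takes a center point and a radius, with the same optional width at the end:

```
# Draw a filled red circle of radius 40 centered at (350, 250).
# Leaving the width argument off (it defaults to 0) fills the circle in.
pygame.draw.circle(screen, red, [350, 250], 40)

# The same call with a width of 2 draws only a 2-pixel outline.
pygame.draw.circle(screen, red, [350, 250], 40, 2)
```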
## 5.11 Drawing Lines

The code example below shows how to draw a line on the screen. It will draw on the screen a green line from (0,0) to (100,100) that is 5 pixels wide. Remember that green is a variable that was defined earlier as a list of three RGB values.

```
# Draw on the screen a green line from (0,0) to (100,100)
# that is 5 pixels wide.
pygame.draw.line(screen, green, [0, 0], [100, 100], 5)
```

Use the base template from the prior example and add the code to draw lines. Read the comments to figure out exactly where to put the code. Try drawing lines with different thicknesses, colors, and locations. Draw several lines.

## 5.12 Drawing Lines With Loops and Offsets

Programs can repeat things over and over. The next code example draws a line over and over using a loop. Programs can use this technique to do multiple lines, and even draw an entire car.

Putting a line drawing command inside a loop will cause multiple lines to be drawn to the screen. But here's the catch: if each line has the same starting and ending coordinates, then each line will draw on top of the other line. It will look like only one line was drawn.

To get around this, it is necessary to offset the coordinates each time through the loop. So the first time through the loop the variable y_offset is zero. The line in the code below is drawn from (0,10) to (100,110). The next time through the loop y_offset is increased by 10. This causes the next line to be drawn with new coordinates of (0,20) and (100,120). This continues each time through the loop, shifting the coordinates of each line down by 10 pixels.

```
# Draw on the screen several red lines from (0,10) to (100,110)
# 5 pixels wide using a while loop
y_offset = 0
while y_offset < 100:
    pygame.draw.line(screen, red, [0, 10 + y_offset], [100, 110 + y_offset], 5)
    y_offset = y_offset + 10
```

This same code could be done even more easily with a for loop:

```
# Draw on the screen several red lines from (0,10) to (100,110)
# 5 pixels wide using a for loop
for y_offset in range(0, 100, 10):
    pygame.draw.line(screen, red, [0, 10 + y_offset], [100, 110 + y_offset], 5)
```

Run this code and try using different changes to the offset. Try creating an offset with different values. Experiment with different values until exactly how this works is obvious.

For example, here is a loop that uses sine and cosine to create a more complex set of offsets and produces the image shown in Figure 5.7.

```
for i in range(200):
    radians_x = i / 20
    radians_y = i / 6
    x = int(75 * math.sin(radians_x)) + 200
    y = int(75 * math.cos(radians_y)) + 200
    pygame.draw.line(screen, black, [x, y], [x + 5, y], 5)
```

Multiple elements can be drawn in one for loop, such as this code which draws the multiple X's shown in Figure 5.8.

```
for x_offset in range(30, 300, 30):
    pygame.draw.line(screen, black, [x_offset, 100], [x_offset - 10, 90], 2)
    pygame.draw.line(screen, black, [x_offset, 90], [x_offset - 10, 100], 2)
```

## 5.13 Drawing a Rectangle

When drawing a rectangle, the computer needs coordinates for the upper left rectangle corner (the origin), and a height and width. Figure 5.9 shows a rectangle (and an ellipse, which will be explained later) with the origin at (20,20), a width of 250 and a height of 100. When specifying a rectangle the computer needs a list of these four numbers in the order of (x, y, width, height). The next code example draws this rectangle.
The first two numbers in the list define the upper left corner at (20,20). The next two numbers specify first the width of 250 pixels, and then the height of 100 pixels. The 2 at the end specifies a line width of 2 pixels. The larger the number, the thicker the line around the rectangle. If this number is 0, then there will not be a border around the rectangle. Instead it will be filled in with the color specified.

```
# Draw a rectangle
pygame.draw.rect(screen,black,[20,20,250,100],2)
```

## 5.14 Drawing an Ellipse

An ellipse is drawn just like a rectangle. The boundaries of a rectangle are specified, and the computer draws an ellipse inside those boundaries. The most common mistake in working with an ellipse is to think that the starting point specifies the center of the ellipse. In reality, nothing is drawn at the starting point. It is the upper left of a rectangle that contains the ellipse. Looking back at Figure 5.9 one can see an ellipse 250 pixels wide and 100 pixels tall. The upper left corner of the 250x100 rectangle that contains it is at (20,20). Note that nothing is actually drawn at (20,20). With both drawn on top of each other it is easier to see how the ellipse is specified.

```
# Draw an ellipse, using a rectangle as the outside boundaries
pygame.draw.ellipse(screen,black,[20,20,250,100],2)
```

## 5.15 Drawing an Arc

What if a program only needs to draw part of an ellipse? That can be done with the arc command. This command is similar to the ellipse command, but it includes start and end angles for the arc to be drawn. The angles are in radians. The code example below draws four arcs showing four different quadrants of the circle. Each quadrant is drawn in a different color to make the arc sections easier to see. The result of this code is shown in Figure 5.10.

```
# Draw an arc as part of an ellipse. Use radians to determine what
# angle to draw.
pygame.draw.arc(screen,green,[100,100,250,200], pi/2, pi, 2)
pygame.draw.arc(screen,black,[100,100,250,200], 0, pi/2, 2)
pygame.draw.arc(screen,red, [100,100,250,200],3*pi/2, 2*pi, 2)
pygame.draw.arc(screen,blue, [100,100,250,200], pi, 3*pi/2, 2)
```

## 5.16 Drawing a Polygon

The next line of code draws a polygon. The triangle shape is defined with three points at (100,100), (0,200), and (200,200). It is possible to list as many points as desired. Note how the points are listed. Each point is a list of two numbers, and the points themselves are nested in another list that holds all the points. This code draws what can be seen in Figure 5.11.

```
# This draws a triangle using the polygon command
pygame.draw.polygon(screen,black,[[100,100],[0,200],[200,200]],5)
```

## 5.17 Drawing Text

Text is slightly more complex. There are three things that need to be done. First, the program creates a variable that holds information about the font to be used, such as what typeface and how big. Second, the program creates an image of the text. One way to think of it is that the program carves out a “stamp” with the required letters that is ready to be dipped in ink and stamped on the paper. The third thing that is done is the program tells where this image of the text should be stamped (or “blit'ed”) to the screen. Here's an example:

```
# Select the font to use. Default font, 25 pt size.
font = pygame.font.Font(None, 25)

# Render the text. "True" means anti-aliased text.
# Black is the color. The variable black was defined
# above as a list of [0,0,0]
# Note: This line creates an image of the letters,
# but does not put it on the screen yet. 
text = font.render("My text",True,black)

# Put the image of the text on the screen at 250x250
screen.blit(text, [250,250])
```

Want to print the score to the screen? That is a bit more complex. This does not work:

```
text = font.render("Score: ",score,True,black)
```

Why? A program can't just add extra items to font.render like the print statement. Only one string can be sent to the command; therefore the actual value of score needs to be appended to the “Score: ” string. But this doesn't work either:

```
text = font.render("Score: "+score,True,black)
```

If score is an integer variable, the computer doesn't know how to add it to a string. You, the programmer, must convert the score to a string. Then add the strings together like this:

```
text = font.render("Score: "+str(score),True,black)
```

Now you know how to print the score. If you want to print a timer, that requires print formatting, discussed in a chapter later on. Check the on-line example code section for the timer.py example (a rough sketch also appears after this chapter's review questions): ProgramArcadeGames.com/python_examples/f.php?file=timer.py

## 5.18 Full Program Listing

This is a full listing of the program discussed in this chapter. This program, along with other programs, may be downloaded from: http://ProgramArcadeGames.com/index.php?chapter=example_code

```
# Sample Python/Pygame Programs
# Simpson College Computer Science
# http://programarcadegames.com/
# http://simpson.edu/computer-science/

# Import a library of functions called 'pygame'
import pygame

# Initialize the game engine
pygame.init()

# Define the colors we will use in RGB format
black = [ 0, 0, 0]
white = [255,255,255]
blue = [ 0, 0,255]
green = [ 0,255, 0]
red = [255, 0, 0]

pi = 3.141592653

# Set the height and width of the screen
size = [400,500]
screen = pygame.display.set_mode(size)

pygame.display.set_caption("Professor Craven's Cool Game")

#Loop until the user clicks the close button.
done = False
clock = pygame.time.Clock()

while done == False:

    # This limits the while loop to a max of 10 times per second.
    # Leave this out and we will use all CPU we can.
    clock.tick(10)

    for event in pygame.event.get(): # User did something
        if event.type == pygame.QUIT: # If user clicked close
            done=True # Flag that we are done so we exit this loop

    # All drawing code happens after the for loop but
    # inside the main while done==False loop.

    # Clear the screen and set the screen background
    screen.fill(white)

    # Draw on the screen a green line from (0,0) to (100,100)
    # 5 pixels wide.
    pygame.draw.line(screen,green,[0,0],[100,100],5)

    # Draw on the screen several red lines from (0,10) to (100,110)
    # 5 pixels wide using a loop
    for y_offset in range(0,100,10):
        pygame.draw.line(screen,red,[0,10+y_offset],[100,110+y_offset],5)

    # Draw a rectangle
    pygame.draw.rect(screen,black,[20,20,250,100],2)

    # Draw an ellipse, using a rectangle as the outside boundaries
    pygame.draw.ellipse(screen,black,[20,20,250,100],2)

    # Draw an arc as part of an ellipse.
    # Use radians to determine what angle to draw.
    pygame.draw.arc(screen,black,[20,220,250,200], 0, pi/2, 2)
    pygame.draw.arc(screen,green,[20,220,250,200], pi/2, pi, 2)
    pygame.draw.arc(screen,blue, [20,220,250,200], pi,3*pi/2, 2)
    pygame.draw.arc(screen,red, [20,220,250,200],3*pi/2, 2*pi, 2)

    # This draws a triangle using the polygon command
    pygame.draw.polygon(screen,black,[[100,100],[0,200],[200,200]],5)

    # Select the font to use. Default font, 25 pt size.
    font = pygame.font.Font(None, 25)

    # Render the text. "True" means anti-aliased text.
    # Black is the color. 
    # This creates an image of the letters,
    # but does not put it on the screen.
    text = font.render("My text",True,black)

    # Put the image of the text on the screen at 250x250
    screen.blit(text, [250,250])

    # Go ahead and update the screen with what we've drawn.
    # This MUST happen after all the other drawing commands.
    pygame.display.flip()

# Be IDLE friendly
pygame.quit()
```

## 5.19 Review Questions

After answering the review questions below, try writing a computer program that creates an image of your own design. For details, see the Create-a-Picture lab.

1. Before a program can use any functions like pygame.display.set_mode(), what two things must happen?
2. What does the pygame.display.set_mode() function do?
3. What is pygame.time.Clock used for?
4. What does this for event in pygame.event.get() loop do?
5. For this line of code:

```
pygame.draw.line(screen,green,[0,0],[100,100],5)
```

   • What does screen do?
   • What does [0,0] do? What does [100,100] do?
   • What does 5 do?

6. Explain how the computer coordinate system differs from the standard Cartesian coordinate system.
7. Explain how white = ( 255, 255, 255) represents a color.
8. When drawing a rectangle, what happens if the specified line width is zero?
9. Sketch the ellipse drawn in the code below, and label the origin coordinate, the length, and the width:

```
pygame.draw.ellipse(screen,black,[20,20,250,100],2)
```

10. Describe, in general, the three steps needed when printing text to the screen using graphics.
11. What are the coordinates of the polygon that the code below draws?

```
pygame.draw.polygon(screen,black,[[50,100],[0,200],[200,200],[100,50]],5)
```

12. What does pygame.display.flip() do?
13. What does pygame.quit() do?

## 5.20 Lab

Complete Lab 3 “Create-a-Picture” to create your own picture, and show you understand how to use loops and graphics. 
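The timer.py example referenced in section 5.17 is only linked, not shown. As a rough, hypothetical sketch (not the book's actual timer.py), the fragment below shows one way string formatting could be combined with font.render to display elapsed time. It assumes it is placed inside the main loop of the template, with frame_count = 0 set before the loop and the clock limited to 20 frames per second; those names are illustrative, not from the book.

```
# Hypothetical sketch only -- not the book's timer.py.
# Assumes: frame_count = 0 before the main loop, clock.tick(20) in use,
# and font, black, and screen defined as in the listing above.
frame_count += 1
total_seconds = frame_count // 20
minutes = total_seconds // 60
seconds = total_seconds % 60

# Build the text with string formatting, then render and blit it
# just like the score example.
output_string = "Time: {0:02}:{1:02}".format(minutes, seconds)
text = font.render(output_string, True, black)
screen.blit(text, [250, 280])
```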
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9122846126556396, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/spin-model+ising-model
# Tagged Questions 1answer 249 views ### A simple model that exhibits emergent symmetry? In a previous question Emergent symmetries I asked, Prof.Luboš Motl said that emergent symmetries are never exact. But I wonder whether the following example is an counterexample that has exact ... 1answer 69 views ### Random bond Ising model and computational efficiency If you want to find the ground state of the 2d random bond Ising model (no field), a computationally efficient algorithm exists to do it for you (based on minimum weight perfect matching). What about ... 1answer 72 views ### Phase Transition in the Ising Model with Non-Uniform Magnetic Field Consider the Ferromagnetic Ising Model ($J>0$) on the lattice $\mathbb{Z}^2$ with the Hamiltonian with boundary condition $\omega\in\{-1,1\}$ formally given by ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8748858571052551, "perplexity_flag": "middle"}
http://stats.stackexchange.com/questions/32370/what-pdf-should-be-fit-to-a-rank-histogram
# What PDF should be fit to a rank histogram?

A Rank Histogram (or Talagrand Diagram) is a neat way of measuring whether your numerical model is giving appropriate variance. It's used for weather and climate forecasting, where you only have one observational series, and many model series (an ensemble). It's described pretty clearly here.

Basically, you take a handful of runs of your model, then for each timestep/gridpoint, whatever, you calculate how the observations rank relative to the model ensemble. Then you plot a histogram of those ranks. If the histogram is u-shaped, then variance is too low (obs are ranked high or low too often), if the histogram looks kind of gaussian, then the variance is too high (obs rarely rank high or low), and if the histogram is flat, then your variance is spot-on (obs have a similar variance as the ensemble). Examples from http://www.eumetcal.org/resources/ukmeteocal/temp/msgcal/www/english/msg/ver_prob_forec/uos4b/uos4b_ko1.htm:

So the question is, which distribution should I fit to this kind of data, and why? The latter part of the question is more important, because I think that the correct distribution is the Beta-binomial distribution, but I'm unfamiliar with this area.

- 1 The only possible correct answer to this question as generally posed is "practically any distribution." This is because the diagram is a tool to compare two arbitrary distributions: "observations" and the "model ensemble." They could differ in literally any fashion. By asking such a question, then, you appear to presuppose some kind of connection between the model and the observations. This is a fair assumption, but the nature of the connection depends on the kind of model and what it models. So: what kind of model and what kinds of observations do you have in mind? – whuber♦ Jul 16 '12 at 13:43

@whuber, (I'm assuming you're referring to the note at the end): right, I should have said data set types. ie. ordinal/continuous, finite/infinite domain, etc. And more specifically, data types like ranks, odds, whatever. But I guess the answer is "practically any [ordinal, finite domain] distribution", so maybe I'm better off removing that part of the question. – naught101 Jul 17 '12 at 1:55

1 What leads you to think that the 'correct distribution would be beta-binomial'? I see nothing that really suggests it couldn't in practice be almost anything. The examples shown might be more-or-less described by a flexible discrete distribution like a beta-binomial but that doesn't make them beta-binomial. No doubt you could make an argument that would imply a beta-binomial, though I think I don't know enough about the whole area to judge whether it's tenable. – Glen_b 2 days ago

1 Oh, okay. That wasn't clear. As I said the circumstances aren't sufficient for me to know whether the sorts of arguments one might make are tenable, but (for example) it could come from making an argument that if (something) were held constant, there'd be a situation akin to bernoulli trials, but that thing varies. You'd then argue that the varying thing should vary $p$ in those bernoulli trials. Now if the $p$ was roughly beta (and it may be possible also to give an argument for that), you would then use a beta-binomial for the mix. ...(ctd) – Glen_b 2 days ago

1 (ctd) ... 
An alternative argument might start with the beta and use say a series expansion around it which values are then ranked and then try to argue that the beta-binomial is an approximation to the resulting process (such an argument would be harder to make the details work for, obviously). Or there might be other ways to argue it. – Glen_b 2 days ago
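A minimal sketch of the rank-histogram construction described in the question, written in NumPy; the array shapes, the toy Gaussian data, and the ensemble size of 9 are illustrative assumptions, not part of the original question.

```
import numpy as np

# Toy data: obs has one value per time step, ensemble has one row of
# ensemble-member values per time step (shapes are an assumption).
rng = np.random.default_rng(0)
n_steps, n_members = 1000, 9
obs = rng.normal(size=n_steps)
ensemble = rng.normal(size=(n_steps, n_members))

# Rank of the observation within each ensemble: count how many members
# fall below it. Ranks run from 0 to n_members inclusive.
ranks = (ensemble < obs[:, None]).sum(axis=1)

# The rank histogram (Talagrand diagram) is just the histogram of ranks.
counts = np.bincount(ranks, minlength=n_members + 1)
print(counts)  # roughly flat here, since obs and ensemble share a distribution
```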
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9385561347007751, "perplexity_flag": "middle"}
http://cs.stackexchange.com/questions/tagged/formal-methods
# Tagged Questions a particular kind of mathematically-based technique for the specification, development and verification of software and hardware systems. 3answers 98 views ### Binary decision diagram for a six-figure Boolean function Let $p$ be the six-figure Boolean function with the following definition: \$p(x_{0},x_{1},x_{2},x_{3},x_{4},x_{5})=\begin{cases} true & \text{if } x_{0}=x_{5} \text{ and } x_{1}=x_{4} ... 0answers 53 views ### Logical conjunction of two binary decision diagrams Compute a BDD for $B_{1} \wedge B_{2}$ by using an algorithm that applies dynamic programming. Document the execution of the algorithm by indicating pairs of BDDs $(q_{1},q_{2})$ and the BDD \$q_{1} ... 1answer 86 views ### Initial Algebra example If the definition of Initial Algebra is: "An object is initial if there exists a unique morphism from the object to every object in the category" Why do we need such object, and could any one give ... 1answer 65 views ### Reference Request for Synthesis New to the world of software verification and synthesis. It was suggested to me that the book "Principles of Model Checking" is a good reference for verification, but I am clueless about synthesis. ... 2answers 150 views ### Introduction into first order logic verification I am trying to teach myself different approaches to software verification. I have read some articles. As far as I learned, propositional logic with temporal generally uses model checking with SAT ... 1answer 95 views ### How to implement simulation on two LTSs? Does any one know how to implement the simulation relation on two labelled transition systems (LTS)? I know how to do it for branching bi-simulation. The signature refinement theorem is used for ... 1answer 69 views ### Looking for a book that derives and constructs a model checking application I am teaching myself program verification and am currently learning proof assistants. I have the book Handbook of Practical Logic and Automated Reasoning which gives the proofs necessary for the ... 1answer 103 views ### Witness for the $EU(\phi_1,\phi_2)$ using BDDs I wanted ask if you know an algorithm to find the witness for $EU(\phi_1,\phi_2)$ (CTL formula "Exist Until") using BDDs (Binary Decision Diagram). In pratice you should use the fixed point for ... 1answer 65 views ### Time to construct a GNBA for LTL formula I have a problem with the proof for constructing a GNBA (generalized nondeterministic Büchi automaton) for a LTL formula: Theorem: For any LTL formula $\varphi$ there exists a GNBA $G_{\varphi}$ ... 4answers 429 views ### How do you check if two algorithms return the same result for any input? How do you check if two algorithms (say, Merge sort and Naïve sort) return the same result for any input, when the set of all inputs is infinite? Update: Thank you Ben for describing how this is ... 2answers 411 views ### A Question relating to a Turing Machine with a useless state OK, so here is a question from a past test in my Theory of Computation class: A useless state in a TM is one that is never entered on any input string. Let \mathrm{USELESS}_{\mathrm{TM}} = ... 1answer 99 views ### Late and Early Bisimulation This is a follow up to my earlier questions on coinduction and bisimulation. A relation $R \subseteq S \times S$ on the states of an LTS is a bisimulation iff $\forall (p,q)\in R,$ ... 3answers 353 views ### When are two simulations not a bisimulation? 
Given a labelled transition system $(S,\Lambda,\to)$, where $S$ is a set of states, $\Lambda$ is a set of labels, and $\to\subseteq S\times\Lambda\times S$ is a ternary relation. As usual, write \$p ... 2answers 497 views ### What is coinduction? I've heard of (structural) induction. It allows you to build up finite structures from smaller ones and gives you proof principles for reasoning about such structures. The idea is clear enough. ... 3answers 270 views ### Path to formal methods It is not uncommon to see students starting their PhDs with only a limited background in mathematics and the formal aspects of computer science. Obviously it will be very difficult for such students ... 5answers 160 views ### Is it possible to solve the halting problem if you have a constrained or a predictable input? The halting problem cannot be solved in the general case. It is possible to come up with defined rules that restrict allowed inputs and can the halting problem be solved for that special case? For ... 2answers 322 views ### Equivalence of Büchi automata and linear $\mu$-calculus It's a known fact that every LTL formula can be expressed by a Büchi $\omega$-automaton. But, apparently, Büchi automata are a more powerful, expressive model. I've heard somewhere that Büchi automata ... 6answers 394 views ### Algorithm to solve Turing's “Halting problem‍​” "Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist" Can I find a general algorithm to solve the halting problem ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8808916807174683, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/22250/list
## Return to Answer

2 edited body

Wilfrid Hodges has shown that it is consistent with ZF that there is an algebraic closure $L$ of the rational field $\mathbb{Q}$ with no nontrivial automorphisms. Obviously $|Aut(L)\smallsetminus \{1\}| = 2^{\aleph_{0}}$. See: W. Hodges, Läuchli's algebraic closure of $\mathbb{Q}$. Math. Proc. Cambridge Philos. Soc. 79 (1976), no. 2, 289--297

1

Wilfrid Hodges has shown that it is consistent with ZF that there be an algebraic closure $L$ of the rational field $\mathbb{Q}$ with no nontrivial automorphisms. Obviously $|Aut(L)\smallsetminus \{1\}| = 2^{\aleph_{0}}$. See: W. Hodges, Läuchli's algebraic closure of $\mathbb{Q}$. Math. Proc. Cambridge Philos. Soc. 79 (1976), no. 2, 289--297
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8714249730110168, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2010/10/21/inner-products-in-the-character-table/?like=1&source=post_flair&_wpnonce=966197e62c
# The Unapologetic Mathematician

## Inner Products in the Character Table

As we try to fill in the character table, it will help us to note another slight variation of our inner product formula:

$\displaystyle\langle\chi,\psi\rangle=\frac{1}{\lvert G\rvert}\sum\limits_K\lvert K\rvert\overline{\chi_K}\psi_K$

where our sum runs over all conjugacy classes $K\subseteq G$, and where $\chi_K$ is the common value $\chi_K=\chi(k)$ for all $k$ in the conjugacy class $K$ (and similarly for $\psi_K$). The idea is that every $k$ in a given conjugacy class gives the same summand. Instead of adding it up over and over again, we just multiply by the number of elements in the class.

As an example, consider again the start of the character table of $S_3$:

$\displaystyle\begin{array}{c|ccc}&e&(1\,2)&(1\,2\,3)\\\hline\chi^\mathrm{triv}&1&1&1\\\mathrm{sgn}&1&-1&1\\\vdots&\vdots&\vdots&\vdots\end{array}$

Here we index the rows by irreducible characters, and the columns by representatives of the conjugacy classes. We can calculate inner products of rows by multiplying corresponding entries, but we don’t just sum up these products; we multiply each one by the size of the conjugacy class, and at the end we divide the whole thing by the size of the whole group:

$\displaystyle\begin{array}{cclccclcc}\langle\chi^\mathrm{triv},\chi^\mathrm{triv}\rangle&=(&1\cdot1\cdot1&+&3\cdot1\cdot1&+&2\cdot1\cdot1&)/6=&1\\\langle\chi^\mathrm{triv},\mathrm{sgn}\rangle&=(&1\cdot1\cdot1&+&3\cdot1\cdot-1&+&2\cdot1\cdot1&)/6=&0\\\langle\mathrm{sgn},\mathrm{sgn}\rangle&=(&1\cdot1\cdot1&+&3\cdot-1\cdot-1&+&2\cdot1\cdot1&)/6=&1\end{array}$

We find that when we take the inner product of each character with itself we get $1$, while taking the inner product of the two different characters gives $0$. This is no coincidence; for any finite group $G$ irreducible characters are orthonormal. That is, different irreducible characters have inner product $0$, while any irreducible character has inner product $1$ with itself. This is what we will prove next time.

## 2 Comments »

1. [...] Characters are Orthogonal Today we prove the assertion that we made last time: that irreducible characters are orthogonal. That is, if and are -modules with characters and , [...] Pingback by | October 22, 2010 | Reply

2. [...] dealing with lines in the character table, we found that we can write our inner product [...] Pingback by | November 22, 2010 | Reply
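A quick numerical check of the $S_3$ computation above; the class sizes 1, 3, 2 and the two character rows are copied from the table in the post, and the Python code is purely for verification (it is not part of the original exposition).

```
from fractions import Fraction

# Conjugacy class sizes of S_3: identity, transpositions, 3-cycles.
class_sizes = [1, 3, 2]
group_order = sum(class_sizes)  # 6

trivial = [1, 1, 1]
sign    = [1, -1, 1]

def inner(chi, psi):
    # <chi, psi> = (1/|G|) * sum over classes of |K| * conj(chi_K) * psi_K
    # (these characters are real, so the conjugation is omitted).
    total = sum(k * a * b for k, a, b in zip(class_sizes, chi, psi))
    return Fraction(total, group_order)

print(inner(trivial, trivial))  # 1
print(inner(trivial, sign))     # 0
print(inner(sign, sign))        # 1
```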
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 16, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9019372463226318, "perplexity_flag": "head"}
http://crypto.stackexchange.com/questions/tagged/sha-2+preimage-resistance
# Tagged Questions 1answer 161 views ### Is the last step of an iterated cryptographic hash still as resistant to preimage attacks as the original hash? Considering a cryptographic hash, such as MD5 or SHA2, denoted by the function $H(m)$ where $m$ is an arbitrary binary string, there is a lot of material available that deals with potential weakness ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.951099693775177, "perplexity_flag": "middle"}
http://nrich.maths.org/6576/index?nomenu=1
## 'Cross with the Scalar Product' printed from http://nrich.maths.org/

Consider the vector {\bf v}=\pmatrix{1\cr 2\cr 3}

Investigate the properties of vectors ${\bf u}$ such that ${\bf u}\cdot {\bf v}=0$. Describe geometrically the set of all such vectors ${\bf u}$.

Now explore the possibilities for vectors ${\bf w}$ which are the result of taking the vector cross product of ${\bf v}$ with another vector. How does this relate to the first part of the question?

Which of these vectors could arise from taking the vector cross product of ${\bf v}$ with another vector? Before performing lots of algebra, can you work out a quick way to make your decision?

{\bf w}=\pmatrix{0\cr 3\cr -2}\,, \pmatrix{795\cr 11\cr 167}, \pmatrix{1\cr -1\cr 0} \mbox{ or } \pmatrix{-7\cr -7\cr 7}

Can you find a way of quickly constructing other such vectors?
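As a numerical illustration of the quick test hinted at in the problem: a cross product with ${\bf v}$ is always perpendicular to ${\bf v}$, so a candidate ${\bf w}$ can only arise as ${\bf v}\times{\bf u}$ when ${\bf w}\cdot{\bf v}=0$. The short sketch below just evaluates that dot product for the four candidates; it is an illustration, not part of the original problem page.

```
v = (1, 2, 3)

candidates = [
    (0, 3, -2),
    (795, 11, 167),
    (1, -1, 0),
    (-7, -7, 7),
]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

for w in candidates:
    # w can only be a cross product with v when it is perpendicular to v.
    d = dot(w, v)
    print(w, d, "possible" if d == 0 else "not possible")
```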
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8516090512275696, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?t=160088
Physics Forums

## Intersection between a plane and a line

Hi all: Given a plane ax+by+cz+d = 0, and a straight line, X = X0 + vt. What is an efficient way to compute the intersection point please? Also, is there any efficient method to determine if two points are located on the same side of the plane or on the different side of the plane???

Recognitions: Gold Member Homework Help Science Advisor Staff Emeritus

I've moved your thread to the Homework Help section of the site. As per the guidelines of Physics Forums, we would like to see your attempt at the problem before helping you with it. Thanks, Tom

Recognitions: Gold Member Homework Help Science Advisor

How to derive the answer should be obvious to you if you use a good enough notation! Remember that for any t in R, there corresponds a point (x,y,z) on the line given by the equations: $$x=x_{0}+v_{x,0}t, y=y_{0}+v_{y,0}t, z=z_{0}+v_{z,0}t$$ Furthermore, what requirement exists so that a point (x,y,z) is guaranteed to lie on the PLANE? In particular, what equation for "t" do you get out of this? (Remember that once you have found the required t-value, computing the specific values of the coordinates of the point is trivial)
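Following the hint above, substituting the parametric line into the plane equation gives a single linear equation for t, and the sign of ax+by+cz+d answers the same-side question. The sketch below is one possible implementation of that idea; the particular plane and points at the end are made up purely for illustration.

```
def line_plane_intersection(plane, x0, v):
    # plane = (a, b, c, d) for a*x + b*y + c*z + d = 0
    # line:  X(t) = x0 + t*v
    a, b, c, d = plane
    denom = a * v[0] + b * v[1] + c * v[2]
    if denom == 0:
        return None  # line is parallel to the plane (or lies inside it)
    t = -(a * x0[0] + b * x0[1] + c * x0[2] + d) / denom
    return tuple(x0[i] + t * v[i] for i in range(3))

def same_side(plane, p, q):
    # Two points lie on the same side exactly when a*x+b*y+c*z+d has the
    # same sign at both of them (zero means a point is on the plane).
    a, b, c, d = plane
    f = lambda r: a * r[0] + b * r[1] + c * r[2] + d
    return f(p) * f(q) > 0

# Illustrative numbers only.
plane = (0.0, 0.0, 1.0, -1.0)  # the plane z = 1
print(line_plane_intersection(plane, (0.0, 0.0, 0.0), (1.0, 1.0, 1.0)))  # (1.0, 1.0, 1.0)
print(same_side(plane, (0, 0, 0), (5, 5, 0)))  # True, both below z = 1
print(same_side(plane, (0, 0, 0), (0, 0, 2)))  # False
```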
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8982294797897339, "perplexity_flag": "middle"}
http://motls.blogspot.com.au/2012/09/astronomical-unit-au-redefined.html?m=1
# The Reference Frame

Our stringy Universe from a conservative viewpoint

## Wednesday, September 19, 2012

### Astronomical unit (AU) redefined

We visited an old astronomer yesterday: I could see the sunspots exactly as you can see them on the web, except that the image was left-right-reflected and somewhat rotated. By the way, if you click at the link, there's a tiny sunspot between 1569 and 1571, 2 times closer to 1569, that I could see as well.

But I want to mention another astronomical report. A few days ago, Nature told us that the astronomical unit has been redefined after a vote in Peking (yes, because it's Peking, it was a unanimous vote):

The astronomical unit gets fixed

Similar people who gathered in Prague 6 years ago and decided that Pluto no longer belonged to the elite club of planets met in Beijing and reformed the definition of 1 AU. What was it before? The most recent previous definition of the constant – which should be the average distance between the Earth and the Sun, just to be sure – said this:

[1 AU is] “the radius of an unperturbed circular Newtonian orbit about the Sun of a particle having infinitesimal mass, moving with a mean motion of 0.01720209895 radians per day (known as the Gaussian constant)”

Of course, the orbit described by this definition is meant to be a circularized averaged Earth's orbit. Much like many "historical definitions" of units, this definition was linked to particular objects, in this case the Sun, which is not a terribly stable choice. In 7.5 billion years, there will be no Sun as we know it. But even long before that, the mass of the Sun will be changing.

How much is it changing? Well, throughout its life that lasts about 10 billion years, only 0.7% of the mass is converted to radiation via $$E=mc^2$$ and fusion: that's a typical percentage for thermonuclear processes. If you care about the numbers, the Sun is converting 4.2 million tons of matter to pure energy each second (it converts an Earth mass to pure energy each 45 million years). If you check some literature, it will assure you that this is pretty much the whole loss. At the current rate, it would take 10 trillion years for the solar mass to drop to zero. That's a slow process but at the accuracy we need, it actually can't be neglected. In 100 years, the solar mass decreases by 1/100 billion of its value, changing the figures by $$10^{-11}$$ or so.

Certain distances – although not really distances of celestial objects – may be measured much more accurately. So the new 2012 definition is: 1 AU is 149,597,870,700 meters.

That's simple enough; the usual value of the distance, 150 million kilometers, was just expressed a bit more accurately and the number has been fixed. Note that 12 digits are listed but only 10 significant figures are nonzero – and those probably express the precision with which we may measure this distance. Recall that 1 meter itself is defined as 1/299,792,458 of a light second and 1 second is still defined via some atomic-clocks-related radiation of an atom.

In a few million years, if people will still respect the 2012 vote in Peking ;-) and if they will use the AU unit at all, and if there will be any people at all, people will have to get used to the fact that the average Earth-Sun distance differs from 1 AU by a fraction of a percent.

The decreasing solar mass wasn't the only problem of the old definition. Another problem was that it completely ignored relativity. 
The definition doesn't make it clear whether the radius is measured as $$1/2\pi$$ of the circumference or the proper radius itself: in general relativity, due to the spacetime curvature, these two definitions don't quite agree. Also, the definition depended on a (solar) day which is fluctuating, changing, and whose "corresponding time period" is differently interpreted in different reference frames, due to time dilation of special relativity as well as the gravitational red shift of general relativity. If you had wanted to interpret the old definition too accurately, you were entering a minefield.

Of course, all these changes are small enough and the numerical constant in the new definition was chosen in such a way that nothing will really change in practice.

#### Negative implications

I would endorse this simplification of the definition as well but I don't share the enthusiastic comments suggesting that the new definition is a win-win situation. In reality, every time we liquidate one of those historical units and replace it by a pure number, a numerical multiple of the modern units, we're making the users of the units less familiar with some part of science or astronomy that used to be very important and that is still arguably important.

In this case, for example, people who would have learned what 1 AU was automatically learned something about the Earth-Sun distance and how it can be used to measure the distance of nearby stars as multiples of 1 AU, by the parallax method, and perhaps a few related things. If you tell your students that 1 AU is just the number (in meters), they may conclude "that's it, it's very simple, there's nothing else to learn, why did the people ever define it differently". But that's really missing the point because the experiments we may do with the telescopes during different seasons produce distances of the stars that come out as multiples of 1 AU, not 1 meter. Because the Earth-Sun distance wasn't terribly accurately known in the units of meters or feet and because the measured distances of the stars in the units of 1 AU could in principle be more accurate, it has made complete sense to disentangle 1 AU from 1 meter and consider them two independent units of distance. By suppressing this independence, we're really allowing people to ignore the actual methods how certain things are measured (in astronomy, in this case).

The history of units in physics is of course full of additional examples. Because 1 meter looks so easy and its numerical multiples are trivial and dull, people feel that there's nothing to learn. But then they can't do many other things that were simpler and more logical in the historical units. So reforms of the units are always a mixed baggage. Such simplified definitions shouldn't be viewed as a justification of a reduction of stuff that is taught because the knowledge of the old units "almost automatically" included some actual science and methods that went beyond pure conventions, some science and methods that were looking simpler in the old units. And not all of those things should be forgotten because such forgetting weakens the people's contact with the world of physics and astronomy.

And that's the memo.

#### 6 comments:

1. Dilaton Hi Lumo, thanks for this interesting report :-)

2. Funky_dude hmm... are you gay or a female? I've never been able to tell...

3. Shannon Interesting article Lubos. Does the speed to which the Earth orbits around the Sun remains the same ? 
I learned in the book "Quantum Theory cannot Hurt You" Marcus Chown, that it is approximately 107228 km/h (variable since the orbit is elliptic). The speed of the Earth spinning at the equator is 1669km/h... in Paris it is 109 km/h... I wonder what it is in Dublin, 90 km/h maybe ?

4. Dear Shannon, as the Sun is getting lighter and as the spinning of the Earth is slowing down - because of tides whose friction consumes a part of the rotational energy - the Earth is getting further from the Sun and the orbital speed in km/h correspondingly decreases. It's of course an extremely slow process again, producing changes of order one percent in millions or billions of years - I don't want to think too much right now. The Earth's spinning motion is simply 40,000 km per 24 hours - the circumference of the equator. That's your 1,669 km/h at the equator, when precise figures are inserted. Your Paris figure is clearly an underestimate - it can't be 16 times slower than at the equator, can it? ;-) The right figures for Paris and Dublin and the formulae are: http://www.wolframalpha.com/input/?i=sinus%28latitude+of+paris%29*1669 http://www.wolframalpha.com/input/?i=sinus%28latitude+of+dublin%29*1669

5. Thanks a lot Dilaton but I wish it were this simple. What makes me not-quite-healthy is some brutal physics/biology, not the amount of meetings with other people. I could even bike there, I can do my regular pushups, I am in no bad mood relatively to what I can imagine or I have experienced, but I am still not as physically healthy as desired and I of course continue with my tough sugar-free diet, hoping that it could be abandoned on a nice sunny day in the future.

6. Shannon Wow, that's much faster then. I think that I (or Chown ?) must have forgotten to put the last digit for Paris : 109 instead of 1098 km/h... So I'm going 997km/h right ?... that is faster than my Skoda ;-)
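A back-of-the-envelope check of the mass-loss figures quoted in the post, using standard reference values for the solar luminosity, Earth mass, and solar mass (those constants are my own inputs, not stated in the post):

```
# Standard reference values (assumptions for this check).
L_sun   = 3.846e26   # solar luminosity, watts
c       = 2.998e8    # speed of light, m/s
M_sun   = 1.989e30   # solar mass, kg
M_earth = 5.972e24   # Earth mass, kg
year    = 3.156e7    # seconds per year

# Mass radiated away per second, from E = m c^2.
dm_dt = L_sun / c**2
print(dm_dt / 1e3, "tonnes per second")  # ~4.3e6 t/s, matching the quoted 4.2 million tons

# Time needed to radiate away one Earth mass.
print(M_earth / dm_dt / year / 1e6, "million years")  # ~44, close to the quoted 45

# Fractional change of the solar mass over one century.
print(100 * year * dm_dt / M_sun)  # ~7e-12, the quoted "10^-11 or so"
```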
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.957986056804657, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/260280/what-is-a-normalized-valuation-corresponding-to-a-valuation-ring
# What is a “normalized valuation” corresponding to a valuation ring? I encountered the phrase "normalized valuation" similar to the following: Let $A_i$ be the valuation ring $k[x_1,...,x_n]_{\langle x_i\rangle}$ and $v_i$ be the normalized valuation defined by $A_i$. I didn't know this term before, and a short internet search did not help me. What I know: we can define a map $k[x_1,...,x_n]\smallsetminus\{0\}\to\mathbb{Z}$ by sending $f=gx_i^{n_f}$ with $x_i\nmid g$ to $n_f\in\mathbb{Z}$. Then extend this to $v:Q(k[x_1,...,x_n])^*=k(x_1,...,x_n)^*\to\mathbb{Z}$ via $\frac{f}{g}\mapsto n_f-n_g$, and this is a discrete valuation on $k(x_1,...,x_n)$ with $k[x_1,...,x_n]_v=k[x_1,...,x_n]_{\langle x_i\rangle}$ as its discrete valuation ring. Is the map $v$ already the $v_i$ mentioned above? Is it "normalized", and what does this mean? Also, the definition of the map $v$ relies on $k[x_1,...,x_n]$ being a UFD. So by the same argument, $R_{\langle p\rangle}$ is a DVR if $R$ is a UFD and $p\in R$ prime. So I guess this does no longer hold for rings of the form $R_\mathfrak{p}$ where $\mathfrak{p}\subset R$ is prime in general? What about $k[x_1,...,x_n]_{\langle x_1,x_2\rangle}$? Thank you! - I just found a comment by QiL on "normed valuations", where he says that's the case if the value of a uniformizing element is 1. In my case, if I'm not mistaken, a uniformizing element would be $\frac{x_i}{1}$, with value $1-0=1$. Is normed = normalized maybe? If yes, I'll also gladly accept any answer elaborating on the question(s) in my last paragraph above! – Rand al'Thor Dec 16 '12 at 22:01 In that message, the OP wrote wrongly normed instead of normalized. I don't know what is a normed valuation, but a normalized discrete valuation is as in Makoto's answer. – QiL'8 Dec 17 '12 at 21:43 ## 2 Answers A normalized discrete valuation $v$ of a field $K$ means that $v$ is a discrete valuation of $K$ such that $v(K^*) = \mathbb{Z}$. In general, $v(K^*)$ can be any discrete subgroup of $\mathbb{R}$. - Hello @Makoto! Are you sure about that? I'm a bit confused now, since we defined a valuation of a field $K$ in a totally ordered group $G$ as a group homomorphism $v:K^*\to G$ with the known properties. A discrete valuation was then if $G=\mathbb{Z}$ and if $v$ was surjective. By my definition, any discrete valuation would then be normalized? – Rand al'Thor Dec 16 '12 at 22:06 @Randal'Thor There are several definitions of valuations. Your definition is one of them. However, usually a discrete valuation takes its values in $\mathbb{R}$. – Makoto Kato Dec 16 '12 at 22:17 – Rand al'Thor Dec 16 '12 at 22:27 @Randal'Thor, there are natural valuations that you want to consider discrete but the group is not $\mathbb{Z}$. Consider for example the 2-adic valuation $v_2$ of $\mathbb{Z}$. $2 = (1+i)^2$ (up to unit) in $\mathbb{Z}[i]$. If you want to extend $v_2$ to $v_p$ where $p = (1+i)$, a possible way is to define $v_p(2) = 1$ (so that $v_p$ restricted back to $\mathbb{Z}$ is $v_2$), but then $v_p(1+i) = 1/2$. For cases like this, one may want to scale the valuation back (in this case, $v_p(1+i) = 1$) – Sanchez Dec 16 '12 at 22:31 For general prime ideals $\mathfrak{p}\subseteq k[x_1,\ldots, x_n]$, the local ring $k[x_1,\ldots, x_n]_\mathfrak{p}$ will not be a valuation ring, and hence will not define a valuation. But that does not mean you cannot consider valuations on such a ring; in fact, it is a very interesting thing to do! Take, for instance, the local ring $R = \mathbb{C}[x,y]_{(x,y)}$. 
This is not a valuation ring, but it makes sense to consider valuations $\nu\colon R\to \mathbb{R}\cup\{+\infty\}$. In fact, a rather interesting book (The Valuative Tree by Favre and Jonsson) has been written about valuations $\nu\colon R\to \mathbb{R}\cup\{+\infty\}$ satisfying the properties: 1. $\nu$ only takes nonnegative values on $R$. 2. $\nu(z) = 0$ for all $z\in \mathbb{C}^\times$. 3. $\nu(f)>0$ for all $f$ in the maximal ideal $\mathfrak{m}$ of $R$. Such $\nu$ are called centered valuations. Such valuations also have a notion of normalized. We say a centered valuation $\nu$ is normalized if $\min_{f\in \mathfrak{m}} \nu(f) = 1$. It turns out that the set of normalized centered valuations on $R$ has an interesting combinatorial structure: it is an $\mathbb{R}$-tree. The study of spaces of valuations on rings is becoming a more popular subject these days, sometimes falling under the heading of nonarchimedean analytic geometry or Berkovich geometry. -
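A small computational illustration of the valuation described in the question: for a nonzero polynomial $f$, $v_i(f)$ is the smallest exponent of $x_i$ occurring in a monomial of $f$, and on the fraction field $v_i(f/g)=v_i(f)-v_i(g)$. The dictionary representation of polynomials below is my own choice for the sketch, not anything from the question or answers.

```
# A polynomial in k[x_1, ..., x_n] as {exponent tuple: coefficient}.
def val(f, i):
    """x_i-adic valuation: least exponent of x_i among the monomials of f."""
    return min(exps[i] for exps, c in f.items() if c != 0)

def val_fraction(f, g, i):
    # Extend to the fraction field: v(f/g) = v(f) - v(g).
    return val(f, i) - val(g, i)

# Example in k[x, y] (index 0 for x, index 1 for y):
# f = x^2*y + 3*x^3,  g = x*y^2
f = {(2, 1): 1, (3, 0): 3}
g = {(1, 2): 1}
print(val(f, 0))              # 2, since x^2 divides f but x^3 does not
print(val_fraction(f, g, 0))  # 1, the valuation of f/g at x
print(val({(1, 0): 1}, 0))    # 1: the uniformizer x has valuation 1, so v is normalized
```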
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 69, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9427148103713989, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/11017/is-this-algorithm-for-simulating-a-quantum-computer-accurate/11019
# Is this algorithm for simulating a quantum computer accurate?

I'm very new to quantum mechanics. I'm thinking of writing a quantum computer simulator; would the following work?

• Each qubit is stored as a single bit,
• For each operation, the qubits involved are transformed into a complex vector of amplitudes of size 2^N. This will result in a vector containing 2^N-1 0s and one 1.
• This vector is multiplied by the unitary matrix representing the operation.
• The resulting vector is squared element-wise.
• An outcome is picked using the elements of the result as probabilities and the qubits are set according to this outcome.

I'm not concerned with the running time or the memory cost of the algorithm. What I am concerned about is whether this would result in a physically accurate simulation.

- I suspect that it is incorrect and that I don't quite understand what happens when a qubit is measured, but I'd like to know why it is incorrect; can you provide a case where the above algorithm generates an incorrect result? – dan_waterworth Jun 12 '11 at 8:08

## 3 Answers

Yes, this will essentially work, although there are some pitfalls in the way you've phrased things. For one, you cannot store each qubit as a single bit, but you can work things out so that any computation you do starts from a state which can be represented by $N$ classical bits, i.e. the zero state. Following this your second and third bullet points are correct. As for the fourth, element-wise squaring implies that your measurement is in the computational basis. If you need to measure in, say, the Fourier basis, you will need to convert to a computational basis measurement by including a Fourier transformation in step 3. Bullet 5, like bullet 1, conflates the classical information encoded onto/extracted from qubits with the qubits themselves. Everything here is fine assuming you mean for step 5 to be the absolute end of the computation, but you cannot, in general, loop from step 5 back to step 2. Ultimately the reason your simulation works is that any computation on $N$ qubits can be represented as a single ($2^N\times 2^N$) unitary acting on the zero state followed by measurement in the computational basis. You cannot, however, use a simulation such as yours to do gate-by-gate simulation of a quantum algorithm.

- Thanks, that's sort of what I expected. Could you elaborate on why it is not possible to do gate-by-gate simulation? – dan_waterworth Jun 12 '11 at 15:10

Sure, consider what happens if you apply 2 sequential Hadamard gates to the zero state and measure the result. A quantum computer will always give the answer zero since $\mathbf{H}\mathbf{H}\vert 0 \rangle = \mathbf{I}\vert 0 \rangle = \vert 0 \rangle$. Using your simulator in a gate-by-gate manner, a measurement would be made after the first Hadamard, the input to the second Hadamard would then be $\vert 1 \rangle$ with probability 1/2 and the final measurement will be of the state $\mathbf{H}\vert 1 \rangle$. This measurement will yield 1 with probability 1/2. – John Schanck Jun 12 '11 at 15:21

Thank you. You've been most helpful. – dan_waterworth Jun 12 '11 at 16:16

No, this won't work. The last step of picking an outcome by squaring the elements of the result and setting the qubits according to this outcome is equivalent to measuring the state of your quantum computer. If you measure the state of your quantum computer after every step, this decoheres any coherent superposition. 
You will get a computation which can easily be simulated on a classical computer, and which you won't be able to use for any interesting quantum computations. Pretty much any quantum algorithm that takes more than one step will give an example of a case where this won't give the right answer. - You're certainly correct if Dan intends to iterate through his steps. But is it not the case, as I mentioned in my answer, that the simulation is fine (albeit inefficient) as long as the unitary in step 3 represents the entire computation? – John Schanck Jun 12 '11 at 23:46 1 @John: Good point. I didn't look at your answer closely enough before posting my own, so there is some duplication. But it really sounded to me as if Dan is intending to iterate through his steps. – Peter Shor Jun 12 '11 at 23:48 Shor, Thanks for posting this answer. It does clear a few things up for me. It's a privilege to have a question I asked answered by someone with a name I recognize, (assuming you are the same Peter Shor). – dan_waterworth Jun 13 '11 at 10:26 The information about an $N$-qubit computer is given by $2^N$ complex numbers. A single operation means to multiply this vector by a unitary matrix which means doing something between $2^N$ and $4^N$ operations (usually the first one because the operations are "localized" on the qubits, so they're not quite generic matrices of the same size). At the end, you measure the qubits as $N$ ordinary bits. That's when the vector gets reduced to $2^{N}-1$ zeros and $1$ entry equal to one. It's not quite clear from your wording whether you realize that this special form only occurs once, at the very end of the calculation - after you do the "measurement". The problem in turning this to a practical solution is that $2^N$ operations is a very high number for useful quantum computers that would have at least $N=128$ or much more. A quantum computer would do the operation in one step instead of $10^{38}$ steps. Also, it wouldn't need the classical memory $10^{38+}$ - just 128 spins etc. - to remember all those quantum amplitudes haha. - I realize that this is the assumed case. I want specifics, why can't it be done as in the question? – dan_waterworth Jun 12 '11 at 12:00
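A tiny NumPy illustration of the two-Hadamard example from the comments: applying H twice to |0⟩ and measuring once at the end always gives 0, while measuring (and collapsing) after every gate gives 1 about half the time. This is only a sketch of that specific point, not a general simulator.

```
import numpy as np

rng = np.random.default_rng(0)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
zero = np.array([1.0, 0.0])

def measure(state):
    """Collapse a state vector in the computational basis; return (outcome, new state)."""
    probs = np.abs(state) ** 2
    outcome = rng.choice(len(state), p=probs / probs.sum())
    collapsed = np.zeros_like(state)
    collapsed[outcome] = 1.0
    return outcome, collapsed

# Correct simulation: one combined unitary (H then H), then a single measurement.
final = H @ (H @ zero)
print(measure(final)[0])  # always 0, since H H |0> = |0>

# Gate-by-gate with a measurement after every gate (the flawed scheme).
results = []
for _ in range(1000):
    _, state = measure(H @ zero)      # collapse after the first Hadamard
    outcome, _ = measure(H @ state)   # apply the second Hadamard, then measure
    results.append(outcome)
print(sum(results) / len(results))    # roughly 0.5 instead of 0
```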
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9515050053596497, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/84384-integral-x-4-4-x-5-1-2-a.html
# Thread: Integral x^4(4+x^5)^1/2

1. ## Integral x^4(4+x^5)^1/2

$\int x^4\sqrt{4+x^5}$

Do I make $u=4+x^5$ in order to solve this problem?

2. Originally Posted by cammywhite
$\int x^4\sqrt{4+x^5}$
Do I make $u=4+x^5$ in order to solve this problem?

Yes! Just remember to factor $\frac{1}{5}$ out of your integral!

3. Originally Posted by cammywhite
$\int x^4\sqrt{4+x^5}$
Do I make $u=4+x^5$ in order to solve this problem?

Yes. Notice that if $u = 4 + x^5$ then $\frac{du}{dx} = 5x^4$. So $\int{x^4\sqrt{4 + x^5}\,dx} = \frac{1}{5}\int{5x^4(4 + x^5)^{\frac{1}{2}}\,dx}$ $= \frac{1}{5}\int{u^{\frac{1}{2}}\,\frac{du}{dx}\,dx}$ $= \frac{1}{5}\int{u^{\frac{1}{2}}\,du}$. I trust you can go from here...

4. Originally Posted by Prove It
Yes. Notice that if $u = 4 + x^5$ then $\frac{du}{dx} = 5x^4$. So $\int{x^4\sqrt{4 + x^5}\,dx} = \frac{1}{5}\int{5x^4(4 + x^5)^{\frac{1}{2}}\,dx}$ $= \frac{1}{5}\int{u^{\frac{1}{2}}\,\frac{du}{dx}\,dx}$ $= \frac{1}{5}\int{u^{\frac{1}{2}}\,du}$. I trust you can go from here...

Let me try
$= \frac{1}{5}\int{u^{\frac{1}{2}}\,du}$
$= \frac{1}{5}\int\frac{u^{\frac{3}{2}}\,du}{\frac{3}{2}}$
so far correct?

5. Originally Posted by cammywhite
Let me try
$= \frac{1}{5}\int{u^{\frac{1}{2}}\,du}$
$= \frac{1}{5}\int\frac{u^{\frac{3}{2}}\,du}{\frac{3}{2}}$
so far correct?

Your integration is correct, but you need to leave out the integral sign & du since you have already integrated.

6. Originally Posted by mollymcf2009
Your integration is correct, but you need to leave out the integral sign & du since you have already integrated.

I looked at the answer and it's $\frac{\sqrt{x^5+4}(2x^5+8)}{15}$ I have no clue how to get to the answer

7. Originally Posted by cammywhite
Let me try
$= \frac{1}{5}\int{u^{\frac{1}{2}}\,du}$
$= \frac{1}{5}\int\frac{u^{\frac{3}{2}}\,du}{\frac{3}{2}}$
so far correct?

$\frac{1}{5} \left( \frac{u^{3/2}}{\frac{3}{2}}\right)=\frac{2}{15}u^{3/2}$ $\frac{2}{15}(4+x^5)^{3/2}=\frac{2}{15}\sqrt{4+x^5}(4+x^5)=\frac{2(4+x^5)\sqrt{4+x^5}}{15}=\frac{(8+2x^5)\sqrt{4+x^5}}{15}$

8. Originally Posted by TheEmptySet
$\frac{1}{5} \left( \frac{u^{3/2}}{\frac{3}{2}}\right)=\frac{2}{15}u^{3/2}$ $\frac{2}{15}(4+x^5)^{3/2}=\frac{2}{15}\sqrt{4+x^5}(4+x^5)=\frac{2(4+x^5)\sqrt{4+x^5}}{15}=\frac{(8+2x^5)\sqrt{4+x^5}}{15}$

i don't get this part $\frac{2}{15}(4+x^5)^{3/2} =\frac{2}{15}\sqrt{4+x^5}(4+x^5)$

9. Originally Posted by cammywhite
i don't get this part $\frac{2}{15}(4+x^5)^{3/2} =\frac{2}{15}\sqrt{4+x^5}(4+x^5)$

We are using this exponent law $x^ax^b=x^{a+b}$ Note that $x^{\frac{3}{2}}=x^{\frac{1}{2}}x^1=\sqrt{x}\cdot x$ So using this we get $(4+x^5)^{3/2} =\sqrt{4+x^5}(4+x^5)$

10. Integration by substitution is like a backward-application of the chain rule. In the expression $x^4\sqrt{4+x^5},$ we see that $x^4$ looks like it was differentiated from $4+x^5$ of the square root. To find the real antiderivative, we just add in the factor $\frac{1}{5}$ to get $x^4$ from $5x^4$.
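A quick symbolic check of the thread's answer (differentiating the claimed antiderivative recovers the integrand); SymPy is used here purely for verification and is not part of the original discussion.

```
import sympy as sp

x = sp.symbols('x', positive=True)
antiderivative = sp.Rational(2, 15) * (4 + x**5)**sp.Rational(3, 2)

# Differentiating the claimed antiderivative should give back x^4*sqrt(4+x^5).
difference = sp.simplify(sp.diff(antiderivative, x) - x**4 * sp.sqrt(4 + x**5))
print(difference)  # 0

# SymPy can also perform the integral directly as a cross-check.
print(sp.simplify(sp.integrate(x**4 * sp.sqrt(4 + x**5), x)))
```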
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 39, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9269463419914246, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/1589/toy-sheaf-cohomology-computation?answertab=oldest
# Toy sheaf cohomology computation One thing that really helped in learning the Serre SS was doing particular computations (like $H^*(CP^{\infty})$). I am curious, as a sort of followup, if anyone can suggest: 1. a reference where small computations are carried out? or 2. a specific computation to do with a small enough sheaf on some simple topological space that would be able to give one a feel for sheaf cohomology. So this space that we are working over need not be a scheme; in fact it would probably be best if it were not a scheme since I don't understand them quite yet. And are there tricks of the trade to computing these things? Or do people just hammer away at injective resolutions? In short, please suggest a space and a sheaf on it that I should work on computing the sheaf cohomology of. PS I of course welcome any other suggestions for understanding how to compute sheaf cohomology. - 5 No one uses injective resolutions to compute! – Mariano Suárez-Alvarez♦ Aug 5 '10 at 6:02 Thank god! So what do people use in AG? And why have you not answered my MO question? ;) – Sean Tilson Aug 5 '10 at 6:17 4 In AG, people often use Cech computations, based on a finite cover by affine opens and the fact that higher cohomology of coherent sheaves vanishes on affine opens. They also use a result of Serre, to the effect that higher cohomology of any coherent sheaf twisted by a sufficiently positive line bundle vanishes. They use Riemann-Roch. In the analytic setting, they use the exponential short exact sequence, and in the etale cohomology setting, they use an etale analogue of this. They use the interpretation of $H^1$ of $\mathcal O^{\times}$ as the Picard group. They use Kodaira vanishing, – Matt E Aug 5 '10 at 6:44 4 when it applies, and other related vanishing theorems, they use Hodge theory (and especially Hodge symmetry, i.e. the equality $h^{p,q} = h^{q,p}$); of course, this list is not exhaustive, but it will give you some idea. Another very simple fact, but still often useful (especially when working on curves) is that skyscraper sheaves have vanishing higher cohomology. (More generally, if a sheaf is supported on some closed subspace $Y$ of $X$, we can compute its cohomology on $Y$ rather than $X$, which gives a lot of scope for inductive computations; this gives an interaction between the – Matt E Aug 5 '10 at 6:48 3 traditional approach, in projective geometry, of considering hyperplane sections, and the more modern viewpoint of using cohomology. For more, see Zariski's old (but beautiful) report on cohomology in algebraic geometry, from the Bulletin of the AMS in the 50s. – Matt E Aug 5 '10 at 6:49 ## 3 Answers Any de Rham cohomology (or Dolbeault cohomology) computation is a computation in sheaf cohomology. Actually --- any computation in singular cohomology is a computation in sheaf cohomology!! ;-) We're just taking different resolutions of the appropriate constant sheaf. IIRC, there are some good Cech cohomology computations and examples in Bott-Tu. Also, have you read section 3.H of Hatcher's algebraic topology book, on "local coefficients"? For a simple example from algebraic geometry, compute the cohomology of the structure sheaf of $\mathbb{A}^2$ minus a point. I seem to recall an exercise or an example in Hartshorne in which the genus of a degree $d$ curve in $\mathbb{P}^2$ is computed using Cech cohomology. The section in Hartshorne on the cohomology of $\mathbb{P}^n$ uses Cech cohomology, and I remember finding it pretty instructive.
Eisenbud's commutative algebra book probably has lots of good examples. However, he might not use words like "sheaf cohomology" (since I don't think the book introduces sheaves), although that's what it is. - Rotman does some very elementary explicit computations of Cech cohomology in his book Homological Algebra. If I remember correctly he does these computations using resolutions, spectral sequences, and by just starting with some sequence. As a complete beginner to this material, I was able to understand his treatment and compute some specific examples on my own. I hope this helps :) P.S. I just looked at the first edition, and it seems to be slightly different. For your information I used the second edition. - What were your thoughts of the second edition as compared to the first, if any? – Sean Tilson Aug 5 '10 at 15:09 Supposedly he added/improved the second a lot. I don't have the first to compare. I should mention there are some typos in the second. I still recommend a look at this book despite this. :) – BBischof Aug 5 '10 at 22:43 The relevant pages are 386-387; he does six examples, two are stupid, then he does Riemann-Roch. – BBischof Aug 5 '10 at 22:54 This is rather scheme-y, but there's a really nice paper by Kempf (hopefully you have institutional access :() that gives a very basic and elementary proof that the higher cohomology of a quasi-coherent sheaf on an affine scheme is trivial. The first part of the paper uses nothing more than the basic properties (e.g. long exact sequence) of cohomology, and might be fun. I thought it was fun, anyway; it's also nice because it shows that Hartshorne is unnecessarily restrictive in sticking to noetherian affine schemes in chapter III (even if one wants to avoid anything fancy). OK, update: here is the proof explained (admittedly by a beginner :)). - 1 In your quoted Theorem 1, "$latex{\mathcal{F}}&fg=000000$" did not get converted by whatever TeX renderer you use. – Larry Wang Aug 6 '10 at 22:32 Fixed, thanks for pointing it out. – Akhil Mathew Aug 7 '10 at 0:18
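To make the suggested example of $\mathbb{A}^2$ minus a point concrete, here is a worked sketch (added for illustration; it is not part of the original thread). Over a field $k$, cover $U = \mathbb{A}^2 \setminus \{0\}$ by the two affine opens $U_x = \{x \neq 0\}$ and $U_y = \{y \neq 0\}$. Since higher cohomology of quasi-coherent sheaves vanishes on affines, the Cech complex of this cover computes $H^*(U, \mathcal{O}_U)$: $$0 \to k[x,y,x^{-1}] \oplus k[x,y,y^{-1}] \to k[x,y,x^{-1},y^{-1}] \to 0, \qquad (f,g) \mapsto g - f.$$ The kernel is $k[x,y,x^{-1}] \cap k[x,y,y^{-1}] = k[x,y]$, so $H^0(U,\mathcal{O}_U) = k[x,y]$, and the cokernel is spanned by the monomials $x^a y^b$ with $a < 0$ and $b < 0$, so $H^1(U,\mathcal{O}_U)$ is infinite dimensional over $k$. In particular it is nonzero, which already shows $U$ is not affine.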
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9436838030815125, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/102068?sort=newest
## Intersection forms of 4-manifolds with boundary Let $X$ be any simply connected smooth 4-manifold with a fixed Euler characteristic $e$, signature $\sigma$ and boundary $Y$. Assume that the determinant of the intersection form $Q_{X}$ is equal to a fixed integer $k \neq 0, \pm 1$. Is it true that the number of possible intersection forms for such $X$ is finite? Any reference would be appreciated. - ## 2 Answers The classification of integral quadratic forms is discussed in Chapter 15 of Conway-Sloane. In particular, the discussion there implies that there are only finitely many integral quadratic forms of a given determinant and dimension. Section 11 in the chapter discusses methods for computing the number of such forms. - @Agol Thanks a lot for the reference! – david-sun Jul 20 at 22:33 As the question is posed, the answer is no. If you take connected sum with $\Bbb{CP}^2$ you preserve the determinant, but change the intersection form. - Thanks Daniele. I should have added that the Euler characteristic and the signature of $X$ are fixed as well. – david-sun Jul 12 at 21:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9219845533370972, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/218020/finite-free-rings-over-complete-local-rings-are-direct-products-of-local-rings
# Finite free rings over complete local rings are direct products of local rings I came across the following statement: Let $R$ be a complete local Noetherian commutative ring. If $A$ is a commutative $R$-algebra that is finitely generated and free as a module over $R$, then $A$ is a semi-local ring that is the direct product of local rings. (I'm unsure if completeness or the Noetherian condition is actually relevant to this; but this is the specific fact being used) I can prove it is a semi-local ring: Let $m$ be the maximal ideal of $R$, then $\frac{A}{mA}$ is finite dimensional as a $\frac{R}{m}$ vector space, and thus Artinian. Therefore, it only has a finite number of maximal ideals, and its maximal ideals correspond to maximal ideals of $A$ containing $mA$. But all maximal ideals of $A$ contain $mA$: To see this, this is equivalent to the Jacobson radical containing $mA$, which is equivalent to $1-x$ being a unit in $A$ for any $x \in mA$. The inverse is just $1+x+x^2+\cdots$, which exists by completeness. But why is $A$ necessarily the direct product of local rings? - Indeed. I'll edit the post. – only Oct 21 '12 at 15:25 ## 2 Answers Commutative Artinian rings in general are finite direct products of local Artinian rings, and that has nothing to do with it being an algebra over a special ring. Every idempotent of such a ring generates an ideal (which is actually a subring) $eRe$. Sometimes it's possible that $e$ splits into two smaller nonzero orthogonal idempotents: $e=f+g$ such that $fg=0$, whereupon $eRe=fRf\oplus gRg$ and the ideal splits into two smaller ideals. Using the Artinian condition, you refine the idempotents until they cannot be broken down any more. The result is a set of finitely many idempotents $e_i$ which cannot be written as a sum of two other orthogonal nontrivial idempotents, and $\sum e_i=1$. The resulting subrings (which are ideals) $e_iRe_i$ are local rings, and $\oplus e_iRe_i=R$. - $A$ isn't necessarily Artinian though; $\frac{A}{mA}$ is. Unless I'm missing something? – only Oct 21 '12 at 15:27 Oh, I just realized that the splitting of $\frac{A}{mA}$ corresponds to a splitting of $A$. – only Oct 21 '12 at 15:33 Sorry, I didn't catch that the first time around. I thought we had that $A$ was artinian. I'll look again! – rschwieb Oct 21 '12 at 15:34 @only haha, well looks like you figured it out before I did. Nice! I was suspecting that commutative semilocal rings split into local rings, but I wasn't sure. Is that true? – rschwieb Oct 21 '12 at 15:34 I know that there is a counterexample, but I don't know the counterexample itself. – only Oct 21 '12 at 15:37 As rschwieb notes, commutative Artinian rings can always be factored uniquely as a product of local Artinian rings, and the formation of this factorization commutes with the passage to the maximal reduced quotient (i.e. it is invariant under quotienting out by the nilradical). Since a complete semilocal ring is (by definition) the projective limit of $A/I^n A$, where $I$ is the Jacobson radical and is the intersection of the finitely many maximal ideals, applying the preceding paragraph to the quotients $A/I^n A$ (each of which is Artinian, and all of which have the same maximal reduced quotient, namely $A/IA$), we obtain a factorization of $A$ into a product of finitely many complete local rings. This result uses completeness in a crucial way (via passage to the Artinian case). E.g. the localization of $\mathbb Z[i]$ at the prime ideal $5$ of $\mathbb Z$ (i.e.
invert all elements coprime to $5$) is semi-local (because there are two prime ideals in $\mathbb Z[i]$ lying over the prime ideal $5$ of $\mathbb Z$), but is not a product of local rings. (It is an integral domain, and so cannot be written as a product in a non-trivial way.) Of course, if we $5$-adically complete it, then by the preceding discussion it will split as a product (of two copies of $\mathbb Z_5$). - @rschwieb: Dear rschwieb, Sorry about the misspelling, which I'll correct right now. Best wishes, – Matt E Oct 22 '12 at 18:10 It still was misspelled but I appreciate the effort... I've noticed that people fail to @ tag me properly because of it. – rschwieb Oct 22 '12 at 18:22 @rschwieb: Dear rschwieb, Well, I couldn't have made much of a worse mess of this if I'd tried; I'll plead tiredness-induced poor typing, and hope you'll accept another apology. Best wishes, – Matt E Oct 22 '12 at 21:17
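A concrete check of the closing example (an illustrative sketch, not part of the thread, using the standard identification $\mathbb Z_5[i] \cong \mathbb Z_5[x]/(x^2+1)$): $x^2+1$ splits into two distinct linear factors mod $5$, so the Chinese Remainder Theorem together with Hensel lifting to the $5$-adic completion gives the product of two copies of $\mathbb Z_5$ mentioned above. SymPy can verify the splitting:

```python
from sympy import symbols, factor

x = symbols('x')
# x^2 + 1 factors into two distinct linear factors over F_5 (since 2^2 = 4 = -1 mod 5),
# so Z_5[x]/(x^2 + 1) is a product of two copies of Z_5 by CRT plus Hensel lifting.
print(factor(x**2 + 1, modulus=5))
```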
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 47, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9417377710342407, "perplexity_flag": "head"}
http://nrich.maths.org/2220/note
# Colour Wheels ### Why do this problem? This activity, included in a month when we are focusing on visualising, challenges pupils to form pattern images in their heads. The problem encourages them to use this imagery to recognise, describe and manipulate pattern. This leads on to opportunities to build up generalisations using words and this links in aspects of arithmetic (multiples and divisibility). ### Possible approach You could introduce this activity orally. Start off by asking the group to imagine a wheel with a blue mark painted on the edge and a red mark painted on the opposite edge. Say that you place it on the ground and roll it. Ask them to talk in pairs about what they think they would see on the ground if the paint was still wet. Share ideas amongst the whole group. Encourage learners to describe using just words at first, rather than using pictures or objects. Once the blue, red, blue, red ... pattern has been established, ask a few questions like "Can you predict the colour of the third/fifth/tenth/hundredth mark?". Give learners plenty of time to think about each question and at this stage, you can allow them to draw/write if they want to. When discussing their responses, encourage clear explanations. Did they have to draw $100$ marks? You can then go on to the problem as it is written. You may want to continue working orally to start with, but you could always show the children the animations of the wheel/s or have a large disc/cylinder as a prop. When it comes to drawing their ideas together, look for learners who express their explanations clearly in terms of multiples. Many children will be able to investigate their own wheels and to ask their own questions, and their work would make an impressive display. ### Key questions What do you notice? What would the next mark be? How do you know? You could draw or write down the sequence of colours and number each one. What do you notice? How can you predict what the marks will be without drawing or writing? ### Possible extension Challenge the pupils to find wheels that are different but would produce the same patterns when rolled. Alternatively, here is another question learners could pursue: The third mark on a wheel is red and in a line of colours it is found that the 100th mark made by the wheel is also red. How many marks are there on the wheel altogether? ### Possible support A wheel or cylinder to roll would be helpful for some pupils, or perhaps just a disc of card on which they could draw coloured marks.
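A quick way to see the arithmetic behind the extension question (added here as an illustration, and assuming the red marks seen at positions 3 and 100 come from the same mark on the wheel): positions $i$ and $j$ in the line receive the same mark exactly when $i \equiv j \pmod{k}$, where $k$ is the number of marks on the wheel, so $k$ must divide $100 - 3 = 97$. Since $97$ is prime, the wheel has $97$ marks. A brute-force check:

```python
# find every wheel size k > 1 for which marks 3 and 100 land on the same mark
candidates = [k for k in range(2, 101) if (100 - 3) % k == 0]
print(candidates)   # [97] -- 97 is prime, so it is the only possibility
```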
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9587094783782959, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/92683?sort=votes
## Finiteness conditions and Veronese subrings Consider a commutative group $G$ of finite type, a subgroup of finite index $H\subseteq G$, a noetherian commutative ring $A$, and a $G$-graded $A$-algebra $R=\bigoplus_{g\in G}R_g$ with no zero-divisors, and denote by $R_H=\bigoplus_{g\in H}R_g$ the degree restriction of $R$ to $H$. It is well-known that if $R$ is of finite type over $A$, then so is $R_H$. So, we can ask whether the following statement is true: (+) If $R_H$ is of finite type over $A$, then so is $R$. Using the fact that $R$ is integral over $R_H$, one can show that (+) holds in the following two cases: • $R_H$ is integrally closed and the field of fractions of $R$ is a separable extension of the field of fractions of $R_H$; • $R_H$ is a japanese ring (see EGA 0$_{\rm IV}$.23). In particular, (+) holds if $A$ is universally japanese, e.g. excellent, e.g. of finite type over a field. My question is now as follows: Is there an example where (+) does not hold? - ## 1 Answer In case $G$ is finite, this cannot happen. (This might extend to the general case of finitely generated groups, as Fred told me when we talked about this in my office :-) ) First, let me show that $R$ is of finite type over $R_0$ in case $R_0$ is noetherian and $G$ is finite. For that, it suffices to show that every $R_g$, $g \in G$, is finitely generated as an $R_0$-module. Now fix some $g \in G$. In case $R_g = \{ 0 \}$, we are done. Otherwise, let $\alpha \in R_g \setminus \{ 0 \}$. As $G$ is a finite group, there exists some $n > 0$ such that $n g = 0$. As $R$ is integral, $\beta := \alpha^{n-1} \neq 0$ is a non-trivial element of $R_{g^{-1}}$. Now the map $\varphi : R_g \to R_0$, $x \mapsto \beta x$ is an injective $R_0$-module homomorphism. The image, $\beta R_g$, is therefore isomorphic to $R_g$ as an $R_0$-module. As $R_0$ is noetherian, every $R_0$-submodule of $R_0$ is finitely generated, whence $\beta R_g$ and thus $R_g$ is a finitely generated $R_0$-module. This shows that $R$ is of finite type over $R_0$. Now let me go back to the original problem. As $R_0$ is a subring of both $R$ and $R_H$, it is of finite type over the noetherian ring $A$. Therefore, $R_0$ is noetherian as well. So in case $G$ is finite, the above shows that $R$ is of finite type over $R_0$, and thus also over $A$. - 1 Doesn't this solve the general case as well? (which I assume is what you're saying). It's really about the grading induced by $G/H$, which is a finite group. (I.e., you should lump $R_H$ into one ring, $R_0$, and grade by the cosets in $G/H$). – Mike Roth Apr 3 2012 at 16:39 Thanks, @Felix! The general case follows as @Mike suggests. In particular, $G$ need not be of finite type. – Fred Rohrer Apr 4 2012 at 11:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 68, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9388204216957092, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/136716/program-for-eigenvalue-calculation/136728
# program for eigenvalue calculation I have an n x n matrix. I would like to (a) take successively higher powers of the matrix and then multiply by projection vectors until the resulting vectors differ by only a scalar factor. (b) calculate the dominant eigenvalue of the matrix to compare to (a) and (c) calculate, using the same tactic as in (a), the dominant right and left eigenvectors. This is too much work to do by hand, so my question is: can anyone recommend a program/language or package that would be ideal for the above calculations? Thanks. - It seems to me that OP wants nothing more than a simple power method implementation, and the answers given thus far do quite a bit more than that... NAPACK has a power method implementation (in FORTRAN of course); translation to other computing environments shouldn't be too hard... – J. M. Apr 25 '12 at 12:07 ## 3 Answers You should check out a Computer Algebra System (CAS) and pick one that you like the interface for. They range from commercial to open-source/freeware, and here is a comparison (many of the responses refer to these, but this is a nice list). For example, Mathematica (\$) or Maxima or SAGE (both free). See: http://en.wikipedia.org/wiki/Comparison_of_computer_algebra_systems Enjoy - A - $$+\left(\text{multiplicative identity in}\; \mathbb R\right)$$ – amWhy 15 hours ago There are various standardized answers for your question. The most obvious solution would be to use a computer algebra system, e.g., Mathematica, Matlab or Maple (all commercial), as well as Axiom or Octave (free). If you prefer to use a certain programming language, there are a number of linear algebra libraries that may help. For C and Fortran, the classic library to use is probably BLAS (basic linear algebra) in conjunction with something like LINPACK for more advanced operations. - High Level: For a high level interface, you may use any commercial or free software including: • MATLAB/Octave/SciLab and the many other clones • Maple/Mathematica/R • NumPy/SciPy Low Level: You might want to study LAPACK a little closely. There are many many libraries which provide support for what you wish to achieve. Including Boost, GSL, Eigen etc. - Is it possible to compute steps a-c above in R, if I have a small sparse matrix (6 x 6)? I have tried using the basic functions svd() and qr() to get dominant eigenvalues and decomposition of the matrix, but the results are not ideal (i.e., wrong or not iterated enough to converge on a plausible answer). – eric Apr 25 '12 at 14:48 I haven't tried it myself but I don't see any reason why not. Regd. (a), computing $A^2$ is trivial in R. Calculating the dominant eigenvalue is also trivial (using eigen(A)). I'm not as knowledgeable about (c). Also, it is not difficult to "extend" R to behave like MATLAB. Check out this and this. – Inquest Apr 25 '12 at 14:56 I hit the character cap, continuing: have you tried using the eigen() function? Also, why not try using Octave or NumPy? I would have preferred Octave since it is high level and you wouldn't have to bother with implementation till you get the underlying math right. – Inquest Apr 25 '12 at 15:03 Using eigen(A) in R gives an odd result, not the one I'm expecting, anyhow, so I'll try my hand at Octave, see what comes out. Thanks again for your help. – eric Apr 25 '12 at 15:31
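Since the question and the top comment both point at the power method without spelling it out, here is a minimal NumPy sketch (an illustration added here, not code from any of the answers; the function name and the random 6x6 test matrix are made up for the example). Running the same routine on the transpose gives the dominant left eigenvector, which covers parts (a)-(c) of the question; the last line compares the result against LAPACK via numpy.linalg.

```python
import numpy as np

def power_iteration(A, iters=500, tol=1e-12):
    """Estimate the dominant eigenvalue and eigenvector of A by repeated multiplication."""
    x = np.random.default_rng(0).random(A.shape[0])
    lam_old = 0.0
    for _ in range(iters):
        x = A @ x
        x = x / np.linalg.norm(x)      # renormalize: only the direction matters
        lam = x @ A @ x                # Rayleigh quotient estimate of the eigenvalue
        if abs(lam - lam_old) < tol:
            break
        lam_old = lam
    return lam, x

A = np.random.default_rng(1).random((6, 6))       # hypothetical 6x6 test matrix (nonnegative entries)
lam, v = power_iteration(A)                       # dominant eigenvalue and right eigenvector
mu, w = power_iteration(A.T)                      # dominant left eigenvector of A
print(lam, np.max(np.abs(np.linalg.eigvals(A))))  # the two values should agree
```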
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9176220297813416, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/34470/how-does-the-holographic-principle-imply-nonlocality
# How does the holographic principle imply nonlocality? For example in the discussions here and here there are comments by Ron Maimon: Your complaint about locality would be more serious if holography didn't show the way--- the CFT in AdS/CFT produces local AdS physics, even though the description is completely and ridiculously nonlocal and Once you realize that gravity is defined far away on a holographic screen, the idea of hidden variables becomes more plausible, because the physics of gravity is nonlocal in a way that suggests it might fix quantum mechanics How is gravity nonlocal? I thought GR was explicitly Lorentz invariant? Or are these statements more philosophical (something I would not expect from Ron), i.e., just a statement that the boundary is "far away" and isomorphic to the interior... EDIT: Ron gave an answer that is very difficult for me to parse. Can someone who is a bit more pedagogically inclined interpret what he says? I asked him to clarify various points in the comments, with little luck. I'm not even sure how he is defining 'locality': The nonlocality of gravity doesn't mean that Lorentz invariance is broken, Lorentz invariance and locality are separate concepts. It just means that to define the state of the universe at a certain point, you need to know what is going on everywhere, the state space isn't decomposing into a basis of local operators. I do not see how this does not violate Lorentz invariance. If your state at time t depends on parts of the universe outside your light cone, this is clearly acausal. "Locality" is a bit of an overloaded term, and for this discussion I will assume that it means there are bosonic operators at every point which commute at spacelike separation (Bosonic fields and bilinears in Fermi fields). This means that the orthogonal basis states at one time are all possible values of the bosonic field states on a spacelike hypersurface, and over Fermi Grassmann variables if you want to have fermions. I do not understand this definition, and frankly it seems unnecessarily complicated and non-transparent. Is this a different definition of 'locality' compared to what is used, for example, in Bell's famous paper? - To clarify your doubts: Lorentz invariance just tells us that all inertial frames are equivalent. Locality/causality is a separate physical input into the framework. (Afaik) I could build a theory which is globally coupled but rotationally invariant, for eg: a system of N spins coupled to all other spins while respecting d-dimensional rotational invariance of the space in which they sit. – Siva Sep 26 '12 at 22:53 Operators commuting at spacelike separation is like saying that two points at spacelike separation are not correlated (since you calculate their correlation by taking a "vacuum" sandwich of those bosonic operators) – Siva Sep 26 '12 at 22:54 ## 1 Answer The nonlocality of gravity doesn't mean that Lorentz invariance is broken, Lorentz invariance and locality are separate concepts. It just means that to define the state of the universe at a certain point, you need to know what is going on everywhere, the state space isn't decomposing into a basis of local operators. "Locality" is a bit of an overloaded term, and for this discussion I will assume that it means there are bosonic operators at every point which commute at spacelike separation (Bosonic fields and bilinears in Fermi fields).
This means that the orthogonal basis states at one time are all possible values of the bosonic field states on a spacelike hypersurface, and over Fermi Grassmann variables if you want to have fermions. If you extend this idea to curved spacetime and to arbitrarily short distances, you get a completely ridiculous divergence in the number of black hole states. This was the major discovery of 'tHooft, which is the basis of the holographic principle. To see this, consider the exterior Schwarzschild solution. The local temperature is set by the periodicity of the imaginary time solution (it is the inverse of the local period), and it diverges as 1/a where a is the distance to the horizon (this distance is measured by the metric, which is diverging in r coordinates, so it is not $r-2m$ for r near the horizon, but proportional to $\sqrt{r-2m}$. With this change of variables, the horizon is locally Rindler). Assuming that the fields are local near the horizon, the thermal fluctuations of the fields consist of a sum over the entropy of independent thermal field fluctuations at the local temperature. You can estimate the entropy (per unit horizon area) in these fluctuations by integrating the entropy at any r with respect to r. The entropy density of a free field (say EM) at temperature T goes as $T^{3}$, so you get $$\int_{2m}^A {1\over (r-2m)^{1.5}} dr$$ The convergence at large A is spurious, the redshift factor asymptotes to a constant in the real solution, so you get a diverging entropy. This is sensible, it is just the bulk entropy of the gas of radiation in equilibrium with the black hole. But this integral is divergent near the horizon (cutting the lower limit off at $r = 2m+\delta$ gives a contribution $2/\sqrt{\delta}$, which blows up as $\delta \to 0$), so that the black hole Hawking vacuum in a local quantum field theory in curved spacetime is carrying an infinite entropy skin. This divergent entropy is inconsistent with the picture of a black hole forming and evaporating in a unitary way, it is inconsistent with physical intuition to have such an enormous entropy in an arbitrarily small black hole, it is just ridiculous. So any quantum theory of gravity with the proper number of degrees of freedom must be nonlocal near a black hole horizon, and by natural extension, everywhere. The divergence is intuitive--- it is saying you can fit an infinite amount of information right near the horizon, because nothing actually falls in from the exterior point of view. If the fields are really local, you can throw in a Gutenberg bible and extract all the text by careful local field measurements a hundred years later. This is nonsense-- the information should merge with the black hole and be reemitted in the Hawking radiation, but that's not what the semiclassical QFT in curved space says. 'tHooft first fixed this divergence with a brick wall, a cutoff on the integrals to make the entropy come out right. This cutoff was a heuristic for where locality breaks down. In order to fix information loss, around 1986, he considered what happens when a particle flies into a black hole, and how it could influence emissions. He realized that the only way the particle could influence the emissions was through the gravitational deformation the particle leaves on the horizon. This deformation is nonlocal, in that the horizon shape is determined by which light rays make it to infinity. The backtracing showed that an infalling particle leaves a gravitational imprint on the horizon, like a tent-pole bump where it is going to enter.
He could get a handle on the S-matrix by imagining that the bumps are doing all the physics, the horizon motion itself, and this bump-on-the-horizon description was clearly similar to the vertex operator formalism in string theory, but with crazy imaginary coupling, and all sorts of wrong behavior. This is now known to be because he was considering a thermal Schwarzschild black hole, rather than an extremal one. In extremal black holes, the natural analog to 'tHooft's construction is AdS/CFT. ### String theory In string theory, you have a nonlocality which was puzzling from the beginning--- the string scattering is only defined on-shell, and the only extension to an off-shell formalism requires you to take light-cone coordinates. This was considered an embarrassment in string theory in the 1980s, because to define a space-time point, you need to know off-shell operators which you can Fourier transform to find point-to-point correlation functions. In the 1990s, this S-matrix nonlocality was reevaluated. Susskind argued heuristically that a highly excited string state should be indistinguishable from a large thermal black hole. One of the arguments was that the strings at weak coupling at large excitation numbers are long and tangled, and should have the right energy-radius relationship. Another of Susskind's arguments is that a string falling into a black hole should get highly thermally excited, and get longer, and it becomes as wide as the black hole at 'tHooft's brick wall, so that the brick wall is not an imaginary surface to cut off an integral, but the point where the strings in the string theory are no longer small compared to the black hole, and the description is no longer local. Susskind argued that at large occupation numbers, it is thermodynamically preferable to have one long string rather than two strings with half the excitation. This is essentially due to the exponential growth of states in string theory, to the Hagedorn behavior. But it means that the picture of a string falling into a black hole is better considered a string merging with the big string which is the black hole already. The D-branes were also identified with black holes by Polchinski, and the dualities between D-branes and F-strings made it clear that everything in string theory was really a black hole. This resolved the mystery of why the strings were described by a 2d theory which was so strangely reproducing higher dimensional physics--- it was just an example of 'tHooft's holographic descriptions. All this stuff created tremendous pressure to find a real mathematically precise realization of the holographic principle. This was first done by Banks, Fischler, Shenker and Susskind, but the best example is Maldacena's. ### AdS/CFT In AdS/CFT, you look near a stack of type IIB 3-branes to get the near-horizon geometry (which is now curved AdS, not flat Rindler, because the black holes are extremal), and you identify the dynamics of string theory near the horizon with the low-energy theory on the branes themselves, which consists of open strings stuck to the branes, or N=4 SUSY SU(N) gauge theory (the SU(N) gauge group comes from the Chan-Paton factors, the N=4 SUSY is the SUSY of the brane background, and the superconformal invariance is identified with the geometric symmetry of AdS).
The correspondence maps the AdS translation group to involve a dilatation operator on the field theory, so that if you make an N=4 field state which is sort-of localized at some point in AdS, and you move in one of the AdS directions, it corresponds to making the blob bigger without changing its center. This means that there is absolutely no locality on the AdS side, only on the CFT side. Two widely separated points are represented by CFT blobs of different scale, not by CFT states of different position, so they cannot possibly be commuting, except in some approximation of low energy. The CFT is local, but this is boundary locality, analogous to light-cone locality, not a bulk locality. There is no bulk locality. This nonlocality is so obvious that I don't know how to justify it any more than what I said. There aren't four dimensions of commuting bosonic operators in the N=4 theory, just 3 dimensions. There aren't five spacetime dimensions, just four. The remaining dimension is emergent, coming from the different scales in the CFT. So this example is airtight--- string theory is definitely nonlocal, and nonlocal in the right way suggested by 'tHooft and Susskind's arguments. - we discard infinities in gauge theories like it's nothing and we don't even blink because as long as the observable quantities are finite, we are all good and dandy. Are we supposed to chicken out from some infinite entropy by asymptotically integrating near the horizon? But we cannot observe or measure anything arbitrarily close to the horizon either, that entropy is not observable. If we are "good" with infinities in QED's non-observable quantities we'll be fine with infinite entropies in the horizons as well. In any case this is a topic I find exciting and your answer is very interesting Ron, +1 – lurscher Aug 19 '12 at 6:24 1 @lurscher: the "infinities" in gauge theory are unphysical and not troublesome--- they are just mass corrections and charge corrections. This is a real honest to goodness physical divergence, it means the entropy is wrong in a local theory. It's not something you can fix by formal tricks, or by redefining parameters. Entropy is an absolute quantity in quantum mechanics, it is the log of a counting number. – Ron Maimon Aug 19 '12 at 6:33 Hi @Ron Maimon. Thanks for the answer, but it's a bit over my head. I took field theory in grad school, so I have some background here, just not quite enough I guess. Below I have some questions to help clarify my understanding. – user1247 Aug 19 '12 at 9:53 "It just means that to define the state of the universe at a certain point, you need to know what is going on everywhere" -- So, for example, a measurement I make at point A may depend on what is happening at point B outside my light cone? In such a case wouldn't boosting mix cause and effect, and further wouldn't communication faster than light be naively possible? – user1247 Aug 19 '12 at 9:57 "This deformation is nonlocal, in that the horizon shape is determined by which light rays make it to infinity." -- But you go on to say that the deformation is caused by the infalling particle, not the outgoing particle making it to infinity. So I'm confused, and left not completely understanding how and where the nonlocality is manifested between the bumps on the horizon and the infalling and outgoing particles. Can you give a simple example involving one infalling particle, one bump, and one outgoing particle? – user1247 Aug 19 '12 at 10:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9410662651062012, "perplexity_flag": "middle"}
http://mathhelpforum.com/trigonometry/209091-show-me-steps-solving-complex-numbers-involving-e.html
# Thread: 1. ## Show me the steps to solving complex numbers involving e? My teacher didn't go into too much detail on these equations and they were on a test, so I'd like to know how to solve these equations for my final. $Z = 4e^{\frac{\pi}{2}i}$ Can someone show me the steps how to get this complex number into algebraic form (the answer is Z = 4i)? Find the product of $Z = 3\sqrt{3}\,e^{\pi i}$ and its conjugate $\overline{Z}$? How do I solve to get the answer 27? 2. ## Re: Show me the steps to solving complex numbers involving e? Hello, INeedOfHelp! $z \,=\,4e^{\frac{\pi}{2}i}$ Can someone show me the steps how to get this complex number into algebraic form: $z \,=\,4i$ $z \:=\:4e^{\frac{\pi}{2}i} \;=\;4\left(\cos\tfrac{\pi}{2} + i\sin\tfrac{\pi}{2}\right) \;=\; 4(0 + i\!\cdot\!1) \;=\;4i$ Find the product of $z \,=\,3\;\!\sqrt{3}e^{\pi i}$ and its conjugate $\overline{z}$ How do I solve to get the answer $27$? We have: . $\begin{Bmatrix}z &=& 3\sqrt{3}\;\!e^{\pi i} \\ \\[-3mm] \overline{z} &=& 3\sqrt{3}\;\!e^{-\pi i} \end{Bmatrix}$ Therefore: . $z\cdot\overline{z} \;=\;\left(3\sqrt{3}\;\!e^{\pi i}\right)\left(3\sqrt{3}\;\!e^{-\pi i}\right) \;=\; \left(3\sqrt{3}\cdot3\sqrt{3}\right)\left(e^{\pi i}\cdot e^{-\pi i}\right)$ $=\;(27)(e^0) \;=\;(27)(1) \;=\;27$ 3. ## Re: Show me the steps to solving complex numbers involving e? Oh well that's easy enough, thank you very much.
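A numeric cross-check of both computations (added for illustration; SymPy is assumed):

```python
from sympy import exp, I, pi, sqrt, conjugate, simplify

z1 = 4*exp(I*pi/2)
print(simplify(z1))                 # 4*I, the algebraic form found above

z2 = 3*sqrt(3)*exp(I*pi)
print(simplify(z2*conjugate(z2)))   # 27
```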
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9296151995658875, "perplexity_flag": "head"}
http://mathhelpforum.com/algebra/172409-find-specified-value-r-n.html
# Thread: 1. ## find the specified value for r and n Can anybody here help me with this problem? I've been trying to solve these with the use of the formulas for geometric sequences and geometric series, but I could not make it to the answer. Given: 1. S sub 5 = 31/4 ; r = 1/2 ; what is the first term? 2. S sub n = 94.5 ; r = 1/2 ; a sub n = 3/2 ; what is n? Thank you so much 2. 1. $\displaystyle \frac{31}{4} = \frac{a\left[1-\left(\frac{1}{2}\right)^5\right]}{1-\frac{1}{2}}$. Simplify and solve for $\displaystyle a$. 2. $\displaystyle 94.5 = \frac{\frac{3}{2}\left[1 - \left(\frac{1}{2}\right)^n\right]}{1 - \frac{1}{2}}$. Simplify and solve for $\displaystyle n$. 3. Thank you, Prove It. Is it OK to put 3/2, which is the a sub n value, in the place of a sub 1, the first term?
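The follow-up in the last post is worth settling: if "a sub n = 3/2" is the nth term rather than the first term, it should not be substituted for $a_1$ (and the equation given for part 2 then has no solution, since its right-hand side can never exceed 3). Under that reading, $S_n = \frac{a_1 - r\,a_n}{1-r}$ gives $a_1 = 48$, and then $a_n = a_1 r^{n-1} = 3/2$ forces $n = 6$. A short SymPy check (added here as an illustration, not part of the thread):

```python
from sympy import symbols, Rational, Eq, solve

a = symbols('a', positive=True)
r = Rational(1, 2)

# Part 1: S_5 = 31/4 = a(1 - r^5)/(1 - r)  ->  first term a = 4
print(solve(Eq(a*(1 - r**5)/(1 - r), Rational(31, 4)), a))            # [4]

# Part 2, reading a_n = 3/2 as the n-th term:
# S_n = (a_1 - r*a_n)/(1 - r) = 94.5  ->  a_1 = 48
a1 = solve(Eq((a - r*Rational(3, 2))/(1 - r), Rational(189, 2)), a)[0]
# a_n = a_1*r^(n-1) = 3/2 forces r^(n-1) = 1/32, i.e. n = 6; verify both conditions:
print(a1, a1*r**5, sum(a1*r**k for k in range(6)))                    # 48, 3/2, 189/2
```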
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9301537275314331, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/30408/how-much-are-reduced-powers-different/100141
## How much are reduced powers different? Given two infinite sets $X$ and $I$, and a filter ${\cal F}$ on $I$, one defines as usual the equivalence relation $\approx_{\cal F}$ on $X^I$ and obtains the reduced power $Y = X^I / \approx_{\cal F}$. Question 1 : to what extent do such reduced powers differ when one filter on $I$ is changed to another filter on $I$ ? Question 2 : consider question 1 in the case of different ultrafilters on $I$, thus in the case of ultrapowers. - 1 Are you interested in reduced products of bare naked sets or of structures such as groups etc? – Simon Thomas Jul 3 2010 at 14:58 Actually, I am interested in reduced powers and ultrapowers of the field $\mathbb{R}$ of usual real numbers, but also more generally, of rings. – Elemer E Rosinger Jul 3 2010 at 16:26 ## 6 Answers Easy differences arise if one allows principal ultrafilters, since the ultrapower of $X$ by a principal filter is canonically isomorphic to $X$, but other ultrapowers are not. Another easy difference arises when $I$ is uncountable, since one filter might concentrate on a countable subset of $I$ and others might not, and this can dramatically affect the size of the reduced power, making them different. So the question is more interesting when one considers only non-principal filters and also only uniform filters, meaning that every small subset of $I$ is measure $0$. In this case, under the Generalized Continuum Hypothesis, the ultrapower of any first order structure is saturated, and thus any two of them will be canonically isomorphic by a back-and-forth argument. Without the GCH, it is consistent with ZFC to have ultrafilters on the same set leading to nonisomorphic ultrapowers. Also relevant is the Keisler-Shelah theorem, which asserts that two first order structures---such as two graphs, groups or rings---are elementarily equivalent (have all the same first order truths) if and only if they have isomorphic ultrapowers. - What´s "small"? – Mariano Suárez-Alvarez♦ Jul 3 2010 at 14:53 In this case, it means size less than the cardinality of $I$. For a filter to give measure $1$ to a strictly smaller set means, in a sense, that you have the wrong index set. – Joel David Hamkins Jul 3 2010 at 14:56 Thank you for the answer. As I commented above, I am interested only in non-principal ultrafilters and in filters which contain the Frechet filter. – Elemer E Rosinger Jul 3 2010 at 16:29 Elemer, if by the Frechet filter, you mean the filter of finite sets, this is not good enough, when $I$ is uncountable, since one filter might still concentrate on a countable set while another does not. What you want is the filter of all co-small sets, making your filter uniform in the sense I mentioned. – Joel David Hamkins Jul 3 2010 at 16:32 1 In fact, the existence of non-isomorphic ultrapowers of the linearly ordered sets $\mathbb{N}$ or $\mathbb{R}$ over a countable index set is not just consistent with $\neg CH$. It is actually equivalent to $\neg CH$. – Simon Thomas Jul 3 2010 at 16:50 Given that you are interested in ultrapowers of $\mathbb{R}$, you might like the following which appears in a joint paper with Kramer, Shelah and Tent.
Theorem: Up to isomorphism, the number of ultrapowers $\prod_{\mathcal{U}} \mathbb{R}$, where $\mathcal{U}$ is a nonprincipal ultrafilter over $\mathbb{N}$, is 1 if $CH$ holds and $2^{2^{\aleph_{0}}}$ if $CH$ fails. Here $CH$ is the Continuum Hypothesis. (In the case when $CH$ fails, the relevant ultrapowers are already non-isomorphic merely as linearly ordered sets.) The relevant reference is: L. Kramer, S. Shelah, K. Tent and S. Thomas, Asymptotic cones of finitely presented groups, Advances in Mathematics 193 (2005), 142-173. - Since the question was about reduced powers, not just ultrapowers, it seems worthwhile to point out that, when $F$ is a filter on $I$ but not an ultrafilter, then the reduced power of a field $k$ with respect to $F$ will not be a field, not even an integral domain. Proof: Since $F$ isn't an ultrafilter, $I$ can be partitioned into two pieces $A$ and $B$, neither of which is in $F$. Then the characteristic functions of these pieces represent nonzero elements of the reduced power $k^I/F$ whose product is zero. Thus, reduced powers of fields modulo non-ultra-filters differ very strongly from ultrapowers, since the latter are fields. A similar argument shows, for example, that if $X$ is a linearly ordered set with at least two elements, then the reduced power $X^I/F$ is linearly ordered if and only if $F$ is an ultrafilter. - The (latest version of the) question is probably more general than it should be. Since $X$ is allowed to change and since the filters could be principal ultrafilters, the answer is that absolutely anything can happen. Use a principal ultrafilter $F$ on any set $I$, and vary $X$ at will. If the intention was to prohibit principal ultrafilters, then I can't say that absolutely anything can happen, but quite a lot can. For example, given any ultrapower of any $X$, one could change $X$ to a set bigger than that ultrapower; any ultrapower (or any reduced power) of the new $X$ would be bigger than the ultrapower you started with. So it's not clear that anything useful can be said at (or even near) the level of generality of the question. - Let me specify my question. Given two ultrapowers $X^I/F$ and $Y^J/G$ where $X,I,Y,J$ are arbitrary infinite sets, while $F,G$ are ultrafilters on $I$ and $J$ respectively, the question is to what extent: 1) the cardinals of those two ultrapowers can differ? 2) those two ultrapowers are not isomorphic when $X$ and $Y$ are fields? - 4 If you want to add information to the question, you should edit the question to include the information. – JBL Nov 2 2010 at 14:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 60, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9369818568229675, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/179267-double-integration.html
# Thread: 1. ## Double Integration Use double integration to calculate the area that lies between the curves $\sqrt{x} + \sqrt{y} = \sqrt{a}$ and $x + y = a, \ a > 0$. This is what I have done, taking the limits on y as y = a-x and y = 0, and for x as a and 0: $\iint dy\, dx = \int (a - x)\, dx = ax - \frac{x^{2}}{2} = a^{2} - \frac{a^{2}}{2} = \frac{a^{2}}{2}$ But according to my text book the answer is $\frac{a^{2}}{3}$ 2. Originally Posted by adam_leeds Use double integration to calculate the area that lies between the curves $\sqrt{x} + \sqrt{y} = \sqrt{a}$ and $x + y = a, \ a > 0$ This is what I have done, taking the limits on y as y = a-x and y = 0, and for x as a and 0: $\iint dy\, dx = \int (a - x)\, dx = ax - \frac{x^{2}}{2} = a^{2} - \frac{a^{2}}{2} = \frac{a^{2}}{2}$ But according to my text book the answer is $\frac{a^{2}}{3}$ Your limits of integration are incorrect. If you sketch the region you will see that the line $y=a-x$ lies above the curve $y=(\sqrt{a}-\sqrt{x})^2$ and they both have x intercept $(a,0)$ This gives $\int_{0}^{a}\int_{(\sqrt{a}-\sqrt{x})^2}^{a-x}1\, dy\,dx$ 3. Where did you get y = 0? Why not use the first equation? We probably should establish, due to the first constraint, that a >= 0. 4. Originally Posted by TheEmptySet This gives $\int_{0}^{a}\int_{(\sqrt{a}-\sqrt{x})^2}^{a-x}1\, dy\,dx$ That doesn't get a^2/3 5. Originally Posted by adam_leeds That doesn't get a^2/3 ummm no. $(\sqrt{a}-\sqrt{x})^2=a-2\sqrt{ax}+x$ This gives $\int_{0}^{a} \int_{a-2\sqrt{ax}+x}^{a-x}dy\,dx =\int_{0}^{a}(a-x)-(a-2\sqrt{ax}+x)\,dx$ $\int_{0}^{a} -2x+2\sqrt{a}\sqrt{x}\,dx=-x^2+\frac{4}{3}\sqrt{a}x^{\frac{3}{2}}\bigg|_{0}^{a}=\frac{a^2}{3}$ 6. Missed a negative, thanks.
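A symbolic check of the corrected setup (added for illustration; SymPy is assumed):

```python
from sympy import symbols, sqrt, integrate, simplify

x, a = symbols('x a', positive=True)

# area between y = a - x (above) and y = (sqrt(a) - sqrt(x))^2 (below), for x in [0, a]
area = integrate((a - x) - (sqrt(a) - sqrt(x))**2, (x, 0, a))
print(simplify(area))   # a**2/3
```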
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 16, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.921928346157074, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/36223/would-it-be-economical-to-add-a-counterweight-to-rocket-launches?answertab=votes
# Would it be economical to add a counterweight to rocket launches? It seems a large amount of rocket fuel during launches is spent to get the mass moving; indeed according to QuickLaunch, Inc. it takes 40% of the rocket fuel to get to Mach 1.3. It seems as though the engines are firing quite a while before liftoff; consider also that the full launch weight of the space shuttle is 4.4 million pounds (~2 million kg). Would it be feasible to add a counterweight system to help it get started? It seems as though one could be built with a structure a few hundred meters above the launch vehicle, which could significantly reduce the launch weight and get the upward motion started sooner. What I'd imagine is four cables attached to the launch vehicle, running up to the structure and each having ~400,000kg weights attached. This would make the rockets only need to lift ~400,000kg for the first, say, 200m, which would lead to much greater acceleration for this span and it'd be to Mach 1.3 much sooner. Is this too hard to make? Would it have any noticeable effect on the fuel requirements, or would it be negligible? Is the acceleration already near the limits of the astronauts' bodies? Or would it just be one other thing that could fail? The reason I ask is just because of the seemingly ludicrous amount of weight that needs to be launched. Are there any other methods in the works to assist the launch besides rockets for manned vehicles? It doesn't seem space guns or sky ramps are ever planning on having humans in the launch vehicles. - All are probably waiting for Superconductor maintenance man. Once it's in hand, everything would become handy..! One more thing: even though you've used some counterweight methods for conserving fuel, the total energy required would be the same. But instead of fuel, the counterweight requires a lot of manpower and new bills..! I've already answered your Trebuchet question with Mass Drivers. Please refer to Wikipedia..! – Ϛѓăʑɏ βµԂԃϔ Sep 12 '12 at 17:02 ## 1 Answer I think the main reason why this is not done is that the first stage in most rockets burns for several minutes. The acceleration from a counterweight cannot be higher than $g$, and for a falling distance of $200$m the total speed of $g\cdot t \approx 63$m/s is not significant enough to warrant such a huge engineering challenge. On the other hand this looks very different if a higher acceleration is used, e.g. in a railgun or mass driver. Here the acceleration and terminal velocity are much higher but this is still only done on a research level. - In addition, liquid rocket engines run for a few seconds to get pumps up to speed and temperature. The engine gimbals are also used to balance the rocket as the support clamps are released – Martin Beckett Sep 12 '12 at 15:14 2 Using a pulley system or something you could achieve an acceleration greater than $g$ using a counterweight. (Probably not practical for a rocket, though) – David Zaslavsky♦ Sep 12 '12 at 18:27 @DavidZaslavsky: Yes, I thought of that after I wrote the answer but a pulley for a rope that holds 400 tons is beyond my imagination and it does not really help as the kinetic energy is the same for the same counterweight. – Alexander Sep 12 '12 at 20:08
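To make the numbers in the answer explicit (a back-of-envelope check added for illustration; the sea-level speed of sound of roughly 340 m/s is an assumed round figure):

```python
import math

g, h = 9.81, 200.0                     # m/s^2, assist distance in metres
t = math.sqrt(2*h/g)                   # time to fall 200 m while accelerating at g
v = g*t                                # speed gained, the ~63 m/s quoted in the answer
mach_1_3 = 1.3*340                     # rough sea-level figure, ~442 m/s
print(round(t, 1), round(v, 1), round(mach_1_3))   # ~6.4 s, ~62.6 m/s, 442 m/s
```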
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9651633501052856, "perplexity_flag": "middle"}
http://medlibrary.org/medwiki/Minkowski%E2%80%93Bouligand_dimension
Minkowski–Bouligand dimension

Estimating the box-counting dimension of the coast of Great Britain

In fractal geometry, the Minkowski–Bouligand dimension, also known as Minkowski dimension or box-counting dimension, is a way of determining the fractal dimension of a set S in a Euclidean space $R^n$, or more generally in a metric space (X, d). To calculate this dimension for a fractal S, imagine this fractal lying on an evenly-spaced grid, and count how many boxes are required to cover the set. The box-counting dimension is calculated by seeing how this number changes as we make the grid finer by applying a box-counting algorithm. Suppose that N(ε) is the number of boxes of side length ε required to cover the set. Then the box-counting dimension is defined as: $\dim_{\rm box}(S) := \lim_{\varepsilon \to 0} \frac {\log N(\varepsilon)}{\log (1/\varepsilon)}.$ If the limit does not exist then one must talk about the upper box dimension and the lower box dimension which correspond to the upper limit and lower limit respectively in the expression above. In other words, the box-counting dimension is well defined only if the upper and lower box dimensions are equal. The upper box dimension is sometimes called the entropy dimension, Kolmogorov dimension, Kolmogorov capacity or upper Minkowski dimension, while the lower box dimension is also called the lower Minkowski dimension. The upper and lower box dimensions are strongly related to the more popular Hausdorff dimension. Only in very specialized applications is it important to distinguish between the three. See below for more details. Also, another measure of fractal dimension is the correlation dimension.

Alternative definitions

3 types of coverings or packings

It is possible to define the box dimensions using balls, with either the covering number or the packing number. The covering number $N_{\rm covering}(\varepsilon)$ is the minimal number of open balls of radius ε required to cover the fractal, or in other words, such that their union contains the fractal. We can also consider the intrinsic covering number $N'_{\rm covering}(\varepsilon)$, which is defined the same way but with the additional requirement that the centers of the open balls lie inside the set S. The packing number $N_{\rm packing}(\varepsilon)$ is the maximal number of disjoint balls of radius ε one can situate such that their centers would be inside the fractal. While $N$, $N_{\rm covering}$, $N'_{\rm covering}$ and $N_{\rm packing}$ are not exactly identical, they are closely related, and give rise to identical definitions of the upper and lower box dimensions. This is easy to prove once the following inequalities are proven: $N'_\text{covering}(2\varepsilon) \leq N_\text{covering}(\varepsilon), N_\text{packing}(\varepsilon) \leq N'_\text{covering}(\varepsilon). \,$ These, in turn, follow with a little effort from the triangle inequality. The advantage of using balls rather than squares is that this definition generalizes to any metric space. In other words, the box definition is "external" — one needs to assume the fractal is contained in a Euclidean space, and define boxes according to the external structure "imposed" by the containing space.
The ball definition is "internal". One can imagine the fractal disconnected from its environment, define balls using the distance between points on the fractal and calculate the dimension (to be more precise, the $N_{\rm covering}$ definition is also external, but the other two are internal). The advantage of using boxes is that in many cases N(ε) may be easily calculated explicitly, and that for boxes the covering and packing numbers (defined in an equivalent way) are equal. The logarithms of the packing and covering numbers are sometimes referred to as entropy numbers, and are somewhat analogous (though not identical) to the concepts of thermodynamic entropy and information-theoretic entropy, in that they measure the amount of "disorder" in the metric space or fractal at scale $\varepsilon$, and also measure how many "bits" one would need to describe an element of the metric space or fractal to accuracy $\varepsilon$. Another equivalent definition for the box counting dimension, which is again "external", is given by the formula $\dim_\text{box}(S) = n - \lim_{r \to 0} \frac{\log \text{vol}(S_r)}{\log r},$ where for each r > 0, the set $S_r$ is defined to be the r-neighborhood of S, i.e. the set of all points in $R^n$ which are at distance less than r from S (or equivalently, $S_r$ is the union of all the open balls of radius r which are centered at a point in S).

Properties

Both box dimensions are finitely additive, i.e. if $\{ A_1, \dots, A_n \}$ is a finite collection of sets then $\dim (A_1 \cup \dotsb \cup A_n) = \max \{ \dim A_1 ,\dots, \dim A_n \}. \,$ However, they are not countably additive, i.e. this equality does not hold for an infinite sequence of sets. For example, the box dimension of a single point is 0, but the box dimension of the collection of rational numbers in the interval [0, 1] is 1. The Hausdorff measure, by comparison, is countably additive. An interesting property of the upper box dimension not shared with either the lower box dimension or the Hausdorff dimension is the connection to set addition. If A and B are two sets in a Euclidean space then A + B is formed by taking all the couples of points a,b where a is from A and b is from B and adding a+b. One has $\dim_\text{upper box}(A+B)\leq \dim_\text{upper box}(A)+\dim_\text{upper box}(B).$

Relations to the Hausdorff dimension

The box-counting dimension is one of a number of definitions for dimension that can be applied to fractals. For many well behaved fractals all these dimensions are equal; in particular, these dimensions coincide whenever the fractal satisfies the open set condition (OSC). For example, the Hausdorff dimension, lower box dimension, and upper box dimension of the Cantor set are all equal to log(2)/log(3). However, the definitions are not equivalent. The box dimensions and the Hausdorff dimension are related by the inequality $\dim_{\operatorname{Haus}} \leq \dim_{\operatorname{lower box}} \leq \dim_{\operatorname{upper box}}.$ In general both inequalities may be strict. The upper box dimension may be bigger than the lower box dimension if the fractal has different behaviour in different scales. For example, examine the interval [0, 1], and examine the set of numbers satisfying the condition that for any n, all the digits between the $2^{2n}$-th digit and the $(2^{2n+1}-1)$-th digit are zero. The digits in the "odd places", i.e. between $2^{2n+1}$ and $2^{2n+2}-1$, are not restricted and may take any value.
This fractal has upper box dimension 2/3 and lower box dimension 1/3, a fact which may be easily verified by calculating N(ε) for $\varepsilon=10^{-2^n}$ and noting that their values behave differently for n even and odd. To see that the Hausdorff dimension may be smaller than the lower box dimension, return to the example of the rational numbers in [0, 1] discussed above. The Hausdorff dimension of this set is 0. Another example: The set of rational numbers $\mathbb{Q}$, a countable set with $\dim_{\operatorname{Haus}} = 0$, has $\dim_{\operatorname{box}} = 1$ because its closure, $\mathbb{R}$, has dimension 1. Box counting dimension also lacks certain stability properties one would expect of a dimension. For instance, one might expect that adding a countable set would have no effect on the dimension of a set. This property fails for box dimension. In fact $\dim_{\operatorname{box}} \left\{0,1,\frac{1}{2}, \frac{1}{3}, \frac{1}{4}, \ldots\right\} = \frac{1}{2}.$
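Since the definition above is operational (count occupied boxes, take a log-log slope), a small numerical sketch may help. Nothing below comes from the article; the sample sizes, dyadic scales, and use of numpy are my own choices, and the two test sets are just sanity checks whose dimensions are known to be 1 and 2.

```python
import numpy as np

def box_count_dimension(points, epsilons):
    """Estimate the box-counting dimension of a point cloud inside [0,1]^d.

    points: (n, d) array of samples from the set S
    epsilons: box side lengths; returns the slope of log N(eps) vs log(1/eps)
    """
    xs, ys = [], []
    for eps in epsilons:
        # Assign every point to a grid box of side eps and count distinct boxes.
        boxes = np.unique(np.floor(points / eps).astype(np.int64), axis=0)
        xs.append(np.log(1.0 / eps))
        ys.append(np.log(len(boxes)))
    slope, _ = np.polyfit(xs, ys, 1)
    return slope

rng = np.random.default_rng(0)
t = rng.random((20000, 1))
segment = np.hstack([t, t])        # points on the diagonal of the unit square: dimension 1
square = rng.random((20000, 2))    # points filling the unit square: dimension 2
eps = [2.0 ** -k for k in range(2, 7)]

print("segment:", round(box_count_dimension(segment, eps), 2))   # close to 1
print("square :", round(box_count_dimension(square, eps), 2))    # close to 2
```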
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 20, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8996512293815613, "perplexity_flag": "head"}
http://mathoverflow.net/questions/38019/zeros-of-gradient-of-positive-polynomials/38634
## Zeros of Gradient of Positive Polynomials.

It was asked in the Putnam exam of 1969, to list all sets which can be the range of polynomials in two variables with real coefficients. Surprisingly, the set $(0,\infty )$ can be the range of such polynomials. These don't attain their global infimum although they are bounded below. But is it also possible that such polynomials with range $(0,\infty )$ also have a non zero gradient everywhere? - What's the question? – Daniel Moskovich Sep 8 2010 at 1:43 4 @Will $x^2+(xy-1)^2$. @Daniel The question is, can there be such a polynomial with nonvanishing gradient? I don't see why not, but I haven't thought about it very hard. – David Speyer Sep 8 2010 at 2:36 Here is a sketch of a proof that it is impossible for the gradient to be everywhere nonzero. If all level surfaces are bounded, the gradient has to have a zero somewhere: either it vanishes somewhere on a nonempty level surface or one has a smooth bounded level surface and then the gradient vanishes somewhere in the interior. If there is an unbounded level surface, then, assuming the gradient is everywhere nonzero there are lines through the origin along which the function infinitely increases and lines along which it infinitely decreases. – algori Sep 8 2010 at 2:58 Algori, I don't follow your last sentence at all. If you really think you have an argument, could you write it up in detail as an answer? This question was also posted on Art of Problem Solving, but no one there made any progress: artofproblemsolving.com/Forum/… – JBL Sep 8 2010 at 3:17 If algori's approach can be fixed up it is subtle. I'm looking up Poincare-Bendixson and related matters. With no critical point we do have a global nonzero vector field, the gradient. With no periodic level curve it is tempting to predict that the integral curves of the gradient field foliate the plane and therefore so do the level curves, and all meet orthogonally. But it needs work. – Will Jagy Sep 8 2010 at 3:54

## 2 Answers

$(1+x+x^2y)^2+x^2$ - 2 fedja, beautiful! – Will Jagy Sep 14 2010 at 2:49 I knew an answer would have appeared here, containing only a polynomial! – Pietro Majer Sep 14 2010 at 18:36 2 Hope it's not spoiling if I post a link to fedja's explanation in a different thread mathoverflow.net/questions/38639/… – jc Sep 14 2010 at 21:54

Too long for a comment. I've got to wonder just how difficult this is. Anyway, one thing did work out, at least locally: We have a polynomial function $F(x,y)$ that is assumed to have a nonvanishing gradient. Then the vector field $$\left( \frac{\partial F}{\partial x}, \; \frac{\partial F}{\partial y} \right)$$ has integral curves that foliate the plane. On the other hand, the integral curves of $$\left( \frac{ - \partial F}{\partial y}, \; \frac{\partial F}{\partial x} \right)$$ are level curves as well, and foliate the plane. We know that if one of these level curves is a simple closed curve, using the Jordan Curve theorem it has an interior. If the function is constant on this it has zero gradient within, otherwise it achieves its maximum or minimum within and again has a critical point. After this I'm stuck. In particular, I simply don't see what polynomial does for us.
Homogeneous polynomials would be different. There is a conjecture of Thom about local behavior that he apparently settled for homogeneous polynomials only. I would like to say that the picture that is being built up resembles that for $F(x,y) = e^x$ and that is absurd for a polynomial. Well, perhaps. I've also got to wonder how much the OP knows. - I didn't do much. Looking into the fact that, for polynomials of degree 2 except a small class, the level curves are symmetric with respect to some point (h,k), and this point turns out to be a point where the gradient is zero. I wonder if anything can be said about polynomials of higher degrees, with their gradient vanishing only at a single point. It works for the example at least. – Anonymous Sep 9 2010 at 8:57
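fedja's example can be checked symbolically. The sketch below is not part of either answer; it assumes sympy, and it simply confirms that the gradient system has no solution while the substitution y = -(1+x)/x^2 collapses the polynomial to x^2, so values arbitrarily close to 0 are attained but 0 itself is not.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = (1 + x + x**2 * y)**2 + x**2

fx, fy = sp.diff(f, x), sp.diff(f, y)

# fy = 2*x**2*(1 + x + x**2*y) vanishes only if x = 0 or 1 + x + x**2*y = 0,
# and in both cases fx is forced to be nonzero, so no real critical point exists.
print(sp.solve([fx, fy], [x, y]))                 # should print []

# Along y = -(1 + x)/x**2 the squared bracket vanishes and f collapses to x**2,
# so f takes values arbitrarily close to 0 yet never equals 0.
print(sp.simplify(f.subs(y, -(1 + x) / x**2)))    # should print x**2
```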
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9479638338088989, "perplexity_flag": "head"}
http://www.scholarpedia.org/article/Attractor_reconstruction
Attractor reconstruction

From Scholarpedia. Timothy D. Sauer (2006), Scholarpedia, 1(10):1727.

Attractor reconstruction refers to methods for inference of geometrical and topological information about a dynamical attractor from observations. The modeling of a deterministic dynamical system relies on the concept of a phase space, the collection of possible system states. The system state at time t consists of all information needed to uniquely determine the future system states for times $$\geq$$ t; e.g., in many cases, positions and velocities. For a system that can be modeled mathematically, the phase space is known from the equations of motion. For experimental and naturally occurring chaotic dynamical systems, the phase space and a mathematical description of the system are often unknown. Attractor reconstruction methods have been developed as a means to reconstruct the phase space and develop new predictive models. One or more signals from the system must be observed as a function of time. The time series are then used to build a proxy of the observed states.

Whitney and Takens Embedding Theorems

The Whitney Embedding Theorem (Whitney 1936) holds that a generic map from an n-manifold to 2n+1 dimensional Euclidean space is an embedding: the image of the n-manifold is completely unfolded in the larger space. In particular, no two points in the n-dimensional manifold map to the same point in the (2n+1)-dimensional space. As 2n+1 independent signals measured from a system can be considered as a map from the set of states to 2n+1 dimensional space, Whitney's theorem implies that each state can be identified uniquely by a vector of 2n+1 measurements, thereby reconstructing the phase space. The contribution of the Takens Embedding Theorem (Takens 1981) was to show that the same goal could be reached with a single measured quantity. Takens proved that instead of 2n+1 generic signals, the time-delayed versions $$[y(t), y(t-\tau), y(t-2\tau), \ldots, y(t-2n\tau)]$$ of one generic signal would suffice to embed the n-dimensional manifold. There are some technical assumptions that must be satisfied, restricting the number of low-period orbits with respect to the time-delay $$\tau$$ and repeated eigenvalues of the periodic orbits. Similar theoretical results in (Aeyels 1981) from the mathematical control theory point of view, and a more empirical account (Packard et al. 1980) were published at about the same time. The idea of using time delayed coordinates to represent a system state is reminiscent of the theory of ordinary differential equations, where existence theorems say that a unique solution exists for each $$[y(t), \dot{y}(t), \ddot{y}(t), \ldots]\ .$$ For example, in many-body dynamics under Newtonian gravitation, current knowledge of the position and momentum of each body suffices to uniquely determine the future dynamics. The time derivatives can be approximated by delay-coordinate terms as $$[y(t), \frac{y(t)-y(t-\tau)}{\tau}, \frac{y(t)-2y(t-\tau)+y(t-2\tau)}{\tau^2}, \ldots]\ .$$ Emergence of chaos and fractal geometry in physical systems motivated a reassessment of the original theory, which applies to smooth manifold attractors. It was shown (Sauer et al. 1991) that a, possibly fractal, attractor of box-counting dimension d can always be reconstructed with m generic observations, or with m time-delayed versions of one generic observation, where m is any integer greater than 2d.
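As a concrete illustration of the delay-coordinate map just described, here is a minimal sketch (not part of the article) that integrates the Lorenz equations, keeps only the x observable, and stacks the time-delayed copies [x(t), x(t-τ), x(t-2τ)], reproducing the kind of reconstruction shown in the figures below. The step size, lag, and parameter values are conventional choices, not prescriptions.

```python
import numpy as np

def lorenz_x(n_steps, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
    """Integrate the Lorenz system with a basic RK4 scheme and return the x series."""
    def deriv(s):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    s = np.array([1.0, 1.0, 1.0])
    xs = np.empty(n_steps)
    for i in range(n_steps):
        k1 = deriv(s)
        k2 = deriv(s + 0.5 * dt * k1)
        k3 = deriv(s + 0.5 * dt * k2)
        k4 = deriv(s + dt * k3)
        s = s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        xs[i] = s[0]
    return xs

def delay_embed(series, dim, lag):
    """Stack delayed copies [y(t), y(t - lag), ..., y(t - (dim - 1) * lag)] as rows."""
    n = len(series) - (dim - 1) * lag
    cols = [series[(dim - 1 - k) * lag:(dim - 1 - k) * lag + n] for k in range(dim)]
    return np.column_stack(cols)

x = lorenz_x(20000)
cloud = delay_embed(x, dim=3, lag=10)   # lag of 10 steps = 0.1 time units
print(cloud.shape)                      # (19980, 3): the reconstructed attractor as a point cloud
```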
Figure 1: The Lorenz attractor
Figure 2: Time series formed by x coordinate
Figure 3: Reconstructed attractor

The figure shows a reconstruction of the fractal attractor for the well-known Lorenz system, whose fractal dimension is slightly larger than 2. The time series shown consists of the $$x$$ coordinate of the system traced as a function of time. In the third image, triples of time series values $$[x(t), x(t-\tau), x(t-2\tau)]$$ are plotted. The topological structure of the Lorenz attractor is preserved by the reconstruction. Embedding ideas were later extended beyond autonomous systems with continuously-measured time series. A version was designed for excitable media, where information may be transmitted by spiking events, extending usage to possible neuroscience applications (Sauer 1994). An embedding theorem for skew systems (Stark 1999) explores extensions of the methodology when one part of a system is driving another, and only the latter can be observed.

Attractor Reconstruction in Practice

Although the theory implies that an arbitrary time delay is sufficient to reconstruct the attractor, efficiency with a limited amount of data is enhanced by particular choices of the time delay $$\tau\ .$$ Methods for choosing an appropriate time delay have centered on measures of autocorrelation and mutual information (Fraser & Swinney 1986). Further, in the absence of knowledge of the phase space dimension n, a choice of the number of embedding dimensions m must also be made. A number of ad hoc methods have been proposed that try to estimate whether the image has been fully unfolded by a given m-dimensional map. The approach of Kennel and Abarbanel (Kennel & Abarbanel 2002) is often used. This approach examines whether points that are near neighbors in one dimension are also near neighbors in the next higher embedding dimension. If not, then the image has not been fully unfolded. If all near neighbors remain so, then the unfolding is complete and the dimension is established. There are still unresolved issues as to what constitutes a near neighbor or a false near neighbor. The success of embedding in practice depends heavily on the specifics of the application. In particular, the hypothesis of a generic observation function creating the time series is often problematic. A mathematically generic observation, by definition, monitors all degrees of freedom of the system. The extent to which this is true affects the faithfulness of the reconstruction. If there is only a weak connection from some degrees of freedom to the observation function, the data requirements for a satisfactory reconstruction may be prohibitive in practice. Other factors which limit success are differences in time scales between different parts of the system, as well as system and observational noise. Applications of embedding time-series data (Ott et al. 1994, Kantz & Schreiber 1997) have been extensive since the Takens Embedding Theorem was published. Many techniques of system characterization and identification were made possible, including determination of unstable periodic orbits and symbolic dynamics, as well as approximation of attractor dimensions and Lyapunov exponents of chaotic dynamics. In addition, researchers have focused on methods of time series prediction and nonlinear filtering for noise reduction, the use of chaotic signals for communication, and for controlling chaos.

References • Abarbanel, H. (1996) Analysis of Observed Chaotic Data. New York: Springer-Verlag • Aeyels, D.
(1981) Generic observability of differentiable systems. SIAM J. Control Optim. 19:595-603 • Eckmann, J.-P., Ruelle, D. (1985) Ergodic theory of chaos and strange attractors. Reviews of Modern Physics 57:617-652 • Fraser, A.M., Swinney, H.L. (1986) Independent coordinates for strange attractors from mutual information. Physical Review A 33:1134-1140 • Kantz, H., Schreiber, T. (1997) Nonlinear Time Series Analysis. Cambridge:Cambridge University Press • Kennel, M and Abarbanel, H. (2002) Physical Review E 66: 026209 • Ott, E., Sauer, T, Yorke, J.A. (1994) Coping with Chaos: Analysis of Chaotic Data and the Exploitation of Chaotic Systems. New York:Wiley Interscience • Packard, N., Crutchfield, J., Farmer, D., Shaw, R. (1980) Geometry from a time series. Physical Review Letters 45:712-715 • Sauer, T., Yorke, J.A., Casdagli, M. (1991) Embedology. Journal of Statistical Physics 65:579-616 • Sauer, T. (1994) Reconstruction of dynamical systems from interspike intervals. Phys. Rev. Lett. 72:3811-3814 • Stark, J. (1999) Delay embeddings of forced systems I: Deterministic forcing. J. Nonlinear Sci. 9:255-332 • Takens, F. (1981) Detecting strange attractors in turbulence. Lecture Notes in Mathematics 898, Berlin:Springer-Verlag • Whitney, H. (1936) Differentiable manifolds. Ann. Math. 37:645-680 Internal references • John W. Milnor (2006) Attractor. Scholarpedia, 1(11):1815. • Edward Ott (2006) Controlling chaos. Scholarpedia, 1(8):1699. • Jeff Moehlis, Kresimir Josic, Eric T. Shea-Brown (2006) Periodic orbit. Scholarpedia, 1(7):1358. • James Murdock (2006) Unfoldings. Scholarpedia, 1(12):1904. See also Attractor | Attractor Dimensions | Chaos | Controlling Chaos | Dynamical Systems | Fractals | Lyapunov Exponents | Noise Reduction | Phase Space | Time Series Prediction | Unstable Periodic Orbits
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 8, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.886493980884552, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-statistics/47629-consecutive-heads-tails-weighted-coin-toss.html
# Thread:

1. ## Consecutive Heads/Tails in weighted coin toss

Hi, Supposing you have a weighted coin which lands heads up 70% of the time and tails up 30% of the time. Q. What is the probability that it lands on tails 20 or more times in a row if I did this 1000 times? *Also could I have the equation on how you worked this out so that I can change the weighting, number of trials etc (I need to know this for my business!!!) Thank you very much. Zak

2. Originally Posted by slyone Hi, Supposing you have a weighted coin which lands heads up 70% of the time and tails up 30% of the time. Q. What is the probability that it lands on tails 20 or more times in a row if I did this 1000 times? *Also could I have the equation on how you worked this out so that I can change the weighting, number of trials etc (I need to know this for my business!!!) Thank you very much. Zak Hi Zak, I don't know how to compute the probability you seek exactly, but I think I can give a pretty good approximation. To generalize the problem, let's suppose you have a possibly biased coin which comes up tails with probability p and heads with probability q, where p+q = 1. You flip the coin N times and look for a run of R or more successive tails. We would like to know the probability that at least one such run occurs. The "at least one" part is my interpretation of your problem statement; you didn't say that, but I'm guessing that's what you mean. As I said, I haven't come up with a reasonable way to compute the exact probability of having at least one run of length R, but I think I can solve a closely associated problem which can be used to obtain an approximate answer. Let $\lambda$ be the expected number of runs of R or more successive tails. It can be shown that $\lambda = p^R \,[(N - R) q + 1]$. That is an exact result, not an approximation. (I derived this formula from scratch, but I suppose it must be a well-known result.) So in your case, for example, we have N = 1000, R = 20, p = 0.3 and q = 0.7, so $\lambda = 2.4 * 10^{-8}$. Now comes the approximation. Let's suppose the number of runs, say X, has a Poisson distribution with mean $\lambda$. It doesn't, but for small values of $\lambda$ it should be pretty close. Then $Pr(X > 0) = 1 - Pr(X = 0) = 1 - e^{-\lambda}$. For small values of $\lambda$ this value can be hard to compute precisely, but we can use another approximation: $1 - e^{-\lambda} = \lambda$ (approximately), so the answer to your question is (once again, approximately) $\lambda = 2.4 * 10^{-8}$.

3. ## Re: Consecutive Heads/Tails in weighted coin toss

Hi, great answer from awkward. But how did you figure out the expected number of runs of R or more successive tails: $\lambda = p^R \,[(N - R) q + 1]$. You derived this formula from scratch. I tried to figure by myself this result and tried to Google it and did not find an answer. Could you share your Derivation ? Thanks Otto

4. ## Re: Consecutive Heads/Tails in weighted coin toss

The chance of getting 20 tails in a row in 20 tosses is $0.3^{20}$. When tossing a coin 1000 times there are (1000-20+1)=981 times where you can have 20 of an outcome in a row. Since it is 20 or more we don't care what the other 980 tosses are. Since the chance of getting 20 in a row in one attempt is $0.3^{20}$ and you get 981 attempts the chance of getting 20 in a row through the whole 1000 tosses is $981 \cdot 0.3^{20}$

5. ## Re: Consecutive Heads/Tails in weighted coin toss

Originally Posted by ottof Hi, great answer from awkward.
But how did you figure out the expected number of runs of R or more successive tails: $\lambda = p^R \,[(N - R) q + 1]$ You derived this formula from scratch. I tried to figure by myself this result and tried to Google it and did not find an answer. Could you share your Derivation ? Thanks Otto Let $X_i = \begin{cases} 1 &\text{if there is a run of at least R tails starting with flip i}\\0 &\text{otherwise} \end{cases}$ Then $X_1 = 1$ if the first R flips are tails, but for $i > 1$, $X_i = 1$ if the (i-1)st flip is a head and the next R flips are tails. So $E(X_1)=\Pr(X_1 = 1) = p^R$ and $E(X_i)=\Pr(X_i = 1) = q p^R$ for $i > 1$. Applying the theorem that E(X+Y) = E(X) + E(Y), we have $E \left( \sum_{i=1}^{N-R+1} X_i \right) = \sum_{i=1}^{N-R+1} E(X_i) = p^R + (N-R)q p^R = p^R [(N-R)q + 1]$
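The formula and the Poisson approximation in the posts above are easy to cross-check numerically. The sketch below is mine, not part of the thread; since a run of 20 tails is far too rare to simulate, it uses R = 5 with the same p and N so that the Monte Carlo estimate is meaningful.

```python
import math
import random

def expected_runs(p, R, N):
    """awkward's exact formula for the expected number of runs of R or more tails."""
    q = 1 - p
    return p**R * ((N - R) * q + 1)

def simulate(p, R, N, trials=20000, seed=1):
    """Monte Carlo estimate of P(at least one run of >= R tails in N flips)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        run = 0
        for _ in range(N):
            if rng.random() < p:      # tails with probability p
                run += 1
                if run >= R:
                    hits += 1
                    break
            else:
                run = 0
    return hits / trials

p, R, N = 0.3, 5, 1000                # R = 5 instead of 20, so the event is observable
lam = expected_runs(p, R, N)
print("lambda             :", round(lam, 4))
print("1 - exp(-lambda)   :", round(1 - math.exp(-lam), 4))
print("simulated frequency:", simulate(p, R, N))
```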
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 18, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9571002125740051, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/128521/number-of-decompositions-into-k-powers-of-p-counting-functions
# “Number of Decompositions into $k$ Powers of $p$”-Counting Functions

Let $a_j\in \mathbb{N}$, with $0< a_{j-1}<a_j$. My question is: What is known about the counting functions $\sigma_{k,p}(n)$, counting how many numbers less than $n$ have a decomposition into a sum of $k$ $p$-th powers of the $a_j$? So for every $x\le n$, I ask how many ways there are to decompose $x$ like $$x=\sum_{j=1}^{k} a_j^p,$$ and then sum these (ways) up, e.g. $30$ has a unique decomposition into $1^2+2^2+3^2+4^2$, $90$ has 2 ($1^2+2^2+6^2+7^2$ and $1^2+3^2+4^2+8^2$), $78$ has 3 (see here) and so forth... Below you'll find some plots, where I summed up $4$ squares with $\max a_j=300$ and $4$ cubes with $\max a_j=200$. They show a quite different behaviour, concerning their curvature, but I'm not sure if this is an artifact of the sample I have taken. EDIT: The linear appearance of the lower left cube-plot might be related to the fact that even if $a_j=a_k$ is allowed (Waring's problem) $9$ cubes are needed to represent all numbers. So is the curvature related to the number of $p$-th powers I add up?
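The counts quoted above (one decomposition for 30, two for 90, three for 78) can be reproduced by brute force. This sketch is not from the question; it simply enumerates strictly increasing bases with itertools and also shows one way to accumulate the sum the question calls $\sigma_{k,p}(n)$.

```python
from itertools import combinations

def decompositions(x, k, p):
    """All ways to write x as a sum of k p-th powers with 0 < a_1 < ... < a_k."""
    bases = [a for a in range(1, int(round(x ** (1.0 / p))) + 2) if a**p <= x]
    return [c for c in combinations(bases, k) if sum(a**p for a in c) == x]

# The examples from the question: 30 has 1, 90 has 2 and 78 has 3 such sums of four squares.
for x in (30, 90, 78):
    ways = decompositions(x, k=4, p=2)
    print(x, len(ways), ways)

def sigma(n, k, p):
    """Sum the number of decompositions over all x <= n, as described in the question."""
    return sum(len(decompositions(x, k, p)) for x in range(2, n + 1))

print(sigma(200, 4, 2))
```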
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9528725743293762, "perplexity_flag": "head"}
http://mathbabe.org/2012/09/09/nyc-parks-datadive-update-does-pruning-prevent-future-fallen-trees/
## NYC Parks datadive update: does pruning prevent future fallen trees?

September 9, 2012

After introducing ourselves, we subdivided our pruning problem into 5 problems: 1. mapping tree coordinates to block segments 2. defining the expected number of fallen tree events based on number of trees, size and age of trees, and species, 3. accounting for weather, 4. designing the model assuming the above sub-models are in shape, and 5. getting the data in shape to train the model (right now the data is in pieces with different formats). After a few hours of work, there was real progress on 1 and 5, and we'd noticed that we don't have the age of trees, but only the size, which we can use as a proxy. Moreover, the size measurements weren't updated after they were taken once in 2005. So it would require much more domain expertise than we currently had to incorporate a model of how fast trees grow, which we don't have time for this weekend. Before lunch we realized we really needed to talk about 4, namely the design of the model, so we scheduled a pow-wow for after lunch. After some discussion, we settled on a univariate regression model where the basic unit is a block of trees in Brooklyn for a given year: $y = \alpha x + \epsilon,$ So for each street block and for each year of data, we define: • $x$ to be a simple function of the number of years since that block was last pruned, • $y$'s numerator to be a (weighted) count of the number of fallen tree events (or similar) the following year – this is weighted by the fact that some work orders are much more expensive than others, and • $y$'s denominator to be a (weighted) count of the number of trees on the block – this is weighted by the fact that larger trees should possibly get counted more than smaller trees. Going back to the $x,$ since we are trying to predict work orders per tree, we expect the effect of pruning on this count to be (negative and) greatest the year following pruning, and for the effect to wear off over time. So the actual function is probably $f(n) = 1/n$ or $f(n) = 1/\sqrt{n},$ or something like that, which tends to zero as $n$ tends to infinity. We ended up deciding that we can't really account for weather in our model, since we won't have any idea how many storms will pass through Brooklyn next year. I left last night before we'd gotten all the data in shape so I'm eager to go back this morning to the presentation event and see if we have any hard results. Even if we don't, I think we have a reasonable model and a very good start on it, and I think we will have helped the NYC Parks department with the question. I'll update soon with the final results.

Categories: data science

1. September 9, 2012 at 9:43 am | #1 needs to be interactive; you are trying to mastermind a central control for a complex continually changing problem. why not involve the people on the block who walk by those trees every day? let them enter issues seen. every spring some branches are found not to have survived the winter or lack of water from the previous summer, so they die, but are still part of the tree, until a storm renders them weaker, and they break. You cannot track or model the effects of the weather so completely, so use the eyes of the people, too. 2.
somedude September 9, 2012 at 4:19 pm | #2 My own experience with trees leads me to assume that the number of falling trees is highly dependent on species. Some fast growing species can easily lose branches. Oaks are good, willows not so much. 3. jmacclure September 9, 2012 at 10:43 pm | #3 I'm an actuary; we use mathematical models of survival to value liabilities for insurance companies and pension funds. To use your example, actuaries: 1. map survival probabilities to characteristics of the population: age, gender, etc 2. define the expected cash flows under a pension scheme or insurance policy 3. accounting for the economic environment 4. tweaking the assumptions in our model as experience of the specific population unfolds over time 5. here is where it gets interesting, and where i'd like to be able to apply some of this "data science" to actuarial science, where the models would essentially "train themselves" to extract trends and information from the data
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 8, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9379094243049622, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/313282/finding-a-counterexample-to-a-prime-factorization-conjecture
# Finding a counterexample to a Prime Factorization Conjecture

Let $\mathbb{Z}_{\geq 2}$ be the set of natural numbers starting at 2: $$\mathbb{Z}_{\geq 2}= \{2, 3, 4, 5,\ldots\}.$$ A natural number's prime factorization is odd if the total number of primes in its factorization is odd. It is even if the total number of primes in its factorization is even. Let $N(k) = \{j \mid j\in \mathbb{Z}_{\geq 2}, j\leq k\text{, the prime factorization for } j \text{ is odd}\}$. Let $n(k) = |N(k)|$. Let $A(k) = \{j \mid j\in \mathbb{Z}_{\geq 2}, j\leq k\text{, the prime factorization for } j \text{ is even}\}$. Let $a(k) = |A(k)|$. Conjecture: $n(k) \geq a(k)$ for all prime numbers $k$ in $\mathbb{Z}_{\geq 2}$. - my bad! $a(2)< n(2)$ and $a(3)< n(2)$ :) – fidbc Feb 24 at 20:50

## 2 Answers

This is a slightly modified version of Polya's Conjecture; you are asking for a prime witness to its falsehood. I suspect there is one, but it is probably hard to find. Polya's conjecture is true for most numbers, and the first counterexample is 906,150,257, which is not prime. But there may well be a prime counterexample soon after. -

The number $906,150,293$ is a counterexample to your conjecture. Starting from the fact that $m=906,150,257$ is a counterexample to Polya's conjecture, we know that there are more abnormal numbers less than it than there are normal numbers less than it. If we take a number $n$ that is larger than $m$ and there are more abnormal numbers between $m$ and $n$ than there are normal numbers between $m$ and $n$, then we can conclude that there are more abnormal numbers less than $n$ than there are normal numbers less than $n$. In Mathematica, running

````
NormalFactorization[n_] := Module[{factorlist}, factorlist = FactorInteger[n];
  OddQ[Sum[factorlist[[i]][[2]], {i, 1, Length[factorlist]}]]]
CountNormalLessThan[n_] := Length[Select[normal, # <= n &]]
CountAbnormalLessThan[n_] := Length[Select[abnormal, # <= n &]]
start = 906150257
normal = {}; abnormal = {}; counterexamples = {};
For[n = start, n < start + 100, n++,
 If[NormalFactorization[n], AppendTo[normal, n], AppendTo[abnormal, n]]];
For[n = start, n < start + 100, n++,
 If[CountNormalLessThan[n] < CountAbnormalLessThan[n] && PrimeQ[n],
  AppendTo[counterexamples, n]]];
counterexamples
````

produces

````
{906150293, 906150341}
````

- What if we didn't know about Polya's conjecture? Could you make the program run from n=2 to n=906150257 without it taking a massive amount of time? – user63813 Feb 24 at 23:38 Unfortunately, no; I tried doing that first, and it was already taking about a minute when going from 1 to 10,000. – Zev Chonoles♦ Feb 25 at 18:39
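Regarding the follow-up comment about scanning from n = 2 upward: one way to avoid refactoring every integer is to sieve the Liouville function λ(n) = (-1)^Ω(n) and keep a running difference, since a number has an odd factorization exactly when λ = -1. The sketch below is my own, not from the answer; plain Python lists will not comfortably reach 9·10^8, so reaching the known counterexamples needs a segmented or numpy-based variant of the same idea.

```python
def conjecture_violations(K):
    """Scan k = 2..K for prime k with a(k) > n(k), using the Liouville function."""
    spf = list(range(K + 1))                 # smallest-prime-factor sieve
    for i in range(2, int(K**0.5) + 1):
        if spf[i] == i:                      # i is prime
            for j in range(i * i, K + 1, i):
                if spf[j] == j:
                    spf[j] = i
    lam = [0] * (K + 1)
    lam[1] = 1
    for m in range(2, K + 1):
        lam[m] = -lam[m // spf[m]]           # removing one prime factor flips the sign
    diff, bad = 0, []
    for k in range(2, K + 1):
        diff -= lam[k]                       # running value of n(k) - a(k)
        if diff < 0 and spf[k] == k:         # a prime k violating n(k) >= a(k)
            bad.append(k)
    return bad

# Nothing turns up below a modest bound; per the answers, the first violations
# sit just above 906,150,257.
print(conjecture_violations(10**6))          # expect: []
```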
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8863094449043274, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/91944/flattening-a-corner-in-a-convex-d-polytope-into-d-1-dimensions-without-over
## Flattening a corner in a convex $d$-polytope (into $d-1$ dimensions, without overlap)?

I'm interested in the following question, which seems to be assumed all over the place (at least for 3 dimensions) in convex geometry, and which I cannot find a proof of. Suppose we have a corner of a convex polytope in $\mathbb{R}^d$. How do we show that we can 'flatten' the surrounding facets into $d-1$ dimensions without overlap? What do I mean by that? Well, for the three-dimensional case, it means that if you sum the angles around a given vertex of a convex polyhedron, you get a sum of less than $2 \pi$. Here's another way to say that: (0) Given a collection of 2-dim cones $C_i$ with angles $a_i$, if the sum of the $a_i$ is greater than $2 \pi$, then the $C_i$ cannot be facets of a 3-dim cone. (I'm defining a cone to be the convex hull of a collection of rays; so, cones are assumed to be convex.) And in general dimensions: (1) Given a collection of $(d-1)$-dim cones $C_i$ with total $(d-1)$-angle measures $a_i$, if the sum of the $a_i$ is greater than the total angle surrounding a point in $\mathbb{R}^{(d-1)}$, then the $C_i$ cannot be facets of a d-dim cone. This fact can be restated in a lot of other ways. Maybe one of these is easier to prove? (2) If a collection of $(d-1)$-dim cones $C_i$ with total $(d-1)$-angle sum greater than the total angle surrounding a point in $\mathbb{R}^{(d-1)}$ is configured in $\mathbb{R}^d$ with all cone points set at the origin and every $(d-2)$-face identified with a $(d-2)$-face of some other cone (i.e., the cones are glued to make a simplicial complex), and if this configuration lies on one side of a hyperplane, then the configuration is not convex. (3) The facets of any d-cone can be isometrically mapped (unfolded!) into a $(d-1)$-hyperplane, retaining the coincidence of the cone point and without overlap. (4) The convex spherical polygon of largest perimeter is a great circle, i.e. any spherical polygon with perimeter larger than $2 \pi$ is not convex. It seems to me like there should be a straightforward, convex-geometry proof of this fact, but I can't find it. If you know another way to prove it (say, using ideas from curvature?) I'd be very interested in that too! - 2 I suspect [Miller-Pak 2003] should be relevant: math.ucla.edu/~pak/papers/FoldLI.pdf – Allen Knutson Mar 22 2012 at 23:55 Allen -- thanks! This paper, in fact, proves much stronger results than the one I'm asking about. On the other hand, the tools it invokes go well beyond convex geometry, so I'm still curious about whether there's a simple proof of this fact. – Emily Peters Mar 23 2012 at 15:32

## 1 Answer

One way of doing this (in $d=3,$ say) requires the "Archimedes axiom": if one convex body $K$ contains another body $L,$ then the perimeter of $K$ is greater than that of $L$ (the nicest proof uses Crofton's formula, which says that the perimeter is proportional to the measure of the set of lines which intersect the set). Then, you set $K$ to the hemisphere, and $L$ to your spherical convex polygon, and you are done. (Archimedes actually introduced this as an axiom in the Euclidean case when computing the perimeter of the circle, since he needed to know that the inscribed polygons provided a lower bound). EDIT To answer the OP's question: Crofton's (or kinematic) formulas work in all dimensions in all constant curvature spaces. The canonical reference is L.
Santalo's Integral Geometry and Geometric Probability. A nice survey of the generalizations is this paper by Hug and Schneider, but it does not cover the non-Euclidean case. - Igor, this "Archimedes axiom" seems like it would do the trick. On the other hand, Crofton's formula seems to be stated for the plane -- does a more general version exist for other surfaces (or just the sphere)? And do you know of proofs of the Archimedes axiom in higher dimensions? – Emily Peters Mar 23 2012 at 15:35 1 @Emily: See the edit... – Igor Rivin Mar 23 2012 at 16:07
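Statement (0) can at least be spot-checked numerically in the simplest case, a simplicial cone in R^3. The following sketch is not part of the thread; it samples random tetrahedra and verifies that the three planar face angles meeting at a vertex always sum to less than 2π.

```python
import numpy as np

def face_angle(v, a, b):
    """Planar angle at vertex v in the triangle (v, a, b)."""
    u, w = a - v, b - v
    c = np.dot(u, w) / (np.linalg.norm(u) * np.linalg.norm(w))
    return np.arccos(np.clip(c, -1.0, 1.0))

rng = np.random.default_rng(0)
worst = 0.0
for _ in range(10000):
    # Random tetrahedron: the corner at p0 is a vertex of a convex 3-polytope,
    # and exactly three triangular facets meet there.
    p0, p1, p2, p3 = rng.standard_normal((4, 3))
    total = face_angle(p0, p1, p2) + face_angle(p0, p2, p3) + face_angle(p0, p1, p3)
    worst = max(worst, total)

print(worst, worst < 2 * np.pi)   # the largest observed angle sum stays below 2*pi
```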
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 33, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9230912923812866, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?t=60863
Physics Forums

## infinitely many primes of the form 3n+1

prove that there are infinitely many primes of the form 3n+1 we used : Assume there is a finite # of primes of the form 3n+1 let P = product of those primes.. which is also of the form 3A+1 for some A. Let N = (2p)^2 + 3. Now we need to show that N has a prime divisor of the form 3n+1, which is not in the list of the ones before. This would be a contradiction. But I'm not sure how to show that. any help would be appreciated

Blog Entries: 2

Quote by b0mb0nika prove that there are infinitely many primes of the form 3n+1 we used : Assume there is a finite # of primes of the form 3n+1 let P = product of those primes.. which is also of the form 3A+1 for some A. Let N = (2p)^2 + 3. Now we need to show that N has a prime divisor of the form 3n+1, which is not in the list of the ones before. This would be a contradiction. But I'm not sure how to show that. any help would be appreciated I think you're trying to make use of the technique to prove infinitely many primes of the form 4n + 1. I think you need to show that N is of the form 3X+1, none of the factors of which belong to the list. If N is prime or has a factor of the form 3Y+1 then that would be a contradiction. Then show respectively what happens if the factors of N are respectively of one of the forms 3Y or 3Y-1. Unfortunately as I am typing this I can't rule out two prime factors of the form 3Y-1. Anyway, that is my thought on this.

Recognitions: Gold Member Science Advisor Staff Emeritus

Here's a thought : Assume a finite number of such primes, $q_1, q_2, ..., q_n$. Let $$N = 3(\Pi_i q_i) + 1 = 3A + 1$$ And let its prime factorization be of the form $$N = \Pi_i p_i$$ Clearly no $p_i$ can be of the form 3k. And since $3k-1 \equiv -1 (mod 3)$ and $3A + 1 \equiv 1 (mod 3)$, there can only be an even number of factors of the form 3k-1. Also, if there is a factor of the form p = 3k+1, you get a contradiction of your assumption, because p can not be any of the q's (else p|1, which is not true). So, this leaves the only possibility that N is a product of an even number of primes of the form 3k+2. If you can show this is impossible, you are through. I think this would be doable by comparing this product with the corresponding product obtained from terms 1 less than each of the above terms. Very likely, there's a nicer way, so just wait around while you're thinking about it, and someone will show up and state the obvious.

Recognitions: Gold Member

## infinitely many primes of the form 3n+1

bOmbOnika: Assume there is a finite # of primes of the form 3n+1 let P = product of those primes.. which is also of the form 3A+1 for some A. Let N = (2p)^2 + 3. I have an idea that may work. I am going to use the form (2P)^2 +3, granting that the capital P in the second line above is the small p in the third. The form 3n+1 would represent the product of primes of the form 3k+1, and so we look at (6N+2)^2+3 = Q = 36N^2+24N+7 == 7 Mod 12. (which is a reduction since we could have used 24, and in fact since the form is actually 6k+1 for the primes since 2 is the only even prime, we could do better.) But anyway, we use the form Q=12K+7, which is of the form 3X+1, so it can not be prime, or we have a contradiction. Now all primes but 2 are of the form 4k+1 or 4k+3.
Suppose Q has a prime factor q of the form 4k+1. Then we have: (2P)^2 + 3 == 0 Mod q. This gives (2P)^2 == -3 Mod q, and since -1 is a quadratic residue of q, we have a U such that (U)^2 == 3 Mod q. Thus by the law of quadratic reciprocity, we have an X such that X^2 == q Mod 3. But 1 is the only quadratic residue Mod 3, so in this case we are through since we have q == 1 Mod 3. Thus a prime factor q is of the form 4k+3 and of the form 3k+2. Modulo 12 the forms are 12k+1, 12k+5, 12k+7, 12k+11, since 2 or 3 could not divide Q. But the only form available of both the forms 4k+3 and 3k+2 is 12k+11. But the products and powers of primes involving -1 Mod 12 are only +-1 Mod 12, so they can not equal Q == 7 Mod 12. Well, if that works, it involves understanding that -1 is a quadratic residue for primes of the form 4k+1 and the Law of Quadratic Reciprocity. Thus, maybe, that is why the form (2P)^2+3 was involved.

Recognitions: Homework Help Science Advisor

Suppose you have a prime that divides N=(2P)^2+3, say p. You know that p is not 2 and that it's congruent to 2 mod 3. I assume you're familiar with quadratic reciprocity at this point. Use what you know about the Legendre symbol to determine if -3 is a quadratic residue mod p. Next, use the special form of N and the fact that p divides it to come to a contradiction.
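The mechanism both posters are circling around (every odd prime factor of (2P)^2 + 3 is forced into the class 1 mod 3, and none of them can be on the original list) can be illustrated numerically. The sketch below is mine, not from the thread; it pretends the known primes of the form 3n+1 are just 7, 13 and 19.

```python
def prime_factors(n):
    """Trial-division factorization; fine for numbers of this size."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

known = [7, 13, 19]          # pretend these were the only primes of the form 3n+1
P = 7 * 13 * 19
N = (2 * P) ** 2 + 3

fs = prime_factors(N)
print(N, fs)
print(all(q % 3 == 1 for q in fs))       # every prime factor is congruent to 1 mod 3
print(any(q not in known for q in fs))   # and at least one of them is not on the list
```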
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9582432508468628, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/199044/rational-number-solution-for-an-equation/199064
# Rational number solution for an equation

Does there exist $v=(a,b,c)\in\mathbb{Q^3}$ with none of $v$'s terms being zero s.t. $a+b\sqrt[3]2+c\sqrt[3]4=0$? I was doing undergraduate algebra 2 homework when I encountered it in my head. At first it seemed like it could be proved that there can be no such $v$, like how $\sqrt{2}$ or $\sqrt{2}+\sqrt{3}$ are proved to be irrational, but this case wasn't as easy as those. Or maybe I was too hasty. - Do you know field theory? – Belgi Sep 19 '12 at 10:03 Very little. I studied group theory in algebra 1 and now I'm in algebra 2. I have currently learned Euler's and Fermat's theorems and what the field of fractions is. – YD55 Sep 19 '12 at 10:44

## 3 Answers

I'll try to show a more elementary approach, not using field theory. Suppose there is such a $v$. We have $c \ne 0$ (as $\sqrt[3]2$ is irrational). Rewriting the equation, we find $\alpha, \beta \in \mathbb Q$ with $$\sqrt[3]4 = \alpha + \beta \sqrt[3]2$$ and $\alpha, \beta \ne 0$ (as $\sqrt[3]2, \sqrt[3]4 \not\in \mathbb Q$). Taking the third power, we get $$4 = \alpha^3 + 3\alpha^2\beta \sqrt[3]2 + 3\alpha\beta^2\sqrt[3]4 + 2\beta^3$$ so, as $\alpha\beta^2 \ne 0$, $$\sqrt[3]4 = \frac{4 - \alpha^3 - 3\alpha^2\beta\sqrt[3]2 - 2\beta^3}{3\alpha\beta^2}$$ which gives $$\alpha + \beta \sqrt[3]2 = \frac{4-\alpha^3 - 2\beta^3}{3\alpha\beta^2} - \frac\alpha\beta \sqrt[3]2$$ As $\sqrt[3]2$ is irrational, we must have $$\alpha = \frac{4-\alpha^3 - 2\beta^3}{3\alpha\beta^2}, \qquad \beta = -\frac\alpha\beta$$ So $\beta^2 = -\alpha$, giving $$-3\alpha^3 = 3\alpha^2\beta^2 = 4-\alpha^3 - 2\beta^3 \iff 2(\beta^3 - \alpha^3) = 4 \iff \beta^3 - \alpha^3 = 2 \iff \beta^3 + \beta^6 = 2$$ The only candidate rational zeros of $x^6 + x^3 - 2$ are $\pm 1, \pm 2$, and since $x^6+x^3-2=(x^3-1)(x^3+2)$, the only rational zero is $x=1$. But $\beta = 1$ would force $\alpha = -\beta^2 = -1$, hence $\sqrt[3]4 = \sqrt[3]2 - 1 < 1$, which is false. Contradiction. So, there is no such $v$. - 1 you can also put $t=x^3$ and solve directly to see no rational roots – Belgi Sep 19 '12 at 10:48

Suppose there exists such a $v$. Then $a = - b\sqrt[3]2 - c\sqrt[3]4$. You can try to show that the right-hand side is irrational, which would show there is no such $v$ (if there really is no such $v$). - Did that work out? Because I tried that way too but it didn't for me. – YD55 Sep 19 '12 at 9:45

The question can be rephrased as: Is the triple $\ 1,\sqrt[3]2,\sqrt[3]4\$ linearly independent over $\mathbb Q$? And the answer is no (i.e., there is no such $v$). Denote $\alpha:=\sqrt[3]2$. Then $\alpha^3=2$. The main point is that the polynomial $x^3-2$ (which defines $\alpha$ as its root) is irreducible over $\mathbb Q$: it cannot be written as a proper product of polynomials of degree 1 and 2. (This can be shown directly.) In other words, the field extension $\mathbb Q(\alpha)$ of $\mathbb Q$ is -- as $\alpha$ is a root of the irreducible $x^3-2$ -- by definition isomorphic to the quotient $K:=\mathbb Q[x]/(x^3-2)$ of the polynomial ring $\mathbb Q[x]$. That is, the elements of $K$ are the polynomials, but $x^3-2 = 0$ is assumed (as the only rule) in $K$. And, similarly as $\mathbb Q[x]$ has $1,x,x^2,x^3,\ldots$ as a basis, $K$ has $1,x,x^2$ as a (standard) basis over $\mathbb Q$. ($x^3$ and higher powers can be rephrased in terms of $1,x,x^2$, using the rule: for example $x^3=2,\ x^4=2x,$ etc.) The correspondence between $K$ and $\mathbb Q(\alpha)$ is simply given by $x\mapsto\alpha$. - I think you meant that the answer is yes, in the third line above...or else you meant to write dependent on your second line. – DonAntonio Sep 19 '12 at 10:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 48, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9704059958457947, "perplexity_flag": "head"}
http://www.reference.com/browse/representable
# Representable functor

In mathematics, especially in category theory, a representable functor is a functor of a special form from an arbitrary category into the category of sets. Such functors give representations of an abstract category in terms of known structures (i.e. sets and functions) allowing one to utilize, as much as possible, knowledge about the category of sets in other settings. From another point of view, representable functors for a category C are the functors given with C. Their theory is a vast generalisation of upper sets in posets, and of Cayley's theorem in group theory.

## Definition

Let C be a locally small category and let Set be the category of sets. For each object A of C let Hom(A,–) be the hom functor which maps objects X to the set Hom(A,X). A functor F : C → Set is said to be representable if it is naturally isomorphic to Hom(A,–) for some object A of C. A representation of F is a pair (A, Φ) where Φ : Hom(A,–) → F is a natural isomorphism. A contravariant functor G : C → Set is said to be representable if it is naturally isomorphic to the contravariant hom-functor Hom(–,A) for some object A of C.

## Universal elements

According to Yoneda's lemma, natural transformations from Hom(A,–) to F are in one-to-one correspondence with the elements of F(A). Given a natural transformation Φ : Hom(A,–) → F the corresponding element u ∈ F(A) is given by $u = \Phi_A(\mathrm{id}_A).$ Conversely, given any element u ∈ F(A) we may define a natural transformation Φ : Hom(A,–) → F via $\Phi_X(f) = (Ff)(u),$ where f is an element of Hom(A,X). In order to get a representation of F we want to know when the natural transformation induced by u is an isomorphism. This leads to the following definition: A universal element of a functor F : C → Set is a pair (A,u) consisting of an object A of C and an element u ∈ F(A) such that for every pair (X,v) with v ∈ F(X) there exists a unique morphism f : A → X such that (Ff)u = v. A universal element may be viewed as a universal morphism from the one-point set {•} to the functor F or as an initial object in the category of elements of F. The natural transformation induced by an element u ∈ F(A) is an isomorphism if and only if (A,u) is a universal element of F. We therefore conclude that representations of F are in one-to-one correspondence with universal elements of F. For this reason, it is common to refer to universal elements (A,u) as representations.

## Examples

• Consider the contravariant functor P : Set → Set which maps each set to its power set and each function to its inverse image map. To represent this functor we need a pair (A,u) where A is a set and u is a subset of A, i.e. an element of P(A), such that for all sets X, the hom-set Hom(X,A) is isomorphic to P(X) via ΦX(f) = (Pf)u = f–1(u). Take A = {0,1} and u = {1}. Given a subset S ⊆ X the corresponding function from X to A is the characteristic function of S. (A small computational sketch of this example appears at the end of this article.)
• Forgetful functors to Set are very often representable. In particular, a forgetful functor is represented by (A, u) whenever A is a free object over a singleton set with generator u.
• The forgetful functor Grp → Set on the category of groups is represented by (Z, 1).
• The forgetful functor Ring → Set on the category of rings is represented by (Z[x], x), the polynomial ring in one variable with integer coefficients.
• The forgetful functor Vect → Set on the category of real vector spaces is represented by (R, 1).
• The forgetful functor Top → Set on the category of topological spaces is represented by any singleton topological space with its unique element.
• A group G can be considered a category (even a groupoid) with one object which we denote by •. A functor from G to Set then corresponds to a G-set. The unique hom-functor Hom(•,–) from G to Set corresponds to the canonical G-set G with the action of left multiplication. Standard arguments from group theory show that a functor from G to Set is representable if and only if the corresponding G-set is simply transitive (i.e. a G-torsor). Choosing a representation amounts to choosing an identity for the group structure.
• Let C be the category of CW-complexes with morphisms given by homotopy classes of continuous functions. For each natural number n there is a contravariant functor Hn : C → Ab which assigns each CW-complex its nth cohomology group (with integer coefficients). Composing this with the forgetful functor we have a contravariant functor from C to Set. Brown's representability theorem in algebraic topology says that this functor is represented by a CW-complex K(Z,n) called an Eilenberg-Mac Lane space.

## Properties

### Uniqueness

Representations of functors are unique up to a unique isomorphism. That is, if (A1,Φ1) and (A2,Φ2) represent the same functor, then there exists a unique isomorphism φ : A1 → A2 such that $\Phi_1^{-1}\circ\Phi_2 = \mathrm{Hom}(\varphi,-)$ as natural isomorphisms from Hom(A2,–) to Hom(A1,–). This fact follows easily from Yoneda's lemma. Stated in terms of universal elements: if (A1,u1) and (A2,u2) represent the same functor, then there exists a unique isomorphism φ : A1 → A2 such that $(F\varphi)u_1 = u_2.$

### Preservation of limits

Representable functors are naturally isomorphic to Hom functors and therefore share their properties. In particular, (covariant) representable functors preserve all limits. It follows that any functor which fails to preserve some limit is not representable. Contravariant representable functors take colimits to limits.

### Left adjoint

Any functor K : C → Set with a left adjoint F : Set → C is represented by (FX, ηX(•)) where X = {•} is a singleton set and η is the unit of the adjunction. Conversely, if K is represented by a pair (A, u) and all small copowers of A exist in C then K has a left adjoint F which sends each set I to the Ith copower of A. Therefore, if C is a category with all small copowers, a functor K : C → Set is representable if and only if it has a left adjoint.

## Relation to universal morphisms and adjoints

The categorical notions of universal morphisms and adjoint functors can both be expressed using representable functors. Let G : D → C be a functor and let X be an object of C. Then (A,φ) is a universal morphism from X to G if and only if (A,φ) is a representation of the functor HomC(X,G–) from D to Set. It follows that G has a left-adjoint F if and only if HomC(X,G–) is representable for all X in C. The natural isomorphism ΦX : HomD(FX,–) → HomC(X,G–) yields the adjointness; that is $\Phi_{X,Y}\colon \mathrm{Hom}_{\mathcal D}(FX,Y) \to \mathrm{Hom}_{\mathcal C}(X,GY)$ is a bijection for all X and Y. The dual statements are also true. Let F : C → D be a functor and let Y be an object of D. Then (A,φ) is a universal morphism from F to Y if and only if (A,φ) is a representation of the functor HomD(F–,Y) from C to Set.
It follows that F has a right-adjoint G if and only if HomD(F–,Y) is representable for all Y in D. ## References • Mac Lane, Saunders (1998). Categories for the Working Mathematician. (2nd ed.), Springer. ISBN 0-387-98403-8.
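Editorial addendum (not part of the original article): here is the computational sketch promised in the first item of the Examples section. For finite sets, maps into {0,1} correspond exactly to subsets, and precomposition corresponds to taking inverse images; the particular sets X, Y and the map g below are illustrative choices, nothing beyond the Python standard library is assumed.

```python
# Sketch: for finite sets, check the bijection Hom(X, {0,1}) ≅ P(X) from the
# power-set example, and that it turns precomposition with g: Y -> X into g^{-1}.
from itertools import product

A = (0, 1)                      # the representing object, with u = {1}
X = frozenset({'a', 'b', 'c'})
Y = frozenset({1, 2})

def functions(dom, cod):
    dom = sorted(dom, key=repr)
    for values in product(cod, repeat=len(dom)):
        yield dict(zip(dom, values))

def to_subset(f):               # Phi_X: f  |->  f^{-1}({1})
    return frozenset(x for x, v in f.items() if v == 1)

# Phi_X is a bijection Hom(X, A) -> P(X): 2^|X| functions, all with distinct images.
subsets = {to_subset(f) for f in functions(X, A)}
assert len(subsets) == 2 ** len(X)

# Naturality: for g: Y -> X and f: X -> A,  Phi_Y(f ∘ g) = g^{-1}(Phi_X(f)).
g = {1: 'a', 2: 'a'}
for f in functions(X, A):
    lhs = to_subset({y: f[g[y]] for y in Y})
    rhs = frozenset(y for y in Y if g[y] in to_subset(f))
    assert lhs == rhs
print("power-set functor is represented by ({0,1}, {1}) on these finite sets")
```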
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 5, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8729139566421509, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Circular_motion
# Circular motion "Radial motion" redirects here. It is not to be confused with radial velocity or rotational speed. In physics, circular motion is a movement of an object along the circumference of a circle or rotation along a circular path. It can be uniform, with constant angular rate of rotation (and constant speed), or non-uniform with a changing rate of rotation. The rotation around a fixed axis of a three-dimensional body involves circular motion of its parts. The equations of motion describe the movement of the center of mass of a body. Examples of circular motion include: an artificial satellite orbiting the Earth at constant height, a stone which is tied to a rope and is being swung in circles, a car turning through a curve in a race track, an electron moving perpendicular to a uniform magnetic field, and a gear turning inside a mechanism. Since the object's velocity vector is constantly changing direction, the moving object is undergoing acceleration by a centripetal force in the direction of the center of rotation. Without this acceleration, the object would move in a straight line, according to Newton's laws of motion. ## Uniform circular motion Figure 1: Velocity v and acceleration a in uniform circular motion at angular rate ω; the speed is constant, but the velocity is always tangent to the orbit; the acceleration has constant magnitude, but always points toward the center of rotation Figure 2: The velocity vectors at time t and time t + dt are moved from the orbit on the left to new positions where their tails coincide, on the right. Because the velocity is fixed in magnitude at v = r ω, the velocity vectors also sweep out a circular path at angular rate ω. As dt → 0, the acceleration vector a becomes perpendicular to v, which means it points toward the center of the orbit in the circle on the left. Angle ω dt is the very small angle between the two velocities and tends to zero as dt→ 0 Figure 3: (Left) Ball in circular motion – rope provides centripetal force to keep ball in circle (Right) Rope is cut and ball continues in straight line with velocity at the time of cutting the rope, in accord with Newton's law of inertia, because centripetal force is no longer there In physics, uniform circular motion describes the motion of a body traversing a circular path at constant speed. The distance of the body from the axis of rotation remains constant at all times. Though the body's speed is constant, its velocity is not constant: velocity, a vector quantity, depends on both the body's speed and its direction of travel. This changing velocity indicates the presence of an acceleration; this centripetal acceleration is of constant magnitude and directed at all times towards the axis of rotation. This acceleration is, in turn, produced by a centripetal force which is also constant in magnitude and directed towards the axis of rotation. In the case of rotation around a fixed axis of a rigid body that is not negligibly small compared to the radius of the path, each particle of the body describes a uniform circular motion with the same angular velocity, but with velocity and acceleration varying with the position with respect to the axis. ### Formulas for uniform circular motion Figure 1: Vector relationships for uniform circular motion; vector Ω representing the rotation is normal to the plane of the orbit. For motion in a circle of radius r, the circumference of the circle is C = 2π r. 
If the period for one rotation is T, the angular rate of rotation, also known as angular velocity, ω is: $\omega = \frac {2 \pi}{T} \$ and the units are radians/sec. The speed of the object traveling the circle is: $v\, = \frac {2 \pi r } {T} = \omega r$ The angle θ swept out in a time t is: $\theta = 2 \pi \frac{t}{T} = \omega t\,$ The acceleration due to change in the direction is: $a\, = \frac {v^2} {r} \, = {\omega^2} {r}$ The vector relationships are shown in Figure 1. The axis of rotation is shown as a vector Ω perpendicular to the plane of the orbit and with a magnitude ω = dθ / dt. The direction of Ω is chosen using the right-hand rule. With this convention for depicting rotation, the velocity is given by a vector cross product as $\mathbf{v} = \boldsymbol \Omega \times \mathbf r \ ,$ which is a vector perpendicular to both Ω and r ( t ), tangential to the orbit, and of magnitude ω r. Likewise, the acceleration is given by $\mathbf{a} = \boldsymbol \Omega \times \mathbf v = \boldsymbol \Omega \times \left( \boldsymbol \Omega \times \mathbf r \right) \ ,$ which is a vector perpendicular to both Ω and v ( t ) of magnitude ω|v| = ω²r and directed exactly opposite to r ( t ).[1] In the simplest case the speed, mass and radius are constant. Consider a body of one kilogram, moving in a circle of radius one metre, with an angular velocity of one radian per second. • The speed is one metre per second. • The inward acceleration is one metre per square second (v²/r). • It is subject to a centripetal force of one kilogram metre per square second, which is one newton. • The momentum of the body is one kg·m·s⁻¹. • The moment of inertia is one kg·m². • The angular momentum is one kg·m²·s⁻¹. • The kinetic energy is 1/2 joule. • The circumference of the orbit is 2π (~ 6.283) metres. • The period of the motion is 2π seconds per turn. • The frequency is (2π)⁻¹ hertz. ### In polar coordinates Figure 2: Polar coordinates for circular trajectory. On the left is a unit circle showing the changes $\mathbf{d\hat u_R}$ and $\mathbf{d\hat u_\theta}$ in the unit vectors $\mathbf{\hat u_R}$ and $\mathbf{\hat u_\theta}$ for a small increment $\mathrm{d \theta}$ in angle $\mathrm{\theta}$. During circular motion the body moves on a curve that can be described in polar coordinate system as a fixed distance R from the center of the orbit taken as origin, oriented at an angle θ (t) from some reference direction. See Figure 2. The displacement vector $\vec r$ is the radial vector from the origin to the particle location: $\vec r=R \hat u_R (t)\ ,$ where $\hat u_R (t)$ is the unit vector parallel to the radius vector at time t and pointing away from the origin. It is convenient to introduce the unit vector orthogonal to $\hat u_R$ as well, namely $\hat u_\theta$. It is customary to orient $\hat u_\theta$ to point in the direction of travel along the orbit. The velocity is the time derivative of the displacement: $\vec v = \frac {d}{dt} \vec r(t) = \frac {d R}{dt} \hat u_R + R\frac {d \hat u_R } {dt} \ .$ Because the radius of the circle is constant, the radial component of the velocity is zero. The unit vector $\hat u_R$ has a time-invariant magnitude of unity, so as time varies its tip always lies on a circle of unit radius, with an angle θ the same as the angle of $\vec r (t)$. If the particle displacement rotates through an angle dθ in time dt, so does $\hat u_R$, describing an arc on the unit circle of magnitude dθ. See the unit circle at the left of Figure 2. 
Hence: $\frac {d \hat u_R } {dt} = \frac {d \theta } {dt} \hat u_\theta \ ,$ where the direction of the change must be perpendicular to $\hat u_R$ (or, in other words, along $\hat u_\theta$) because any change d$\hat u_R$ in the direction of $\hat u_R$ would change the size of $\hat u_R$. The sign is positive, because an increase in dθ implies the object and $\hat u_R$ have moved in the direction of $\hat u_\theta$. Hence the velocity becomes: $\vec v = \frac {d}{dt} \vec r(t) = R\frac {d \hat u_R } {dt} = R \frac {d \theta } {dt} \hat u_\theta \ = R \omega \hat u_\theta \ .$ The acceleration of the body can also be broken into radial and tangential components. The acceleration is the time derivative of the velocity: $\vec a = \frac {d}{dt} \vec v = \frac {d}{dt} \left(R\ \omega \ \hat u_\theta \ \right) \ .$ $=R \left( \frac {d \omega}{dt}\ \hat u_\theta + \omega \ \frac {d \hat u_\theta}{dt} \right) \ .$ The time derivative of $\hat u_\theta$ is found the same way as for $\hat u_R$. Again, $\hat u_\theta$ is a unit vector and its tip traces a unit circle with an angle that is π/2 + θ. Hence, an increase in angle dθ by $\vec r (t)$ implies $\hat u_\theta$ traces an arc of magnitude dθ, and as $\hat u_\theta$ is orthogonal to $\hat u_R$, we have: $\frac {d \hat u_\theta } {dt} = -\frac {d \theta } {dt} \hat u_R = -\omega \hat u_R\ ,$ where a negative sign is necessary to keep $\hat u_\theta$ orthogonal to $\hat u_R$. (Otherwise, the angle between $\hat u_\theta$ and $\hat u_R$ would decrease with increase in dθ.) See the unit circle at the left of Figure 2. Consequently the acceleration is: $\vec a = R \left( \frac {d \omega}{dt}\ \hat u_\theta + \omega \ \frac {d \hat u_\theta}{dt} \right)$ $=R \frac {d \omega}{dt}\ \hat u_\theta - \omega^2 R \ \hat u_R \ .$ The centripetal acceleration is the radial component, which is directed radially inward: $\vec a_R= -\omega ^2R \hat u_R \ ,$ while the tangential component changes the magnitude of the velocity: $\vec a_{\theta}= R \frac {d \omega}{dt}\ \hat u_\theta = \frac {d R \omega}{dt}\ \hat u_\theta =\frac {d |\vec v|}{dt}\ \hat u_\theta \ .$ ### Using complex numbers Circular motion can be described using complex numbers. Let the $x$ axis be the real axis and the $y$ axis be the imaginary axis. The position of the body can then be given as $z$, a complex "vector": $z=x+iy=R(\cos \theta +i \sin \theta)=Re^{i\theta}\ ,$ where $i$ is the imaginary unit, and $\theta =\theta (t)\ ,$ is the angle of the complex vector with the real axis and is a function of time t. Since the radius is constant: $\dot R =\ddot R =0 \ ,$ where a dot indicates time differentiation. With this notation the velocity becomes: $v=\dot z = \frac {d (R e^{i \theta})}{d t} = R \frac {d \theta}{d t} \frac {d (e^{i \theta})}{d \theta} = iR\dot \theta e^{i\theta} = i\omega \cdot Re^{i\theta}= i\omega z$ and the acceleration becomes: $a=\dot v =i\dot \omega z +i \omega \dot z =(i\dot \omega -\omega^2)z$ $= \left(i\dot \omega-\omega^2 \right) R e^{i\theta}$ $=-\omega^2 R e^{i\theta} + \dot \omega e^{i\frac{\pi}{2}}R e^{i\theta} \ .$ The first term is opposite in direction to the displacement vector and the second is perpendicular to it, just like the earlier results shown before. ## Velocity Figure 1 illustrates velocity and acceleration vectors for uniform motion at four different points in the orbit. Because the velocity v is tangent to the circular path, no two velocities point in the same direction. Although the object has a constant speed, its direction is always changing. 
This change in velocity is caused by an acceleration a, whose magnitude is (like that of the velocity) held constant, but whose direction also is always changing. The acceleration points radially inwards (centripetally) and is perpendicular to the velocity. This acceleration is known as centripetal acceleration. For a path of radius r, when an angle θ is swept out, the distance travelled on the periphery of the orbit is s = rθ. Therefore, the speed of travel around the orbit is $v = r \frac{d\theta}{dt} = r\omega$, where the angular rate of rotation is ω. (By rearrangement, ω = v/r.) Thus, v is a constant, and the velocity vector v also rotates with constant magnitude v, at the same angular rate ω. ## Acceleration Main article: Acceleration The left-hand circle in Figure 2 is the orbit showing the velocity vectors at two adjacent times. On the right, these two velocities are moved so their tails coincide. Because speed is constant, the velocity vectors on the right sweep out a circle as time advances. For a swept angle dθ = ω dt the change in v is a vector at right angles to v and of magnitude v dθ, which in turn means that the magnitude of the acceleration is given by $a = v \frac{d\theta}{dt} = v\omega = \frac{v^2}{r}$

Centripetal acceleration for some values of radius r and magnitude of velocity |v|:

| r | 1 m/s (3.6 km/h, 2.2 mph) | 2 m/s (7.2 km/h, 4.5 mph) | 5 m/s (18 km/h, 11 mph) | 10 m/s (36 km/h, 22 mph) | 20 m/s (72 km/h, 45 mph) | 50 m/s (180 km/h, 110 mph) | 100 m/s (360 km/h, 220 mph) |
|---|---|---|---|---|---|---|---|
| 10 cm (3.9 in): Laboratory centrifuge | 10 m/s² (1.0 g) | 40 m/s² (4.1 g) | 250 m/s² (25 g) | 1.0 km/s² (100 g) | 4.0 km/s² (410 g) | 25 km/s² (2500 g) | 100 km/s² (10000 g) |
| 20 cm (7.9 in) | 5.0 m/s² (0.51 g) | 20 m/s² (2.0 g) | 130 m/s² (13 g) | 500 m/s² (51 g) | 2.0 km/s² (200 g) | 13 km/s² (1300 g) | 50 km/s² (5100 g) |
| 50 cm (1.6 ft) | 2.0 m/s² (0.20 g) | 8.0 m/s² (0.82 g) | 50 m/s² (5.1 g) | 200 m/s² (20 g) | 800 m/s² (82 g) | 5.0 km/s² (510 g) | 20 km/s² (2000 g) |
| 1 m (3.3 ft): Playground carousel | 1.0 m/s² (0.10 g) | 4.0 m/s² (0.41 g) | 25 m/s² (2.5 g) | 100 m/s² (10 g) | 400 m/s² (41 g) | 2.5 km/s² (250 g) | 10 km/s² (1000 g) |
| 2 m (6.6 ft) | 500 mm/s² (0.051 g) | 2.0 m/s² (0.20 g) | 13 m/s² (1.3 g) | 50 m/s² (5.1 g) | 200 m/s² (20 g) | 1.3 km/s² (130 g) | 5.0 km/s² (510 g) |
| 5 m (16 ft) | 200 mm/s² (0.020 g) | 800 mm/s² (0.082 g) | 5.0 m/s² (0.51 g) | 20 m/s² (2.0 g) | 80 m/s² (8.2 g) | 500 m/s² (51 g) | 2.0 km/s² (200 g) |
| 10 m (33 ft): Roller-coaster vertical loop | 100 mm/s² (0.010 g) | 400 mm/s² (0.041 g) | 2.5 m/s² (0.25 g) | 10 m/s² (1.0 g) | 40 m/s² (4.1 g) | 250 m/s² (25 g) | 1.0 km/s² (100 g) |
| 20 m (66 ft) | 50 mm/s² (0.0051 g) | 200 mm/s² (0.020 g) | 1.3 m/s² (0.13 g) | 5.0 m/s² (0.51 g) | 20 m/s² (2 g) | 130 m/s² (13 g) | 500 m/s² (51 g) |
| 50 m (160 ft) | 20 mm/s² (0.0020 g) | 80 mm/s² (0.0082 g) | 500 mm/s² (0.051 g) | 2.0 m/s² (0.20 g) | 8.0 m/s² (0.82 g) | 50 m/s² (5.1 g) | 200 m/s² (20 g) |
| 100 m (330 ft): Freeway on-ramp | 10 mm/s² (0.0010 g) | 40 mm/s² (0.0041 g) | 250 mm/s² (0.025 g) | 1.0 m/s² (0.10 g) | 4.0 m/s² (0.41 g) | 25 m/s² (2.5 g) | 100 m/s² (10 g) |
| 200 m (660 ft) | 5.0 mm/s² (0.00051 g) | 20 mm/s² (0.0020 g) | 130 mm/s² (0.013 g) | 500 mm/s² (0.051 g) | 2.0 m/s² (0.20 g) | 13 m/s² (1.3 g) | 50 m/s² (5.1 g) |
| 500 m (1600 ft) | 2.0 mm/s² (0.00020 g) | 8.0 mm/s² (0.00082 g) | 50 mm/s² (0.0051 g) | 200 mm/s² (0.020 g) | 800 mm/s² (0.082 g) | 5.0 m/s² (0.51 g) | 20 m/s² (2.0 g) |
| 1 km (3300 ft): High-speed railway | 1.0 mm/s² (0.00010 g) | 4.0 mm/s² (0.00041 g) | 25 mm/s² (0.0025 g) | 100 mm/s² (0.010 g) | 400 mm/s² (0.041 g) | 2.5 m/s² (0.25 g) | 10 m/s² (1.0 g) |

(The original table also labels some of these speeds with everyday examples: slow walk, bicycle, city car, aerobatics.)

## Non-uniform Non-uniform circular motion is any case in which an object moving in a circular path has a varying speed. The tangential acceleration is non-zero; the speed is changing. 
Since there is a non-zero tangential acceleration, there are forces that act on an object in addition to its centripetal force (composed of the mass and radial acceleration). These forces include weight, normal force, and friction. In non-uniform circular motion, normal force does not always point in the opposite direction of weight. Consider, for example, an object that travels along a straight path, goes around a vertical loop, and then continues along a straight path again. In such a situation the normal force points in directions other than directly opposite to the weight force. The normal force is actually the sum of the radial and tangential forces that help to counteract the weight force and contribute to the centripetal force. The horizontal component of normal force is what contributes to the centripetal force. The vertical component of the normal force is what counteracts the weight of the object. In non-uniform circular motion, normal force and weight may point in the same direction. Both forces can point down, yet the object will remain in a circular path without falling straight down. First let’s see why normal force can point down in the first place. Suppose the object is a person sitting inside a plane; the two forces point down only when the plane reaches the top of the circle. The reason for this is that the normal force is the sum of the weight and centripetal force. Since both weight and centripetal force point down at the top of the circle, normal force will point down as well. From a logical standpoint, a person who is traveling in the plane will be upside down at the top of the circle. At that moment, the person’s seat is actually pushing down on the person, which is the normal force. The reason why the object does not fall down when subjected to only downward forces is a simple one. Think about what keeps an object up after it is thrown. Once an object is thrown into the air, there is only the downward force of earth’s gravity that acts on the object. That does not mean that once an object is thrown in the air, it will fall instantly. What keeps that object up in the air is its velocity. The first of Newton's laws of motion states that an object’s inertia keeps it in motion, and since the object in the air has a velocity, it will tend to keep moving in that direction. ## Applications Solving applications dealing with non-uniform circular motion involves force analysis. With uniform circular motion, the only force acting upon an object traveling in a circle is the centripetal force. In non-uniform circular motion, there are additional forces acting on the object due to a non-zero tangential acceleration. Although there are additional forces acting upon the object, the sum of all the forces acting on the object must equal the centripetal force. $F_{net} = ma\,$ $F_{net} = ma_r\,$ $F_{net} = mv^2/r\,$ $F_{net} = F_c\,$ Radial acceleration is used when calculating the total force. Tangential acceleration is not used in calculating total force because it is not responsible for keeping the object in a circular path. The only acceleration responsible for keeping an object moving in a circle is the radial acceleration. Since the sum of all forces is the centripetal force, drawing centripetal force into a free body diagram is not necessary and usually not recommended. Using $F_{net} = F_c\,$, we can draw free body diagrams to list all the forces acting on an object and then set their sum equal to $F_c\,$. 
Afterwards, we can solve for whatever is unknown (this can be mass, velocity, radius of curvature, coefficient of friction, normal force, etc.). For example, the case pictured above of an object at the top of a semicircle would be expressed as $F_c = (n+mg)\,$. In uniform circular motion, the total acceleration of an object in a circular path is equal to the radial acceleration. Due to the presence of tangential acceleration in non-uniform circular motion, that does not hold true any more. To find the total acceleration of an object in non-uniform circular motion, find the vector sum of the tangential acceleration and the radial acceleration. $\sqrt{a_r^2+a_t^2}=a$ Radial acceleration is still equal to $v^2/r$. Tangential acceleration is simply the derivative of the velocity at any given point: $a_t = dv/dt \,$. This root sum of squares of separate radial and tangential accelerations is only correct for circular motion; for general motion within a plane with polar coordinates $(r,\theta)$, the Coriolis term $a_c = 2(dr/dt)(d\theta/dt)$ should be added to $a_t$, whereas radial acceleration then becomes $a_r=- v^2/r+d^2 r/dt^2$. ## References 1. Knudsen, Jens M.; Hjorth, Poul G. (2000). Elements of Newtonian mechanics: including nonlinear dynamics (3 ed.). Springer. p. 96. ISBN 3-540-67652-X. Chapter 5, p. 96.
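Editorial addendum (not part of the original article): the formulas above are easy to check numerically. The sketch below, in plain Python, reproduces the 1 kg / 1 m / 1 rad/s worked example, spot-checks two entries of the centripetal-acceleration table, and combines radial and tangential components as in the non-uniform case; the specific numbers are illustrative choices.

```python
# Numerical check of the uniform and non-uniform circular-motion formulas above.
import math

def circular_motion(radius, omega, mass=1.0):
    """Return speed, centripetal acceleration, period and kinetic energy."""
    v = omega * radius            # v = ω r
    a = v**2 / radius             # a = v²/r = ω² r
    T = 2 * math.pi / omega       # period of one revolution
    ke = 0.5 * mass * v**2
    return v, a, T, ke

print(circular_motion(radius=1.0, omega=1.0))   # expected: (1.0, 1.0, 6.283..., 0.5)

# Table spot-check: r = 1 m at |v| = 10 m/s should be about 100 m/s² (≈ 10 g),
# and r = 10 m at |v| = 10 m/s about 10 m/s² (≈ 1 g).
g = 9.81
for r, v in [(1.0, 10.0), (10.0, 10.0)]:
    a = v**2 / r
    print(f"r={r} m, v={v} m/s -> a = {a:.1f} m/s² = {a/g:.2f} g")

# Non-uniform case: total acceleration is the vector sum of radial and tangential parts.
a_r, a_t = 4.0, 3.0
print(math.hypot(a_r, a_t))                      # expected: 5.0
```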
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 76, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8816454410552979, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/9468/riemannian-geometry/9487
## Riemannian Geometry I come from a background of having done undergraduate and graduate courses in General Relativity and an elementary course in Riemannian geometry. Jurgen Jost's book does give somewhat of an argument for the statements below, but I would like to know if there is a reference where the following two things are proven explicitly: 1. That the sectional curvature of a 2-dimensional subspace of a tangent space at a point on the Riemannian manifold is independent of the choice of basis. That is, the definition of the sectional curvature depends only on the choice of the 2-dimensional subspace. 2. That the sectional curvature determines the Riemannian curvature fully. Secondly, can one give me a reference where I can see how sectional curvature is computed in practice? To a first timer to this subject it is not obvious how one does a calculation on "all" 2-dimensional subspaces of a high-dimensional space. Especially when people talk of manifolds with "constant sectional curvature". How are they realized? I would like to see some explicit examples to understand this point. Further, some studies about homogeneous spaces (needed to understand some issues in Quantum Field Theory) got me to the following 4 very non-trivial ideas in Riemannian manifolds which I am stating in my own way here: 1. That the isometry group of a Riemannian manifold is always a Lie group. 2. The isotropy subgroup of any point on a Riemannian manifold under the smooth transitive action of its own isometry group on itself is a compact subgroup. (The context being what is called a "Riemannian Homogeneous Space") 3. {This point was earlier framed in a way which made the bi-implication false as pointed out by some people} The formulation should be as follows. A Riemannian Homogeneous Space is a riemannian manifold on which the isometry group acts transitively. Now the theorem is that such a space is compact IFF its isometry group is compact. That's the statement whose intuition I am looking for. Apologies for the confusion caused. 4. { This question too was not framed properly. Basically I could not figure out how to write the nabla for the connection! It should be as Jose has pointed out.} A riemannian manifold is locally symmetric if and only if the Riemann curvature tensor is parallel with respect to the Levi-Civita connection. Can one give me the intuition behind these or give me specific references where these are proven in explicit detail? - @Anirbit: you might consider editing the question to give a quick summary of which of your 4 questions are proved/disproved in the several answers below. – Scott Morrison♦ Dec 21 2009 at 15:34 Thanks Scott for the suggestion. I have made updates to the question based on the responses and added comments to the responses as I progress in my understanding of what is going on. – Anirbit Dec 23 2009 at 8:08 ## 7 Answers To get a better feel of the Riemann curvature tensor and sectional curvature: 1. Work through one of the definitions of the Riemann curvature tensor and sectional curvature with a $2$-dimensional sphere of radius $r$. 2. Define the hyperbolic plane as the space-like "unit sphere" of $3$-dimensional Minkowski space, defined using an inner product with signature $(-,+,+)$. Work out the sectional and Riemann curvature of that. 3. 
Repeat #1 and #2 for the $n$-dimensional sphere and hyperbolic space, as well as flat space. Sectional curvature determines Riemann curvature: That the sectional curvature uniquely determines the Riemann curvature is a consequence of the following: 1. The Riemann curvature tensor is a quadratic form on the vector space $\Lambda^2T_xM$ 2. The sectional curvature function corresponds to evaluating the Riemann curvature tensor (as a quadratic form) on decomposable elements of $\Lambda^2T_xM$ 3. There is a basis of $\Lambda^2T_xM$ consisting only of decomposable elements. Added in response to Anirbit's comment: Perhaps you shouldn't try to compute the curvature too soon. First, make sure you understand the Riemannian metric of the unit sphere and hyperbolic space inside out. There are many ways to do this. But the most concrete way I know is to use stereographic projection of the sphere onto a hyperplane orthogonal to the last co-ordinate axis. Either the hyperplane through the origin or the one through the south pole works fine. This gives you a very nice set of co-ordinates on the whole sphere minus one point. Work out the Riemannian metric and the Christoffel symbols. Also, work out formulas for an orthonormal frame of vector fields and the corresponding dual frame of 1-forms. Figure out the covariant derivatives of these vector fields and the corresponding dual connection 1-forms. After you do this, do everything again with hyperbolic space, which is the hypersurface $-x_0^2 + x_1^2 + \cdots + x_n^2 = -1$ with $x_0 > 0$ in Minkowski space with the Riemannian metric induced by the flat Minkowski metric. You can do stereographic projection just like for the sphere but onto the unit $n$-disk given by $x_1^2 + \cdots + x_n^2 < 1$ and $x_0 = 0$, where the formula for the hyperbolic metric looks just like the spherical metric in stereographic co-ordinates but with a sign change in appropriate places. This is the standard conformal model of hyperbolic space. After you understand this inside out, you can use these pictures to figure out why the $n$-sphere and its metric is given by $O(n+1)/O(n)$ and hyperbolic space by $O(n,1)/O(n)$ and why the metrics you've computed above correspond to the natural invariant metric on these homogeneous spaces. You can then check that the formulas for invariant metrics on homogeneous spaces give you the same answers as above. Use references only for the general formulas for the metric, connection (including Christoffel symbols), and curvature. I recommend that you try to work out these examples by hand yourself instead of trying to follow someone else's calculations. If possible, however, do it with another student who is also trying to learn this at the same time. If, however, you want to peek at a reference for hints, I recommend the book by Gallot, Hulin, and Lafontaine. I suspect that the book by Thurston is good too (I studied his notes when I was a student). For invariant Riemannian metrics on a homogeneous space, I recommend the book by Cheeger and Ebin (available cheap from AMS! When I was a student, I had to pay a hundred dollars for this little book but it was well worth it). But mostly when I was learning this stuff, I did and redid the same calculations many times on my own. I was never able to learn much more than a bare outline of the ideas from either books or lectures. Just try to get a rough idea of what's going on from the books, but do the details yourself. - Thanks for your kind reply. 
I have earlier computed riemann and ricci and scalar curvatures of 4-manifolds in the sense of common space-times. Can you tell me a reference where I can see the computation of sectional curvature of a manifold of dimension > 2 (that's where the things are not so clear!) – Anirbit Dec 21 2009 at 18:40 Here is one way to think about your first question which at least might provide a more geometric picture about what is going on. I want to think about the curvature $R(X,Y)$ as parallel transport around the infinitesimal parallelogram $X \wedge Y$. If I drag a vector $Z$ around the parallelogram $X \wedge Y$, the result is $R(X,Y)Z$. Since the connection is metric, the map $Z \mapsto R(X,Y)Z$ is actually an infinitesimal rotation; this is the observation that $$\langle R(X,Y)Z, W\rangle = -\langle Z, R(X,Y)W\rangle$$ Now I want to define a new operator $S$ which acts bilinearly on pairs of 2-vectors. This will be $$S(X\wedge Y, Z \wedge W) = \langle R(X,Y)Z,W\rangle$$ where I have summed over some basis of 2-planes in $\bigwedge^2 T_pM$. Geometrically, $S$ reports how much the infinitesimal 2-plane $Z \wedge W$ rotates as it is dragged around the 2-plane $X\wedge Y$. To see that this is well-defined we need only to check $S(-,Z\wedge W) = -S(-, W \wedge Z)$. But this follows precisely because of the previous equation for $R$. From here on, I'm going to use the metric to think of $S$ as $$S(X \wedge Y) = \sum_{2\text{-planes } Z\wedge W} \langle R(X,Y) Z,W \rangle ~Z\wedge W$$ The somewhat mysterious "pair swap" symmetry $\langle R(X,Y) Z, W\rangle = \langle R(Z,W)X,Y\rangle$ can now be interpreted as saying that the operator $S$ is symmetric. In particular, this means that we can take the spectral decomposition of $S$ to get a basis of orthogonal unit-area eigenplanes $X_i \wedge Y_i$, $$S(X_i \wedge Y_i) = \lambda_i \cdot X_i \wedge Y_i$$ The eigenvalues $\lambda_i$ are your sectional curvatures for this basis; any other sectional curvatures can be easily computed from these. Note that knowledge of $S$ is now clearly sufficient to reconstruct the curvature tensor, since $$\langle R(X,Y)Z, W\rangle = \langle S(X\wedge Y), Z \wedge W \rangle$$ so in fact the sectional curvature tensor $S$ determines the usual curvature tensor $R$. - do Carmo's "Riemannian Geometry" - 2) is very easy (assuming your manifold is connected; if not, it's false): you have an induced action of the isometries which fix the point x on the tangent space $T_xX$. This action preserves the metric, which is a positive definite inner product on this vector space. That is, this isometry subgroup is a closed subgroup of $SO(T_xX,g)$, which is a compact group. This actually also proves that it's a Lie group, since any closed subgroup of a Lie group is itself Lie. This also makes it easy to prove the true direction of 3), since the isometry group acting on x gives a submersion with compact image and compact fibers, showing the group is compact. - The "only if" direction in 3 is incorrect: take standard $\mathbb R^2$ and introduce several bumps to make the isometry group trivial. For the "if" direction, see Kobayashi-Nomizu, "Foundations of Differential Geometry", vol. I, around Theorem 4.6. There and in vol II you also find answers to both of your 1-2 questions, I think. 
To see how sectional curvature is computed you need to go through a lot of examples. The "if" direction in 4 is incorrect: there are manifolds of constant scalar curvature that are not locally symmetric. - Just to add some things to Igor Belegradek's post: "1. That the isometry group of a Riemannian manifold is always a Lie group." This is the famous Myers-Steenrod theorem, proven in 1939 (Myers, S.B. and N.E. Steenrod: The group of isometries of a Riemannian manifold. The Annals of Mathematics, Vol 40, No. 2, April 1939, p. 400-416.) It is in fact highly non-trivial, and I think you need that the manifold is connected. Your point "3. That the isometry group of a Riemannian manifold is compact IFF the Riemannian manifold is compact." is, as Igor pointed out, false; the only thing which is right is the following: 3. If the (connected) Riemannian manifold is compact then the isometry group is compact. This is also a part of the Myers-Steenrod theorem, and can be found in the reference above. The "idea" of the proof is the following: (Let $(M,g)$ be a Riemannian manifold) • Show that $(G=Iso(M,g), CO, op)$ is a locally compact topological transformation group. Here $CO$ is the compact-open topology, and $op: G \times M \rightarrow M$ the group action. Moreover $(M,g)$ compact implies $(G, CO, op)$ compact. • Show that any tangential subgroup $H$ of $Diff(M)$ inherits a differentiable structure $[b]$ such that $(H,[b],op)$ ($op$ being the natural operation on $M$) is a Lie-Transformation group which is first-countable. The underlying topology $\tau$ is finer than the $CO$-topology. (If $(M,g)$ has countably many connected components, $G$ is a tangential subgroup of $Diff(M)$) • Show that the topology $\tau$ cannot be strictly finer than the $CO$-topology. (needs frame-bundles, etc.) - I have two favourite books on differential geometry where you can find answers to your questions: 1. do Carmo's Riemannian Geometry (as suggested by David Lehavi) 2. Besse's Einstein manifolds Let me just point out that your 4th point is not quite correct. The statement is that A riemannian manifold is locally symmetric if and only if the Riemann curvature tensor (and not just the scalar curvature) is parallel with respect to the Levi-Civita connection; i.e., $\nabla R = 0$. This presupposes that by "locally symmetric" you understand that the geodesic symmetry (i.e., changing the sign of the parameter of the geodesic) is an isometry at every point, otherwise it is a definition of locally symmetric. Edit (in response to Anirbit's comment) This is indeed a result of Élie Cartan and in fact, as far as I understand the history, Cartan started his research on symmetric spaces by studying the question of which riemannian manifolds have parallel curvature. He then classified the irreducibles and found the well-known relationship to the classification of simple Lie algebras. I'm not sure when the characterisation in terms of the geodesic symmetry was introduced. The proof is not complicated. It is basically that the curvature tensor is invariant under the map which interchanges opposite points along a geodesic. In other words, if you fix a point $p$ in your manifold and look at a geodesic $\gamma$ through $p$ in the direction $X$, then if you follow the geodesic a 'time' $s$ you get to some point $p(s)$. But there is also a geodesic through the same point with direction $-X$ and if you follow that geodesic for a time $s$ you end up at a point $p(-s)$. 
The map which sends $p(s)$ to $p(-s)$ for any (small, say) $s$ leaves the curvature invariant. The covariant derivative of the curvature along $X$ at $p$ can be understood as the difference between the curvature parallel transported to $p(s)$ and that transported to $p(-s)$ divided by $s$ in the limit as $s\to 0$, but even before you take the limit, the difference vanishes. Since you said your background is in relativity, I wonder whether you are not also interested in the case of locally symmetric spaces in lorentzian (or other indefinite) signature. In general signature this is still an open problem, but for lorentzian it was solved by Cahen and Wallach in this paper. - Thanks Jose for pointing that out. So can you give me the intuition behind this? This theorem seems to be from Cartan. – Anirbit Dec 23 2009 at 8:07
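Editorial addendum (not part of the original thread): the first answer's suggested exercise, computing the curvature of the round 2-sphere directly from its metric, is also easy to carry out with a computer algebra system. A minimal sketch, assuming sympy is available; the sign convention for the curvature tensor is the one stated in the comments, and for a surface the single sectional curvature should come out as $1/R^2$.

```python
# Sectional (Gaussian) curvature of the 2-sphere of radius R from its metric.
import sympy as sp

theta, phi, R = sp.symbols('theta phi R', positive=True)
x = [theta, phi]
g = sp.Matrix([[R**2, 0], [0, R**2 * sp.sin(theta)**2]])
g_inv = g.inv()

def christoffel(a, b, c):
    # Gamma^a_{bc} = (1/2) g^{ad} (d_c g_{db} + d_b g_{dc} - d_d g_{bc})
    return sp.Rational(1, 2) * sum(
        g_inv[a, d] * (sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b]) - sp.diff(g[b, c], x[d]))
        for d in range(2))

def riemann(a, b, c, d):
    # R^a_{bcd} = d_c Gamma^a_{db} - d_d Gamma^a_{cb}
    #             + Gamma^a_{ce} Gamma^e_{db} - Gamma^a_{de} Gamma^e_{cb}
    expr = sp.diff(christoffel(a, d, b), x[c]) - sp.diff(christoffel(a, c, b), x[d])
    expr += sum(christoffel(a, c, e) * christoffel(e, d, b)
                - christoffel(a, d, e) * christoffel(e, c, b) for e in range(2))
    return sp.simplify(expr)

# Lower the first index, then divide by the squared area of the coordinate 2-plane:
# K = R_{0101} / (g_00 g_11 - g_01^2).
R_0101 = sp.simplify(sum(g[0, e] * riemann(e, 1, 0, 1) for e in range(2)))
K = sp.simplify(R_0101 / g.det())
print(K)    # expected: 1/R**2
```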
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 79, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9381707906723022, "perplexity_flag": "head"}
http://www.reference.com/browse/Boolean+logic
# Boolean logic

Boolean logic is a complete system for logical operations. It was named after George Boole, who first defined an algebraic system of logic in the mid 19th century. Boolean logic has many applications in electronics, computer hardware and software, and is the basis of digital electronics. In 1938, Claude Shannon showed how electric circuits with relays were a model for Boolean logic. This fact soon proved enormously consequential with the emergence of the electronic computer. Using the algebra of sets, this article contains a basic introduction to sets, Boolean operations, Venn diagrams, truth tables, and Boolean applications. The Boolean algebra article discusses a type of algebraic structure that satisfies the axioms of Boolean logic. The binary arithmetic article discusses the use of binary numbers in computer systems.

## Set logic versus Boolean logic

Sets can contain any elements. We will first start out by discussing general set logic, then restrict ourselves to Boolean logic, where elements (or "bits") each contain only two possible values, called various names, such as "true" and "false", "yes" and "no", "on" and "off", or "1" and "0".

## Terms

Let X be a set:
• An element is one member of a set. This is denoted by $\in$. If it's not an element of the set, this is denoted by $\notin$.
• The universe is the set X, sometimes denoted by 1. Note that this use of the word universe means "all elements being considered", which are not necessarily the same as "all elements there are".
• The empty set or null set is the set of no elements, denoted by $\varnothing$ and sometimes 0.
• A unary operator applies to a single set. There is one unary operator, called logical NOT. It works by taking the complement.
• A binary operator applies to two sets. The basic binary operators are logical OR and logical AND. They perform the union and intersection of sets. There are also other derived binary operators, such as XOR (exclusive OR).
• A subset is denoted by $A \subseteq B$ and means every element in set A is also in set B.
• A proper subset is denoted by $A \subset B$ and means every element in set A is also in set B and the two sets are not equal.
• A superset is denoted by $A \supseteq B$ and means every element in set B is also in set A.
• A proper superset is denoted by $A \supset B$ and means every element in set B is also in set A and the two sets are not equal.

## Example

Imagine that set A contains all even numbers (multiples of two) in "the universe" (defined here as all integers between 0 and 30 inclusive) and set B contains all multiples of three in "the universe". Then the intersection of the two sets (all elements in sets A AND B) would be all multiples of six in "the universe". The complement of set A (all elements NOT in set A) would be all odd numbers in "the universe".

### Chaining operations together

While at most two sets are joined in any Boolean operation, the new set formed by that operation can then be joined with other sets utilizing additional Boolean operations. Using the previous example, we can define a new set C as the set of all multiples of five in "the universe". Thus "sets A AND B AND C" would be all multiples of 30 in "the universe". If more convenient, we may consider set AB to be the intersection of sets A and B, or the set of all multiples of six in "the universe". Then we can say "sets AB AND C" are the set of all multiples of 30 in "the universe". We could then take it a step further, and call this result set ABC.
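Editorial addendum (not part of the original article): the running example above translates directly into a few lines of Python using the built-in set type; the variable names mirror the sets A, B, C and the 0-to-30 universe described in the text.

```python
# The sets example above, with "the universe" taken as the integers 0..30 inclusive.
U = set(range(31))                      # the universe
A = {n for n in U if n % 2 == 0}        # multiples of two
B = {n for n in U if n % 3 == 0}        # multiples of three
C = {n for n in U if n % 5 == 0}        # multiples of five

print(A & B)                            # A AND B: multiples of six in the universe
print(U - A)                            # NOT A:   the odd numbers in the universe
print(A & B & C)                        # A AND B AND C: multiples of 30 -> {0, 30}
```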
### Use of parentheses

While any number of logical ANDs (or any number of logical ORs) may be chained together without ambiguity, the combination of ANDs and ORs and NOTs can lead to ambiguous cases. In such cases, parentheses may be used to clarify the order of operations. As always, the operations within the innermost pair are performed first, followed by the next pair out, etc., until all operations within parentheses have been completed. Then any operations outside the parentheses are performed.

### Application to binary values

In this example we have used natural numbers, while in Boolean logic binary numbers are used. The universe, for example, could contain just two elements, "0" and "1" (or "true" and "false", "yes" and "no", "on" or "off", etc.). We could also combine binary values together to get binary words, such as, in the case of two digits, "00", "01", "10", and "11". Applying set logic to those values, we could have a set of all values where the first digit is "0" ("00" and "01") and the set of all values where the first and second digits are different ("01" and "10"). The intersection of the two sets would then be the single element, "01". This could be shown by the following Boolean expression, where "1st" is the first digit and "2nd" is the second digit: (NOT 1st) AND (1st XOR 2nd)

## Properties

We define symbols for the two primary binary operations as $\land / \cap$ (logical AND/set intersection) and $\lor / \cup$ (logical OR/set union), and for the single unary operation $\lnot$ / ~ (logical NOT/set complement). We will also use the values 0 (logical FALSE/the empty set) and 1 (logical TRUE/the universe). The following properties apply to both Boolean logic and set logic (although only the notation for Boolean logic is displayed here):

| OR form | AND form | property |
|---------|----------|----------|
| $a \lor (b \lor c) = (a \lor b) \lor c$ | $a \land (b \land c) = (a \land b) \land c$ | associativity |
| $a \lor b = b \lor a$ | $a \land b = b \land a$ | commutativity |
| $a \lor (a \land b) = a$ | $a \land (a \lor b) = a$ | absorption |
| $a \lor (b \land c) = (a \lor b) \land (a \lor c)$ | $a \land (b \lor c) = (a \land b) \lor (a \land c)$ | distributivity |
| $a \lor \lnot a = 1$ | $a \land \lnot a = 0$ | complements |
| $a \lor a = a$ | $a \land a = a$ | idempotency |
| $a \lor 0 = a$ | $a \land 1 = a$ | boundedness |
| $a \lor 1 = 1$ | $a \land 0 = 0$ | |
| $\lnot 0 = 1$ | $\lnot 1 = 0$ | 0 and 1 are complements |
| $\lnot (a \lor b) = \lnot a \land \lnot b$ | $\lnot (a \land b) = \lnot a \lor \lnot b$ | de Morgan's laws |
| $\lnot \lnot a = a$ | | involution |

The first three properties define a lattice; the first five define a Boolean algebra. The remaining five are a consequence of the first five.

## Truth tables

For Boolean logic using only two values, 0 and 1, the INTERSECTION and UNION of those values may be defined using truth tables such as these:

| $\cap$ | 0 | 1 |
|--------|---|---|
| 0 | 0 | 0 |
| 1 | 0 | 1 |

| $\cup$ | 0 | 1 |
|--------|---|---|
| 0 | 0 | 1 |
| 1 | 1 | 1 |

• More complex truth tables involving multiple inputs, and other Boolean operations, may also be created.
• Truth tables have applications in logic, interpreting 0 as FALSE, 1 as TRUE, $\cap$ as AND, $\cup$ as OR, and ¬ as NOT.

## Other notations

Mathematicians and engineers often use plus (+) for OR and a product sign ($\cdot$) for AND. 
OR and AND are somewhat analogous to addition and multiplication in other algebraic structures, and this notation makes it very easy to get sum of products form for normal algebra. NOT may be represented by a line drawn above the expression being negated ($\overline{x}$). Programmers will often use a pipe symbol (|) for OR, an ampersand (&) for AND, and a tilde (~) for NOT. In many programming languages, these symbols stand for bitwise operations. "||", "&&", and "!" are used for variants of these operations. Another notation uses "meet" for AND and "join" for OR. However, this can lead to confusion, as the term "join" is also commonly used for any Boolean operation which combines sets together, which includes both AND and OR.

## Basic mathematics use of Boolean terms

• In the case of simultaneous equations, they are connected with an implied logical AND: x + y = 2 AND x - y = 2
• The same applies to simultaneous inequalities: x + y < 2 AND x - y < 2
• The greater than or equals sign ($\ge$) and less than or equals sign ($\le$) may be assumed to contain a logical OR: X < 2 OR X = 2
• The plus/minus sign ($\pm$), as in the case of the solution to a square root problem, may be taken as logical OR: WIDTH = 3 OR WIDTH = -3

## English language use of Boolean terms

Care should be taken when converting an English sentence into a formal Boolean statement. Many English sentences have imprecise meanings, e.g. "All that glitters is not gold," which could mean that "nothing that glitters is gold" or "some things which glitter are not gold". AND and OR can also be used interchangeably in English, in certain cases:
• "I always carry an umbrella for when it rains and snows."
• "I always carry an umbrella for when it rains or snows."
Sometimes the English words AND and OR have the opposite meaning in Boolean logic:
• "Give me all the red and blue berries" usually means "Give me all berries that are red or blue". An alternative phrasing for standard written English: "Give me all berries that are red as well as all berries that are blue".
Also note that the word OR in English may correspond with either logical OR or logical XOR, depending on the context:
• "I start to sweat when the humidity or temperature is high." (logical OR)
• "You want ice cream and candy? You may have ice cream or candy." (logical XOR)
The combination AND/OR is sometimes used in English to specify a logical OR, when just using the word OR alone might have been mistaken as meaning logical XOR:
• "I'm having chicken and/or beef for dinner." (logical OR). An alternative phrasing for standard written English: "I'm having chicken, or beef, or both, for dinner."
• The use of the "and/or" virgule is generally disfavored in formal written English. Such usage may introduce critical imprecision in legal instruments, research findings, and specifications for computer programs or electronic circuits.
A case where this is an issue is when specifications for a computer program or electronic circuit are supplied as an English paragraph describing their function. For example, the statement: "the program should verify that the applicant has checked the male or female box", should be taken as an XOR, and a check added to ensure that one, and only one, box is selected. In other cases, the interpretation of English may be less certain, and the author of the specification may need to be consulted to determine their true intent.
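Editorial addendum (not part of the original article): before turning to the applications below, the algebraic properties and truth tables listed above can be machine-checked by brute force over the two Boolean values. A minimal sketch in plain Python; the lambda helpers are illustrative names, not standard library functions.

```python
# Brute-force verification of the listed Boolean-logic properties over {0, 1}.
from itertools import product

OR  = lambda a, b: a | b
AND = lambda a, b: a & b
NOT = lambda a: 1 - a

for a, b, c in product((0, 1), repeat=3):
    assert OR(a, OR(b, c)) == OR(OR(a, b), c)             # associativity
    assert AND(a, OR(b, c)) == OR(AND(a, b), AND(a, c))   # distributivity
    assert OR(a, AND(a, b)) == a                          # absorption
    assert NOT(OR(a, b)) == AND(NOT(a), NOT(b))           # de Morgan
    assert NOT(AND(a, b)) == OR(NOT(a), NOT(b))           # de Morgan
    assert OR(a, NOT(a)) == 1 and AND(a, NOT(a)) == 0     # complements

# Print the AND and OR truth tables in the same row/column layout as above.
for name, op in (("AND", AND), ("OR", OR)):
    print(name, [[op(r, c) for c in (0, 1)] for r in (0, 1)])
```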
## Applications

### Digital electronic circuit design

Boolean logic is also used for circuit design in electrical engineering; here 0 and 1 may represent the two different states of one bit in a digital circuit, typically high and low voltage. Circuits are described by expressions containing variables, and two such expressions are equal for all values of the variables if, and only if, the corresponding circuits have the same input-output behavior. Furthermore, every possible input-output behavior can be modeled by a suitable Boolean expression. Basic logic gates such as AND, OR, and NOT gates may be used alone, or in conjunction with NAND, NOR, and XOR gates, to control digital electronics and circuitry. Whether these gates are wired in series or parallel controls the precedence of the operations.

### Database applications

Relational databases use SQL, or other database-specific languages, to perform queries, which may contain Boolean logic. For this application, each record in a table may be considered to be an "element" of a "set". For example, in SQL, these SELECT statements are used to retrieve data from tables in the database:

SELECT * FROM EMPLOYEES WHERE LAST_NAME = 'Smith' AND FIRST_NAME = 'John' ;
SELECT * FROM EMPLOYEES WHERE LAST_NAME = 'Smith' OR FIRST_NAME = 'John' ;
SELECT * FROM EMPLOYEES WHERE NOT LAST_NAME = 'Smith' ;

Parentheses may be used to explicitly specify the order in which Boolean operations occur, when multiple operations are present:

SELECT * FROM EMPLOYEES WHERE (NOT LAST_NAME = 'Smith') AND (FIRST_NAME = 'John' OR FIRST_NAME = 'Mary') ;

Multiple sets of nested parentheses may also be used, where needed. Any Boolean operation (or operations) which combines two (or more) tables together is referred to as a join, in relational database terminology. In the field of Electronic Medical Records, some software applications use Boolean logic to query their patient databases, in what has been named Concept Processing technology.

### Search engine queries

Search engine queries also employ Boolean logic. For this application, each web page on the Internet may be considered to be an "element" of a "set". The following examples use a syntax supported by Google.
• Double quotes are used to combine whitespace-separated words into a single search term.
• Whitespace is used to specify logical AND, as it is the default operator for joining search terms: "Search term 1" "Search term 2"
• The OR keyword is used for logical OR: "Search term 1" OR "Search term 2"
• The minus sign is used for logical NOT (AND NOT): "Search term 1" -"Search term 2"

## External links

• The Calculus of Logic, by George Boole, Cambridge and Dublin Mathematical Journal Vol. III (1848), pp. 183-98.
• Logical Formula Evaluator (for Windows), a software which calculates all possible values of a logical formula
• How Stuff Works - Boolean Logic
• Maiki & Boaz BDD-PROJECT, a Web Application for BDD reduction and visualization.
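Editorial addendum (not part of the original article): the gate composition described in the "Digital electronic circuit design" subsection above can be sketched in a few lines of Python. The gate functions and the checkbox-validator helper are illustrative names; the XOR construction shown (OR combined with NAND) is one standard way to build it from basic gates.

```python
# Composing basic gates, and an "exactly one box checked" validator as an XOR.
def NOT(a):     return 1 - a
def AND(a, b):  return a & b
def OR(a, b):   return a | b
def NAND(a, b): return NOT(AND(a, b))
def XOR(a, b):  return AND(OR(a, b), NAND(a, b))   # true when exactly one input is 1

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", XOR(a, b))

def exactly_one_checked(male, female):
    """Form-validation example from the text: one, and only one, box selected."""
    return XOR(male, female) == 1
```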
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 40, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9222121834754944, "perplexity_flag": "middle"}
http://www.conservapedia.com/Momentum
# Momentum

Momentum is the "quantity of motion" an object possesses. In classical physics, the linear form of momentum is defined as the product of mass and velocity: $\mathbf{p} = m\mathbf{v}$

Hence, the faster an object goes, or the more mass it possesses, the more momentum it has. Momentum is a vector quantity, and therefore has both a magnitude and direction. It is important to physicists because it is a conserved quantity, making it useful for solving problems.

In common usage, the words "momentum" and "inertia" are sometimes used interchangeably. Inertia is the tendency for a body to resist changes in its motion until and unless a force acts on it. The motion of an object will continue until something makes it change its motion. A railroad car, once it gets going, will continue its motion for a long time, until the tiny forces of friction cause it to slow down and stop. This can take miles. Even putting on the brakes can take up to a mile, because there is so much momentum.

A force in the same direction as the body is moving will increase its speed. A force in the opposite direction will slow it down. A force coming from the side will cause a deviation from straight-line motion. An interesting case of a sideways force is a weight on the end of a string (like the Biblical slingshot used by David against Goliath). When you twirl the weight around above your head, the string is pulling the weight toward you - but it never gets any closer! This kind of force is called a centripetal, or center-seeking, force.

## Angular momentum

A rotating or orbiting body possesses angular momentum. Like linear momentum, angular momentum is a vector quantity and is conserved. An object's angular momentum changes only when a torque is applied to it. The magnitude of the angular momentum of a particle orbiting some origin (such as the earth orbiting the sun), with its velocity perpendicular to the line joining it to the origin, is given by L = mvr where

• L is angular momentum
• m is the mass of the particle
• v is the linear velocity of the particle
• r is the distance from the particle to the origin

The direction of the angular momentum vector points perpendicularly to the plane formed by the object's orbit, in accordance with the right hand rule. In addition to orbital angular momentum, the earth has rotational angular momentum due to its spin. The equations for calculating rotational angular momentum depend on the object's moment of inertia, and therefore the shape and density of the object.

### Generalized momentum

The definition of momentum can be generalized in Lagrangian and Hamiltonian dynamics, to $p=\frac{\partial L}{\partial \dot x}$ where L is the Lagrangian and $\dot x$ is the velocity. In some cases the generalized momentum is the same as the momentum defined above. For example, for a free particle the Lagrangian equals the kinetic energy and so $p=\frac{\partial (m\dot x^2/2)}{\partial \dot x}=m\dot x$ as above.
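As a quick numerical illustration of the basic formulas $p = mv$ and $L = mvr$ above (an editorial sketch with made-up values, not part of the article):

```python
# Linear momentum p = m*v and orbital angular momentum L = m*v*r,
# evaluated for illustrative (made-up) numbers.
m = 1200.0   # kg, e.g. a small car
v = 25.0     # m/s
print(f"p = m*v = {m * v:.0f} kg*m/s")

m_particle = 2.0   # kg
v_orbit = 3.0      # m/s
r = 1.5            # m, distance from the origin
print(f"L = m*v*r = {m_particle * v_orbit * r:.1f} kg*m^2/s")
```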
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9000265598297119, "perplexity_flag": "head"}
http://mathhelpforum.com/algebra/95557-very-complicated-algebra.html
# Thread: 1. ## very complicated algebra Could someone help me with the following algebraic fraction: $<br /> <br /> \frac {(a-b)(a^2+b^2)-a+b}{(a^3b^3)-(a^2-ab+b^2)+1}<br />$ 2. I think you have a typo somewhere because the denominator does not factor. The numerator factors to be $(a-b)(a^2+b^2-1)$ So if there is no typo, then the rational function does not simplify any.
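For anyone who wants to verify the factorisation claimed in the reply, here is a short SymPy check (an editorial addition, not part of the thread): the numerator factors as $(a-b)(a^2+b^2-1)$, and the fraction has no obvious common factor to cancel with the denominator as written.

```python
# Verify the factorisation of the numerator and inspect the denominator.
from sympy import symbols, factor, simplify

a, b = symbols('a b')
numerator = (a - b)*(a**2 + b**2) - a + b
denominator = a**3*b**3 - (a**2 - a*b + b**2) + 1

print(factor(numerator))                  # (a - b)*(a**2 + b**2 - 1)
print(factor(denominator))                # see whether the denominator factors
print(simplify(numerator / denominator))  # try to simplify the whole fraction
```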
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9280110001564026, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2007/06/20/
# The Unapologetic Mathematician ## The Existence Theorem for Limits Of course, though we’ve defined limits, we don’t know in general whether or not they exist. Specific limits in specific categories have been handled in an ad hoc manner. We show that the Cartesian product is a product in $\mathbf{Set}$, or that there is a subset which equalizes a pair of morphisms, but we have been doing this all by hand and there are so many different kinds of limits that it’s impossible to handle them all like this. Luckily, we can build complicated limits out of simpler ones in many cases. In fact, we’ve already seen this: we built pullbacks from products and equalizers. Actually we explicitly built pushouts from coproducts and coequalizers, but the pullback construction is just the dual. Anyhow, that construction shows the general idea. If a category $\mathcal{C}$ has finite products and equalizers of pairs then it has limits for all functors from finite categories $\mathcal{J}$. If it has all products (indexed by arbitrary sets) as well as pairwise equalizers then it is complete. Conversely, since products and equalizers are examples of limits completeness of a category implies their existence. That is, once we have these kinds of limits all the others come for free. The proof is summed up in this somewhat arcane diagram. So let’s unpack it. We’re in a category $\mathcal{C}$ and are considering a functor $F:\mathcal{J}\rightarrow\mathcal{C}$, where $\mathcal{J}$ is either a small or a finite category. Starting in the middle row we’ve got the product of all the objects in the image of $F$ and the product over all the morphisms of $\mathcal{J}$ of the images of their target objects. Now towards the top we have a projection $\pi_u$ from the second product onto each factor, and since each factor is in the image of $F$ we also have morphisms from the first product. There’s actually a triangle at the top for each morphism $u$ in the category $\mathcal{J}$, but we only draw one. Now by the universal property of the second product there exists a unique arrow $f$ from the first product to the second that makes all these triangles commute. We do a similar thing on the bottom. We again have the projections from the second product to its factors. For each morphism in $\mathcal{J}$ there’s a projection from the first product onto the image of its source, and then there’s an arrow $F(u)$ from the image of the source to the image of the target. Again, there’s one such square at the bottom for each morphism in $\mathcal{J}$, but we only draw one. Again, by the universal property of the second product there exists a unique arrow $g$ from the first product to the second that makes all of these squares commute. So now we have two parallel arrows from the first product to the second, and we take their equalizer, which gives an arrow into the first product. We also have an arrow out of the product for each object of $\mathcal{J}$, so we can compose to get an arrow $\mathrm{Equ}(f,g)\rightarrow F(J)$ for each object $J\in\mathcal{J}$. I claim that this is the limit we seek. First we need to check that this is a cone on $F$. For an arrow $u:J\rightarrow K$ in $\mathcal{J}$ we need to see that $\pi_K\circ e=F(u)\circ\pi_J\circ e$. The lower commuting square for $u$ tells us that $F(u)\circ\pi_J=\pi_u\circ g$. The upper commuting square tells us that $\pi_K=\pi_u\circ f$. So we calculate $\pi_K\circ e=\pi_u\circ f\circ e=\pi_u\circ g\circ e=F(u)\circ\pi_J\circ e$ as desired. 
Now if $(L,\{\lambda_J\}_{J\in\mathrm{Ob}(\mathcal{J})})$ is any other cone on $F$ then the arrows in the cone combine to give a unique arrow $h:L\rightarrow\prod_{J\in\mathrm{Ob}(\mathcal{J})}F(J)$. Since this is a cone, we can check that $f\circ h=g\circ h$. Thus $h$ factors uniquely through $e$, giving the universal property we need. In the finite case, our discussion of multiple products shows that all we need are binary products, a terminal object, and binary equalizers to have all finite products and binary equalizers, and thus to have all finite limits. In general, infinite products have to be dealt with on their own. Many of our algebraic categories can now be shown to be complete. For examples, each of $\mathbf{Set}$, $\mathbf{Grp}$, $\mathbf{Ab}$, $R-\mathbf{mod}$, and $\mathbf{mod}-R$ is complete. Dually, a category is cocomplete if and only if it has all coproducts and pairwise coequalizers. It has all finite colimits if and only if it has all finite coproducts and pairwise coequalizers. You should determine which of the above list of categories are cocomplete. Posted by John Armstrong | Category theory | 9 Comments
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 39, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9390647411346436, "perplexity_flag": "head"}
http://mathhelpforum.com/pre-calculus/200839-domain-function-true-false.html
# Thread:

1. ## Domain and function True/False ?

True or false, The domain of the function f(x) = x^2-9/x is {X|X /= +/-3}. I don't understand this question. Also, what does X|X mean? Thanks.

2. ## Re: Domain and function True/False ?

Originally Posted by kmerr98277: True or false, The domain of the function f(x) = x^2-9/x is {X|X /= +/-3}. I don't understand this question. Also, what does X|X mean? Thanks.

$\{X|~X\ne\pm 3\}$ is read as "The set of all X such that X is not equal to plus or minus three."

3. ## Re: Domain and function True/False ?

It is, however, NOT true that the domain of $(x^2- 9)/x$ is the "set of all x such that x is not equal to plus or minus 3". The (natural) domain of a function is the set of all values of x for which the formula can be calculated. There is no problem with x= 3 or -3: f(3)= 0/3= 0 and f(-3)= 0/(-3)= 0. There is a problem with x= 0 because then the denominator is 0 and we cannot divide by 0. The domain of $(x^2- 9)/x$ is $\{x | x\ne 0\}$. If the problem were instead about the reciprocal, $f(x)= \frac{x}{x^2- 9}$, then, because $x^2- 9= (x- 3)(x+ 3)$, the denominator would be 0 at x= 3 or x= -3 and the domain would be $\{x| x\ne \pm 3\}$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9445081949234009, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/98228?sort=oldest
## Is it known that every PDF continuous in all $R^n$ has a maximum? [closed]

I'm working with maximum a posteriori estimation and managed to show that every probability density function that is continuous in all $R^n$ always has at least one global maximum. I've searched around and asked a few fellow engineers and professors but am not sure if this is widely known. This can actually be extended to any continuous Radon-Nikodym derivative of a finite measure. The proof is simple: let $f$ be the PDF, continuous in all $R^n$. If $L(v)$ is the closed superlevel set at $v$, that is: $L(v):=\{x\in R^n: f(x)\geq v\}$, then it must be bounded. That is so because the neighbourhood of any unbounded set in $R^n$ has infinite Lebesgue measure. Due to continuity of $f$, any lower superlevel set of it, for example $L(v/2)$, contains a neighbourhood of $L(v)$. The probability of the superlevel sets is bounded below by $P[L(v)]\geq v \lambda[L(v)]$. This means that if any superlevel set of $f$ were unbounded, then a lower superlevel set would have probability greater than one. Since all closed superlevel sets are bounded, they are compact and attain their maximum. -

I think that if $f$ is a continuous density with compact support, $\sum p_i \lambda_i f(\frac{x}{\lambda_i} + v_i)$ is continuous and has no global maximum, choosing $\sum p_i = 1$, $\lambda_i \rightarrow \infty$, $v_i \rightarrow \infty$. – mike May 28 at 23:40

1 The statement is false. L(v) need not be bounded, and the neighbourhood of an unbounded set in R^n need not have infinite measure. (Try constructing an open neighbourhood of the rationals in R with finite measure). – George Lowther May 28 at 23:57

Never mind, sorry for the fallacy... An $\epsilon$-neighbourhood of an unbounded set has infinite Lebesgue measure for any finite $\epsilon$, but as the counterexample showed, it is not necessarily contained in the lower superlevel set. The case I'm working with is simpler though: I actually know my function is bounded above, it is differentiable and its gradient is continuous. Does it make sense saying it attains the maximum? ps.: George, I'm a fan of your blog. – Dimas May 29 at 3:49

No, the modified proposition is still false. Smooth the previous counterexample and then apply $\arctan$ and rescale. – Douglas Zare May 29 at 4:03

@Dimas: You recover your result if it is assumed that the probability density is uniformly continuous, although that is a much stronger condition. – George Lowther May 31 at 20:16

## 1 Answer

Take $n=1$ and put a triangle with height $2^m$ and width $2^{-2m}$ at each integer $m=0,1,2,\dots$ -
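To see numerically why the answer's construction works, here is a small sketch (an editorial illustration, not part of the thread): the triangular spikes have total area 1, so they define a genuine continuous probability density, yet their heights $2^m$ are unbounded, so the density attains no global maximum.

```python
# Triangle at integer m has height 2**m and width 2**(-2*m),
# hence area 2**m * 2**(-2*m) / 2 = 2**(-m-1); the areas sum to 1.
areas = [2.0**m * 2.0**(-2 * m) / 2.0 for m in range(60)]
print("total probability mass ~", sum(areas))                    # tends to 1
print("heights of the first spikes:", [2**m for m in range(6)])  # unbounded
```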
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9529286026954651, "perplexity_flag": "head"}
http://mathoverflow.net/questions/52532/cardinality-of-local-bases-in-the-non-standard-reals
## cardinality of local bases in the non-standard reals ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Given a index set $S$ together with a ultrafilter $\mu$ on $S$ (such that no set of cardinality $< |S|$ has measure $1$). Let the ordered field $\mathbb{R}(S,\mu)$ denote the ultrapower of $\mathbb{R}$ with respect to $S$, i.e. $\mathbb{R}(S,\mu):=\mathbb{R}^S/\sim$, where two maps $f,g$ are called equivalent, if $\mu(\{i\in S| f(i)=g(i)\})=1$. This is again an ordered field. Equip it with the topology generated by $\{B_\varepsilon(x)|x\in \mathbb{R}(S,\mu),\varepsilon \in \mathbb{R}(S,\mu),\varepsilon>0\}$. Then my question is: What is the smallest cardinality of a local basis around $0$ depending on the cardinality of |S| and (possibly) on the ultrafilter? Examples: If $S=\{pt\}$, we get $\mathbb{R}(S,\mu)\cong \mathbb{R}$ and hence it has a countable base for the topology. In the case $S=\mathbb{N}$ and a non-principal ultrafilter one can show, that there is no countable base for the topology (saturation argument) and it is at most $|\mathbb{R}(\mathbb{N},\mu)|=|\mathbb{R}|$. CH would tell us, that it is $|\mathbb{R}|$. But maybe there is a good (avoiding CH) reason, why it has exactly this cardinality. - ## 1 Answer Your question is essentially equivalent to the one here; for what is known see in particular the bottom of Joel David Hamkins' answer there. To see the equivalence, note that choosing such a local base $B$ at zero is equivalent (at least assuming AC) to choosing a set $E$ of positive elements of $\mathbb{R}(S,\mu)$ such that for all $x>0$ there is a $y\in E$ with $y < x$. Given such a set $E$ we can take $B = \{(-x,x) \mid x\in E\}$. Given a basis $B$ we can take $E$ to contain one element $x$ for each neighborhood in $B$, with $x$ chosen so $(-x,x)$ is contained in the neighborhood. Choosing $E$ is in turn equivalent to choosing a cofinal set $F$, i.e. one which contains a $y>x$ for any $x\in\mathbb{R}(S,\mu)$. You can pass between $E$ and $F$ by taking the reciprocal of all the elements. The cofinality of $\mathbb{R}(S,\mu)$ is the smallest such cofinal set, so this is what you are asking for. According to Joel's answer, these are studied in Shelah's Possible Cofinality (PCF) theory. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 39, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9432345628738403, "perplexity_flag": "head"}
http://cms.math.ca/10.4153/CMB-2005-019-9
The Distribution of Totatives
Jam Germain
Canad. Math. Bull. 48 (2005), 211-220. Published: 2005-06-01. Printed: Jun 2005. Read article [PDF: 136KB] http://dx.doi.org/10.4153/CMB-2005-019-9

Abstract: The integers coprime to $n$ are called the totatives of $n$. D. H. Lehmer and Paul Erdős were interested in understanding when the number of totatives between $in/k$ and $(i+1)n/k$ is $1/k$th of the total number of totatives up to $n$. They provided criteria in various cases. Here we give an "if and only if" criterion which allows us to recover most of the previous results in this literature and to go beyond, as well as to reformulate the problem in terms of combinatorial group theory. Our criterion is that the above holds if and only if for every odd character $\chi \pmod \kappa$ (where $\kappa:=k/\gcd(k,n/\prod_{p|n} p)$) there exists a prime $p=p_\chi$ dividing $n$ for which $\chi(p)=1$.

MSC Classifications: 11A05 - Multiplicative structure; Euclidean algorithm; greatest common divisors. 11A07 - Congruences; primitive roots; residue systems. 11A25 - Arithmetic functions; related numbers; inversion formulas. 20C99 - None of the above, but in this section.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.798497200012207, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/20396/energy-required-to-reach-1-wavelength/20398
# Energy required to reach 1 wavelength [closed]

I was curious if there was a formula to find the energy required to reach 1 wavelength in a given substance (or a vacuum if that's too hard). I am also wondering if this number can tell us anything about the way the wave acts? My knowledge of electromagnetic radiation is pretty small... Maybe these things don't really matter? -

"reach 1 wavelength in a given substance" doesn't really mean anything - what are you trying to say? – Martin Beckett Feb 1 '12 at 23:11

How much energy does it take for 144 MHz radiation to go one wavelength (about 6 feet), vs. how much energy does it take for 800 Hz radiation to go one wavelength (about 250 miles)? Since there is such a difference to reach different wavelengths... Wouldn't it take different amounts of energy to do so? – Kyle Hotchkiss Feb 1 '12 at 23:19

Energy is force over distance. You have the distance part (1 wavelength), but what force are you thinking about providing the energy (or absorbing it)? – ja72 Feb 3 '12 at 19:45

## closed as not a real question by Qmechanic♦, Manishearth♦, Ϛѓăʑɏ βµԂԃϔ, Sklivvz♦ Dec 28 '12 at 12:42

## 2 Answers

The energy of a photon depends only on its frequency, or equivalently its wavelength: $$E=h\nu=\frac{hc}{\lambda}$$ So the energy of a $144MHz$ photon is ~$6\times10^{-7}eV$; and the energy of a $800Hz$ photon is ~$3.3\times10^{-12}eV$. (I assume you mean MHz megahertz, and not mHz millihertz.) It doesn't matter how 'far' they travel, since distance is irrelevant from the viewpoint of a photon that travels at the speed of light. -

Interesting. I don't have any formal education in physics and I guess I may have been applying mechanical wave properties this way. – Kyle Hotchkiss Feb 2 '12 at 0:10

So the initial energy doesn't impact distance... But a certain amount of initial energy is needed to reach a certain frequency? – Kyle Hotchkiss Feb 2 '12 at 0:13

2 As @Martin alludes to above, it's not correct to think of photons as "using" or "needing" energy to travel. They always travel; that's what they do, and always at the speed of light. A photon doesn't need to "reach" a frequency: when it is emitted, it already has that frequency. It's not like a mechanical wave. Photons essentially are energy. And absent hitting something, a photon will keep travelling to infinity. – Mark Beadles Feb 2 '12 at 0:21

Thanks, @Community♦ for catching my error in the sign of the exponent. – Mark Beadles Feb 2 '12 at 15:52

Thanks for the explanation! I had photons all confused. – Kyle Hotchkiss Feb 4 '12 at 7:21

It doesn't take any energy for a photon of a given wavelength to travel any distance. Assuming you are in empty space, a photon will travel essentially forever - the cosmic microwave background is photons that have been travelling to us for nearly 14 billion years. Travelling in a medium, light will lose some energy to the stuff it's travelling through; how strongly will depend on both the medium and the wavelength. Since light loses energy by interacting with the stuff, the absorption is generally stronger for shorter wavelengths (higher energies), so X-rays and UV are absorbed very strongly in a short distance while infrared and radio go further through.
You also need to differentiate between a beam of light losing power as individual photons are absorbed, and a photon losing energy as it is absorbed and re-emitted at a longer wavelength (lower energy). -
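The figures quoted in the first answer are easy to reproduce; here is a short sketch (an editorial addition, using standard values of the constants) evaluating $E = h\nu$ for the two frequencies discussed:

```python
# Photon energy E = h*f for 144 MHz and 800 Hz, converted to electron-volts.
h = 6.626e-34    # Planck constant, J*s
eV = 1.602e-19   # joules per electron-volt

for f in (144e6, 800.0):
    E = h * f
    print(f"f = {f:.3g} Hz  ->  E = {E:.3g} J = {E / eV:.2g} eV")
```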
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9548552632331848, "perplexity_flag": "middle"}
http://mathhelpforum.com/differential-equations/207158-how-apply-langrange-equations-case.html
# Thread: 1. ## how to apply the Langrange equations in this case In the proof to Theorem 7.3 from this paper on FNTFs, the authors invoke the so-called "Langrange equations." I assume they mean the Euler-Lagrange equations. (But maybe not...?) Unfortunately I'm not at all familiar with the Euler-Lagrange equations, and in reading what they are, I have no idea how to apply them in this case. If anyone has some spare time and good will, can he/she please explain how to understand this? The set-up: Let $K=\mathbb{C}$ be the complex numbers and $S(K^d)$ the unit sphere in $K^d$ for some positive integer d. Let $\{x_n\}_{n=1}^N\subseteq S(K^d)$ be a fixed sequence in that unit sphere. Let $S=\{(a,b)\in\mathbb{R}^d\times\mathbb{R}^d:\lvert a\rvert^2+\lvert b\rvert^2=1\}$ be the unit sphere in $\mathbb{R}^d\times\mathbb{R}^d$, and define the function $\widetilde{FP}_l:S\to[0,\infty)$ by $(a,b)\mapsto 2\sum_{n\neq l}(\langle a,a_n\rangle+\langle b,b_n\rangle)^2+(\langle b,a_n\rangle-\langle a,b_n\rangle)^2+1+\sum_{m\neq l}\sum_{n\neq l}|\langle x_m,x_n\rangle|^2,$ where the sums are otherwise over 1 through N, and l is some integer between 1 and N. Let $(a_l,b_l)\in S\subset\mathbb{R}^d\times\mathbb{R}^d$ be a local minimizer of $\widetilde{FP}_l$. The problem: Show that there exists a scalar $c\in\mathbb{R}$ such that both of the following equations hold: (7.1) $\nabla_a\widetilde{FP}_l(a,b)|_{(a,b)=(a_l,b_l)}=c \nabla_a(\lvert a\rvert^2+\lvert b\rvert^2)|_{(a,b)=(a_l,b_l)};$ (7.2) $\nabla_b\widetilde{FP}_l(a,b)|_{(a,b)=(a_l,b_l)}=c \nabla_b(\lvert a\rvert^2+\lvert b\rvert^2)|_{(a,b)=(a_l,b_l)}.$ I assume that $\nabla_a,\nabla_b$ refer to the gradients on $a,b$, respectively. However I'm not sure about that. If possible, I would like someone to show me in a textbook (I can get almost anything online or from my university library) what theorem to use, and what choices to make in applying the theorem. For instance, if a theorem calls for a function f, then what is a suitable choice of f in this case? Thanks guys.
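As an editorial note (not a reply from the original thread): equations (7.1) and (7.2) have exactly the shape of the first-order conditions from the Lagrange multiplier theorem for optimisation constrained to a level set, as stated in most multivariable calculus or optimisation texts. If $(a_l,b_l)$ is a local extremum of $\widetilde{FP}_l$ restricted to the set where $g(a,b):=\lvert a\rvert^2+\lvert b\rvert^2=1$, and $\nabla g(a_l,b_l)\neq 0$ (which holds on the unit sphere, since $\nabla g=2(a,b)$), then there exists a scalar $c\in\mathbb{R}$ with $$\nabla \widetilde{FP}_l(a_l,b_l)=c\,\nabla g(a_l,b_l),$$ and splitting $\nabla$ into the blocks $\nabla_a$ and $\nabla_b$ (the gradients in the $a$- and $b$-variables) gives (7.1) and (7.2).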
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 15, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9318748712539673, "perplexity_flag": "head"}
http://nrich.maths.org/981/index?nomenu=1
On a digital $24$ hour clock, at certain times, all the digits are consecutive (in counting order). You can count forwards or backwards.
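A small brute-force sketch (added here for illustration; not part of the original NRICH page), under the assumption that the clock shows four digits HH:MM and that "consecutive" means the four digits form an unbroken run counting up or down:

```python
# List the 24-hour times HH:MM whose four digits are consecutive,
# counting either forwards (e.g. 1,2,3,4) or backwards (e.g. 5,4,3,2).
def is_consecutive(digits):
    forwards = all(b - a == 1 for a, b in zip(digits, digits[1:]))
    backwards = all(a - b == 1 for a, b in zip(digits, digits[1:]))
    return forwards or backwards

times = []
for hour in range(24):
    for minute in range(60):
        digits = [hour // 10, hour % 10, minute // 10, minute % 10]
        if is_consecutive(digits):
            times.append(f"{hour:02d}:{minute:02d}")
print(times)
```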
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9008059501647949, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-math-topics/195737-disconnected-graphs.html
# Thread:

1. ## Disconnected Graphs

Let $G$ be a disconnected graph. Prove that if $u$ and $v$ are any two vertices of the complement $\overline{G}$, then $d_{\overline{G}}(u,v) = 1 \;\;\text{or}\;\; 2$; therefore, if $G$ is a disconnected graph, then $\mathrm{diam}(\overline{G}) \leq 2$.
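As an empirical sanity check of the claim (an editorial addition, not part of the thread; it assumes Python with the networkx package installed), one can generate random disconnected graphs and confirm that the complement always has diameter at most 2:

```python
# Empirical check: if G is disconnected, its complement has diameter <= 2.
import random
import networkx as nx

for trial in range(100):
    # A disconnected graph: disjoint union of two random graphs.
    g1 = nx.gnp_random_graph(random.randint(2, 8), 0.5)
    g2 = nx.gnp_random_graph(random.randint(2, 8), 0.5)
    G = nx.disjoint_union(g1, g2)
    assert not nx.is_connected(G)
    H = nx.complement(G)
    assert nx.is_connected(H) and nx.diameter(H) <= 2
print("complement had diameter <= 2 in every trial")
```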
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8925596475601196, "perplexity_flag": "middle"}
http://polylogblog.wordpress.com/2012/07/29/data-streaming-in-dortmund-day-3/
# Data Streaming in Dortmund: Day 3 July 29, 2012 in conference report Continuing the report from the Dortmund Workshop on Algorithms for Data Streams, here are the happenings from Day 3. Previous posts: Day 1 and Day 2. Michael Kapralov started the day with new results on computing matching large matchings in the semi-streaming model, one of my favorite pet problems. You are presented with a stream of unweighted edges on n nodes and want to approximate the size of the maximum matching given the constraint that you only have O(n polylog n) bits of memory. It’s trivial to get a 1/2 approximation by constructing a maximal matching greedily. Michael shows that it’s impossible to beat a 1-1/e factor even if the graph is bipartite and the edges are grouped by their right endpoint. In this model, he also shows a matching (no pun intended) 1-1/e approximation and an extension to a $1-e^{-p}p^{p-1}/(p-1)!$ approximation given p passes. Next up, Mert Seglam talked about $\ell_p$ sampling. Here the stream consists of a sequence of updates to an underlying vector $\mathbf{x}\in {\mathbb R}^n$ and the goal is to randomly select an index where $i$ is chosen with probability proportional to $|x_i|^p$. It’s a really nice primitive that gives rise to simple algorithms for a range of problems including frequency moments and finding duplicates. I’ve been including the result in recent tutorials. Mert’s result simplifies and improves an earlier result by Andoni et al. The next two talks focused on communication complexity, the evil nemesis of the honest data stream algorithm. First, Xiaoming Sun talked about space-bounded communication complexity. The standard method to prove a data stream memory lower bound is to consider two players corresponding to the first and second halves of the data stream. A data stream algorithm gives rise to a communication protocol where the players emulate the algorithm and transmit the memory state when necessary. In particular, multi-pass stream algorithms give rise to multi-round communication protocols. Hence a communication lower bound gives rise to a memory lower bound. However, in the standard communication setting we suppose that the two players may maintain unlimited state between rounds. The fact that stream algorithms can’t do this may lead to suboptimal data stream bounds. To address this, Xiaoming’s work outlines a communication model where the players may maintain only a limited amount of state between the sending of each message and establishes bounds on classical problems including equality and inner-product. In the final talk of the day, Amit Chakrabarti extolled the virtues of Talagrand’s inequality and explained why every data stream researcher should know it. In particular, Amit reviewed the history on proving lower bounds for the Gap-Hamming communication problem (Alice and Bob each have a length n string and wish to determine whether the Hamming distance is less than n/2-√n or greater than n/2+√n) and ventured that the history wouldn’t have been so long if the community had had a deeper familiarity with Talagrand’s inequality. It was a really gracious talk in which Amit actually spent most of the time discussing Sasha Sherstov’s recent proof of the lower bound rather than his own work. BONUS! Spot the theorist… After the talks, we headed off to Revierpark Wischlingen to contemplate some tree-traversal problems. 
If you think your observation skills are up to it, click on the picture below to play “spot the theorist.” It may take some time, so keep looking until you find him or her.
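As a concrete footnote to the matching discussion at the top of this post (an illustrative sketch added here, not from the original write-up): the trivial 1/2-approximation mentioned there just keeps an edge whenever both endpoints are still unmatched, which needs only O(n) state.

```python
# Greedy maximal matching over a stream of edges: a 1/2-approximation
# to the maximum matching using O(n) memory.
def greedy_streaming_matching(edge_stream):
    matched = set()    # vertices already used by the matching
    matching = []
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

# On the path 0-1-2-3 the greedy pass keeps (0, 1) and (2, 3).
print(greedy_streaming_matching([(0, 1), (1, 2), (2, 3)]))
```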
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 5, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9048951268196106, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/37740/projective-dimension-of-zero-module/113917
## Projective dimension of zero module ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Is there any consensus on what the projective dimension of the zero module should be? Here are three statements one commonly encounters in textbooks, sometimes with or without the condition $M\neq 0$: (1) $\mbox{pd}(M)\leq n$ iff $\mbox{Ext}^{n+1}(M,-)=0$ (2) $\mbox{pd}(M)=0$ iff $M$ is projective (3) $\mbox{grade}(M):=\infty$ if $M=0$ If one attempts to define $\mbox{pd}((0))$ by extending one of these results, (1), (2), (3) suggest $\mbox{pd}=-1, 0, \infty$, respectively. - 2 I can't see the point in worrying about this. Is there any application where assigning the zero module a projective dimension is significant? – Robin Chapman Sep 4 2010 at 17:21 1 For example, change of rings formulas like $\mbox{pd}_{A/(a)}(M/aM)\leq\mbox{pd}_{A}(M)$ would be false if one defined $\mbox{pd}(0)=\infty$; Take $A=\mathbb{Z}, a=3, M=\mathbb{A}/5$. – ashpool Sep 4 2010 at 17:30 What is the dimension of the empty manifold? – Tom Goodwillie Sep 4 2010 at 20:01 3 There are many, many more such questions that can be asked (and some already have been asked, cf mathoverflow.net/questions/31621): one can replace projective dimension with injective or free dimension; or ask about the Krull dimension of the trivial ring; or the dimension of the trivial and empty simplicial complexes (trival: the only simplex is the empty simplex; empty: the set of simplices is the empty set). But does this mean that they $\textit{should}$ be asked? Then I would prefer to have one such catch-all trivial exceptions question and get done with it for good. – Victor Protsak Sep 4 2010 at 21:22 4 @Victor: I think that this question makes a lot more sense than a question about modules over the zero ring, so your comparing this to the question you linked is completely unfair. Modules over arbitrary rings can be zero without you knowing and so it makes sense to expect that applying a reasonable definition would give something reasonable for the zero module. – Sándor Kovács Nov 20 at 4:37 show 1 more comment ## 3 Answers Although I agree that one can easily decide to not worry about the case of the zero module, but as ashpool points out, it happens that sometimes we end up with the zero module whether we want or not and then each time we need to say (using ashpool's example) if $M/aM\neq 0$, then bluh and if $M/aM=0$ than something else happens. So, I think there is actually something to be gained from making a definition that makes sense for the zero module (or the zero object in a more general situation). Of course, sometimes the definition that makes one (in)equality work does not work for another. However, one could still say in a paper (less likely in a book I suppose) that we are using the following definition for whatever which is the usual one if the object is not zero and gives this or that when it is zero and makes the following inequality work. So having philosophized about this let me give a definition of projective dimension that gives $-\infty$ for the zero module. Definition Let $(R,\mathfrak m,k)$ be a noetherian local ring and $M$ a finite $R$-module. Define the projective dimension of $M$ as $$\mathrm{proj\, dim}_R M:=\sup \left\{ i\in \mathbb{Z} \ \vert \ \mathrm{Ext}_R^i (M,k)\neq 0 \right\},$$ where $\sup$ is taken in $\mathbb{Z}\cup\{\pm\,\infty\}$. This is actually essentially ashpool's definition (1), except that for $M=0$ it takes the $\sup$ of the empty set. 
(This may have been what samantha's professor told her). It also makes the change of rings formula to work. In fact, I would argue that this is the "right" definition anyway, because the point is those Ext groups that are non-zero, not those that are. Regarding adding the $\{\pm\,\infty\}$ possibilities: We definitely need to allow $+\infty$, so it makes sense to allow $-\infty$ as well, especially because we need it for $M=0$. Comment Of course one can start wondering what to do with non-local and/or non-noetherian rings, but I will leave that meditation to the reader. - 1 What needs to be defined is the sup of a subset $Y$ of a given (totally) ordered set $X$. It depends on both $Y$ and $X$. Specifically, the sup of the empty subset of $\mathbb{N}$ is clearly $0$; the sup of the empty subset of `$\mathbb{Z}\cup\{-\infty,+\infty\}$` is $-\infty$, and the sup of the empty subset of $\mathbb{Z}$ doesn't exist. – Laurent Moret-Bailly Nov 20 at 7:43 Dear Laurent, you are absolutely right, so I edited the definition. At the same time I would say that the essential part of your comment is whether one should allow negative numbers or not. One argument for allowing negative numbers is that that makes this definition give $-\infty$ for $M=0$ which seems to be the best choice. Another, perhaps better, argument is that if we view everything in the derived category, then we might encounter negative $i$'s such that $\mathrm{Ext}^i\neq 0$. So, in order to be consistent we have to allow $i$ to be negative. Then we obviously have to allow $-\infty$. – Sándor Kovács Nov 20 at 8:24 I would add that it is perhaps strictly speaking incorrect, but customary to allow $\pm\infty$ for values of $\sup$ and $\inf$ for a subset of $\mathbb{R}$ without explicitly saying that we extend the overset to include those. – Sándor Kovács Nov 20 at 8:26 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. I came online to ask this question myself, as it came up today during a reading course on homological algebra. According to my professor, the zero module, even though it is trivially projective (as a trivial direct summand of any free module), does not have projective dimension 0, but rather $-\infty$. I cannot remember the precise reasoning he used. I believe it had something to do with the supremum of the empty set equaling negative infinity, although I don't remember how an empty set even came into the picture. The issue had come up from one of his papers in which a theorem he stated would turn out to be false for the zero module, as a reader pointed out. He responded that the theorem held only for modules of finite projective dimension and as such the zero module would be excluded. I also recognize that this question is over 2 years old, but for what it's worth, we have another candidate for the projective dimension of the zero module. And while I personally despise treating trivial cases, I acknowledge that a really good definition should always be able to account for them, making this question somewhat worthwhile? - Personally, I'd like more details (e.g. how the empty set came into play). Could you tell us which of his papers you are referencing above? Perhaps the reasoning is in there for why $-\infty$ is the right answer. 
By the way, welcome to MathOverflow – David White Nov 19 at 21:29 Let me explain a definition of projective dimension which gives the same result as the one given by Sándor Kovács, but without any restriction on the ring or the module we are talking about. This is, by the way, the one chosen by Bourbaki (A.X.8.1). Let $A$ be a ring. 0) We write `$\overline{\mathbb{Z}}=\mathbb{Z}\cup\{-\infty,\infty\}$` and furnish $\overline{\mathbb{Z}}$ with the ordering that extends the canonical ordering on $\mathbb{Z}$ and has $\infty$ as greatest and $-\infty$ as smallest element. We convene that suprema and infima of subsets of subsets of $\overline{\mathbb{Z}}$ are always understood to be taken in $\overline{\mathbb{Z}}$. 1) If $C$ is a complex of $A$-modules and $C_n$ denotes its component of degree $n\in\mathbb{Z}$, then we set `$$b_d(C)=\inf\{n\in\mathbb{Z}\mid C_n\neq 0\}$$` and `$$b_g(C)=\sup\{n\in\mathbb{Z}\mid C_n\neq 0\},$$` and we call $$l(C)=b_g(C)-b_d(C)$$ the length of $C$. Note that if $C$ is the zero complex then we have $b_d(C)=\infty$ and $b_g(C)=-\infty$, hence $l(C)=-\infty$. 2) If $M$ is an $A$-module and $(P,p)$ is a left resolution of $M$, then the length $l(P)$ of the complex $P$ is called the length of $(P,p)$. Note that if $P$ is the zero complex (which may be the case if and only if $M=0$) then the length of $(P,p)$ is $-\infty$. 3) If $M$ be an $A$-module, then the infimum of the lengths of all projective resolutions of $M$ is called the projective dimension of $M$. Hence, if $M=0$ then we have a projective resolution of length $-\infty$, and thus the projective dimension of $M$ is also $-\infty$. Conversely, if $M$ has projective dimension $-\infty$ then - since every $A$-module has a projective resolution - it necessarily has a projective resolution of length $-\infty$, and thus it follows $M=0$. Note: This clearly makes sense in every abelian category with enough projectives, and there are obvious variants of the above that yield analogous definitions of injective or flat dimensions. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 88, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9417088627815247, "perplexity_flag": "head"}
http://mathoverflow.net/questions/93704?sort=oldest
## Checking if one polytope is contained in another

Hi, I have two sets of inequalities, say, $Ax \leq 0$ and $Bx \leq 0$. I would like to know if they both define the same polytope. Or, even, whether one is contained in the other. At the moment I am checking, for each row $a \in A$, whether there exists some $x$ satisfying $ax > 0$ but $Bx \leq 0$. Is there a more efficient approach? Thanks -

## 1 Answer

I would like to draw your attention to the 2002 survey by Volker Kaibel and Marc Pfetsch, "Some Algorithmic Problems in Polytope Theory," arXiv:math/0202204v1, which contains this on p.6: As you probably know, an $\cal{H}$-description is by halfspaces, whereas a $\cal{V}$-description is by vertices. Reference [20] is: R. M. Freund and J. B. Orlin, "On the complexity of four polyhedral set containment problems," Math. Program., 33 (1985), pp. 139–145. Reference [17] is B. C. Eaves and R. M. Freund, "Optimal scaling of balls and polyhedra," Math. Program., 23 (1982), pp. 138–147. -

Thanks! That survey is just what I've been looking for. – bandini Apr 10 2012 at 23:12
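For readers who want to automate the feasibility check described in the question, here is a rough sketch (an editorial addition, not from the answer) using SciPy's linprog. Because both systems are homogeneous, a violating direction can always be rescaled into the box $[-1,1]^n$, so bounding the variables keeps each LP finite without changing the outcome.

```python
# Test whether {x : Bx <= 0} is contained in {x : Ax <= 0}:
# for each row a of A, look for some x with Bx <= 0 and a.x > 0.
import numpy as np
from scipy.optimize import linprog

def cone_contained(B, A, tol=1e-9):
    """Return True if {x : Bx <= 0} is a subset of {x : Ax <= 0}."""
    n = A.shape[1]
    for a in A:
        # maximize a.x  <=>  minimize -a.x, subject to Bx <= 0, -1 <= x <= 1
        res = linprog(-a, A_ub=B, b_ub=np.zeros(B.shape[0]),
                      bounds=[(-1, 1)] * n, method="highs")
        if res.status == 0 and -res.fun > tol:
            return False    # found a point of {Bx <= 0} with a.x > 0
    return True
```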
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9085538387298584, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/87865?sort=newest
## What is the name for a finite-group representation that is the sum of all the irreducibles (once)?

I vaguely remember seeing a paper studying the concept of a totally multiplicity-one representation of a finite group, which concept, I recall, had a particular name, which I forget. What is this name, and is there a reference paper (for example, the one I might have been reading) where I can find out what is known about these representations? -

4 You might mean a Gelfand model. See Garge and Oesterle, and references degruyter.com/view/j/jgth.2010.13.issue-3/… – Junkie Feb 8 2012 at 3:31

It's going to be a bit hard to guess the paper, knowing only that it deals with models of finite groups... Are you interested in some specific kind of groups? (For example Lie type etc.) – Gjergji Zaimi Feb 8 2012 at 3:36

I believe Junkie has it (at least, the name, which rings a bell). I'm not wedded to the particular paper, if you have one you think is good. I believe the topic of that paper was a particular construction of a Gelfand model (i.e. an explicit vector space plus action), which may very well have been just for the symmetric groups. – Ryan Reich Feb 8 2012 at 3:41

2 A model for a finite group $G$ is a representation which contains each irreducible representation of $G$ with multiplicity $1$. It is not clear this is what is being asked: maybe Ryan is asking for the name of a representation in which no irreducible representation occurs with multiplicity greater than one, but not all irreducibles of $G$ occur (that's a multiplicity-free representation). This is in the semisimple context. R. Richardson showed that the symmetric groups have models which are sums of representations induced from 1-dimensional representations of centralizers of involutions. – Geoff Robinson Feb 8 2012 at 8:45

1 I'm definitely asking for all irreducibles to have multiplicity exactly one. This may be clearer in the title, though. – Ryan Reich Feb 8 2012 at 17:21
Authors’ summary: “For all complex classical groups G we construct new realizations of the representation model of G, i.e., the direct sum of all irreducible algebraic finite-dimensional representations of G occurring with multiplicity one. These realizations have hidden symmetry: the action of the Lie algebra of G on them extends naturally to the action of a larger Lie (super-) algebra. The construction of hidden symmetries is based on a geometrical construction, similar to a twistor construction of Penrose.” Another paper is part of a series in Russian by Bernstein-Gelfand-Gelfand: Models of representations of compact Lie groups. (Russian) Funkcional. Anal. i Prilozˇen. 9 (1975), no. 4, 61–62. [ADDED] Maybe I should emphasize that constructing an abstract model of representations for a group isn't by itself the goal, since it may be too big to provide further insight. As Junkie indicates (with reference to a paper that is also on the arXiv), symmetric groups and other finite Coxeter groups have been studied in this framework with partial success. For finite groups of Lie type, the ideas of Gelfand-Graev led to a simple construction which is "almost" a model of the ordinary characters but doesn't capture all of them. This is developed by Carter in his 1985 book Finite Groups of Lie Type, Chapter 8, and less completely by Digne-Michel in their 1991 text Representations of Finite Groups of Lie Type. Starting with any "regular" character of a maximal unipotent subgroup (the choices here don't matter), induction to the whole finite group yields a multiplicity-free character of large degree which has most other characters as constituents. Taken in isolation this is not so helpful, but combined with Deligne-Lusztig theory it leads to interesting results. An underlying theme for groups of Lie type is that's it's fairly easy to construct big representations by induction from nice subgroups, but then it's not so easy to extract information about the irreducibles or multiplicities. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9447916746139526, "perplexity_flag": "middle"}
http://calculus7.org/2012/03/31/diameter-vs-radius-part-ii/
being boring ## Diameter vs radius, part II Posted on 2012-03-31 by A set $A$ in a metric space $X$ has diameter $\mathrm{diam}\, A=\sup_{a,b\in A} |a-b|$ and radius $\mathrm{rad}\, A = \inf_{x\in X}\sup_{a\in A} |a-x|$. It’s easier to find the radius by expressing it in a different form: the smallest (or infimal) value of $r$ such that the $\bigcap_{a\in A} \overline{B}(a,r)\ne\varnothing$, where $\overline{B}(a,r)$ is the closed ball of radius $r$ with center $a$. Suppose $X$ is a normed linear space and $f\colon A\to X$ is a map that does not increase distances (hence does not increase the diameter of any set). I already said that the radius may increase under $f$, but my example was incorrect. Here is a correct one: the set $A$ consists of 3 points in red. Radius increases under a nonexpanding map The blue hexagon is the unit sphere in this space. The three points in red have distance 2 from one another. So do their images under $f$, but the radius increases from $1$ to $2/\sqrt{3}$. The set $f(A)$ consists of three vertices of the regular hexagon (in green) circumscribed about the blue one. I think this example is as bad as it gets in two dimensions: that is, we should have $\mathrm{rad}\, f(A) \le \frac{2}{\sqrt{3}} \mathrm{rad}\, A$ in any 2-dimensional normed space. Informally, the worst case is when the unit ball is triangular, which it can’t be because of the symmetry requirement. The hexagon is the next worst thing. In higher dimensions the constant cannot be smaller than $\frac{2}{\sqrt{3}}$, since the above construction can be implemented in a subspace. I don’t know whether the constant grows with dimension or not (either way it can’t exceed 2, as remarked before). This entry was posted in Uncategorized and tagged hexagon, norm, radius. Bookmark the permalink.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 19, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.935960054397583, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/234103/questions-about-assigning-a-probability-to-a-randomly-chosen-large-integer-n-b
# Questions about assigning a probability to a randomly chosen large integer $n$ being prime I heard this question a few days ago, so reciting from memory: If I were to randomly choose an arbitrarily large positive integer $n$, could I write a function that determines the likelihood of it being prime? Intuitively, what would it mean to assign a probability to an integer of being prime? Edit 1 I'm not sure how to incorporate the prime-counting function into this. Edit 2 Alright, so the page on the prime number theorem says that: Informally speaking, the prime number theorem states that if a random integer is selected in the range of zero to some large integer N, the probability that the selected integer is prime is about 1 / ln(N), where ln(N) is the natural logarithm of N. Looking through the references, though, I can't find a more formal proof. So I will revise my question. For a random integer selected in the range of $0$ to some large integer $N$, prove that the probability the selected integer is prime is $\frac{1}{\ln{N}}$. Edit 3 If the selected integer $n$ was prime, it's necessary that it has no prime factors $p\leq\sqrt{n}$. If we could find the probability that $n$ is divisible by $p$ as some function of $p$, then could we write$$\prod_{p=2}^{\sqrt{n}}\left(1-f(p)\right)$$where $f(p)$ is the probability that $n$ is divisible by $p$? - 1 At the very least you would first have to define exactly what you mean by picking an integer at random. – Brian M. Scott Nov 10 '12 at 10:48 1 Is this a field of study? See probabilistic number theory. – Did Nov 10 '12 at 11:01 – Dan Brumleve Nov 10 '12 at 11:47 ## 3 Answers Care should be taken in the sense we take the "density" of primes. The prime number theorem states that $$\pi(n)=\frac{n}{\log(n)}+O\left(\frac{n}{\log(n)^2}\right)\tag{1}$$ Thus, $$\begin{align} \pi(n(1+\alpha))-\pi(n) &=\frac{n(1+\alpha)}{\log(n)+\log(1+\alpha)}-\frac{n}{\log(n)}+O\left(\frac{n}{\log(n)^2}\right)\\ &=\frac{n(1+\alpha)}{\log(n)}-\frac{n}{\log(n)}+O\left(\frac{n}{\log(n)^2}\right)\\ &=\frac{\alpha n}{\log(n)}+O\left(\frac{n}{\log(n)^2}\right)\tag{2} \end{align}$$ Therefore, $$\frac{\pi(n(1+\alpha))-\pi(n)}{\alpha n} =\frac1{\log(n)}+O\left(\frac1{\alpha\log(n)^2}\right)\tag{3}$$ In the sense of $(3)$, the density of primes is $\dfrac1{\log(n)}$. - A very cool solution. Thanks! – rnmartingale Nov 10 '12 at 21:34 On your second question, what it might mean to assign a probability to the likelihood of a number being prime, you might want to take a look at this answer, the discussion in the comments under it, and in particular the book that I refer to there, Towards a Philosophy of Real Mathematics by David Corfield. - Thanks for the link, I'll take a day to read and understand what's going on before returning to this question. – rnmartingale Nov 10 '12 at 11:29 By the prime number theorem it is usually accurate to assume as a heuristic that the probability that $n$ is prime is about $\frac{1}{\log{n}}$. - 1 Thanks for the reference, I should incorporate this into my question. – rnmartingale Nov 10 '12 at 10:58 But you have reworded the title in such a way that the question is less answerable. You might also like to consider the Riemann hypothesis and Cramér's conjecture. – Dan Brumleve Nov 10 '12 at 11:12 1 I'm sorry, I'm trying to avoid vagueness. How would you suggest I change my question? – rnmartingale Nov 10 '12 at 11:19 There is no easy way to narrow it down. I think vaguer is better. – Dan Brumleve Nov 10 '12 at 11:23
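As a quick empirical complement to the answers above (my own addition, not part of the thread), one can compare the fraction of primes in a window around $N$ with $1/\log N$:

```python
# Compare the density of primes in a window (N, N + N/10] with 1/log(N),
# as the informal statement of the prime number theorem suggests.
from math import log
from sympy import primepi   # prime-counting function pi(x)

for N in (10**4, 10**6, 10**7):
    width = N // 10
    count = primepi(N + width) - primepi(N)
    print(N, count / width, 1 / log(N))
```

The two columns agree to within a few percent already for these modest values of $N$, which is all the informal statement promises.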
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9496573805809021, "perplexity_flag": "head"}
http://physics.stackexchange.com/users/6764/71ga?tab=activity
# 71GA reputation 112 bio website ziga-lausegger.netau.net/… location Slovenia age 27 member for 1 year, 5 months seen 2 mins ago profile views 174 I love to program and crosscompile baremetal C programs for ARM based microcontrollers, I love physics and i love writing science documents/books in LaTeX. It amazes me how physics is connecting all science and is helping mathematics to evolve. In order for science profession to comunicate on a high level i advise everyone to use Linux, LaTeX and a good vector imaging program like Inkscape. | | | bio | visits | | | |------------|----------------|----------|----------------------------|------------|------------------| | | 366 reputation | website | ziga-lausegger.netau.net/… | member for | 1 year, 5 months | | 112 badges | location | Slovenia | seen | 2 mins ago | | # 290 Actions | | | | |-------|----------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 4h | comment | Vector $\vec{z}$ and its conjugate transpose $\overline{\vec{v}^\top}$ - is it the same as $\left|z\right\rangle$ and $\left\langle z \right|$Interesting but i don't understand quite that much yet. I hope i will in the future. | | 4h | accepted | Vector $\vec{z}$ and its conjugate transpose $\overline{\vec{v}^\top}$ - is it the same as $\left|z\right\rangle$ and $\left\langle z \right|$ | | 4h | comment | Vector $\vec{z}$ and its conjugate transpose $\overline{\vec{v}^\top}$ - is it the same as $\left|z\right\rangle$ and $\left\langle z \right|$Dirac notation is now somehow clearer to me :) | | 4h | comment | Vector $\vec{z}$ and its conjugate transpose $\overline{\vec{v}^\top}$ - is it the same as $\left|z\right\rangle$ and $\left\langle z \right|$If i look closer now i can see that $\left\langle A|B \right\rangle$ is an inner product between $\left|B\right\rangle$ and $\left|A\right\rangle$. This is correct right? Can I allso say that this is a matrix multiplication of an $\left|A\right\rangle^\dagger$ and $\left|B\right\rangle$? | | 5h | comment | Vector $\vec{z}$ and its conjugate transpose $\overline{\vec{v}^\top}$ - is it the same as $\left|z\right\rangle$ and $\left\langle z \right|$So my assumptions are correct :D TY! | | 6h | comment | Vector $\vec{z}$ and its conjugate transpose $\overline{\vec{v}^\top}$ - is it the same as $\left|z\right\rangle$ and $\left\langle z \right|$But they are all connected and i like it better like this. | | 7h | asked | Vector $\vec{z}$ and its conjugate transpose $\overline{\vec{v}^\top}$ - is it the same as $\left|z\right\rangle$ and $\left\langle z \right|$ | | May12 | comment | Some Dirac notation explanationsThank you i think i now understand a bit more. | | May12 | asked | Some Dirac notation explanations | | May11 | accepted | How do we know that $\psi$ is the eigenfunction of an operator $\hat{H}$ with eigenvalue $W$? 
| | May11 | comment | How do we know that $\psi$ is the eigenfunction of an operator $\hat{H}$ with eigenvalue $W$?This Dirac notation is confusing for a starters ... I tried reading Zetilli and got lost ... there is so much of this stuff / rules ... | | May10 | comment | How do we know that $\psi$ is the eigenfunction of an operator $\hat{H}$ with eigenvalue $W$?Thank you for this explaination. It was brief and provided lots of good info. There is only one more thing. I don't quite understand this equation: $\langle O \rangle = \langle \Psi | O | \Psi \rangle$. Is this a scalar product with itself? And then an operator acts on this scalar product? I know that if we use a $\dagger$ on a ket we get a bra, so it must hold that: $\langle O\rangle = \langle \psi |O| \psi \rangle = |\psi\rangle^\dagger O|\psi\rangle$... But where is the integral? Shouldnt it be: $\langle O\rangle = \int |\psi\rangle^\dagger O|\psi\rangle d x$ ? | | May10 | comment | How do we know that $\psi$ is the eigenfunction of an operator $\hat{H}$ with eigenvalue $W$?I dont know if you understood my question right. How do we know from this $\langle W \rangle = \int \limits_{-\infty}^{\infty} \overline{\Psi}\, \left(- \frac{\hbar^2}{2m} \frac{d^2}{d \, x^2} + W_p\right) \Psi \, d x$ or this $\hat{H} = - \frac{\hbar^2}{2m} \frac{d^2}{d \, x^2} + W_p$ that we have an eigenfunctiuion and eigenvalue. | | May10 | asked | How do we know that $\psi$ is the eigenfunction of an operator $\hat{H}$ with eigenvalue $W$? | | May9 | comment | QM formalism is one big confusion - lack of geometrical explaination with imagesThank you. It seems i complicate too much. | | May8 | comment | QM formalism is one big confusion - lack of geometrical explaination with imagesAnd one more questions... It is clear to me that if bra's are column vectors then kets are row vectors. But i don't know the physical meaning of bra's (do they even have one?). I know that ket's are QM states. | | May8 | comment | QM formalism is one big confusion - lack of geometrical explaination with imagesSo if i understood right we choose $\mathbb{R}^{n}$ as a space for probabilities, BUT in QM we have amplitudes and an amplitude is a square root of a probability. So i need a complex space $\mathbb{C}^n$ because of the square root which leads to complex numbers? | | May7 | comment | Energy eigenvalues of a Q.H.Oscillator with $[\hat{H},\hat{a}] = -\hbar \omega \hat{a}$ and $[\hat{H},\hat{a}^\dagger] = \hbar \omega \hat{a}^\dagger$Thank you! the most helpfull was the fact that $\hat{H} \neq W_n$ but $\hat{H} \psi_n = W_n \psi_n$. It looks a bit WIERD though... | | May7 | accepted | Energy eigenvalues of a Q.H.Oscillator with $[\hat{H},\hat{a}] = -\hbar \omega \hat{a}$ and $[\hat{H},\hat{a}^\dagger] = \hbar \omega \hat{a}^\dagger$ | | May7 | accepted | QM formalism is one big confusion - lack of geometrical explaination with images |
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 62, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9324173331260681, "perplexity_flag": "middle"}
http://mathhelpforum.com/differential-geometry/139865-maclaurin-series.html
# Thread: 1. ## Maclaurin Series How can I find the Maclaurin series for z/(z^3 + 8) and find the radius of convergence? 2. Originally Posted by ihr02 How can I find the Maclaurin series for z/(z^3 + 8) and find the radius of convergence? One way to do it is this: $\frac{z}{z^3+8}=\frac{z}{8}\cdot\frac{1}{1+\left(\frac{z}{2}\right)^3}$ Now you can substitute $\left(\frac{z}{2}\right)^3$ for $u$ in $\frac{1}{1+u}=\sum_{k=0}^\infty(-1)^k u^k$ to get $=\frac{z}{8}\cdot\sum_{k=0}^\infty (-1)^k\left(\frac{z}{2}\right)^{3k} = \sum_{k=0}^\infty \frac{(-1)^k}{2^{3(k+1)}}z^{3k+1}$. The geometric series converges for $\left|\frac{z}{2}\right|^3<1$, i.e. for $|z|<2$; the nearest singularities are the roots of $z^3=-8$, all of modulus $2$, so the radius of convergence is $2$.
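A quick SymPy check of the expansion and of the radius of convergence (my own addition, not from the thread):

```python
import sympy as sp

z = sp.symbols('z')
f = z / (z**3 + 8)

# Maclaurin expansion: z/8 - z**4/64 + z**7/512 - z**10/4096 + O(z**11)
print(sp.series(f, z, 0, 11))

# The nearest singularities are the roots of z^3 = -8, all of modulus 2.
print([sp.Abs(r) for r in sp.solve(z**3 + 8, z)])   # [2, 2, 2]
```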
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.7855360507965088, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/flow+viscosity
# Tagged Questions 1answer 495 views ### What is the shear stress of a fluid? One book defines the shear stress $\tau$ of a (Newtonian) fluid as $$\tau = \eta \frac{\partial v}{\partial r}$$ where $\eta$ is the viscosity. There is not much context, so I've made some guesses. ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9508923888206482, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/81800/the-lagrangian-formulation-of-mechanics-without-going-through-variational-princip
## The Lagrangian formulation of mechanics without going through variational principles. In some texts on classical mechanics and not only, the Euler--Lagrange equations of motion are directly obtained as solutions of variational problems. On the other side, when reading about Hamiltonian mechanics, one sometimes finds the claim that this latter formulation is preferred to the Lagrangian one because it completely avoids the appeal to variational principles. This observation suggested to me the following question: Is the variational approach to the Euler--Lagrange equations the only one viable? If not, is there some reason that explains why the geometry of the Euler-Lagrange eqns is much more hidden than the geometry of the Hamilton eqns? I am searching for reading suggestions to best tackle this question. As usual any feedback is welcome. - 2 I'm not sure what you mean by "much more hidden" geometry. Both approaches are important and useful in geometry and in physics. They are also intimately connected to each other via the Legendre transform. – Spiro Karigiannis Nov 24 2011 at 14:30 @Spiro Karigiannis: With "much more hidden" geometry, I meant that while I know many expositions of symplectic geometry as the proper basis of Hamiltonian mechanics, I lack references for the geometry behind the Euler-Lagrange eqns. I would like to better comprehend the connection between the geometries behind these two approaches, so that I could take the best of the one or the other. – Giuseppe Nov 24 2011 at 16:04 ## 3 Answers The Lagrangian (or variational) formulation of the Euler-Lagrange equations and the Hamiltonian formulation are equivalent. This equivalence can be made quite explicit and goes a bit deeper than the standard treatments show. The equivalence can be established in several steps, which I'll try to outline below with references. ### The Hamiltonian formalism is a special case of the Lagrangian formalism. There is a generic operation that can be performed on variational problems: adjunction and elimination of auxiliary fields or variables. Given a Lagrangian $L(x,y)$, the variable $y$ (which could be vector valued) is called auxiliary if the Euler-Lagrange equations obtained from the variation of $y$ can be algebraically solved for in terms of the remaining variables and their derivatives, $y=y(x)$. The important point here is that we need not solve any differential equations to obtain $y(x)$. The Lagrangian $L'(x) = L(x,y(x))$ gives a new variational principle with the auxiliary field $y$ eliminated. The critical points of $L'(x)$ are in one-to-one correspondence with those of the original $L(x,y)$. The adjunction of an auxiliary field is the reverse operation. Given $L(x)$, we look for another Lagrangian $L'(x,y)$ where $y$ is auxiliary and its elimination gives $L'(x,y(x))=L(x)$. It is straightforward to check that given a Lagrangian $L(x)$, with Hamiltonian $H(x,p)$, the new Lagrangian $L'(x,p) = p\dot{x} - H(x,p)$, associated to Hamilton's Least Action Principle, is a special case of adjoining some auxiliary fields, namely the momenta $p$. The elimination $p=p(x)$ is precisely the inverse Legendre transform. The moral here is that the Legendre transform is not sacred. I learned this point of view from the following paper of Barnich, Henneaux and Schomblond (PRD, 1991).
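To make the elimination step concrete, here is a small symbolic sketch (my own addition, not part of the original answer); the specific Hamiltonian $H = p^2/2m + V(x)$ is just an assumed example:

```python
# Varying the auxiliary variable p in L'(x, xdot, p) = p*xdot - H(x, p) gives a
# purely algebraic equation; eliminating p recovers the usual Lagrangian.
import sympy as sp

m = sp.symbols('m', positive=True)
x, xdot, p = sp.symbols('x xdot p', real=True)
V = sp.Function('V')

H = p**2 / (2 * m) + V(x)
Lprime = p * xdot - H

p_of_xdot = sp.solve(sp.diff(Lprime, p), p)[0]   # -> m*xdot  (inverse Legendre transform)
L = sp.simplify(Lprime.subs(p, p_of_xdot))       # -> m*xdot**2/2 - V(x)
print(p_of_xdot, L)
```

No differential equation has to be solved to eliminate $p$, which is exactly the sense in which the momenta are auxiliary.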
This point of view is particularly helpful in higher order field theories (multiple independent variables, second or higher order derivatives in the Lagrangian) where a unique notion of Legendre transform is lacking. ### The phase space is the space of solutions of the Euler-Lagrange equations. When the Euler-Lagrange equations have a well-posed initial value problem, solutions can be put into one-to-one correspondence with initial data. The initial data are uniquely specified by the canonical position and momentum variables that are commonly used to define the canonical phase space in the Hamiltonian picture. This is a rather common identification nowadays and can be found in many places, so I won't give a specific reference. If the Euler-Lagrange equations do not have a well-posed initial value problem, the definitions change somewhat on either side, but an equivalence can still be established. See the references in the next step for more details. ### Any Lagrangian defines a (pre)symplectic form (current). This is, unfortunately, less well known than it should be. The (pre)symplectic structure of classical mechanics and field theory can be defined directly from the Lagrangian. In case the equations of motion are degenerate, the form is degenerate and hence only presymplectic, otherwise symplectic. There is more than one way to do this, but a particularly transparent one is referred to as the covariant phase space method. A very nice (though not original) reference is Lee & Wald (JMP, 1990). See also this nLab page, which also has a more extensive reference list. Applying this method to the Lagrangian of Hamilton's Least Action Principle gives the standard symplectic form in terms of the canonical position and momentum coordinates. Briefly, and without going into the details of differential forms on jet spaces where this is most easily formalized, the construction is as follows. Denote by $d$ the space-time (which is just 1-dimensional if you only have time) exterior differential and by $\delta$ the field variation exterior differential. Without dropping boundary or total divergence terms, the total variation of the Lagrangian can be expressed as $\delta L(x) = \mathrm{EL}\delta x + d\theta$. Here the Lagrangian is a space-time volume form, $\mathrm{EL}$ denotes the Euler-Lagrange equations and $d\theta$ consists of all the terms that are usually dropped during partial integration. Clearly $\theta$ is a 1-form in terms of field variations and, as a spacetime form, of one degree lower than a volume form (aka a current). Taking another exterior field variation, we get $\omega=\delta\theta$, which is the desired (pre)symplectic current. If $\Sigma$ is a Cauchy surface (in particular it is codimension-1), the (pre)symplectic form is defined as $\Omega=\int_\Sigma \omega$, now a 2-form in terms of field variations, as expected. It can be shown that the (pre)symplectic current is conserved, $d\omega=0$, when evaluated on solutions of the Euler-Lagrange equations. Hence, by Stokes' theorem, $\Omega$ is independent of the choice of $\Sigma$. When space-time is 1-dimensional, $\Omega$ is just $\omega$ evaluated at a particular time. ### A (pre)symplectic form (current) defines a Lagrangian. As alluded to in the question, Hamilton's equations of motion are often expressed in a special form that highlights a certain geometrical structure that is not obvious in the original Lagrangian form.
It is well known from the symplectic formulation of classical mechanics that this structure can be seen as a consequence of the fact that they correspond to time evolution generated by a Hamiltonian via the Poisson bracket. The symplectic form is then preserved by the evolution. There are analogous statements for field theory. In fact, no variational formulation is necessary to discuss this geometrical structure of the equations. What is quite remarkable is that a kind of converse to this statement holds as well. Namely, given a system of (partial) differential equations, if there is a conserved (pre)symplectic current $\omega$ on the space of solutions ($\omega$ is field dependent and $d\omega=0$ when evaluated on solutions, see previous step), then a subsystem of the equations is derived from a variational principle. There is a subtlety here. Even if there exists a conserved $\omega$, if it is degenerate (not symplectic), other independent equations need to be added to the Euler-Lagrange equations of the corresponding variational principle to obtain a system of equations equivalent to the original one. Once the Lagrangian of this variational principle is known, the Hamiltonian and symplectic forms can be defined in the usual way and the original system of equations recast in the canonical Hamilton form. To my knowledge, the above observation first appeared in Henneaux (AnnPhys, 1982) for ODEs and in Bridges, Hydon & Lawson (MathProcCPS, 2010) for PDEs. The calculation demonstrating this observation is given in a bit more detail on this nLab page. Another way to look at this result is to consider a conserved (pre)symplectic form as a certificate for the solution of the inverse problem of the calculus of variations. A final note about the usefulness of the Hamiltonian formulation, despite the fact that, as a consequence of the above discussion, it is not strictly necessary: Any symplectic manifold has local coordinates in which the symplectic form is canonical (Darboux's theorem). The Legendre transform identifies this choice of coordinates explicitly. - "Any symplectic manifold has local coordinates in which the symplectic form is canonical (Darboux's theorem). The Legendre transform identifies this choice of coordinates explicitly." Sorry, can you clarify what you mean by this? I'm not seeing the connection between the Legendre transform and canonical coordinates. – Paul Skerritt Nov 25 2011 at 8:59 1 Consider the phase space coordinatized by initial data at time $t=0$, say for 1-dimensional particle motion. Then natural coordinates are $(x,\dot{x})$. In these coordinates the symplectic form will look like $\omega=\omega(x,\dot{x}) dx\wedge d\dot{x}$. The Legendre transform defines $p=p(x,\dot{x})$. The new coordinate system $(x,p)$ is special because $\omega=dp\wedge dx$ is now in canonical form, while in the $(x,\dot{x})$ coordinates it is not. Darboux's theorem guarantees that such special coordinates always exist (locally), but they need not always be easy to find. Here they are. – Igor Khavkine Nov 25 2011 at 10:28 Oh, ok, I see now what you're saying. I didn't realise you were starting with the symplectic form defined by the Lagrangian (although I should have). Thanks. – Paul Skerritt Nov 25 2011 at 14:14 This is a great answer! Thanks very much. – Spiro Karigiannis Nov 25 2011 at 14:14 Dear Igor Khavkine, your answer is fantastic, and even more so is your work at the nLab about the phase space. This is much more than I would have expected, thank you.
– Giuseppe Nov 25 2011 at 16:28 Have you tried reading Arnold's book? Other possibilities include the several books on the geometry of classical mechanics by Jerry Marsden (with various coauthors). You will probably find the answers to your questions in here. "Mechanics and Symmetry" by Marsden and Ratiu "Foundations of Mechanics" by Abraham and Marsden "Lectures on Mechanics" by Marsden "Mathematical Methods of Classical Mechanics" by Arnold - In case you read French, you can also have a look at this set of lecture notes by Colin de Verdiere: www-fourier.ujf-grenoble.fr/~ycolver/All-Articles/… (chapter 1, section 4). – DamienC Nov 24 2011 at 16:32 Dear Spiro Karigiannis, thanks for the references, indeed the works of Arnold and Marsden (with his co-authors) are the sources from which now I'm trying to learn more. – Giuseppe Nov 25 2011 at 16:10 I'm just going to make a basic point, so apologies if this is completely obvious to you (it's also contained in Igor Khavkine's much more thorough answer). Let $Q$ be the configuration manifold of the system. The corresponding cotangent bundle $T^*Q$ has an intrinsic symplectic form $\omega=-d\Theta$, where $\Theta$ is the tautological one-form on $T^*Q$. For a Hamiltonian $H:T^*Q\rightarrow \mathbb{R}$, Hamilton's equations can be expressed in terms of the Hamiltonian vector field $X_H$ (defined by $i_{X_H}\omega=dH$). Note $\omega$ is intrinsic to the phase space $T^*Q$, and doesn't depend on the Hamiltonian $H$. Now given a Lagrangian $L:TQ\rightarrow\mathbb{R}$, and corresponding Legendre transform $\mathbb{F}L:TQ\rightarrow T^*Q$, and assuming here for simplicity that $\mathbb{F}L$ is a diffeomorphism ("L is hyperregular"), one can use $\mathbb{F}L$ to pull everything back to $TQ$. $TQ$ becomes a symplectic manifold, with symplectic form $\omega_L=(\mathbb{F}L)^*\omega$, and the Euler-Lagrange equations are just the equations for the flow of the Hamiltonian vector field $X_E$ defined by $i_{X_E}\omega_L = dE$, where $E=(\mathbb{F}L)^*H = H\circ\mathbb{F}L$ is the energy function on $TQ$. I guess the main point though is that $TQ$ is not intrinsically a symplectic manifold. The symplectic form $\omega_L$ depends also on the choice of Lagrangian. This is one possible answer to why the geometric formulation is more common on the Hamiltonian side, and appears to be `hidden' on the Lagrangian side: the geometry of $TQ$ (as it pertains to the E-L eqns) is tied up with the particular Lagrangian $L$, whereas the geometry of $T^*Q$ is independent of the particular Hamiltonian $H$. I'd also recommend any of the books by Jerry Marsden mentioned in Spiro Karigiannis' answer as the best place to learn this (my notation is consistent with his). Edit: I should clarify that by `Legendre transform' above I mean (in coordinates) the map $p_i(x, \dot{x}) = \frac{\partial L}{\partial \dot{x}^i}(x, \dot{x})$. This is standard in the literature, but it differs from the classical meaning $H(x, p) = p\dot{x}-L(x, \dot{x})$ (where $p_i = \frac{\partial L}{\partial \dot{x}^i}$). -
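To see these coordinate formulas in action, here is a small symbolic sketch (my own addition; the Lagrangian below is only an assumed example, not one taken from the answers):

```python
# On TQ with coordinates (x, v = xdot), the fiber derivative gives p = dL/dv,
# the pulled-back symplectic form is dp ^ dx = (d^2L/dv^2) dv ^ dx, and the
# energy is E = p*v - L.
import sympy as sp

m = sp.symbols('m', positive=True)
x, v = sp.symbols('x v', real=True)
V = sp.Function('V')

L = m * v**2 / 2 - V(x)        # assumed concrete (hyperregular) Lagrangian
p = sp.diff(L, v)              # Legendre map: p = m*v
omega_coeff = sp.diff(p, v)    # coefficient of dv ^ dx in dp ^ dx: here the constant m
E = sp.simplify(p * v - L)     # energy function: m*v**2/2 + V(x)
print(p, omega_coeff, E)
```

The coefficient $m$ shows explicitly how $\omega_L$ depends on the particular Lagrangian chosen, which is the point made above about $TQ$ not being intrinsically symplectic.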
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 74, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9211860299110413, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?p=4003831
Physics Forums ## The color of an apple (light absorption and emission) From my textbook (explaining why an apple is red): We imagine that the red apple in the picture is illuminated by daylight, i.e. light which contains all the wavelengths of the visible spectrum. The apple is red because the main part of the light it reflects is in the red area of the visible spectrum. This is all good, and is what I expected. My understanding of this is that the atom is excited by a photon of a certain wavelength, and then when it returns to its original energy level it emits a photon of the same wavelength again. This is why it reflects the light. However the next sentence goes on to say: Molecules in the skin of the apple absorb the photons in the blue and the green part of the spectrum. If we look at it with a strictly blue light, the apple will appear black. This threw me off. My understanding is that matter will only interact with photons of certain wavelengths, and the wavelengths an atom can absorb are the exact same which it can emit. Other wavelengths should not interact with it at all? So if the atoms in the skin of the apple absorb the blue light, it should also emit it once the atoms return to their original energy level? It seems to me that the first paragraph is enough to explain the red color of the apple, and I am at a loss as to what they mean by the second one. Is my understanding of this way off? Any and all help appreciated. k Recognitions: Science Advisor As a matter of fact, an apple does not consist of isolated atoms but of molecules which are themselves interacting with each other. While an atom has indeed very sharp absorption lines, molecules absorb in broad frequency ranges (bands). Furthermore, as they have internal degrees of freedom, namely vibrations, the energy absorbed can be transformed into vibrational energy and finally into heat. That's called internal conversion (IC). Hence molecules will re-emit (fluorescence) only a small fraction of the light they absorbed, and even this usually not at the same wavelength, but at longer wavelengths. Quote by kenewbie This is all good, and is what I expected. My understanding of this is that the atom is excited by a photon of a certain wavelength, and then when it returns to its original energy level it emits a photon of the same wavelength again. This is why it reflects the light. No. That's not correct - that does not explain reflection. What you're thinking of is absorption and re-emission of light along the spectral lines of the atoms. This is not what's happening in reflection. I have my own fuzzy theory on reflection - but maybe someone could give the standard and accepted version. Quote by DrDu Furthermore, as they have internal degrees of freedom, namely vibrations, the energy absorbed can be transformed into vibrational energy and finally into heat. Ok, that makes sense. My only question then is if absorption and conversion to heat is an emergent behavior in molecules, or if this happens at the layer of individual atoms as well? Recognitions: Gold Member Science Advisor Quote by kenewbie Ok, that makes sense. My only question then is if absorption and conversion to heat is an emergent behavior in molecules, or if this happens at the layer of individual atoms as well? That is an oxymoron.
Either the atoms are individual or there is a layer of them on the surface of a 'condensed object'. There is a great difference between the interaction of photons with individual atoms and with large numbers of atoms in close proximity. When an isolated atom absorbs a photon, under normal circumstances, it will re-radiate light at the same frequency that it absorbed. But there is a possibility of the decay being in two or more jumps if the appropriate energy levels exist in the atom. (Look at the operation of a Laser, for instance). When an atom or molecule on the surface of an object (dense material) absorbs an optical photon there are many more possible energy levels involved (bands, rather than discrete levels, in fact) and the incident photon's energy can be redistributed in the material of the object in many different ways and a broad range of frequencies will be absorbed, as a consequence. (No line absorption spectrum) So your apple will absorb all sorts of green(ish) and blue(ish) wavelength photons but not the red(ish) ones. Under white light illumination, the apple will look red. But if there is no red in the incident light, the apple will look black(ish). Recognitions: Science Advisor It is quite astonishing that already a handful of atoms forming a molecule is sufficiently complex a system to justify a thermodynamical description. So yes, it is an emergent thermodynamical property of molecules. Just yesterday there was a related question in the chemistry forum. Maybe you want to have a look at the classic article by Bixon and Jortner I cited there: http://www.physicsforums.com/showthread.php?t=621627 Recognitions: Science Advisor Quote by sophiecentaur That is an oxymoron. Either the atoms are individual or there is a layer of them on the surface of a 'condensed object'. There is a great difference between the interaction of photons with individual atoms and with large numbers of atoms in close proximity. Still it came as a big surprise in the 1960's that essentially irreversible behaviour was found already for rather small isolated molecules like benzene in the gas phase. Recognitions: Gold Member Science Advisor Quote by DrDu It is quite astonishing that already a handful of atoms forming a molecule is sufficiently complex a system to justify a thermodynamical description. So yes, it is an emergent thermodynamical property of molecules. Just yesterday there was a related question in the chemistry forum. Maybe you want to have a look at the classic article by Bixon and Jortner I cited there: http://www.physicsforums.com/showthread.php?t=621627 I guess it's just a matter of nCm, which involves Factorials. It's very easy to get large numbers from a molecule with three or more atoms in it and when several of the electrons in each atom are interacting with those in the other atoms. And then there are the vibrational modes etc. The old Hydrogen Atom stuff we start with is just not enough to deal with anything more complicated - but, deep down, we want it to. Recognitions: Science Advisor Quote by sophiecentaur I guess it's just a matter of nCm, ... That's the point. In a molecule like benzene you have already 31 vibrational modes. Given that the electronic excitation energy corresponds to an order of magnitude $\sqrt{M/m}\approx 100$ (with m electron and M nuclear mass) vibrational quanta, the number of vibrational states to decay into is already astronomical. Recognitions: Gold Member Homework Help Hi kenewbie. I think your question is a very good one.
Since no one else has yet replied, I will try to give some explanation as I understand it. You are right that an atom (molecule) that jumps to an excited state by absorbing a photon of a specific frequency could "de-excite" by emitting a photon of the same frequency. But, when atoms are crowded together in a solid or liquid, an excited atom is more likely to pass the energy to its neighboring atoms in the form of motional energy of the atoms (heat) rather than re-emit the energy as a photon. So, the original photon that excited the atom has become absorbed by the material with no light being re-emitted. That's why if you shine only blue light onto an apple, the apple will appear dark. But then why is it that when you shine red light on an apple, the apple reflects the red light? Even though red light does not have the right energy of photons to excite the atoms or molecules to higher energy states, the light nevertheless does interact with the electrons in the molecules and causes the light to be scattered. The reflection of the red light from the apple is this scattered light. The best non-mathematical discussion on this topic that I have ever seen is the article How Light Interacts with Matter by Victor Weisskopf which was published in the book Lasers and Light, Readings From Scientific American 1968. Could be hard to find, but well worth the effort. Quote by sophiecentaur That is an oxymoron. Either the atoms are individual or there is a layer of them on the surface of a 'condensed object'. I had to re-read this a few times but I think I have found that my choice of the word "layer" was poor, as it seems to have a specific meaning. What I meant to ask was if individual atoms sometimes absorb photons without emitting photons, or if this is behavior which only happens in molecules. And you did answer my question, so thanks :) Quote by DrDu It is quite astonishing that already a handful of atoms forming a molecule is sufficiently complex a system to justify a thermodynamical description. So yes, it is an emergent thermodynamical property of molecules. Just yesterday there was a related question in the chemistry forum. Maybe you want to have a look at the classic article by Bixon and Jortner I cited there: http://www.physicsforums.com/showthread.php?t=621627 Thanks, I'll read that. The question now is, of course, HOW can this behavior arise, what is happening which makes this possible. But I'll read that thread and do a little digging on my own, it is far ahead of what I am supposed to be looking at anyway. Thanks again. Recognitions: Homework Help I'd go along with that ... I found: http://www.madsci.org/posts/archives...1368.Ph.r.html ... which ought to be a nice overview but longer. It would be nice if Weisskopf's article were not behind a paywall: it's very commonly cited. BTW: the absorption description of color for opaque matter also explains the commonly observed difference between mixing light and mixing paint ... in paint, if you combine blue and yellow you get green ... but with light, blue+yellow=blue+(red+green)=white. Also how come you get brown when you mix all the colors in paint. Recognitions: Gold Member Science Advisor Quote by kenewbie Thanks, I'll read that. The question now is, of course, HOW can this behavior arise, what is happening which makes this possible. But I'll read that thread and do a little digging on my own, it is far ahead of what I am supposed to be looking at anyway. Thanks again. It's what you get with 'large numbers'.
Combinations within small numbers of items can rapidly lead to large numbers. Just take the game of Poker and see how much theory and lore is associated with picking just five cards out of 52.
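A quick arithmetic illustration of the "large numbers" point (my own addition, not part of the thread):

```python
from math import comb

# Poker: hands of 5 cards out of 52
print(comb(52, 5))                           # 2598960

# Distributing ~100 vibrational quanta among the 3N - 6 = 30 normal modes of a
# 12-atom molecule such as benzene (stars and bars); the post above quotes 31
# modes, but either count gives an astronomical number of states.
quanta, modes = 100, 30
print(comb(quanta + modes - 1, modes - 1))   # roughly 6e28
```

Either way, the number of ways the absorbed energy can be shared out grows far faster than anything a single isolated atom offers.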
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.945841372013092, "perplexity_flag": "middle"}
http://mathhelpforum.com/pre-calculus/38970-distinct-value.html
# Thread: 1. ## distinct value of given $\cos^2 \frac{4k\pi\pm\pi}{12}$ how can I find the three distinct values of it besides putting k = 1, 2, 3, 4, 5, ... to evaluate its value and see whether they are the same or not? 2. Originally Posted by afeasfaerw23231233 given $\cos^2 \frac{4k\pi\pm\pi}{12}$ how can I find the three distinct values of it besides putting k = 1, 2, 3, 4, 5, ... to evaluate its value and see whether they are the same or not? note that $\cos^2 \left( \frac {4k \pi \pm \pi}{12} \right) = \frac {1 + \cos \frac {(4k \pm 1) \pi}6}2$ it should be easier for you to find the values when k = 1, 2, 3, 4, ... now (hope you're good with reference angles)
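A quick numerical check of the three distinct values (my own addition, not part of the thread):

```python
import numpy as np

vals = set()
for k in range(12):
    for sign in (1, -1):
        vals.add(round(np.cos((4 * k + sign) * np.pi / 12) ** 2, 10))
print(sorted(vals))
# -> [0.0669872981, 0.5, 0.9330127019], i.e. (2 - sqrt(3))/4, 1/2 and (2 + sqrt(3))/4
```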
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8667632937431335, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-algebra/212560-positive-definite-symmetric-matrix-has-maximal-elements-diagonal.html
# Thread: 1. ## Positive definite symmetric matrix has maximal elements on diagonal Let A be a positive definite symmetric matrix. Prove that for any column (or row) the maximal element of that list is on the diagonal. I've only been able to prove the weaker statement $a_{i,i}+a_{j,j}>2a_{i,j}$ for i not equal to j. 2. ## Re: Positive definite symmetric matrix has maximal elements on diagonal WLOG (taking $z_k = 0$ for all $k \neq i,j$) $z^T A z = \sum\limits_{kl}z_k A_{kl} z_l = z_j A_{jj} z_j+z_i A_{ii} z_i+2 z_i A_{ij} z_j>0$ So I guess you got up to there and you set $z_j = 1$ and $z_i = -1$. You need to set $z_i$ and $z_j$ equal to values that are functions of $A_{ij}$. You should get cancellations and the desired result.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8798916339874268, "perplexity_flag": "head"}
http://mathematica.stackexchange.com/questions/tagged/list-manipulation+combinatorics
# Tagged Questions 0answers 33 views ### Generating partitions of a set with a specified size of the parts [duplicate] I tried the following (inspired by the answer here) myList = {a, b, c}; Needs["Combinatorica`"]; SetPartitions[myList] and I got this answer, ... 2answers 128 views ### Partition a set into $k$ non-empty subsets The Stirling number of the second kind is the number of ways to partition a set of $n$ objects into $k$ non-empty subsets. In Mathematica, this is implemented as ... 6answers 847 views ### Insert $+$, $-$, $\times$, $/$, $($, $)$ into $123456789$ to make it equal to $100$ Looks like a question for pupils, right? In fact if the available math symbol is limited to $+$, $-$, $\times$, $/$ then it's easy to solve: ... 0answers 194 views ### Generating a function which outputs possible chemical reactions I want to make a list of chemical reactions and I write them down in a $\require{mhchem}\LaTeX$ format. They are of the following form NA_n^i+MB_m^j \rightarrow \hat NA_{\hat n}^{\hat i}+\hat ... 0answers 160 views ### Find all permutations with reversals / cyclic permutations removed I have a list of all non-cyclic permutations of n labels. How can I get rid of all elements which are redundant in the sense that they are the inverse of another one. For instance if n=4, the ... 3answers 267 views ### Generating Linear Extensions of a Partial Order Given a set $S$ and a partial order $\prec$ over $S$, I'm looking for a way to "efficiently" generate a list of linear extensions of $\prec$. Suppose the partial order is given by a ... 2answers 414 views ### Finding all partitions of a set I'm looking for straightforward way to find all the partitions of a set. IntegerPartitions seems to provide a useful start. But then things get a bit complicated. ... 3answers 336 views ### Determining all possible traversals of a tree I have a list: B={423, {{53, {39, 65, 423}}, {66, {67, 81, 423}}, {424, {25, 40, 423}}}}; This list can be visualized as a tree using ... 1answer 201 views ### Finding all length-n words on an alphabet that have a specified number of each letter For example, I might want to generate all length n=6 words on the alphabet {A, B, C} that have one ... 5answers 721 views ### Partition a set into subsets of size $k$ Given a set $\{a_1,a_2,\dots,a_{lk}\}$ and a positive integer $l$, how can I find all the partitions which includes subsets of size $l$ in Mathematica? For instance, given ... 4answers 442 views ### Efficiently Visualising Very Large Data Sets (without running out of memory) I have put a few really hard problems in combinatorics up against Mathematica 8. I'd have to say that it works really well, until you want to view the data. If you look at my question Advanced ... 7answers 809 views ### How to Derive Tuples Without Replacement Given a couple of lists like a={1,2,3,4,6} and b={2,3,4,6,9} I can use the built-in Mathematica symbol ... 3answers 248 views ### Permutations[Range[12]] produces an error instead of a list This input: Permutations[Range[12]] Results in this (error) output: ... 7answers 1k views ### Combination and Permutation How could I obtain the list of all the groups of 5 numbers taken from Range[12] such that the 2 lists have an empty intersection : ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8728111982345581, "perplexity_flag": "middle"}
http://mathematica.stackexchange.com/questions/9017/could-the-precisiongoal-for-ndsolve-be-a-negative-number/9018
# Could the PrecisionGoal for NDSolve be a negative number? The help of Mathematica doesn't say much about the `PrecisionGoal` for `NDSolve`, and I never considered it much even after I met the warning message NDSolve::eerr several times when I was trying to solve a set of PDEs these days: every time I met it I would simply consider it as a proof of a defect in my assumption for the initial or boundary condition, and my resolution strategy for it is to modify the conditions, or sometimes to lower the value of the `PrecisionGoal` as the help says. However, this time I can't ignore it anymore, for I accidentally found that for one of my assumptions, the suitable `PrecisionGoal` for it is a negative integer… The thing named "Precision" in my mind was always a positive integer, but this time it is negative! Why?…… What's the exact effect of the "PrecisionGoal" option for `NDSolve`? - ## 1 Answer If you look at the documentation for `Precision`, it says that if `x` is the value and `dx` the "absolute uncertainty", `Precision[x]` is `-Log[10,dx/x]`. Thus, whenever the estimated error is larger than the value, Mathematica will give a negative precision. Thus, for an estimated error $dx$ and a value $x$ such that $dx/x<1$ here is how the precision as defined above looks: You see that it is really the number of digits in the ratio of the error to the value. If the error $dx>x$ then the log is negative: `Accuracy` is probably closer to what you have in mind: it's basically the number of effective digits in the number, as it is `-Log[10,dx]`. If $dx>1$, it can be negative, indicating that a number of digits to the left of the decimal point is incorrect (and of course, it can be non-integer just like the precision). If you really want the number of digits, take the integer part (or round it). About the last bit of the question, i.e., "what does the `PrecisionGoal` option for `NDSolve` do?", I think the above plus the explanation in the docs covers it: it specifies the precision (as defined above) to be sought in the solution. - – xzczd Aug 2 '12 at 12:50 It's mentioned under `More Information` (click on the yellow box to open it). About the log, if you look at the plots, you'll notice that it reduces to exactly what you'd expect if I fix eg `dx/x` to be exactly 0.01 (ie, it's 2 then). About the "best place" thing, I meant that this is really a general question about putting things in dimensionless form; I talked a little about this here. But in general I guess some sort of math site would be better for that (it's not a Mathematica question really) – acl Aug 2 '12 at 13:00 – xzczd Aug 2 '12 at 13:57 1 No, if they are in SI units they're not dimensionless. – acl Aug 2 '12 at 14:07 No, they're certainly dimensionless as every formula has the same unit, it's then equivalent to dimensionless, for example, `Solve[(s == 2 v) /. s -> 6, {v}]`, now I know that s has the unit m, 2 has the unit s and v has the unit m/s, so after I set s as 6 and solve the equation, though I only get v->3 I will still know the unit of 3 is m/s. – xzczd Aug 2 '12 at 14:22
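To make the two definitions quoted above concrete, here is a tiny numeric illustration (my own addition, written in Python rather than Mathematica):

```python
from math import log10

def precision(x, dx):
    # number of significant digits: -log10(dx/|x|)
    return -log10(dx / abs(x))

def accuracy(dx):
    # number of correct digits after the decimal point: -log10(dx)
    return -log10(dx)

print(precision(3.14, 0.0314), accuracy(0.0314))   # 2.0 and ~1.5: two significant digits
print(precision(3.14, 10.0), accuracy(10.0))       # both negative: the error exceeds the value
```

By these definitions, a negative `PrecisionGoal` simply corresponds to tolerating a relative error larger than 1, which is unusual but not inconsistent.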
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9503888487815857, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/108899/how-close-are-these-events
# How close are these events? I'm a computer programmer and we're running into a weird error on our website. We have a large number of users who do a certain task. Between the entire set of users, this task happens about once every 20 seconds, but of course each user is unique. We're seeing a strange problem about once every four hours that may be caused by users doing the task at the exact same time as another. I tried to find this answer on my own, and got as far as the Poisson distribution, but I haven't dealt with statistics beyond basic Poker strategy since my stats class in college 15 years ago. So, my question: Given a large number of independent events that happen every 20 seconds, what is the exact level of unusual closeness that I would expect to see every four hours? - ## 1 Answer If we assume the events are random with a rate of 1 every 20 seconds, in a time period $t$ you expect $t/20$ events. Following the Wikipedia article, this is $\lambda$. In $4$ hours you have $\frac{4\cdot 3600}{t}=\frac{14400}{t}$ tries. The probability of exactly $2$ events in time $t$ is $\frac{\lambda^ke^{-\lambda}}{k!}$ with $k=2$, i.e. $\frac{t^2\exp(-\frac t{20})}{400\cdot 2}$. We want $\frac{t^2\exp(-\frac t{20})}{400\cdot 2}=\frac t{14400}.$ Numerically they are equal at $t\approx 0.0557$, so every four hours you would expect two hits within about 56 msec. Of course, this will vary, but it gives the order of magnitude. -
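A numerical check of the calculation above (my own addition, not part of the original answer):

```python
import numpy as np
from scipy.optimize import brentq

# Solve t^2 exp(-t/20) / 800 = t / 14400 for the nonzero root t.
f = lambda t: t**2 * np.exp(-t / 20) / 800 - t / 14400
t_star = brentq(f, 1e-6, 1.0)
print(t_star)                       # ~0.0557 s, i.e. about 56 ms

# Simulate Poisson arrivals at rate 1/20 per second over 4-hour windows and count
# consecutive arrivals closer than t_star.
rng = np.random.default_rng(0)
runs, close = 200, 0
for _ in range(runs):
    times = np.cumsum(rng.exponential(20, size=2000))
    times = times[times < 4 * 3600]
    close += np.sum(np.diff(times) < t_star)
print(close / runs)                 # about 2 near-coincidences per 4-hour window
```

The simulated count comes out somewhat higher than one per window, which is consistent with the answer's caveat that the calculation only pins down the order of magnitude.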
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9609788060188293, "perplexity_flag": "head"}
http://stats.stackexchange.com/questions/11009/including-the-interaction-but-not-the-main-effects-in-a-model/11022
# Including the interaction but not the main effects in a model Is it ever valid to include a two-way interaction in a model without including the main effects? What if your hypothesis is only about the interaction, do you still need to include the main effects? - 1 My philosophy is run lots of models, check their predictions, compare, explain, run more models. – Michael Bishop May 20 '11 at 3:43 6 If the interactions are only significant when the main effects are in the model, it may be that the main effects are significant and the interactions not. Consider one highly significant main effect with variance on the order of 100 and another insignificant main effect for which all values are approximately one with very low variance. Their interaction is not significant, but the interaction effect will appear to be significant if the main effects are removed from the model. – Thomas Levine May 20 '11 at 16:26 @Thomas should your first line read 'if the interactions are only significant when the main effects are NOT in the model, ...'? – Glen May 20 '11 at 16:46 Oh yes, it should! – Thomas Levine May 22 '11 at 21:55 ## 11 Answers In my experience, not only is it necessary to have all lower order effects in the model when they are connected to higher order effects, but it is also important to properly model (e.g., allowing to be nonlinear) main effects that are seemingly unrelated to the factors in the interactions of interest. That's because interactions between x1 and x2 can be stand-ins for main effects of x3 and x4. Interactions sometimes seem to be needed because they are collinear with omitted variables or omitted nonlinear (e.g., spline) terms. - This means that we should start deleting the terms from y ~ x1 * x2 * x3 * x4, starting deleting the highest-order terms, i.e. the normal deletion method, right? – Tomas Oct 25 '12 at 8:11 1 Deletion of terms is not recommended unless you can test entire classes of terms as a "chunk". For example it may be reasonable to either keep or delete all interaction terms, or to keep or delete all interactions that are 3rd or 4th order. – Frank Harrell Oct 25 '12 at 19:13 You ask whether it's ever valid. Let me provide a common example, whose elucidation may suggest additional analytical approaches for you. The simplest example of an interaction is a model with one dependent variable $Z$ and two independent variables $X$, $Y$ in the form $$Z = \alpha + \beta' X + \gamma' Y + \delta' X Y + \varepsilon,$$ with $\varepsilon$ a random term variable having zero expectation, and using parameters $\alpha, \beta', \gamma',$ and $\delta'$. It's often worthwhile checking whether $\delta'$ approximates $\beta' \gamma'$, because an algebraically equivalent expression of the same model is $$Z = \alpha \left(1 + \beta X + \gamma Y + \delta X Y \right) + \varepsilon$$ $$= \alpha \left(1 + \beta X \right) \left(1 + \gamma Y \right) + \alpha \left( \delta - \beta \gamma \right) X Y + \varepsilon$$ (where $\beta' = \alpha \beta$, etc). Whence, if there's a reason to suppose $\left( \delta - \beta \gamma \right) \sim 0$, we can absorb it in the error term $\varepsilon$. Not only does this give a "pure interaction", it does so without a constant term. This in turn strongly suggests taking logarithms. Some heteroscedasticity in the residuals--that is, a tendency for residuals associated with larger values of $Z$ to be larger in absolute value than average--would also point in this direction. 
We would then want to explore an alternative formulation $$\log(Z) = \log(\alpha) + \log(1 + \beta X) + \log(1 + \gamma Y) + \tau$$ with iid random error $\tau$. Furthermore, if we expect $\beta X$ and $\gamma Y$ to be large compared to $1$, we would instead just propose the model $$\log(Z) = \left(\log(\alpha) + \log(\beta) + \log(\gamma)\right) + \log(X) + \log(Y) + \tau$$ $$= \eta + \log(X) + \log(Y) + \tau.$$ This new model has just a single parameter $\eta$ instead of four parameters ($\alpha$, $\beta'$, etc.) subject to a quadratic relation ($\delta' = \beta' \gamma'$), a considerable simplification. I am not saying that this is a necessary or even the only step to take, but I am suggesting that this kind of algebraic rearrangement of the model is usually worth considering whenever interactions alone appear to be significant. Some excellent ways to explore models with interaction, especially with just two and three independent variables, appear in chapters 10 - 13 of Tukey's EDA. - While it is often stated in textbooks that one should never include an interaction in a model without the corresponding main effects, there are certainly examples where this would make perfect sense. I'll give you the simplest example I can imagine. Suppose subjects randomly assigned to two groups are measured twice, once at baseline (i.e., right after the randomization) and once after group T received some kind of treatment, while group C did not. Then a repeated-measures model for these data would include a main effect for measurement occasion (a dummy variable that is 0 for baseline and 1 for the follow-up) and an interaction term between the group dummy (0 for C, 1 for T) and the time dummy. The model intercept then estimates the average score of the subjects at baseline (regardless of the group they are in). The coefficient for the measurement occasion dummy indicates the change in the control group between baseline and the follow-up. And the coefficient for the interaction term indicates how much bigger/smaller the change was in the treatment group compared to the control group. Here, it is not necessary to include the main effect for group, because at baseline, the groups are equivalent by definition due to the randomization. One could of course argue that the main effect for group should still be included, so that, in case the randomization failed, this will be revealed by the analysis. However, that is equivalent to testing the baseline means of the two groups against each other. And there are plenty of people who frown upon testing for baseline differences in randomized studies (of course, there are also plenty who find it useful, but this is another issue). - 1 Problems arise when the time zero (baseline) measurement is used as a first response variable. The baseline is often used as an entry criterion for the study. For example, a study might enroll patients with systolic blood pressure (bp) > 140, then randomize to 2 bp treatments and follow the bps. Initially, bp has a truncated distribution and the later measurements will be more symmetric. It is messy to model 2 distributional shapes in the same model. There are many more reasons to treat the baseline as a baseline covariate. – Frank Harrell May 22 '11 at 14:39 1 That's a good point, but recent studies suggest that this is not an issue. In fact, it seems that there are more disadvantages to using baseline scores as a covariate. See: Liu, G. F., et al. (2009). 
Should baseline be a covariate or dependent variable in analyses of change from baseline in clinical trials? Statistics in Medicine, 28, 2509-2530. – Wolfgang May 22 '11 at 18:33 1 – Frank Harrell May 22 '11 at 19:19 1 Thanks for the link. I assume you are referring to the discussion under 8.2.3. Those are some interesting points, but I don't think this gives a definite answer. I am sure that the paper by Liu et al. isn't the ultimate answer either, but it does suggest for example that non-normality of the baseline values is not a crucial issue. Maybe this is something for a separate discussion item, as it does not directly relate to the OP's question. – Wolfgang May 22 '11 at 22:07 1 Yes, it depends on the amount of non-normality. Why depend on good fortune when formulating a model? There are also many purely philosophical reasons to treat time zero measurements as baseline measurements (see quotes from Senn and Rochon in my notes). – Frank Harrell May 24 '11 at 20:31 The reason to keep the main effects in the model is for identifiability. Hence, if the purpose is statistical inference about each of the effects, you should keep the main effects in the model. However, if your modeling purpose is solely to predict new values, then it is perfectly legitimate to include only the interaction if that improves predictive accuracy. - 3 Can you please be a little bit more explicit about the identifiability problem? – ocram May 20 '11 at 4:58 2 I don't believe that a model omitting main effects is necessarily unidentified. Perhaps you mean "interpretability" rather than "identifiability" (which is a technical term with a precise definition) – JMS May 21 '11 at 18:40 2 @JMS: Yes, it kills interpretability. However, the term "identifiability" is used differently by statisticians and by social scientists. I meant the latter, where (loosely speaking) you want to identify each statistical parameter with a particular construct. By dropping the main effect you no longer can match construct to parameter. – Galit Shmueli Jul 18 '11 at 2:09 This is implicit in many of the answers others have given, but the simple point is that models w/ a product term but w/ & w/o the moderator & predictor are just different models. Figure out what each means given the process you are modeling and whether a model w/o the moderator & predictor makes more sense given your theory or hypothesis. The observation that the product term is significant but only when moderator & predictor are not included doesn't tell you anything (except maybe that you are fishing around for "significance") w/o a cogent explanation of why it makes sense to leave them out. - Arguably, it depends on what you're using your model for. But I've never seen a reason not to run and describe models with main effects, even in cases where the hypothesis is only about the interaction. - What if the interaction is only significant when the main effects are not in the model? – Glen May 20 '11 at 13:25 – Michael Bishop May 21 '11 at 15:28 I would suggest it is simply a special case of model uncertainty. From a Bayesian perspective, you simply treat this in exactly the same way you would treat any other kind of uncertainty, by either: 1. Calculating its probability, if it is the object of interest 2. Integrating or averaging it out, if it is not of interest, but may still affect your conclusions This is exactly what people do when testing for "significant effects" by using t-quantiles instead of normal quantiles. 
Because you have uncertainty about the "true noise level" you take this into account by using a more spread out distribution in testing. So from your perspective the "main effect" is actually a "nuisance parameter" in relation to the question that you are asking. So you simply average out the two cases (or more generally, over the models you are considering). So I would have the (vague) hypothesis: $$H_{\text{int}}:\text{The interaction between A and B is significant}$$ I would say that although not precisely defined, this is the question you want to answer here. And note that it is not only the verbal statement above which "defines" the hypothesis, but the mathematical equations below as well. We have some data $D$ and prior information $I$; then we simply calculate: $$P(H_{\text{int}}|DI)=P(H_{\text{int}}|I)\frac{P(D|H_{\text{int}}I)}{P(D|I)}$$ (small note: no matter how many times I write out this equation, it always helps me understand the problem better. weird). The main quantity to calculate is the likelihood $P(D|H_{\text{int}}I)$; this makes no reference to the model, so the model must have been removed using the law of total probability: $$P(D|H_{\text{int}}I)=\sum_{m=1}^{N_{M}}P(DM_{m}|H_{\text{int}}I)=\sum_{m=1}^{N_{M}}P(M_{m}|H_{\text{int}}I)P(D|M_{m}H_{\text{int}}I)$$ where $M_{m}$ indexes the mth model, and $N_{M}$ is the number of models being considered. The first term is the "model weight" which says how much the data and prior information support the mth model. The second term indicates how much the mth model supports the hypothesis. Plugging this equation back into the original Bayes theorem gives: $$P(H_{\text{int}}|DI)=\frac{P(H_{\text{int}}|I)}{P(D|I)}\sum_{m=1}^{N_{M}}P(M_{m}|H_{\text{int}}I)P(D|M_{m}H_{\text{int}}I)$$ $$=\frac{1}{P(D|I)}\sum_{m=1}^{N_{M}}P(DM_{m}|I)\frac{P(M_{m}H_{\text{int}}D|I)}{P(DM_{m}|I)}=\sum_{m=1}^{N_{M}}P(M_{m}|DI)P(H_{\text{int}}|DM_{m}I)$$ And you can see from this that $P(H_{\text{int}}|DM_{m}I)$ is the "conditional conclusion" of the hypothesis under the mth model (this is usually all that is considered, for a chosen "best" model). Note that this standard analysis is justified whenever $P(M_{m}|DI)\approx 1$ - an "obviously best" model - or whenever $P(H_{\text{int}}|DM_{j}I)\approx P(H_{\text{int}}|DM_{k}I)$ - all models give the same/similar conclusions. However, if neither is met, then Bayes' Theorem says the best procedure is to average out the results, placing higher weights on the models which are most supported by the data and prior information. - Both x and y will be correlated with xy (unless you have taken a specific measure to prevent this by using centering). Thus if you obtain a substantial interaction effect with your approach, it will likely amount to one or more main effects masquerading as an interaction. This is not going to produce clear, interpretable results. What is desirable is instead to see how much the interaction can explain over and above what the main effects do, by including x, y, and (preferably in a subsequent step) xy. As to terminology: yes, $\beta_0$ is called the "constant." On the other hand, "partial" has specific meanings in regression and so I wouldn't use that term to describe your strategy here. Some interesting examples that will arise once in a blue moon are described in this thread. - This one is tricky and happened to me in my last project. I would explain it this way: let's say you had variables A and B which came out significant independently, and from a business sense you thought that an interaction of A and B seemed good. 
You included the interaction which came out to be significant but B lost its significance. You would explain your model initially by showing two results. The results would show that initially B was significant but when seen in light of A it lost its sheen. So B is a good variable but only when seen in light of various levels of A (if A is a categorical variable). It's like saying Obama is a good leader when seen in the light of his SEAL army, so Obama*SEAL will be a significant variable. But Obama when seen alone might not be as important (no offense to Obama, just an example). - Here it is kind of the opposite. The interaction (of interest) is only significant when the main effects are not in the model. – Glen May 20 '11 at 13:27 It is very rarely a good idea to include an interaction term without the main effects involved in it. David Rindskopf of CCNY has written some papers about those rare instances. - I will borrow a paragraph from the book An introduction to survival analysis using Stata by M. Cleves, R. Gutierrez, W. Gould, Y. Marchenko, published by Stata Press, to answer your question. It is common to read that interaction effects should be included in the model only when the corresponding main effects are also included, but there is nothing wrong with including interaction effects by themselves. [...] The goal of a researcher is to parametrize what is reasonably likely to be true for the data considering the problem at hand and not merely following a prescription. - Absolutely terrible advice. – Frank Harrell Jan 10 '12 at 13:48 1 @Frank, would you mind expanding on your comment? On the face of it, "parameterize what is reasonably likely to be true for the data" makes a lot of sense. – whuber♦ Jan 10 '12 at 13:57 1 – Frank Harrell Jan 10 '12 at 14:01 @Frank: Thanks, I found it :-). It is now part of this thread. – whuber♦ Jan 10 '12 at 14:04
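Several answers and comments above make the same practical point: with uncentered predictors, a product term is strongly correlated with the omitted main effects, so an "interaction-only" fit can look highly significant even when no interaction exists. The following is a minimal simulation of that point; it is not from the thread, the data-generating process and numbers are invented purely for illustration, and plain numpy least squares is used instead of a regression package.

```python
# Illustrative only: two real main effects, NO true interaction in the data.
import numpy as np

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(2.0, 1.0, n)   # deliberately not centered
x2 = rng.normal(3.0, 1.0, n)
z = 1.0 + 2.0 * x1 + 3.0 * x2 + rng.normal(0.0, 1.0, n)   # truth has no x1*x2 term

def t_stats(X, y):
    """OLS coefficients divided by their classical standard errors."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (X.shape[0] - X.shape[1])
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return beta / se

ones = np.ones(n)
full = np.column_stack([ones, x1, x2, x1 * x2])    # main effects + interaction
inter_only = np.column_stack([ones, x1 * x2])      # interaction without main effects

print("t for x1*x2 in the full model:        %6.2f" % t_stats(full, z)[3])
print("t for x1*x2 in the interaction-only:  %6.2f" % t_stats(inter_only, z)[1])
```

In the full model the product term is (correctly) indistinguishable from zero, while in the interaction-only fit it absorbs the omitted main effects and appears wildly significant, which is exactly the masquerading described above.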
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 28, "mathjax_display_tex": 11, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9482736587524414, "perplexity_flag": "middle"}
http://www.cfd-online.com/W/index.php?title=Reynolds_number&direction=next&oldid=13668
# Reynolds number The Reynolds number characterises the relative importance of inertial and viscous forces in a flow. It is important in determining the state of the flow, whether it is laminar or turbulent. At high Reynolds numbers, flows generally tend to be turbulent, which was first recognized by Osborne Reynolds in his famous pipe flow experiments. Consider the momentum equation given below: $\frac{\partial}{\partial t}\left( \rho u_i \right) + \frac{\partial}{\partial x_j} \left[ \rho u_i u_j + p \delta_{ij} \right] = \frac{\partial}{\partial x_j} \tau_{ij}$ The terms on the left-hand side are the inertial (and pressure) forces and the term on the right-hand side corresponds to the viscous forces. If $U$, $L$, $\rho$ and $\mu$ are the reference values for velocity, length, density and dynamic viscosity, then inertial force ~ $\frac{\rho U^2}{L}$ and viscous force ~ $\frac{\mu U}{L^2}$ Their ratio is the Reynolds number, usually denoted as $Re$ $Re = \frac{\mbox{inertial force}}{\mbox{viscous force}} = \frac{\rho U L}{\mu}$ In terms of the kinematic viscosity $\nu = \frac{\mu}{\rho}$ the Reynolds number is given by $Re = \frac{U L}{\nu}$ ## Reynolds number as a ratio of time scales Consider an impulsively started flat plate moving in its own plane with velocity $U$. Due to the no-slip condition on the plate a boundary layer gradually develops on the plate. At time $t$, the thickness of the boundary layer is of the order of $\sqrt{\nu t}$ (see Batchelor (1967), section 4.3). Let $L$ be the characteristic length scale. The times taken for viscous and convective effects to travel a distance $L$ are $T_{v} = \frac{L^2}{\nu}$ and $T_{c} = \frac{L}{U}$ respectively. The ratio of viscous to convective time scales is $\frac{ T_{v} }{ T_{c} } = \frac{(L^2/\nu)}{(L/U)} = \frac{UL}{\nu} = Re$ Thus the Reynolds number is a measure of the ratio of the viscous to the convective time scale. A large Reynolds number means that viscous effects propagate slowly into the fluid. This is the reason why boundary layers are thin in high Reynolds number flows: the fluid is convected along the flow direction at a much faster rate than the boundary layer spreads in the direction normal to the flow. ## References • Batchelor, G K (1967), An Introduction to Fluid Dynamics, Cambridge University Press. • Rott, N (1990), Note on the history of the Reynolds number, Annual Review of Fluid Mechanics, Vol. 22, 1990, pp. 1–11.
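As a quick numerical aside (not part of the original article), the force-ratio and time-scale-ratio definitions above give the same number; the fluid properties below are illustrative values for water flowing past a plate of length 1 m.

```python
# Minimal sketch: Reynolds number as a force ratio and as a time-scale ratio.
rho = 1000.0     # density of water, kg/m^3 (illustrative)
mu = 1.0e-3      # dynamic viscosity, Pa*s (illustrative)
U = 2.0          # reference velocity, m/s
L = 1.0          # reference length, m

nu = mu / rho                  # kinematic viscosity
Re_force = rho * U * L / mu    # inertial force / viscous force
T_v = L**2 / nu                # viscous time scale
T_c = L / U                    # convective time scale

print(Re_force, T_v / T_c)     # both print 2000000.0, well into the turbulent regime
```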
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 19, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9208095073699951, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/47224/what-does-a-sphere-moving-close-to-the-speed-of-light-look-like/47245
What does a sphere moving close to the speed of light look like? What shape does the viewer in a reference frame with $v=0$ perceive? I suppose that since the sphere moves in one direction only (oX only, not oY) its section would change into an ellipse, where the horizontal diameter would be shorter. However, my textbook says that the viewer still perceives a regular spherical shape. How come? - 2 Answers This is just a footnote to Crazy Buddy's answer (which is correct! :-): Length contraction is a real phenomenon, and indeed the RHIC observes this every day because the nuclei are moving so fast that the collision is between two disks, not two spheres. However to see something you need to have light emitted from the object reach your eye, and the light from different parts of the moving sphere takes different times to reach your eye. This distorts the image of the contracted object and has the apparently paradoxical effect of making it look spherical even though it is contracted. So the moving sphere looks spherical even though it isn't spherical. The calculation of how light from the object reaches your eye is quite involved, and I'm afraid I don't know of a simple analogy to understand it. There are various animations showing this effect on the web. See for example this one. - Does the sphere look exactly or approximately spherical? – leongz Dec 20 '12 at 7:54 @leongz: Whatever you use to measure (anyway, you'd use your eyes or a telescope or a camera, or something like that, since you're an observer), you'd find the sphere still in its original outline. So, it is exactly the same. The spherical approximation should be made because all instruments have some least count, + or - something, or up to some % --- :-) – Ϛѓăʑɏ βµԂԃϔ Dec 20 '12 at 8:00 – John Rennie Dec 20 '12 at 8:03 Thanks everyone. Sorry I couldn't check your answers earlier. I guess a simple answer like this makes up perfectly for now. – menislici Dec 21 '12 at 15:37 The sphere is contracted along the horizontal axis and, as measured, becomes an ellipsoid. This is what length contraction says, and it holds only when we take Einstein's simultaneity into account. But the stationary observer would always see the sphere appearing as a sphere, i.e. the circular outline would still be there at any velocity relative to the observer. This is because, as the sphere moves near the speed of light, wave-fronts emitted from parts of it that would not normally show up (when we stare at a static sphere) reach the observer at that instant, while the wave-front from the other end of its static appearance passes out of view. Thus, the observer would see the sphere being rotated as it moves past him. - 1 +1, though I couldn't resist adding a footnote! – John Rennie Dec 20 '12 at 7:46
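As a rough numerical aside (my addition, not from the answers above): the sphere's measured thickness along the direction of motion shrinks by $1/\gamma$, even though, as explained above, a photograph of it still shows a circular outline.

```python
# Illustrative only: Lorentz factor and contracted thickness of a unit-radius sphere.
import math

R = 1.0                          # rest-frame radius (arbitrary units)
for beta in (0.5, 0.9, 0.99):    # speed as a fraction of c
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    print(f"v = {beta:.2f}c: gamma = {gamma:5.2f}, thickness along motion = {2*R/gamma:.3f} (rest value 2.000)")
```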
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9593353271484375, "perplexity_flag": "middle"}
http://quant.stackexchange.com/questions/7002/derive-a-short-rate-model-from-hjm/7005
# Derive a short rate model from HJM Suppose we are assuming the HJM framework. My question is whether it is possible to derive, for different choices of the volatility function $\sigma$ (and hence of the drift function), the most common short rate models, i.e. Vasicek, CIR, Dothan, Ho-Lee, Hull-White. I know that if we choose the volatility function $\sigma$ as constant then we would end up with the Ho-Lee model. I guess that it is possible for exogenous models, but it could be rather hard for endogenous ones such as Vasicek and CIR, for example. However, I'm particularly interested in finding a condition on $\sigma$ such that we can derive the Vasicek model from the HJM framework. - ## 1 Answer It looks as if you are actually asking the following: given a short rate model, what does the HJM volatility function look like. If your short rate model has an analytic bond price formula (many do have this, because this makes them "practical") then you get the instantaneous forward rate from the bonds and via Ito the HJM process and the HJM volatility. Examples for HJM volatility functions $\sigma(t,T)$: • Hull-White Model with mean reversion parameter a and volatility $\sigma_{r}$: Set $\sigma(t,T) = \sigma_{r} \exp(-a (T-t))$ • Vasicek Model: Same as Hull-White Model (the difference of the two is the initial data of the forward rate curve, not the volatility). • Ho-Lee Model with volatility $\sigma_{r}$: Set $\sigma(t,T) = \sigma_{r} = \text{constant}$ (Ho-Lee is Hull-White with a=0). For the derivation of the short rate model from the HJM model see Chapter 24 in http://www.amazon.com/Mathematical-Finance-Theory-Modeling-Implementation/dp/0470047224 (you can preview the page with Amazon LOOK INSIDE). - Thanks for your answer. I had a look at the chapter. I just skimmed through, but as far as I see, you do exactly these two approaches ($\sigma$ constant and $\sigma(t,T)=\sigma\exp(-a(T-t))$). However, I do not see how you should choose $\sigma(t,T)$ in general to obtain the right short rate dynamics. At least for me, it is not an obvious choice for $\sigma(t,T)$ to get the Hull-White model. Assuming HJM, I can write down the dynamics of the short rate. But in this formula just $\sigma(t,t)$ appears. And it seems rather difficult to get from $\sigma(t,t)$ to $\sigma(t,T)$, or am I wrong? – hulik Jan 16 at 8:16 1 The Chapter in the book looks at "Given an HJM volatility function, what does the short rate process look like". You can always do this, but in general you will obtain a path dependent drift for the short rate and hence the model is no longer Markovian. If "short rate model" is short for "Markovian short rate model", we wouldn't call this a short rate model. - As I mentioned in my answer, you are looking at the reverse: Given a short rate model, what is the HJM process. This can be calculated as described via Ito's lemma. – Christian Fries Jan 16 at 11:59 That's some cool use of MathJax Christian :) Do you know if it's any good to use in Google Docs vs the Google Docs equation editor? – Nikos Jan 19 at 22:14
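For concreteness, here is a small sketch (my own illustration, not part of the answer) of the volatility functions listed above, together with the one-factor HJM no-arbitrage drift $\sigma(t,T)\int_t^T \sigma(t,u)\,du$, which has a closed form for the Hull-White/Vasicek choice; the parameter values are assumed.

```python
# Sketch of the HJM volatility functions named in the answer (illustrative parameters).
import math

def sigma_hull_white(t, T, sigma_r=0.01, a=0.1):
    """HJM volatility for Hull-White (Vasicek differs only in the initial forward curve)."""
    return sigma_r * math.exp(-a * (T - t))

def sigma_ho_lee(t, T, sigma_r=0.01):
    """HJM volatility for Ho-Lee: constant, i.e. Hull-White with a = 0."""
    return sigma_r

def hjm_drift_hull_white(t, T, sigma_r=0.01, a=0.1):
    """Risk-neutral HJM drift sigma(t,T) * int_t^T sigma(t,u) du for the Hull-White choice."""
    B = (1.0 - math.exp(-a * (T - t))) / a      # closed form of int_t^T exp(-a(u-t)) du
    return sigma_hull_white(t, T, sigma_r, a) * sigma_r * B

print(sigma_hull_white(0.0, 5.0), sigma_ho_lee(0.0, 5.0), hjm_drift_hull_white(0.0, 5.0))
```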
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9325198531150818, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/139936/paradox-of-general-comprehension-in-set-theory-other-than-russells-paradox
# Paradox of General Comprehension in Set Theory, other than Russell's Paradox As is well known, the General Set Comprehension Principle (any class is a set) leads to the Russell Paradox (the class $x \notin x$ cannot be a set). As a result, set theories must restrict the Comprehension Principle to avoid self-reference. For example, in the case of ZFC, this is done by enumerating a small list of "safe" comprehension schemata such as Separation. Does General Comprehension lead to other types of paradox than Russell's? Or is this basically the only thing that can go wrong? - 2 – Qiaochu Yuan May 2 '12 at 15:32 – Arturo Magidin May 2 '12 at 15:35 1 @Arturo, isn't that essentially the same paradox? If the class of singletons is a set, then the class of all sets is a set, then the class of all $x \not\in x$ is a set. – David Harris May 2 '12 at 15:42 @David: Of course, you can deduce many different contradictions from this problem; the proof I linked to uses the result that there can be no one-to-one function from $\mathcal{P}(X)$ to $X$, rather than going through Russell. – Arturo Magidin May 2 '12 at 15:51 ## 2 Answers Early attempts to repair Russell's paradox tried simple patches, like forbidding the predicate $x\notin x$. But there are infinite families of predicates that all cause essentially the same problem. For example, let $P(x)$ be the predicate $\lnot\exists y. x\in y \wedge y\in x$. Then there is no set of all $x$ such that $P(x)$ holds. I think there is one of these for any cyclic directed graph; the original Russell predicate $x\notin x$ corresponds to the graph with one vertex and one directed edge. - There is a magnificent construction of models of ZF without regularity which essentially says that almost any extensional relationship can be embedded into $\in$ of some model of the theory. – Asaf Karagila May 2 '12 at 16:08 Curry's paradox is somewhat different. It considers the set $X_Y = \{ x : x\in x \implies Y \}$. One can show that if this set exists, then $Y$ is true. (See the Wikipedia article for the simple proof.) So if your theory allows the $X_Y$ to exist for all $Y$, then all $Y$ are true and the theory is inconsistent.
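For completeness, the argument behind Curry's paradox is short enough to sketch here (this sketch is my addition, not part of the original answer). Suppose unrestricted comprehension gives us $X_Y = \{ x : x\in x \implies Y \}$. Assume $X_Y \in X_Y$; then by the defining property, $X_Y \in X_Y \implies Y$, and combining the two gives $Y$. Since this derivation assumed only $X_Y \in X_Y$, we have shown $X_Y \in X_Y \implies Y$ outright; but that is precisely the membership condition for $X_Y$, so $X_Y \in X_Y$, and therefore $Y$. As $Y$ was arbitrary, every statement becomes provable.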
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9232891798019409, "perplexity_flag": "head"}
http://en.wikipedia.org/wiki/Boltzmann_equation
# Boltzmann equation For other uses, see Boltzmann's entropy formula, Stefan–Boltzmann law, and Maxwell–Boltzmann distribution. In physics, specifically non-equilibrium statistical mechanics, the Boltzmann equation or Boltzmann transport equation describes the statistical behaviour of a fluid not in thermodynamic equilibrium, i.e. when there are temperature gradients in space causing heat to flow from hotter regions to colder ones, by the random (and biased) transport of particles. It was devised by Ludwig Boltzmann in 1872.[1] The equation arises not by statistical analysis of all the individual positions and momenta of each particle in the fluid; rather by considering the probability that a number of particles all occupy a very small region of space (mathematically written d3r, where d means "differential", a very small change) centered at the tip of the position vector r, and have very nearly equal small changes in momenta from a momentum vector p, at an instant of time. The Boltzmann equation can be used to determine how physical quantities change, such as heat energy and momentum, when a fluid is in transport, and other properties characteristic of fluids, such as viscosity, thermal conductivity, and also electrical conductivity (by treating the charge carriers in a material as a gas), can be derived.[1] See also convection-diffusion equation. The equation is a nonlinear integro-differential equation; the unknown function in the equation is a probability density function in the six-dimensional space of particle position and momentum. The problem of existence and uniqueness of solutions is still not fully resolved, but some recent results are quite promising.[2][3] ## Overview ### The phase space and density function The set of all possible positions r and momenta p is called the phase space of the system; in other words a set of three coordinates for each position coordinate x, y, z, and three more for each momentum component px, py, pz. The entire space is 6-dimensional: a point in this space is (r, p) = (x, y, z, px, py, pz), and each coordinate is parameterized by time t. The small volume ("differential volume element") is written d3rd3p = dxdydzdpxdpydpz. Since the probability of N molecules which all have r and p within d3rd3p is in question, at the heart of the equation is a quantity f which gives this probability per unit phase-space volume, or probability per unit length cubed per unit momentum cubed, at an instant of time t. This is a probability density function: f(r, p, t), defined so that, $dN = f (\mathbf{r},\mathbf{p},t)\,d^3\mathbf{r}\,d^3\mathbf{p}$ is the number of molecules which all have positions lying within a volume element d3r about r and momenta lying within a momentum space element d3p about p, at time t.[4] Integrating over a region of position space and momentum space gives the total number of particles which have positions and momenta in that region: $N = \int\limits_\mathrm{positions} d^3\mathbf{r} \int\limits_\mathrm{momenta} d^3\mathbf{p} f (\mathbf{r},\mathbf{p},t) = \iiint\limits_\mathrm{positions} \quad \iiint\limits_\mathrm{momenta} f (x,y,z,p_x,p_y,p_z,t) dxdydz dp_xdp_ydp_z$ which is a 6-fold integral. While f is associated with a number of particles, the phase space is for one-particle (not all of them, which is usually the case with deterministic many-body systems), since only one r and p is in question. It is not part of the analysis to use r1, p1 for particle 1, r2, p2 for particle 2, etc. up to rN, pN for particle N. 
It is assumed the particles in the system are identical (so each has an identical mass m). For a mixture of more than one chemical species, one distribution is needed for each, see below. ### Principal statement The general equation can then be written:[5] $\frac{\partial f}{\partial t} = \left(\frac{\partial f}{\partial t}\right)_\mathrm{force} + \left(\frac{\partial f}{\partial t}\right)_\mathrm{diff}+ \left(\frac{\partial f}{\partial t}\right)_\mathrm{coll}$ where the "force" term corresponds to the forces exerted on the particles by an external influence (not by the particles themselves), the "diff" term represents the diffusion of particles, and "coll" is the collision term - accounting for the forces acting between particles in collisions. Expressions for each term on the right side are provided below.[5] Note that some authors use the particle velocity v instead of momentum p; they are related in the definition of momentum by p = mv. ## The force and diffusion terms Consider particles described by f, each experiencing an external force F not due to other particles (see the collision term for the latter treatment). Suppose at time t some number of particles all have position r within element d3r and momentum p within d3p. If a force F instantly acts on each particle, then at time t + Δt their position will be r + Δr = r + pΔt/m and momentum p + Δp = p + FΔt. Then, in the absence of collisions, f must satisfy $f \left (\mathbf{r}+\frac{\mathbf{p}}{m} \Delta t,\mathbf{p}+\mathbf{F}\Delta t,t+\Delta t \right )\,d^3\mathbf{r}\,d^3\mathbf{p} = f(\mathbf{r},\mathbf{p},t)\,d^3\mathbf{r}\,d^3\mathbf{p}$ Note that we have used the fact that the phase space volume element d3rd3p is constant, which can be shown using Hamilton's equations (see the discussion under Liouville's theorem). However, since collisions do occur, the particle density in the phase-space volume d3rd3p changes, so $\begin{align} dN_\mathrm{coll} & = \left(\frac{\partial f}{\partial t} \right)_\mathrm{coll}\Delta td^3\mathbf{r} d^3\mathbf{p} \\ & = f \left (\mathbf{r}+\frac{\mathbf{p}}{m}\Delta t,\mathbf{p} + \mathbf{F}\Delta t,t+\Delta t \right)d^3\mathbf{r}d^3\mathbf{p} - f(\mathbf{r},\mathbf{p},t)d^3\mathbf{r}d^3\mathbf{p} \\ & = \Delta f d^3\mathbf{r}d^3\mathbf{p} \end{align}$ () where Δf is the total change in f. Dividing (1) by d3rd3pΔt and taking the limits Δt → 0 and Δf → 0, we have $\frac{d f}{d t} = \left(\frac{\partial f}{\partial t} \right)_\mathrm{coll}$ () The total differential of f is: $\begin{align} d f & = \frac{\partial f}{\partial t}dt +\left(\frac{\partial f}{\partial x}dx +\frac{\partial f}{\partial y}dy +\frac{\partial f}{\partial z}dz \right) +\left(\frac{\partial f}{\partial p_x}dp_x +\frac{\partial f}{\partial p_y}dp_y +\frac{\partial f}{\partial p_z}dp_z \right)\\ & = \frac{\partial f}{\partial t}dt +\nabla f \cdot d\mathbf{r} + \frac{\partial f}{\partial \mathbf{p}}\cdot d\mathbf{p} \\ & = \frac{\partial f}{\partial t}dt +\nabla f \cdot \frac{\mathbf{p}dt}{m} + \frac{\partial f}{\partial \mathbf{p}}\cdot \mathbf{F}dt \end{align}$ () where ∇ is the gradient operator, · is the dot product, $\frac{\partial f}{\partial \mathbf{p}} = \mathbf{\hat{e}}_x\frac{\partial f}{\partial p_x} + \mathbf{\hat{e}}_y\frac{\partial f}{\partial p_y}+\mathbf{\hat{e}}_z\frac{\partial f}{\partial p_z}= \nabla_\mathbf{p}f$ is a shorthand for the momentum analogue of ∇, and êx, êy, êz are cartesian unit vectors. 
### Final statement Dividing (3) by dt and substituting into (2) gives: $\frac{\partial f}{\partial t} + \frac{\mathbf{p}}{m}\cdot\nabla f + \mathbf{F}\cdot\frac{\partial f}{\partial \mathbf{p}} = \left(\frac{\partial f}{\partial t} \right)_\mathrm{coll}$ In this context, F(r, t) is the force field acting on the particles in the fluid, and m is the mass of the particles. The term on the right hand side is added to describe the effect of collisions between particles; if it is zero then the particles do not collide. The collisionless Boltzmann equation is often mistakenly called the Liouville equation (the Liouville Equation is a many-particle equation). This equation is more useful than the principal one above, yet still incomplete, since f cannot be solved for unless the collision term in f is known. This term cannot be found as easily or generally as the others - it is a statistical term representing the particle collisions, and requires knowledge of the statistics the particles obey, like the Maxwell-Boltzmann, Fermi-Dirac or Bose-Einstein distributions. ## The collision term (Stosszahlansatz) and molecular chaos A key insight applied by Boltzmann was to determine the collision term resulting solely from two-body collisions between particles that are assumed to be uncorrelated prior to the collision. This assumption was referred to by Boltzmann as the "Stosszahlansatz", and is also known as the "molecular chaos assumption". Under this assumption the collision term can be written as a momentum-space integral over the product of one-particle distribution functions:[1] $\left(\frac{\partial f}{\partial t} \right)_{\mathrm{coll}} = \iint gI(g, \Omega)[f(\mathbf{p'}_A,t) f(\mathbf{p'}_B,t) - f(\mathbf{p}_A,t) f(\mathbf{p}_B,t)] \,d\Omega\,d^3\mathbf{p}_A.$ where pA and pB are the momenta of any two particles (labeled as A and B for convenience) before a collision, p′A and p′B are the momenta after the collision, $g = |\mathbf{p}_B - \mathbf{p}_A| = |\mathbf{p'}_B - \mathbf{p'}_A|$ is the magnitude of the relative momenta (see relative velocity for more on this concept), and I(g, Ω) is the differential cross section of the collision, in which the relative momenta of the colliding particles turns through an angle θ into the element of the solid angle dΩ, due to the collision. ## General equation (for a mixture) For a mixture of chemical species labelled by indices i = 1,2,3...,n the equation for species i is:[1] $\frac{\partial f_i}{\partial t} + \frac{\mathbf{p}_i}{m_i}\cdot\nabla f_i + \mathbf{F}\cdot\frac{\partial f_i}{\partial \mathbf{p}_i} = \left(\frac{\partial f_i}{\partial t} \right)_\mathrm{coll}$ where fi = fi(r, pi, t), and the collision term is $\left(\frac{\partial f_i}{\partial t} \right)_{\mathrm{coll}} = \sum_{j=1}^n \iint g_{ij} I_{ij}(g_{ij}, \Omega)[f'_i f'_j - f_if_j] \,d\Omega\,d^3\mathbf{p'}.$ where f′ = f′(p′i, t), the magnitude of the relative momenta is $g_{ij} = |\mathbf{p}_i - \mathbf{p}_j| = |\mathbf{p'}_i - \mathbf{p'}_j|$ and Iij is the differential cross-section as before, between particles i and j. The integration is over the momentum components in the integrand (which are labelled i and j). The sum of integrals describes the entry and exit of particles of species i in or out of the phase space element. ## Applications and extensions ### Conservation equations The Boltzmann equation can be used to derive the fluid dynamic conservation laws for mass, charge, momentum and energy[6]:p 163. 
For a fluid consisting of only one kind of particle, the number density n is given by: $n=\int f\,d^3p$ The average value of any function A is: $\langle A \rangle=\frac{1}{n}\int A f\,d^3p$ Since the conservation equations involve tensors, the Einstein summation convention will be used where repeated indices in a product indicate summation over those indices. Thus $\mathbf{x}\rightarrow x_i$ and $\mathbf{p}\rightarrow p_i = m w_i$ where $w_i$ is the particle velocity vector. Define $g(p_i)$ as some function of momentum $p_i$ only, which is conserved in a collision. Assume also that the force $F_i$ is a function of position only, and that f is zero for $p_i\rightarrow\pm \infty$. Multiplying the Boltzmann equation by g and integrating over momentum yields four terms which, using integration by parts, can be expressed as: $\int g \frac{\partial f}{\partial t}\,d^3p=\frac{\partial }{\partial t} (n\langle g \rangle)$ $\int \frac{p_j g}{m}\frac{\partial f}{\partial x_j}\,d^3p=\frac{1}{m}\frac{\partial}{\partial x_j}(n\langle g p_j \rangle)$ $\int g F_j \frac{\partial f}{\partial p_j}\,d^3p=-nF_j\left\langle \frac{\partial g}{\partial p_j}\right\rangle$ $\int g \left(\frac{\partial f}{\partial t}\right)_{coll}\,d^3p=0$ where the last term is zero since g is conserved in a collision. Letting $g=m$, the mass of the particle, the integrated Boltzmann equation becomes the conservation of mass equation[6]:pp 12,168: $\frac{\partial}{\partial t}\rho + \frac{\partial}{\partial x_j}(\rho V_j) =0$ where $\rho=mn$ is the mass density and $V_i=\langle w_i\rangle$ is the average fluid velocity. Letting $g=m w_i$, the momentum of the particle, the integrated Boltzmann equation becomes the conservation of momentum equation[6]:pp 15,169: $\frac{\partial}{\partial t}(\rho V_i) + \frac{\partial}{\partial x_j}(\rho V_i V_j+P_{ij}) - nF_i=0$ where $P_{ij}=\rho\langle (w_i-V_i) (w_j-V_j) \rangle$ is the pressure tensor. (The viscous stress tensor plus the hydrostatic pressure.) Letting $g=\tfrac{1}{2}m w_i w_i$, the kinetic energy of the particle, the integrated Boltzmann equation becomes the conservation of energy equation[6]:pp 19,169: $\frac{\partial}{\partial t}(u+\tfrac{1}{2}\rho V_i V_i) + \frac{\partial}{\partial x_j}(uV_j+\tfrac{1}{2}\rho V_i V_i V_j + J_{qj}+P_{ij}V_i)-nF_iV_i =0$ where $u=\tfrac{1}{2}\rho\langle (w_i-V_i) (w_i-V_i) \rangle$ is the kinetic thermal energy density and $J_{qi}=\tfrac{1}{2}\rho\langle (w_i-V_i)(w_k-V_k)(w_k-V_k)\rangle$ is the heat flux vector. ### Hamiltonian mechanics In Hamiltonian mechanics, the Boltzmann equation is often written more generally as $\hat{\mathbf{L}}[f]=\mathbf{C}[f], \,$ where L is the Liouville operator describing the evolution of a phase space volume and C is the collision operator. The non-relativistic form of L is $\hat{\mathbf{L}}_\mathrm{NR} = \frac{\partial}{\partial t} + \frac{\mathbf{p}}{m} \cdot \nabla + \mathbf{F}\cdot\frac{\partial}{\partial \mathbf{p}}\,.$ ### General relativity and astronomy It is also possible to write down relativistic Boltzmann equations for systems in which a number of particle species can collide and produce different species. This is how the formation of the light elements in big bang nucleosynthesis is calculated. The Boltzmann equation is also often used in dynamics, especially galactic dynamics. 
A galaxy, under certain assumptions, may be approximated as a continuous fluid; its mass distribution is then represented by f; in galaxies, physical collisions between the stars are very rare, and the effect of gravitational collisions can be neglected for times far longer than the age of the universe. The generalization to general relativity is $\hat{\mathbf{L}}_\mathrm{GR}=p^\alpha\frac{\partial}{\partial x^\alpha}-\Gamma^\alpha{}_{\beta\gamma}p^\beta p^\gamma\frac{\partial}{\partial p^\alpha},$ where $\Gamma^\alpha{}_{\beta\gamma}$ is the Christoffel symbol of the second kind (this assumes there are no external forces, so that particles move along geodesics in the absence of collisions), with the important subtlety that the density is a function in mixed contravariant-covariant $(x^i, p_i)$ phase space as opposed to fully contravariant $(x^i, p^i)$ phase space.[7][8] ## Notes 1. ^ a b c d Encyclopaedia of Physics (2nd Edition), R.G. Lerner, G.L. Trigg, VHC publishers, 1991, ISBN (Verlagsgesellschaft) 3-527-26954-1, ISBN (VHC Inc.) 0-89573-752-3 2. DiPerna, R. J.; Lions, P.-L. (1989). "On the Cauchy problem for Boltzmann equations: global existence and weak stability". Ann. of Math. (2) 130 (2): 321–366. doi:10.2307/1971423. 3. Philip T. Gressman and Robert M. Strain (2010). "Global classical solutions of the Boltzmann equation with long-range interactions". Proceedings of the National Academy of Sciences 107 (13): 5744–5749. arXiv:1002.3639. Bibcode:2010PNAS..107.5744G. doi:10.1073/pnas.1001185107. 4. Huang, Kerson (1987). Statistical Mechanics (Second ed.). New York: Wiley. p. 53. ISBN 0-471-81518-7. 5. ^ a b 6. ^ a b c d de Groot, S.R.; Mazur, P. (1984). Non-Equilibrium Thermodynamics. New York: Dover Publications Inc. ISBN 0-486-64741-2. Retrieved 2013-01-31. 7. Debbasch, Fabrice; Willem van Leeuwen (2009). "General relativistic Boltzmann equation I: Covariant treatment". Physica A 388 (7): 1079–1104. Bibcode:2009PhyA..388.1079D. doi:10.1016/j.physa.2008.12.023. 8. Debbasch, Fabrice; Willem van Leeuwen (2009). "General relativistic Boltzmann equation II: Manifestly covariant treatment". Physica A 388 (9): 1818–34. Bibcode:2009PhyA..388.1818D. doi:10.1016/j.physa.2009.01.009. ## References • Arkeryd, Leif (1972). "On the Boltzmann equation part II: The full initial value problem". Arch. Rational Mech. Anal. 45: 17–34. Bibcode:1972ArRMA..45...17A. doi:10.1007/BF00253393. • Arkeryd, Leif (1972). "On the Boltzmann equation part I: Existence". Arch. Rational Mech. Anal. 45: 1–16. Bibcode:1972ArRMA..45....1A. doi:10.1007/BF00253392. • DiPerna, R. J.; Lions, P.-L. (1989). "On the Cauchy problem for Boltzmann equations: global existence and weak stability". Ann. of Math. (2) 130: 321–366.
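As a small numerical aside (not part of the article), the phase-space averages defined in the conservation-equations section can be checked by Monte Carlo for an equilibrium Maxwell-Boltzmann distribution, for which each Cartesian velocity component is Gaussian with variance $k_B T/m$; the particle mass and temperature below are illustrative.

```python
# Monte Carlo check of <(1/2) m w_i w_i> = (3/2) k_B T for a Maxwellian gas.
import numpy as np

rng = np.random.default_rng(1)
k_B = 1.380649e-23      # Boltzmann constant, J/K
m = 6.63e-26            # particle mass, kg (roughly an argon atom)
T = 300.0               # temperature, K

# Sample 10^6 velocity vectors; each component ~ Normal(0, sqrt(k_B*T/m)).
w = rng.normal(0.0, np.sqrt(k_B * T / m), size=(1_000_000, 3))
kinetic = 0.5 * m * (w**2).sum(axis=1)

print(kinetic.mean())     # sample estimate of the average kinetic energy per particle
print(1.5 * k_B * T)      # equipartition value (3/2) k_B T, about 6.2e-21 J
```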
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 41, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8619252443313599, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/education
# Tagged Questions How is physics taught and learned. Teaching strategies, class examples and demonstrations; learning resources, career advice, etc. For explicit problems, use the 'homework' tag instead. 5answers 276 views ### Theoretical physics and education: Does it really matter a great deal about what happens inside a black hole, or about Hawking radiation? [closed] I stumbled across this article http://blogs.scientificamerican.com/cross-check/2010/12/21/science-faction-is-theoretical-physics-becoming-softer-than-anthropology/ It got me thinking. Why do we ... 0answers 49 views ### Studying QM without math and physics background [duplicate] I rode all posted answers about this topic but i need to ask you another information. I have done a semester course called "Principle of Physics" (i am studying Biotechnology) and one called ... 0answers 43 views ### University admission in physics in USA University [closed] I think it's kinda off topic but I didn't find any option to post to other sister site. I have completed BSc in physics from Bangladesh with cgpa 3.95 (75%) and HSC (12 class)--> (70%) and in ... 0answers 47 views ### Undergrad project advice [closed] I am presently in my senior year and I am considering fluid mechanics for my thesis. What area of research of fluid mechanics which is purely analytical and very mathematical since I am an applied ... 0answers 49 views ### Geometry for Physics [duplicate] I am currently a high school student interested in a research career in physics. I have self taught myself single variable calculus and elementary physics upto the level of IPHO . And I am comfortable ... 1answer 237 views ### Goldstein's Classical Mechanics exercises solutions [duplicate] Does anyone know where I can find some (good) solution of Goldstein's book Classical Mechanics? 0answers 42 views ### Book suggestion : geometric approach to electromagnetism [duplicate] I´m looking for a book on electromagnetism that is introducing the topic from a geometric point of view, focusing more on the theoretical structure than on the application. 1answer 68 views ### Why does a plane wave have definite momentum? Apologies if this is a little vague. It might not have a good answer. Given the interpretation of $|\psi(x)|^2$ as a probability distribution it's unsurprising that a wave function that is ... 2answers 177 views ### How to teach myself physics needed at undergraduate electrical engineer level? [closed] I want to learn electrical engineering on my own, specifically because I'm interested in loudspeaker design, more specifically how to design active dipole loudspeakers using DSP crossovers. I have ... 1answer 47 views ### A simple example of symmetry setting the properties of a Physical System Does anybody know of an example were one could derive some important properties of a physical system from a symmetry of said system. I´m specially looking for simple classical examples, which could ... 1answer 181 views ### How deep can my knowledge of particle physics go without the maths? Successfully just got my first question answered on here, and now time for the second. So I recently gained interest in particle physics and was wondering. By no means do I have the mathematical ... 0answers 24 views ### What is the appropriate route to go from undergrad math to M.S. level physics [closed] I recently graduated with a B.A. in mathematics from a CalState university and now I'm looking into graduate programs in physics. 
The schools that I'm interested in applying are UC Riverside, San ... 1answer 84 views ### The length of an antenna is twice the amplitude of the wave I have seen it remarked in some problem sets that if you have an electromagnetic wave traveling in the $x$-direction with it's $y$-coordinate given as $y(x,t)=y_0\sin (\omega t +kx)$ and you want a ... 2answers 159 views ### How much pure math should a physics/microelectronics person know [duplicate] I do condensed matter physics modeling in my phd and I was struck up learning quite an amount of physics. But while having done lot of physics courses, I see that if I learn pure math I would ... 0answers 61 views ### Where to earn an engineering degree online part time in India after physics graduation? [closed] I'm a Physics Graduate from India. I want to earn some Engineering degree with the base of my graduation in physics. Also suggest any degree which I can earn online and can be helpful for getting ... 1answer 75 views ### Can electromagnetic momentum be introduced at pre-university level as for electromagnetic energy? Electromagnetic energy is introduced at pre-university level, starting with static electric energy followed by static magnetic energy. But the introduction of electromagnetic momentum usually has to ... 2answers 142 views ### Can a student with a heavy math background start learning physics with Goldstein's “Classical Mechanics”? [duplicate] Can a student with a heavy math background start learning physics with Goldstein's "Classical Mechanics"? Or is the book too obtuse with basic physics that I need to start elsewhere? 0answers 141 views ### Starting string theory studies in grad school How is it possible for a grad student to do research in any modern area of string theory like AdS/CFT or ABJM if they need to start grad school by having to learn QFT from scratch? Is there a ... 1answer 171 views ### Learning roadmap for solid state physics [duplicate] I am a PhD student in mathematics who knows little more about physics than what one learns in high school. For my research on tilings of space and aperiodic order, every now and then I have to skim a ... 0answers 253 views ### What are the mathematical prerequisites to understand this paper? [closed] What are the mathematical prerequisites to understand this paper? Blumenhagen et al. Four-dimensional String Compactifications with D-Branes, Orientifolds and Fluxes. Phys. Rept. 445 no. 1-6, pp. ... 0answers 109 views ### Course advice for someone interested in strings and mathematical physics [closed] I'll be doing Introductory General Relativity and Graduate Quantum Mechanics II next semester. I still need to choose 2 (or maybe 3, but I don't want to overload too much) from the following: ... 2answers 141 views ### What is a good way to reason in physics? [closed] I have recently made the decision to study Physics seriously. However, in the past, I've had some difficulty with the subject because of my primarily mathematical background. I find that sometimes ... 1answer 281 views ### Quick introduction to electromagnetism / Maxwell's equations [duplicate] Possible Duplicate: Electrodynamics textbook that emphasizes applications I am a graduate student in applied mathematics and I am looking for a concise introduction to Maxwell's equations / ... 1answer 155 views ### Is it possible to take a QFT class knowing only basic quantum mechanics? I'm in grad school and notice there are no prerequisites required for QFT in the physics department. 
In fact, the system allows me to sign up for the course just fine as a technical elective. But... ...
0answers 192 views ### Interesting Math Topics Useful for Physics [closed] What are some interesting, but less popular, math topics that are useful for physics that can be self-studied? Specifically, topics that might ultimately be useful in high energy theory (even if it is ...
0answers 177 views ### What's the most efficient way to study physics? [duplicate] I'm CS major trying to learn QFT on my own . I'm trying to make an efficient study plan .The problem is that I've never read any textbook from cover to cover and solved all the problems .What of the ...
1answer 300 views ### How do I start learning particle physics? [closed] I am 16 at the moment. I am really interested in physics. Especially particle physics. Can someone please tell me how to start learning the subject. like what to learn first. like which fundamental ...
2answers 202 views ### Is the proper interpretation of temperature missing in this book? In Randall T. Knight’s textbook “Physics for Scientists and Engineers” in the first chapter on thermodynamics (Ch. 16: A Macroscopic Description of Matter) one of the first conceptual questions is ...
11answers 971 views ### Why quantum mechanics? Imagine you're teaching a first course on quantum mechanics in which your students are well-versed in classical mechanics, but have never seen any quantum before. How would you motivate the subject ...
2answers 385 views ### How should a theoretical physicist study maths? [duplicate] Possible Duplicate: How should a physics student study mathematics? If some-one wants to do research in string theory for example, Would the Nakahara Topology, geometry and physics book and ...
2answers 319 views ### How can some-one independently do research in particle physics? I'm not affiliated with a physics department and I want to do independent research. I'm working my way through Peskin et. al. QFT now. Let's say that I've finished Peskin et. al. and Weinberg QFT ...
0answers 55 views ### Quantum Mechanics Text for Electrical Engineers [duplicate] Possible Duplicate: What is a good introductory book on quantum mechanics? What is a good introductory text on quantum mechanics that could be used to train electrical engineers in device ...
1answer 131 views ### What is the importance of electrodynamics and magnetism in physics as a whole? [closed] At my university the second half of a year long sequence in basic calculus based physics focuses on electrodynamics and magnetism. I am wondering what is the significance of these topics to physics in ...
4answers 780 views ### Help an aspiring physicists what to self-study [closed] This is probably not the kind of question you'll often encounter on this forum, but I think a bit of background is needed for this question to make sense and not seem like a duplicate: 2012 has been ...
0answers 101 views ### Dirac action and conventions I have a (possibly) fundamental question, which is driving me crazy. Notation When considering the Dirac action (say reading Peskin's book), one have \$\int ...
2answers 124 views ### Will a one year undergraduate course of Linear Algebra be enough for QM? [duplicate] Possible Duplicate: Linear Algebra for Quantum Physics Can you get all/most of the knowledge you need of Linear Algebra for QM in a one year course? I know for certain my course also ...
2answers 428 views ### What math is needed to understand the Schrödinger equation? If I now see the Schrödinger equation, I just see a bunch of weird symbols, but I want to know what it actually means. So I'm taking a course of Linear Algebra and I'm planning on starting with PDE's ...
1answer 178 views ### How does this problems are solved (modeling/simulation)? [closed] Can somebody guide me in what to read and learn in order to be able to solve or understand how to solve the following types of problems: The modeling/simulation of the bullet, shot into the water ...
5answers 763 views ### The Z-Torque: how can it be shown intuitively that it does not work? There is a new kickstarter project that claims to increase torque and power compared to a normal crank on a bicycle (Z-Torque on kickstarter). If this patented (US Patent Number 5899119) approach ...
5answers 437 views ### Linear Algebra for Quantum Physics A week ago I asked people on this forum what mathematical background was needed for understanding Quantum Physics, and most of you mentioned Linear Algebra, so I decided to conduct a self-study of ...
1answer 71 views ### Didactics question (“teams and times”) [closed] In sports it is commonplace to distinguish a "team" (as characterized by the players who took part in a match, playing together against another team), from the "score" (such as the final score of ...
1answer 459 views ### Walter Lewin Lectures in HD I like the lectures by Walter Lewin 8.0x. However the quality of the videos is pretty bad. Is there any way (DVD, web,...) to get the lecture videos in a good quality, best in HD?
0answers 34 views ### Undergraduate Math Major Wanting to Learn Physics [duplicate] Possible Duplicate: Book recommendations So I'm a Junior level math major. I've seen some abstract algebra, some differential geometry, and some lie theory. I'm currently working through ...
5answers 636 views ### Math or Physics degree? I am hoping to become a physicist focusing mainly on the theoretical side in the future. I am trying to decide whether to go for a physics or math undergrad course. Assuming that I am capable of ...
1answer 354 views ### Applications of the particle in a box and the finite square well What are some "real" world applications of the particle in a box (PIB) and the finite square well (FSW) which are discussed in an intro quantum mechanics class? For instance, I know that the PIB can ...
0answers 147 views ### Describing the Higgs mechanism to non-particle physicists I'm sure I'm not the only person with this problem at the moment. I have been asked to give a public (not quite public, scientists, just not physicists) about 'this Higgs boson thing'. I am trying to ...
0answers 137 views ### Best physics toys/models for learning [closed] I'm trying to gather as many different physical or generally thought-provoking toys as possible. I need to know what physics toys are out there, and where to get them. I remember coming across an ...
2answers 118 views ### Physics and what it means [closed] I got into a debate with a friend about the meaning of physics and its purpose, he is the sort who will test you and if you get it wrong it somehow gratifies his own self-reflection and self-worth. ...
3answers 367 views ### What should a physics undergrad aspiring to be a string theorist learn before grad school? The question I guess is pretty clear. I am a physics undergrad wishing to pursue research in quantum gravity(string theory?). What are the subjects I should learn other than the usual compulsory ...
0answers 148 views ### Is it possible to get accepted at a graduate program in theoretical physics without having a bachelor degree in physics? [closed] Is it possible to get accepted at a graduate program in theoretical physics without having a bachelor degree in physics ? I'm self taught in physics .I teach myself topics that are usually taught in ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9391432404518127, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/51815/where-inside-a-large-uniformly-dense-symmetrical-sphere-would-its-gravity-towar/51817
# Where inside a large uniformly dense, symmetrical sphere would its gravity toward the center be the strongest?

Imagine a sphere of uniform density with volume and average density similar to our Earth's. There is a bore leading to the center of the sphere from the surface with a scale at regular intervals. At what part of your journey towards the center would the scale read your greatest weight?

There are some caveats to consider. The sphere is in a very isolated region of space and has no angular motion.

I wrote this question down last night while I was having trouble sleeping. My intuition tells me that the surface of the sphere will have the greatest force of gravity. Any distance beyond the surface and the mass of the sphere above you will be reducing your weight. Is this correct? If, however, the core is much more massive (similar to Earth), one would feel a greater strain to stand upright as one descends.

- 1 The question title does not reflect your question. Are you allowing nontrivial radial density profiles, as your last remark suggests? If so then gravity can be stronger below the surface - just don't call the sphere uniform. – Emilio Pisanty Jan 21 at 20:11
My question is the first paragraph plus the caveats. After the line breaks are my thoughts. – Leonardo Jan 21 at 20:14
@Leonardo My personal suggestion is renaming to reflect that this is about a spherically symmetric distribution of mass, and not a uniform sphere. The strange reality of matter and gravity distribution in rocky planets is a topic that has tangentially come up in several questions and I don't think it adds much to the site to have another untitled discussion of it. – AlanSE Jan 21 at 20:41
Suggestion to the question (v3): Replace the word sphere with e.g. planet, because in mathematics a sphere by definition is just the $2$-dimensional surface of a massive $3$-dimensional ball or planet. – Qmechanic♦ Jan 21 at 21:08
1 Sorry to those who feel the question is worded so incorrectly. Apparently I do not understand how to write it correctly but those who answered it understood it just fine. If it is unclear then feel free to edit it yourselves. – Leonardo Jan 22 at 0:10

## 2 Answers

Your intuition is correct: the gravity force will be the largest at the surface. The contribution of a spherical shell will cancel as soon as you are inside this shell (you can prove this by integrating the gravitational force along this shell). This can intuitively be explained by the fact that the mass of this shell will have contributions in all directions (which appear to exactly cancel). Therefore, as you are descending down the bore, the effective mass attracting you is only the mass closer to the center than you. This reduces the gravitational pull. The mass depends on the cube of the distance to the center. In addition, you are coming closer to the center of that part of the earth that is attracting you. This increases the force in inverse proportion to the square of your distance from the center. Combining these, the force of gravity is proportional to your distance from the center.

In the edit you describe a much denser core. In that case, what happens depends on the density distribution. Simple example: suppose you have a core with a certain density and a thick outer shell with zero density; then the largest gravitational pull will occur at the surface of the core. As long as you have spherical symmetry, you can in principle, given the density as a function of radial position, calculate the maximum.
- 1 – AlanSE Jan 21 at 20:28 Correct to the first part, it is strongest at the surface. This follows from the Shell theorem. It is true that if the core were much denser then gravity would increase as you go down. The critical threshold is if the density of the material you are passing through is 2/3 as dense as the average density of everything under you. So for the Earth you would need an estimate of the densities of various layers. Wikipedia has a graph showing the gravity of the earth as a function of depth based on a certain density model. -
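A quick numerical check of the two answers above (my own sketch, not part of the original thread): by the shell theorem, a uniform ball gives $g(r)\propto r$ inside and $g(r)\propto 1/r^2$ outside, so the maximum sits exactly at the surface. The Earth-like constants below are only illustrative.

```python
# Minimal sketch: gravitational acceleration vs. radius for a uniform-density ball.
# Only the enclosed mass M(r) = rho * (4/3) * pi * min(r, R)^3 pulls inward (shell theorem).
import numpy as np

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
R = 6.371e6          # ball radius (Earth-like), m
rho = 5.51e3         # uniform density (Earth's mean), kg/m^3

def g(r):
    """Gravitational acceleration at radius r (r > 0) for a uniform ball of radius R."""
    r = np.asarray(r, dtype=float)
    m_enclosed = rho * (4.0 / 3.0) * np.pi * np.minimum(r, R) ** 3
    return G * m_enclosed / r**2

radii = np.linspace(1.0, 2.0 * R, 2001)
print("g at the surface :", g(R))                                  # ~9.8 m/s^2
print("g at half radius :", g(R / 2.0))                            # half the surface value (g ~ r inside)
print("argmax of g(r)   :", radii[np.argmax(g(radii))] / R, "R")   # ~1.0, i.e. at the surface
```

With a dense core and a light mantle (the case raised at the end of the first answer), replacing the constant rho by a radial profile rho(r) in m_enclosed shifts the maximum below the surface, consistent with the 2/3-density criterion quoted in the second answer.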
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9487133622169495, "perplexity_flag": "head"}
http://mathoverflow.net/questions/108503/constant-averages-along-orbits
## constant averages along orbits

What should one say to describe the situation in which a function $T$ from some set $X$ to itself, and a function $f$ from $X$ to some characteristic-zero field $K$, have the property that the average of $f$ over a $T$-orbit in $X$ is the same for each orbit? (I'm mostly concerned with the case in which every orbit is finite, so that the notion of average is the naive one.)

For about a year, in collaboration with Tom Roby and others, I've been studying such situations, which turn out to crop up everywhere in combinatorics. I think that the topic needs to have some appropriate (and not too unwieldy) terminology associated with it, but nothing seems to exist in the literature, so I'm left with the choice of adopting a parallel notion from an allied field or coining something new. But I don't love anything I've come up with so far. (See http://mathoverflow.net/questions/94813/functions-whose-average-along-orbits-is-zero-or-a-constant for an earlier post of mine on this topic.)

Here are some terms I've considered using to denote "Property X" (with explanatory notes following the list):

#1. "$T$ is pseudotransitive relative to $f$"
#2. "The triple $(X,T,f)$ has the CAAO (Constant Averages Along Orbits) Property"
#3. "$f$ is CPC (Constant Plus Coboundary) relative to $T$"
#4. "The triple $(X,T,f)$ exhibits combinatorial ergodicity"
#5. "$f$ is convariant [sic] under $T$"
#6. "$f$ is mixed by $T$", "$T$ mixes $f$"
#7. "$f$ is balanced with respect to $T$"
#8. "$f$ is centered relative to $T$"
#9. "$f$ is Cesaro-constant under the action of $T$"
#10. "$f$ is $T$-constant"
#11. "$T$ and $f$ are disjoint"

I'm hoping a community wiki discussion might help me settle on some good nomenclature (or at least point me toward analogues of what I'm looking at in other fields of mathematics).

Notes:

#1. "$T$ is pseudotransitive relative to $f$" If $X$ consists of a single $T$-orbit, then Property X holds trivially. So one might paraphrase Property X as: "The action is behaving like a transitive action even though it isn't (necessarily) one." But one problem with saying that $T$ is pseudotransitive relative to $f$ is that it doesn't suggest a companion nomenclature for what $f$ is relative to $T$ (which is important in my research). One can't say "$f$ is pseudotransitivized by $T$"!

#2. "The triple $(X,T,f)$ has the CAAO (Constant Averages Along Orbits) Property" This has the virtue of being quite descriptive. And I think I could write both "$T$ is CAAO relative to $f$" and "$f$ is CAAO relative to $T$" without embarrassment. Moreover, CAAO works well as an acronym; that is, unlike CPC, which works only as an initialism, CAAO can be pronounced ("cow"). But the initially amusing homophony may not age well. (Whimsy can wear thin after a few decades.)

#3. "$f$ is CPC (Constant Plus Coboundary) relative to $T$" A function $f$ has Property X relative to $T: X \rightarrow X$ if and only if $f$ can be written as $f(x) = c + g(x) - g(T(x))$, where $c$ is some constant and $g$ is some (non-unique) function from $X$ to $K$. (One can also say that $f$ is cohomologous to a constant.) I'd be happier with the constant-plus-coboundary nomenclature if the $g$-functions turned out to play an important role in examples, which so far hasn't been the case.
Also, I fear that the phrase "CPC phenomenon" would invite confusion with the phrases "CSP" and "cyclic sieving phenomenon", which frequently arises in the same combinatorial situations as Property X. #4. "The triple $(X,T,f)$ exhibits combinatorial ergodicity" I've used this one in my talks. I like it, since Boltzmann's notion of ergodicity is precisely that long-term averages are the same for all orbits (and if all orbits are finite, long-term averages are the same as orbit averages). People have sometimes objected that in dynamics and in physics, ergodicity is something that pertains to a mapping $T$, not a mapping $T$ relative to a function $f$. I've replied that the word ergodic always means relative to a set of functions (measurable functions if one is doing ergodic theory, macroscopic functions if one is doing physics), even if that relativity is left implicit. So why not make that relationship explicit, and say that what's really ergodic is a map $T$ with respect to a function or set of functions? I was happy with this for a while. But one can't say "$f$ is ergodic relative to $T$"; that stretches the metaphor too far for my taste. And I need a crisp way of referring to the functions $f$ such that $(X,T,f)$ has Property X, for some particular map $T: X \rightarrow X$. #5. "$f$ is convariant under $T$" Yes, I mean convariant, not covariant, which already means something else. "Convariant" is meant to be a counterpart to "invariant", since every function from $X$ to $K$ can be written as the sum of an invariant function (that is, a function $h$ satisfying $h(T(x))=h(x)$ for all $x$) and a function with Property X. Note that the invariant functions form a subspace, as do the convariant functions. So from a linear algebra perspective, it's a nice situation. #6. "$f$ is mixed by $T$", "$T$ mixes $f$" I like this in part because of the underlying physical intuition (we say a solution of 90% water and 10% salt had been mixed if every portion of the solution has water and salt in those same proportions; replace "portion" by "orbit" and you're fairly close to Property X). Also, one can refer to the "invariant" and "convariant" functions as being respectively "fixed" and "mixed" by $T$, which is cute (but not too cute!) and certainly succinct. Yet I worry that the ergodic theory meaning of the word "mixing", which carries connotations stronger than ergodicity, may be distracting or even confusing for some people. #7. "$f$ is balanced with respect to $T$" This is on the bland and vague side, but I can't completely dismiss it. #8. "$f$ is centered relative to $T$" Ditto. #9. "$f$ is Cesaro-constant under the action of $T$" Property X says that if we define $F(x)$ as the Cesaro mean of $f(x),f(T(x)),f(T(T(x))),...$, then $F$ is constant over $X$ (where the Cesaro mean of a sequence is the limit as $n$ goes to infinity of the mean of the first $n$ terms). #10. "$f$ is $T$-constant" This is intended as a shorthand for "$f$ is constant modulo $T$-coboundaries". #11. "$T$ and $f$ are disjoint" I don't know a sense (coming from some allied field) in which the word "disjoint" might be applicable to Property X, but I suspect that there might be. - 4 Regarding #11: Disjointness already has an established meaning in ergodic theory: ams.org/mathscinet-getitem?mr=213508 . It is not the same usage as the one given here, so I would not recommend this usage. Personally, I would go with something straightforward, such as "f has constant T-averages". 
– Terry Tao Sep 30 at 23:54 $f$ is a constant plus a coboundary? – Anthony Quas Oct 1 at 20:28 @ Anthony Quas: Yes; see #3 in my post. – James Propp Oct 3 at 4:48 @Terry Tao: I think there might be a connection between Property X and the notion of disjointness in ergodic theory for maps with discrete spectrum, though I haven't been able to find a satisfactory link. E.g., let $X = R^n/Z^n$, $T(x) = x+u$ for some fixed $u$, and $f(x) = \exp(2 \pi i x \cdot v)$ for some fixed $v$. In this situation, or something like it, I think $(X,T,f)$ has Property X iff the spectra associated with $T$ and $f$ are disjoint (though I don't know what "the spectrum associated with $f$" might actually mean). – James Propp Oct 3 at 4:53 @James: I don't know if you still care about mathoverflow.net/questions/62340/…, but I saw a problem with your proofs of 17 and 18. The claim "every non-empty bounded interval has endpoints" is itself equivalent to Dedekind completeness, but you only show 17 and 18 for closed intervals with endpoints. – Ricky Demer Oct 18 at 21:22 show 3 more comments ## 1 Answer I've decided to go with the terms "homomesy" and "homomesic" (from Greek roots meaning "same middle"), suggested by my collaborator Tom Roby. To see why I think the the concept deserves a name, check out the examples given in the slide-presentation http://jamespropp.org/mitcomb13a.pdf (which barely scratches the surface of all the examples of the phenomenon that have come to light over the past year). To see a context in which the concept of homomesy is precisely dual to the concept of invariance, see http://jamespropp.org/Dec2012a.pdf . -
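Since the property being named here is elementary to test on finite examples, here is a small brute-force checker (my own illustrative sketch, not part of the question or answer). The toy example is simultaneous rotation on $\mathbb{Z}_4\times\mathbb{Z}_6$ with $f$ = first coordinate, which genuinely has constant orbit averages because each orbit visits every residue of the first coordinate equally often.

```python
# Illustrative sketch (mine, not from the MO thread): brute-force test of the
# "constant orbit averages" property for a map T on a finite set X.
from fractions import Fraction

def orbits(X, T):
    """Partition the finite set X into orbits of T (T is assumed to permute X)."""
    seen, result = set(), []
    for x in X:
        if x in seen:
            continue
        orbit, y = [], x
        while y not in seen:
            seen.add(y)
            orbit.append(y)
            y = T(y)
        result.append(orbit)
    return result

def orbit_averages(X, T, f):
    """Exact average of f over each T-orbit."""
    return [sum(Fraction(f(x)) for x in orb) / len(orb) for orb in orbits(X, T)]

# Toy example: X = Z_4 x Z_6, T = simultaneous rotation, f = first coordinate.
# Every orbit has lcm(4, 6) = 12 elements and visits each residue of the first
# coordinate equally often, so every orbit average equals (0+1+2+3)/4 = 3/2.
X = [(a, b) for a in range(4) for b in range(6)]
T = lambda ab: ((ab[0] + 1) % 4, (ab[1] + 1) % 6)
f = lambda ab: ab[0]

print(orbit_averages(X, T, f))                   # [Fraction(3, 2), Fraction(3, 2)]
print(len(set(orbit_averages(X, T, f))) == 1)    # True: Property X holds here
```

Swapping f for, say, the product of the two coordinates breaks the property in this example (the two orbit averages come out as 4 and 7/2), and the same checker reports the mismatch.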
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 107, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9531241655349731, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/114775/list
## Return to Question

2 added 93 characters in body

Hi, I have a function $F:\mathbb{R}^n\rightarrow \mathbb{R}^n$ for which I know there exists a unique fixed point $x^*$ (say). I also know that the Jacobian of $F$ at each point $x$ in $\mathbb{R}^n$ has all of its eigenvalues in $[0,1)$ (but they are different for each $x$). Are these facts enough for me to say that the iterative sequence $x_{n+1} = F(x_n)$ converges to $x^*$ independently of the initial point $x_0$? (I know that if $x_0$ is close enough to $x^*$ then the sequence converges, but my question concerns any $x_0$ in $\mathbb{R}^n$.) Whatever the answer is, could you give me a reference to some theorem that justifies that? Thank you

1 # fixed point of a particular vector valued function

Hi, I have a function F:R^n->R^n for which I know there exists a unique fixed point x* (say). I also know that the Jacobian of F at each point x in R^n has all of its eigenvalues in [0,1) (but they are different for each x). Are these facts enough for me to say that the iterative sequence x_{n+1} = F(x_n) converges to x* independently of the initial point x_0? (I know that if x_0 is close enough to x* then the sequence converges, but my question concerns any x_0 in R^n.) Whatever the answer is, could you give me a reference to some theorem that justifies that? Thank you
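The revisions above leave the question itself open. Purely as an illustrative aside (mine, not part of the post), here is what the iteration looks like for one specific map whose Jacobian is diagonal with entries in $(0,\tfrac12]\subset[0,1)$ everywhere. This particular map is a global contraction, so Banach's fixed-point theorem already guarantees convergence from any starting point; it does not settle the general question about maps that merely have spectral radius below 1 at each point.

```python
# Illustrative sketch (not from the MO post): fixed-point iteration x_{k+1} = F(x_k)
# for a sample map whose Jacobian has all eigenvalues in [0,1) at every point.
# Here F(x) = 0.5*tanh(x) + b (componentwise), so the Jacobian is diagonal with
# entries 0.5 / cosh(x_i)^2 in (0, 0.5]; F is a global contraction.
import numpy as np

rng = np.random.default_rng(0)
n = 5
b = rng.normal(size=n)

def F(x):
    return 0.5 * np.tanh(x) + b

x = rng.normal(scale=100.0, size=n)     # deliberately far-away starting point
for k in range(200):
    x_new = F(x)
    if np.linalg.norm(x_new - x) < 1e-12:
        break
    x = x_new

print("iterations:", k, " residual:", np.linalg.norm(F(x) - x))
```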
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9379382729530334, "perplexity_flag": "head"}
http://motls.blogspot.com/2012/12/prediction-isnt-right-method-to-learn.html?m=1
# The Reference Frame Our stringy Universe from a conservative viewpoint ## Monday, December 31, 2012 ### Prediction isn't the right method to learn about the past Happy New Year 2013 = 33 * 61! The last day of the year is a natural moment for a blog entry about time. At various moments, I wanted to write about the things that the year 2012 brought us. The most important event in science was the discovery of the $$126\GeV$$ Higgs boson (something that made me \$500 richer but that's of course the least important consequence of the discovery) but those of us who were following the events and thinking about them rationally have known about the $$126\GeV$$ Higgs boson since December 2011. Lots of other generic popular science sources recall the landing of Curiosity and other things. But let's discuss something else. Something related to time. Cara Santa Maria of The Huffington Post (I thought that Santa Maria was a ship, not a car) posted an article about the arrow of time and embedded the following video interview with Sean Carroll. Clearly, he hasn't learned or understood anything at all over those years. Maybe it is difficult to get a man to understand something when his job depends on not understanding it. ;-) Once again, we hear that the hottest thing in cosmology is the fact that the early Universe had a low entropy (in reality, it really follows from a defining property of the entropy which has been known from the first moment when entropy was introduced in the 19th century). The picture with the most concentrated wrongness appears around 2:24 in the video above: Starting from the dot at the "present", Carroll proposes to predict the future and to "predict the past" [sic]. In both cases, the entropy increases relatively to the entropy of the present state. A very similar picture appears in Brian Greene's book The Fabric of the Cosmos. Brian's picture is even worse because he suggests that the graph of the entropy is smooth, like $$S=(t-t_0)^2$$, so its derivative vanishes at $$t=t_0$$. It surely has no reason to vanish. Moreover, Brian omits the helpful part of the graph "actual past". Now, look at the picture again. You see that Carroll "predicts the past" but his "prediction" for the entropy completely and severely disagrees with the "actual past" (whatever is the way how he determined that the entropy was "actually" lower in the past, he wasn't able to derive this elementary fact because his derivation led to the wrong result "predicted past"; he must have some above-the-science method to find the right answers without science even when his scientific methods produce wrong predictions). Prague clearly resembled a military front again last night. In science, when your prediction disagrees with the facts, you must abandon your theory. Instead, Sean Carroll just doesn't care. He isn't thinking as a scientist at all. The disagreement between his predictive framework and the empirical fact means nothing for him; he just continues to use and promote his wrong predictive framework, nevertheless. It's easy to see why his "prediction" of the past is wrong. The reason is that he is using the same method – prediction – that we use to predict the future. He thinks about the past in the same way as if it were the future. However, the very term "prediction of the past" is a logical oxymoron. It is exactly as inconsistent a sequence of words as "sweeten your tea by adding lemon". You just can't make your tea any sweeter by adding lemon! Instead, you need sugar, stupid. 
In the same way, it is wrong to use the particular method of "prediction" when you want to say/guess/reconstruct/determine something about the past. The method of "prediction" is, by definition, only good for learning something about the moment $$t_2$$ out of the data about the physical system at time $$t_1$$ when $$t_2\gt t_1$$: you may only predict a later moment (a moment in the future, if we talk about predictions that are being made now) out of an earlier one, not vice versa! All successfully verified predictions in science – where we use the usual methodology of predictions – satisfy this property that the predicted moment occurs later than the moment(s) at which some facts are known and inserted as input to the problem. If you use the methodology in the opposite way, it just doesn't work! This method of determining the past is as wrong as an attempt to sweeten your tea by lemon. The wrong graph of the entropy in the past on the picture above is the easiest – and a rather universal – way to see that the methodology doesn't work for "predictions of the past". Instead, if you want to say something valid about the past, you need to use a different methodology: retrodiction. But retrodictions obey completely different rules than predictions. Predictions produce objective values of probabilities of future events out of known facts about the past; in this sense, predictions "emulate" what Nature Herself is doing when She actually decides what to do with the world at a later moment out of the state at an earlier moment, when She is evolving the world. On the other hand, retrodictions can never produce any objective probabilities at all. The reason is that retrodictions are a form of Bayesian inference Bayesian inference is a method to update our opinions about the probability of a hypothesis once we see some new evidence. Now, the state (or a statement about some properties) of the physical system in the past is an example of a "hypothesis" and the data collected now (at a later moment) are an example of the "evidence". What's important is that the Bayesian inference is a "reverse process" or a solution to an "inverse problem". The straightforward calculation starts from a hypothesis (an initial state is a part of a hypothesis about evolution) and this hypothesis predicts objective probabilities for the later moment, for the future, if you wish. These probabilities are objectively calculable because the future literally evolves out of the earlier moment (the past). But it is not guaranteed that you may revert this evolution – or this reasoning. And indeed, in general, you can't. In fact, in statistical physics, you can't. And in quantum physics, you can't do it, either. The reason is that whenever you discuss the fate of any facts or measurements that may only be predicted statistically – and it is true both in quantum mechanics as well as in statistical physics (even in classical statistical physics) – things are simply irreversible. If you start with a hot tea on the table, you may predict when the tea-desk temperature difference drops below 1 Celsius degree. However, if you start with a tea that is as cold as the desk, you can't say when it was 60 °C hot. This problem simply has no unique solution because the evolution isn't one-to-one, it isn't reversible. Whatever is the moment when the tea is boiling and poured to the cup, it will ultimately end up as a cold tea. 
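The tea example can be made quantitative with Newton's law of cooling, $\Delta T(t)=\Delta T_0\,e^{-kt}$ (a toy sketch of mine, not from the post, with made-up numbers): the forward question has a unique answer, while the backward question becomes meaningless once the temperature difference has decayed away.

```python
# Toy sketch: the forward prediction is well-posed, the reverse question is not.
import math

k = 0.05            # cooling constant, 1/minute (illustrative value)
dT0 = 70.0          # initial tea-desk temperature difference, deg C

# Prediction: time until the difference drops below 1 deg C (unique answer).
t_cold = math.log(dT0 / 1.0) / k
print(f"difference < 1 C after {t_cold:.1f} minutes")

# Naive "retrodiction by inversion": given the difference measured now, when was it 60 C?
for measured in (5.0, 0.01, 0.0):
    try:
        t_back = math.log(60.0 / measured) / k   # hypersensitive, then impossible, as measured -> 0
        print(f"measured {measured:>5} C  ->  60 C was {t_back:.1f} minutes ago")
    except (ValueError, ZeroDivisionError):
        print(f"measured {measured:>5} C  ->  no answer: the information is gone")
```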
People such as Sean Carroll or Brian Greene correctly notice that the microscopic laws of Nature are time-reversal-invariant (more precisely, CPT-invariant if we want to include subtle asymmetries of the weak nuclear force) but they're overinterpreting or misinterpreting this fact. This symmetry doesn't mean that every statement about the future and past may be simply reverted upside down. It only means that the microscopic evolution of particular microstates – pure states – to particular other microstates – pure states – may be reverted. But no probabilistic statements may actually be reverted in this naive way. They can't be reverted for the same reason why $$A\Rightarrow B$$ is inequivalent to the logical proposition $$B\Rightarrow A$$. The laws of Nature imply facts of the type $${\rm Past}\Rightarrow{\rm Future}$$ but these facts can't be translated to $${\rm Future}\Rightarrow{\rm Past}$$ because you would have to check all other conceivable initial states in the past and prove that all of them imply something about the future (i.e. evolve to states in the future that still obey a certain special condition) – which is virtually never the case. The past and the future play asymmetric roles in mathematical logic because of the $$A$$-$$B$$ asymmetry of the logical proposition $$A\Rightarrow B$$, the implication. To deal with the microstates only – for which the time-reversal symmetry holds – means to deal with equivalences $$A\Leftrightarrow B$$ only. But this template doesn't allow us to make any realistic statements about physics because the pure states "equivalent" to some states in the past (the future states that evolve from them) are complicated probabilistic superpositions or mixtures that can't be measured. Whenever we make some measurement, we need to talk about microstates that aren't inequivalent to some natural states/information at an earlier moment which is why we need the statements of the type $$A\Rightarrow B$$ almost all the time and these implications simply violate the $$A$$-$$B$$ symmetry. In particular, if you fail to specify the precise coordinates and velocities of all atoms in your tea, or if you're talking about a large/nonzero entropy of your tea at all, then you are clearly not talking about a particular microstate. You are only talking about some ensembles of operationally indistinguishable microstates (which is why the entropy is nonzero) or, equivalently, about partial, probably macroscopic properties of your tea. And statements of this sort – for example all statements about the entropy of the tea or the tea-desk temperature difference – simply refuse to be time-reversal-invariant! Lots of friction forces, viscosity, diffusion, and other first-time-derivative terms breaking the time reversal symmetry inevitably emerge in the effective laws controlling these quantities and propositions. All the laws that govern the macroscopic quantities average and/or sum over the microstates and the right way to do so inevitably breaks the past-future symmetry "maximally". For example (and it is the most important example), the entropy-decreasing processes are exponentially less likely than their time-reversed partners that increase the entropy. As I have emphasized many times, the asymmetry arises because the calculated probabilities must be averaged over the initial microstates but summed over the final microstates. Averaging and summing isn't quite the same thing and this difference is what favors the higher-entropy final states. 
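To see this averaging/summing asymmetry reproduce the "predicted past" curve from the figure discussed earlier, here is a toy coarse-grained model (entirely my own illustration, not Carroll's or Greene's): particles doing symmetric random walks on a ring, started in a low-entropy configuration. If at the half-way time one keeps only the macrostate, resamples a compatible microstate, and runs the same time-symmetric rule to "retrodict", the coarse-grained entropy rises away from that moment instead of retracing the actual low-entropy history.

```python
# Toy illustration (mine): retrodicting by "prediction" from the macrostate at t0
# yields rising entropy toward the past, unlike the actually recorded history.
import numpy as np

rng = np.random.default_rng(1)
N_PART, N_CELLS, CELL, STEPS, T0 = 2000, 20, 10, 60, 30
RING = N_CELLS * CELL                  # ring of 200 sites, coarse-grained into 20 cells

def coarse_entropy(positions):
    """Shannon entropy of the coarse-grained (per-cell) occupation distribution."""
    counts = np.bincount(positions // CELL, minlength=N_CELLS)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

def step(positions):
    """One time-symmetric update: every particle hops +1 or -1 with equal probability."""
    return (positions + rng.choice([-1, 1], size=positions.size)) % RING

# Actual history: start in a low-entropy macrostate (everything in cell 0) and evolve.
history = [rng.integers(0, CELL, size=N_PART)]
for _ in range(STEPS):
    history.append(step(history[-1]))
actual_S = [coarse_entropy(p) for p in history]

# Naive retrodiction: keep only the macrostate (cell counts) at t0, forget the actual
# microstate, resample a compatible one, and run the same rule -- which, because the
# rule is time-symmetric, is exactly what "predicting the past" amounts to here.
counts_t0 = np.bincount(history[T0] // CELL, minlength=N_CELLS)
resampled = np.concatenate([rng.integers(c * CELL, (c + 1) * CELL, size=int(n))
                            for c, n in enumerate(counts_t0) if n > 0])
retro_S = [coarse_entropy(resampled)]
for _ in range(T0):
    resampled = step(resampled)
    retro_S.append(coarse_entropy(resampled))

print("actual entropy   t=0 / t0 / end :", actual_S[0], actual_S[T0], actual_S[-1])
print("'predicted past' entropy, t0 -> 0:", retro_S[0], "->", retro_S[-1])
# The recorded curve starts low and rises; the naive retrodiction instead rises
# as it moves away from t0, reproducing the "predicted past" branch of the graph.
```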
There is one more consequence I have emphasized less often. The averaging (over initial state) requires "weights". If you have a finite number $$N$$ of microstates, you may assign the weights $$p_i=1/N$$ to each of them. However, it's not necessarily the choice you want to make or believe. There may exist evidence that the actual probabilities of initial microstates $$p_i$$ – the prior probabilities – are not equal to each other. The only thing that will hold is\[ \sum_i p_i = 1. \] The possible initial microstates differ, at least in principle. You may accumulate evidence $$E$$ – it means a logical proposition you know to be true because you just observed something that proves it – which will force you to change your beliefs about the probabilities of possible initial states according to Bayes' theorem:\[ P(H_i|E) = \frac{P(H_i)\cdot P(E|H_i)}{P(E)} \] The vertical line means "given". So the probability of the $$i$$-th hypothesis (the hypothesis that the initial state was the $$i$$-th state) given the evidence (which means "after the evidence was taken into account") is equal to the prior probability $$P(H_i)$$ of the initial state (the probability believed before the evidence was taken into account) multiplied by the probability that the just observed evidence $$E$$ occurs according to the hypothesis $$H_i$$ and divided by the normalization factor $$P(E)$$, the "marginal likelihood", which must be chosen so that the total probability of all mutually excluding hypotheses remains equal to one:\[ \sum_i P(H_i|E) = \sum_i \frac{P(H_i)\cdot P(E|H_i)}{P(E)} = 1. \] Note that $$P(H_i|E)$$ and $$P(E|H_i)$$ aren't the same thing (another potential critical mistake that the people believing in a naive "time reversal symmetry" are probably making all the time as well) but they're proportional to each other. The hypothesis (initial microstate) for which the observed evidence is more likely becomes more likely by itself; the initial states that imply that the evidence (known to be true) cannot occur at all are excluded. A particular observer has collected certain kinds of evidence $$E_j$$ and he has some subjective knowledge which determines $$P(H_i|E_{\rm all})$$. It's important that these probabilities of the hypotheses are subjective, they depend on the evidence that a particular observer has accumulated and labeled trustworthy and legitimate. They become prior probabilities when a new piece of evidence emerges. And indeed, one of the most notorious properties of the prior probabilities is that they are totally subjective and there's no way for everyone to agree about the "right priors". There aren't any objective "right priors". Except for the Czechoslovak communist malls, Priors, which had to be believed to be objectively right. However, Prior is an acronym for "Přijdeš rychle i odejdeš rychle" (You quickly arrive as well as quickly depart) which quantified the product selection. That's why the retrodicted probabilities of initial states $$p_i=P(H_i)$$ always depend on some subjective choices. What we think about the past inevitably depends on other things we have learned about the past. This is a totally new property of retrodictions that doesn't exist for predictions. Predictions may be probabilistic (and in quantum mechanics and statistical physics, they are inevitably "just" probabilistic) but the predicted probabilities are objectively calculable for certain input data. The formulae that objectively determine these probabilities are known as the laws of physics. 
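A minimal numerical version of the update rule above (my own sketch; the three "initial microstate" hypotheses and all the numbers are invented): the posterior is prior times likelihood, renormalized, so hypotheses that make the observed evidence likely gain weight, hypotheses that forbid it are eliminated, and the priors themselves remain a subjective input.

```python
# Minimal sketch of the quoted Bayes update: P(H_i | E) = P(H_i) * P(E | H_i) / P(E).
# The hypothetical "initial microstates" and all probabilities here are made up.
priors = {"low-entropy start": 0.2, "medium-entropy start": 0.3, "high-entropy start": 0.5}

# Likelihood of the observed evidence E (say, today's macrostate) under each hypothesis.
likelihood = {"low-entropy start": 0.9, "medium-entropy start": 0.3, "high-entropy start": 0.0}

unnormalized = {h: priors[h] * likelihood[h] for h in priors}
marginal = sum(unnormalized.values())                      # P(E), the normalization factor
posterior = {h: w / marginal for h, w in unnormalized.items()}

for h, p in posterior.items():
    print(f"P({h} | E) = {p:.3f}")
# The hypothesis under which the evidence was impossible is ruled out; change the
# (subjective) priors and the retrodicted probabilities change with them.
```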
But the retrodicted probabilities of the past are not only probabilistic; their values inevitably depend on the subjective knowledge, too! Of course, when the past is determined by the correct method – the method of retrodictions which is a form of Bayesian inference – we will find out that the lower-entropy states are exponentially favored. We won't be able to become certain about any property of the Universe in the past but some most universal facts such as the increasing entropy will of course follow from this Bayesian inference. In particular, the correctly "retrodicted past entropy" will more or less coincide with the "actual past" curve. I think that even the laymen implicitly know how to reconstruct the past. They know that it's a "reverse problem" of a sort and they secretly use the Bayes theorem even if they don't know the Bayes formula and other pieces of mathematics. They are aware of the fact that the tea-desk temperature difference was higher in the past exactly because this difference is decreasing with time. More generally, they know that the entropy was lower in the past exactly because the entropy is increasing, was increasing, and will be increasing with time. They know that determining the past by the same logic by which we predict or expect the future is wrong, stupid, and it contradicts common sense. Too bad that Sean Carroll hasn't been able to get this basic piece of common sense yet, after a decade of futile attempts to understand the basics of statistical physics. And that's the memo.

#### 27 comments:

1. victor Happy New Year, Lubos! All the best in the New Year! :) Thanks for the post, it is a wonderful gift!
2. Dilaton Happy new year Lumo and to the whole TRF community :-D
3. Dilaton ... and thanks a lot for this nice end of the year article :-) But Priors are always objectively right by definition, see :-P, see: http://cache.gawker.com/assets/images/io9/2009/09/stargate-sg-1-ori.jpg
4. Eugene S Doncha just love Soviet-era brutalist architecture! Happy New Year lazy Lubos (too lazy to type out all three prime factors of 2013!)
5. There are simple situations in which one can make retrodictions successfully. The best example is orbital and planetary mechanics. Some of the ancient Egyptian monuments have a stellar alignment that is incorrect today because of the precession of the equinox. Even in that case if one tried to go too far back in time chaotic dynamics would get you. Your main point still holds since one is assuming an initial condition.
6. Happy new year to Lubos and all the followers of this blog ! We were looking for a not so politically correct science blog ( physics in particular ), a friend of ours ( ex maths student ) recommended your blog ! It's good to see a Physicist with balls to say what he thinks without fear of being perceived politically incorrect ! You know what I meant ! :)
7. Happy New Year! 't Hooft probably has the earliest correct "announcement" of the Higgs Particle mass from this 2001(!) interview [quote]In fact, most of us are convinced that the observation of the Higgs particle is just around the corner. In fact, you may have heard the rumor that at CERN they were just about to make the discovery but unfortunately the machine had to be shut down. There is going to be a more powerful machine there. We just keep our fingers crossed that they were probably right and the mass of the Higgs is around 125 or so GeV.
If not, it might then be a little bit heavier but even then it will be detected fairly soon, say, within about five to ten years.[/quote] From: Candid Science IV - Conversations with Famous Physicists p. 123 ( google "candid science iv djvu" ) 8. Happy New Year, Lubošet al !!! 9. Sparks Happy New Year! :) 10. brothersmartmouth Time doesn't scatter the papers on my desk, I do. And it takes less effort to reorganize them. As a verified layman, this seems like a bad analogy. Is energy just lost into space? Happy New Year! Keep it up Mr. Pilsen, and a predictably informative 2013. A new years question that I need to know, Will our universe eventually end up at absolute zero forever? Cheers 11. It's not laziness, it's hard work I did to simplify the material for the readers as much as possible because it's surely easier to remember 2 factors and "discover" that 33 = 3*11 than to remember 3 factors and feel that all the discoveries have been scooped by others. ;-) Happy New Year, LM 12. Right, celestial dynamics and especially planetary orbits is "reversible" in this sense so the retrodictions ultimately end up being fully analogous to predictions. The reason is that we deal with "complete information" about the relevant degrees of freedom (except for the limited precision; but all "qualitative" relevant pieces of information are known). I was talking about a more general or generic case. 13. This prediction, especially with the right Higgs mass, surely sounds as a prophesy but all the evidence I see - correct me if I am missing something - indicates that he was just guessing or choosing a reasonably low number that was still far enough from the exclusion limits of that time. Moreover, the estimate of the discovery date wasn't right because it was 11, not 5-10, years away. ;-) 14. anna v t'Hooft was probably talking about the ALEPH 115GeV excess events in the Higgs search http://cds.cern.ch/record/531810/files/ep-2001-095.pdf which were not confirmed by the other three experiments. There was not enough statistics and if LEP2 had continued maybe the HIggs would have been found then.. 15. anna v Happy New Year to all, and let entropy increase :). 16. Except that he apparently said the correct 125, not 115, GeV. ;-) 17. Marcel van Velzen Happy New Year Lubos, "If you knew everything about you and the universe then the future would be clear to you" WHAT??? I thought quantum mechanics was about non commuting operators and probabilities of things happening? 18. delusionbuster Happy new year Lubos. I do not pretend to understand a fraction of your posts but I look forward to the bits I do with relish. 19. Maybe he accidentally said 125 instead of 115 or maybe the interviewer misheard him, or maybe... 't Hooft suggesting the mass at around 125GeV in 2001 sealed its fate in some kind of weird superdeterministic cellular automatasic fashion. :-) 20. thejollygreenman All the best for 2013 Squire! May you have a rich harvest from the tree of knowledge that evidently grows in your back garden. 21. there is something i don't understand if someone could answer. when you try to predict the past if you have all the necessary information it can't get predicted? it could be predicted if you know all the changes that happened in the system, right? 22. happy new year 23. Happy New Year! 24. Shannon Bonne et heureuse Année à tous sur TRF. And if you can't sweeten your tea with lemon then pour rum in it... great if you have a cold ;-) (it also works without the tea) 25. Pavel Hi Luboš. 
The sentence "sweeten your tea by adding lemon" is not an oxymoron, if you have previously eaten the Synsepalum dulcificum 26. LOL, fun plant. But I would say that you only sweeten that tea once you pour it into your mouth i.e. you sweeten it by drinking it. ;-) 27. In the same volume an impressive prediction from a September 2000 interview from one of the most remarkable theoretical physicists ever (Yuval Ne'eman - read the entire interview to understand why he is so remarkable), They are waiting for the new accelerator, the LHC, which will be completed in 2005. My model predicts that the mass of the Higgs is twice the mass of the W, which is 85 GeV, so it will be 170 GeV. However, there is a renormalization correction because at high energy the mass is a function of energy. The energy at which this Pythagorean result holds is a higher energy. We calculated the mass of the Higgs at lower energy and it comes out as 130 ± 10 GeV. My prediction is the only prediction in the field. No other theory says anything about the Higgs — except for ordinary supersymmetry, which then requires the existence of lots of new particles.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 33, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9467481970787048, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/71957-stoke-s-theorem-jacobian-problem.html
# Thread:

1. ## Stokes' Theorem - Jacobian Problem

The question asks to evaluate the surface integral for the given $F$ and $S$ with Stokes' theorem: $F= [z^2, x^2, y^2]$, $S: z^2 = x^2 + y^2$ for $y \geq 0$, $0 \leq z \leq 2$.

My question is that in the transformation from $x,y$ to $u,v$, shouldn't there be a Jacobian multiplied in? I don't see it anywhere in the solution. Attached Thumbnails

2. Originally Posted by Altair
The question asks to evaluate the surface integral for the given $F$ and $S$ with Stokes' theorem: $F= [z^2, x^2, y^2]$, $S: z^2 = x^2 + y^2$ for $y \geq 0$, $0 \leq z \leq 2$. My question is that in the transformation from $x,y$ to $u,v$, shouldn't there be a Jacobian multiplied in? I don't see it anywhere in the solution.

There is no trouble or mystery here. The surface is being represented by the parametrised position vector $\vec{r}(u, v) = x(u, v) \vec{i} + y(u, v) \vec{j} + z(u, v) \vec{k}$. Using this representation, $\vec{dS} = \frac{\partial \vec{r}}{\partial u} du \times \frac{\partial \vec{r}}{\partial v} dv$. This is a formula that should be somewhere in your notes or textbook.
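To see why no separate Jacobian shows up, one can compute $\frac{\partial \vec r}{\partial u} \times \frac{\partial \vec r}{\partial v}$ explicitly: its components already carry the area-scaling factor that a Jacobian would normally supply. Below is a SymPy sketch (my own, using one possible chart for the cone $z^2 = x^2 + y^2$, namely $\vec r(u,v) = (u\cos v,\ u\sin v,\ u)$ with $0 \leq u \leq 2$, $0 \leq v \leq \pi$; the book's solution may use a different parametrisation).

```python
# Sketch (mine, with an assumed parametrisation of the cone z^2 = x^2 + y^2):
# the cross product r_u x r_v already contains the area-scaling ("Jacobian") factor.
import sympy as sp

u, v = sp.symbols('u v', positive=True)
r = sp.Matrix([u * sp.cos(v), u * sp.sin(v), u])   # position vector r(u, v) on the cone

r_u = r.diff(u)
r_v = r.diff(v)
normal = r_u.cross(r_v)                            # vector surface element per du dv

print(sp.simplify(normal.T))                       # [-u*cos(v), -u*sin(v), u]
print(sp.simplify(sp.sqrt(normal.dot(normal))))    # sqrt(2)*u: the scaling factor, built in

# For Stokes' theorem, the flux side is the integral of curl(F) . normal du dv over
# 0 <= u <= 2, 0 <= v <= pi, with the orientation chosen to match the boundary curve.
```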
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9106927514076233, "perplexity_flag": "middle"}
http://cs.stackexchange.com/questions/7406/intuitive-meaning-of-modal-mu-calculus-formula
# Intuitive meaning of modal $\mu$-calculus formula

I am solving one of the past exams and I am not certain about my solution to one of the exercises. The exercise asks for the intuitive meaning of the modal $\mu$-calculus formula: $$\phi = \mu Z. \langle - \rangle tt \wedge [-a]Z$$

According to the article Modal logics and mu-calculi: an introduction by Bradfield and Stirling [1], the intuition behind the $\mu$ operator is "finite looping". So my reasoning is the following: on every path through states in $Z$ there must be only a finite number of transitions with labels different from $a$, and then we must reach a state which is both non-terminal (from the first condition) and all transitions from it are labelled $a$ (from finiteness). Hence on every path through states in $Z$ there must eventually be a transition labelled $a$ (similar to the CTL formula $\forall F(a)$). Is my reasoning correct? I am unable to find any formal reason for my solution to be right; can you give me a little hint?

[1] http://homepages.inf.ed.ac.uk/jcb/Research/bradfield-stirling-HPA-mu-intro.ps.gz

-

## 1 Answer

Let's break it down. First, let's look at $[-a]\phi$. This means every non-$a$ transition leads to a state where $\phi$ holds. It follows then that $[-a]\mathrm{ff}$ holds for states that have no non-$a$ transitions, which we will use when looking at the least fixed point semantics. $\langle-\rangle\mathrm{tt}$ is pretty simple. It holds in any state that has any transition, i.e. is not deadlocked. So together $\langle-\rangle\mathrm{tt} \land [-a]\phi$ means the state can take a transition and $\phi$ holds after every non-$a$ transition.

One way to view the meaning of $\mu Z.\phi(Z)$ is by the approximants referenced in your linked tutorial. If the formula is satisfied in state $s$ then there is some $\beta$ such that $\bigvee_{\alpha<\beta} \phi^{(\alpha)}(\mathrm{ff})$ is satisfied in $s$. The notation $\phi^{(n)}(x)$ means $\phi$ iterated on $x$, $n$ times, i.e. $\underbrace{\phi(\phi(\dots\phi(x)))}_{\text{$n$ times}}$. Let's look at some of these.

\begin{align} \phi^{(0)}(\mathrm{ff}) &= \mathrm{ff} \\ \phi^{(1)}(\mathrm{ff}) &= \langle-\rangle\mathrm{tt} \land [-a]\phi^{(0)}(\mathrm{ff}) \\ &= \langle-\rangle\mathrm{tt} \land [-a]\mathrm{ff} \\ \phi^{(2)}(\mathrm{ff}) &= \langle-\rangle\mathrm{tt} \land [-a]\phi^{(1)}(\mathrm{ff}) \\ &= \langle-\rangle\mathrm{tt} \land [-a](\langle-\rangle\mathrm{tt} \land [-a]\mathrm{ff}) \\ \phi^{(3)}(\mathrm{ff}) &= \langle-\rangle\mathrm{tt} \land [-a]\phi^{(2)}(\mathrm{ff}) \\ &= \langle-\rangle\mathrm{tt} \land [-a](\langle-\rangle\mathrm{tt} \land [-a](\langle-\rangle\mathrm{tt} \land [-a]\mathrm{ff})) \end{align}

Hopefully it is clear that these have the meanings
1. $\phi^{(1)}(\mathrm{ff})$: States that can take only $a$ transitions
2. $\phi^{(2)}(\mathrm{ff})$: Live states that 1. have only $a$ transitions; or 2. all length 1 non-$a$ paths lead to a live state with only $a$ transitions
3. $\phi^{(3)}(\mathrm{ff})$: Live states that 1. have only $a$ transitions; or 2. all length 1 non-$a$ paths lead to a live state with only $a$ transitions; or 3. all length 2 non-$a$ paths lead to a live state with only $a$ transitions

If that is unclear, remember that $[-a]\phi$ is trivially satisfied for states with no non-$a$ transitions. Now you should see that $\phi^{(n)}(\mathrm{ff})$ is true if and only if the state can take at most $n-1$ non-$a$ transitions before reaching a live state with only $a$ transitions.
It turns out that $\phi^{(n)}(\mathrm{ff}) \implies \phi^{(n+1)}(\mathrm{ff})$ so we don't need to take the disjunction with lesser approximants and can simply say $\mu Z. \langle-\rangle\mathrm{tt} \land [-a]Z \iff \exists \beta \in \mathbb{N}. \phi^{(\beta)}(\mathrm{ff})$, or in english, after a finite number of non-$a$ transitions we reach a live state with only $a$ transitions. -
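The approximant computation above is easy to run on a concrete labelled transition system: start from the empty set (i.e. $\mathrm{ff}$) and iterate $Z \mapsto \{s \mid s \text{ is live and every non-}a\text{ transition from } s \text{ stays in } Z\}$ until it stabilises. The sketch below is my own illustration with a made-up four-state system, not part of the answer.

```python
# Sketch (my own toy example): computing mu Z. <->tt /\ [-a]Z on a small labelled
# transition system by iterating from the empty set, exactly as in the approximants.
# States: 0 --b--> 1 --b--> 2 --a--> 2, and state 3 is deadlocked.
transitions = {
    0: [("b", 1)],
    1: [("b", 2)],
    2: [("a", 2)],
    3: [],
}

def phi(Z):
    """One application of Z |-> <->tt /\\ [-a]Z."""
    result = set()
    for s, outgoing in transitions.items():
        live = bool(outgoing)                                            # <->tt: some transition exists
        box = all(t in Z for (label, t) in outgoing if label != "a")     # [-a]Z
        if live and box:
            result.add(s)
    return result

Z = set()                      # phi^(0)(ff)
while True:
    new_Z = phi(Z)
    if new_Z == Z:
        break
    Z = new_Z

print(Z)   # {0, 1, 2}: these states reach a live, all-"a" state after finitely many
           # non-"a" transitions; state 3 is deadlocked, fails <->tt, and is excluded.
```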
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 51, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8835461139678955, "perplexity_flag": "middle"}
http://mathhelpforum.com/trigonometry/213152-cot-x-3cot-2x-1-0-a.html
2Thanks • 1 Post By ILikeSerena • 1 Post By ibdutt

# Thread:

1. ## cot(x) + 3cot(2x) - 1 = 0

Hello, this question looked straightforward enough, but I'm stuck and would really appreciate some help. Solve on the interval $0\leq x \leq 2\pi$: $\cot\theta + 3\cot2\theta-1=0$ I tried writing it in terms of $\cos\theta$ and $\sin\theta$ and got this: $\dfrac{5\cos^2\theta - 3\sin^2\theta}{2\sin\theta\cos\theta} =1$, among various similar equations. I have worked on from here a bit, but can't seem to get it in terms of a trigonometric equation I can then solve. I'd really appreciate it if someone would tell me if I'm on the right track and should keep going or if I need to approach the question in a different way. Thank you very much.

2. ## Re: cot(x) + 3cot(2x) - 1 = 0

Originally Posted by Furyan
Hello, this question looked straightforward enough, but I'm stuck and would really appreciate some help. Solve on the interval $0\leq x \leq 2\pi$: $\cot\theta + 3\cot2\theta-1=0$ I tried writing it in terms of $\cos\theta$ and $\sin\theta$ and got this: $\dfrac{5\cos^2\theta - 3\sin^2\theta}{2\sin\theta\cos\theta} =1$, among various similar equations. I have worked on from here a bit, but can't seem to get it in terms of a trigonometric equation I can then solve. I'd really appreciate it if someone would tell me if I'm on the right track and should keep going or if I need to approach the question in a different way. Thank you very much.

Hey Furyan! How about writing it in terms of $\cos 2\theta$ and $\sin 2\theta$?

3. ## Re: cot(x) + 3cot(2x) - 1 = 0

Hi ILikeSerena, Thanks, I'll try that

4. ## Re: cot(x) + 3cot(2x) - 1 = 0

Hi ILikeSerena, Thank you very much indeed, that worked! I would never have gone that way if you hadn't suggested it. Although in the end it was simple, it took me rather a long time to get there. I got: $4\cos2\theta - \sin2\theta = -1$ Then I used: $4\cos2\theta - \sin2\theta \equiv R\cos(2\theta + \alpha)$ And ended up with: $\sqrt{17}\cos(2\theta + \arctan\dfrac{1}{4}) = -1$
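As a quick numerical sanity check of the algebra above (my own sketch, not part of the thread): using $\cot\theta = (1+\cos 2\theta)/\sin 2\theta$ and $\cot 2\theta = \cos 2\theta/\sin 2\theta$, the original equation is equivalent to $1 + 4\cos 2\theta - \sin 2\theta = 0$, i.e. $4\cos 2\theta - \sin 2\theta = -1$, and the $\sqrt{17}\cos(2\theta + \arctan\tfrac{1}{4})$ rewrite agrees at random test angles.

```python
# Numerical check (mine) of the identities used in the thread.
import numpy as np

rng = np.random.default_rng(2)
theta = rng.uniform(0.1, 3.0, size=1000)
theta = theta[np.abs(np.sin(2 * theta)) > 1e-3]   # stay away from the poles

lhs = 1 / np.tan(theta) + 3 / np.tan(2 * theta) - 1
rhs = (1 + 4 * np.cos(2 * theta) - np.sin(2 * theta)) / np.sin(2 * theta)
print(np.allclose(lhs, rhs))                      # True: same equation, so the roots agree

alpha = np.arctan(1 / 4)
print(np.allclose(4 * np.cos(2 * theta) - np.sin(2 * theta),
                  np.sqrt(17) * np.cos(2 * theta + alpha)))   # True: the R cos(2theta + alpha) form
```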
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 15, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9670250415802002, "perplexity_flag": "middle"}
http://nrich.maths.org/1110
# A Cartesian Puzzle ##### Stage: 2 Challenge Level: Here are the coordinates of some quadrilaterals but in each case one coordinate is missing! 1. $(2,11), \; (0,9),\; (2,7),\; (?,?)$ 2. $(3,7),\; (3,4),\; (8,4),\; (?,?)$ 3. $(18,3),\; (16,5), \;(12,5),\; (?,?)$ 4. $(13,12),\; (15,14),\; (12,17),\; (?,?)$ 5. $(7,14),\; (6,11),\; (7,8),\; (?,?)$ 6. $(15,9),\; (19,9),\; (16,11),\; (?,?)$ 7. $(11,3),\; (15,2),\; (16,6),\; (?,?)$ 8. $(9,16),\; (2,9),\; (9,2),\; (?,?)$ The quadrilaterals are all symmetrical. This may be rotational or line symmetry or both. Can you work out what the missing coordinates are if you know they are all positive? Is there more than one way to find out? Now plot those eight missing coordinates on a graph like this. What shape do they make and what sort of symmetry does it have?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8881595134735107, "perplexity_flag": "middle"}
http://mathematica.stackexchange.com/questions/tagged/algebraic-manipulation?sort=votes&pagesize=15
Tagged Questions
The art of manipulating an algebraic expression into the desired form.
6answers 2k views ### Finding real roots of negative numbers (for example, $\sqrt[3]{-8}$) Say I want to quickly calculate $\sqrt[3]{-8}$, to which the most obvious solution is $-2$. When I input $\sqrt[3]{-8}$ or Power[-8, 3^-1], Mathematica gives the ...
4answers 415 views ### How do I introduce a new variable in a trigonometric equation? I have the trigonometric equation \begin{equation*} \sin^8 x + 2\cos^8 x -\dfrac{1}{2}\cos^2 2x + 4\sin^2 x= 0. \end{equation*} By putting $t = \cos 2x$, I have \begin{equation*} \dfrac{3}{16} t^4+ ...
5answers 484 views ### How do I get my equation to have the form $(x-a)^2 + (y-b)^2 + (z-c)^2-d = 0$? I want Mathematica to express the equation $$-11 - 2 x + x^2 - 4 y + y^2 - 6 z + z^2=0$$ in the form $$(x - 1)^2 + (y - 2)^2 + (z - 3)^2 - 25=0$$ How do I tell Mathematica to do that?
2answers 294 views ### How to define a non-standard algebra in Mathematica? I want to define an algebra, where there are three elements: 0, 1 and $\infty$ and two operations, addition and multiplication defined, both commutative: \begin{align*} 0+0&=0\\ 0+1&=1\\ ...
4answers 788 views ### Is it possible to have Mathematica move all terms to one side of an equation? I have an inequality expression that I would like to express in terms of the relation of the parameters to zero. More simply, I want to have mathematica move all the terms to one side of the ...
4answers 407 views ### Is there a way to Collect[] for more than one symbol? Oftentimes you find yourself looking for polynomials in multiple variables. Consider the following expression: a(x - y)^3 + b(x - y) + c(x - y) + d as you can ...
6answers 900 views ### Replacing composite variables by a single variable To replace a single variable by another variable, one can simply use the the replace all (/.) operator (e.g., ...
2answers 967 views ### Why doesn't Mathematica expand Cos[x]^3 Sin[x]^2? I found some examples of Mathematica's commands usage in an old manual but the program gives me different result than expected ...
1answer 151 views ### Why is ReplaceAll behaving like this? I'm learning to use the ReplaceAll function and I found the behavior of which is quite confusing. For Sqrt[f[x, y]] /. f[___] -> u Mathematica returns ...
2answers 474 views ### Expand modulus squared Is it possible to make a function in Mathematica that expands expressions of the form $$|z + w|^2 = |z|^2 + 2\text{Re} \overline{z}w + |w|^2?$$ Preferably it should also be able to handle things ...
3answers 257 views ### InverseSeries of multiple variables and multiple equations CONTEXT Let us consider a bit of the Universe in which we draw spheres (see a high resolution image here). Astronomers have shown that the density within these spheres could be predicted quite ...
3answers 329 views ### How can I convert a complex number a+b I to the exponent form A Exp(I phi)? When I have an expression such as: (1/4 + I/4) ((1 - 2 I) x + Sqrt[3] y) it is hard to get an intuition of the number. So I want to convert it to the complex ...
3answers 475 views ### How can I convert x^2 to x*x? When I try the following code: a b^2 c /. b c -> e Mathematica gives me: a b^2 c but what I want is: ...
4answers 299 views ### “Evaluating” polynomials of functions (Symbols) I want to implement the following type evaluation symbolically $$(f^2g + fg + g)(x) \to f(x)^2 g(x) + f(x) g(x) + g(x)$$ In general, on left hand side there is a polynomial in an arbitrary number of ...
2answers 261 views ### Shaping/simplifying equations in a certain way A problem I am occasionally facing is to simplify an equation not to it's shortest form but to a form that is simple by other means. Often, this is grouping the term according to certain functions, ...
1answer 193 views ### Most efficient way to determine conclusively whether an algebraic number is zero Let x be an algebraic number of unspecified degree, expressed using arithmetic, rational powers, and algebraic integers (edit: ...
6answers 761 views ### How do I replace a variable in a polynomial? How do I substitue z^2->x in the following polynomial z^4+z^2+4? z^4+z^2+4 /. z^2->x ...
3answers 253 views ### Distances between points in periodic cube How can one implement more efficiently/elegantly/memory savvily the following function which returns a matrix of all Euclidian distances between points in 3D within a cube of width ...
4answers 358 views ### How to implement dual numbers in Mathematica? I wonder how can I implement dual numbers in Mathematica, so that all functions work well with them (as with complex numbers). Particularly, for each function $f$, ...
1answer 253 views ### How do I expand a sum? I have a problem with Mathematica's symbolic manipulations. As an example, consider the following expression: $$\sum _{i=1}^n -2 x_i \left(-a x_i-b+y_i\right)=0$$ How do I get Mathematica to expand ...
1answer 71 views ### ToNumberField won't recognize Root[…] as explicit algebraic number In Mathematica 9.0.1, it appears that ToNumberField will not always recognize a Root object as an explicit algebraic number. ...
2answers 629 views ### How can I rationalize the denominator of an expression? Mathematica doesn't rationalize the denominator automatically, and I haven't found anything in the documentation about it. But I found an old post on MathGroup, which proposes a solution using ...
2answers 293 views ### Manipulating an equation into standard quadratic form? Say I have an equation of the form $$u s + \frac{1}{v} + \frac{1}{p s + q} = 0$$ (or any form that can be written as a standard quadratic, really, the above form is just an example; they'll all be ...
1answer 187 views ### RootSum result manipulation/simplification Consider the sum sum1 = Sum[ k/( k^7 - 2 k + 3), {k, Infinity}] ...
4answers 854 views ### Checking if two trigonometric expressions are equal Say I have two trigonometric expressions which are a bit complicated. Is there a quick way to check if they reduce to the same thing (that they are equal) using Mathematica? I was solving this: \$y'' ...
4answers 2k views ### Factoring polynomials to factors involving complex coefficients I've run into some problems using Factor on polynomials with complex coefficient factors. Reading the documentation it looks like it only factors over the ...
3answers 115 views ### Is there any way to collect only variables with a specific power? Suppose I've got this: In[13]:= Expand[(a + b) (b + c) (c + a)] Out[13]= a^2 b + a b^2 + a^2 c + 2 a b c + b^2 c + a c^2 + b c^2 And I want to collect only ...
0answers 57 views ### Apart may use Padé method: what's that? How does Apart work? The page tutorial/SomeNotesOnInternalImplementation#7441 says, "Apart ...
3answers 460 views ### What function can I use to evaluate $(x+y)^2$ to $x^2 + 2xy + y^2$? What function can I use to evaluate $(x+y)^2$ to $x^2 + 2xy + y^2$? I want to evaluate It and I've tried to use the most obvious way: simply typing and evaluating $(x+y)^2$, But it gives me only ...
3answers 366 views ### Square both sides of an equation? Can I define an equation (for example, x+1 == y^2 + 2), and tell Mathematica to square both sides? If not, what is an equivalent way to achieve this?
6answers 304 views ### How to simplify a complicated Sum in terms of power Sums? For example, I have: $a=\sum _{r=1}^n x_r \left(\left(\sum _{i=1}^n x_i-x_r\right){}^2-\sum _{i=1}^n x_i^2\right)$ ...
2answers 108 views ### How to protect pattern or subexpression when distributing / expanding expression? I've got an expression like expr = (1-x)(a+b) that I would like to distribute / expand while keeping factors of (1-x) intact, ...
4answers 511 views ### How to get exact roots of this polynomial? The equation $$64x^7 -112x^5 -8x^4 +56x^3 +8x^2 -7x - 1 = 0$$ has seven solutions $x = 1$, $x = -\dfrac{1}{2}$ and $x = \cos \dfrac{2n\pi}{11}$, where $n$ runs from $1$ to $5$. With ...
1answer 145 views ### Numbered symbols I work with an exterior algebra over $R^n$. I have the basis $\{1,\omega_i\}_{i=1}^n$ in this algebra, and my differential operator is defined as d\omega_k=\sum_{i>j>0,i+j=k} (i-j)w_i\wedge ...
1answer 75 views ### How to apply tags to expression terms? I often see on this site and at the mathgroup the repeated questions on how to rearrange expression that Mathematica "likes" to keep in one form, but the user prefers in another. Consider this trivial ...
1answer 438 views ### Reduce an equation by putting a new variable I have the following equation given: $$(26-x)\cdot\sqrt{5x-1} -(13x+14)\cdot\sqrt{5-2x} + 12\sqrt{(5x-1)\cdot(5-2x) }= 18x+32.$$ In order to solve it, I want to substitute $t = \sqrt{5x - ...
2answers 278 views ### expanding a polynomial and collecting coefficients I'm trying to expand the following polynomial ...
1answer 223 views ### Move variable to one side of the equation Say if I have a formula like so: a1*a2*a3^(a4 + 1)*(1 - E^(a5*a6/a3^a4/a2)) == 0 How do I move a3 to the right? I've tried to follow other examples here on stack ...
1answer 162 views ### Finding mappings between expressions Suppose we have an expression of the form: $j=\frac{A\left(t\right)}{B\left(t\right)}=\frac{C\left(s\right)}{D\left(s\right)}$ That is, $j$ can be expressed either as a function of $t$, or as a ...
7answers 215 views ### Defining a function that completes the square given a quadratic polynomial expression How can I write a function that would complete the square in a quadratic polynomial expression such that, for example, CompleteTheSquare[5 x^2 + 27 x - 5, x] ...
2answers 1k views ### How to convert a system of parametric equations to a normal equation? For example, I have a system of parametric equations (R is a constant number) : ...
2answers 102 views ### Unexpected side effect of removing the Orderless attribute from Times First I make Times orderless: ClearAttributes[Times, Orderless]; Then I evaluate ...
3answers 211 views ### Is it possible to use Composition for polynomial composition? I want to do this: $P = (x^3+x)$ $Q = (x^2+1)$ $P \circ Q = P \circ (x^2+1) = (x^2+1)^3+(x^2+1) = x^6+3x^4+4x^2+2$ I used Composition for testing if that could ...
3answers 238 views ### Limiting form of a polynomial expression When simplifying an expression by hand, a trick that is often used is to remove terms that are lower powers of the independent variable, for instance, as $x \rightarrow \infty$, $x^2 + x$ becomes ...
5answers 295 views ### How to group certain symbolic expressions? For example, I have the following expression : A( 2 x1 + B(y1 + y2) + 2 x2 ) How do I make the output look like this (grouping ...
3answers 125 views How to extract phase angle from sinusoid I'm doing some electric circuit calcualtions and I'm trying to get the phasor representation of some arbitrary function of Sin or Cos. Could be complex like: ... 2answers 95 views How can I expand a inequality with Abs I want Abs[x] + Abs[y] <= 1 to be convert to ... 1answer 40 views Non commutative multiply- expand expression I began to use Mathematica a few days. My problem is: How to expand expression like $(a+b)*(a+b)$, where this multiplication is non commutative? Mathematica can do this? 2answers 152 views Eliminate several variables between five Conic_section equations I want to eliminate x1, x2, y1, y2 between these 5 equations. I tried ... 1answer 157 views How can I express an algebraic expression as a product? I have a simply expression that I can see that I can express as a product. ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 33, "mathjax_display_tex": 8, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8915166854858398, "perplexity_flag": "middle"}
http://crypto.stackexchange.com/questions/3878/constructing-secure-key-exchange-protocol/3879
# Constructing secure key exchange protocol I have $\Pi=(Gen,Enc,Dec)$, and let it be a semantically secure public-key encryption scheme. The security parameter is $n$, and the message space of plaintexts is always $\lbrace 0, 1 \rbrace^n$. By using $\Pi$ I want to construct a key exchange protocol $\Theta$. There should be 2 rounds (i.e. one for Alice and one for Bob). It must be secure against eavesdroppers (and it should be possible to prove it :) ). Of course the only assumption is the security of $\Pi$. For example the Diffie–Hellman key exchange protocol fits this exercise (if we assume something), but I don't know how to generalize it. P.S. The key Alice and Bob establish lies in $\lbrace 0, 1 \rbrace^n$. Thanks, Nick. - CurveCP is a protocol with properties similar to TLS, but that uses only DH-Keyexchange and authenticated symmetric encryption. – CodesInChaos Oct 24 '12 at 20:03 ## 1 Answer Well, the obvious way to do this is: • Before the protocol occurs, Alice runs the $Gen$ procedure to create a public and a private key • For her round, Alice sends her public key to Bob • For his round, Bob selects a random symmetric key $\in \{0,1\}^n$, encrypts it with Alice's public key, and sends that encryption to Alice. • Alice decrypts the message that Bob sent her with her private key. Now, Alice and Bob share a random symmetric key (Bob knows it because he created it, Alice knows it because she decrypted it). In addition, Eve has no information on the key; the only thing that could possibly give her information about it is the encrypted version in round 2; and because we assume $\Pi$ is semantically secure, that gives her no information. - 1 It has the usual problem of MitM-attacks if there is no additional authentication (e.g. a certificate for Alice's public key). – Paŭlo Ebermann♦ Sep 26 '12 at 7:58 Yes that's vulnerable to a MitM, as any key exchange protocol without authentication. Further, if the MitM can induce Bob to perform the protocol the way Alice does it, the attack needs to be active only in an initial step, then can become passive eavesdropping; the MitM intercepts and mutes the messages sent by Alice and Bob, and sends one message to each deciding the shared key. – fgrieu Sep 26 '12 at 10:56 1 @PaŭloEbermann: yes, that is vulnerable to a MITM. However, standard DH is also equally vulnerable, and the submitter specifically said that DH solved the problem. Yeah, I probably should have mentioned that MITM needs to be addressed somehow in real life. – poncho Sep 26 '12 at 11:02
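For concreteness, here is a minimal Python sketch of the two-round protocol described in the answer, with RSA-OAEP from the third-party `cryptography` package standing in for the abstract scheme $\Pi=(Gen,Enc,Dec)$; the choice of RSA-OAEP, the 2048-bit key size and $n=128$ bits are illustrative assumptions, not part of the question, and the sketch deliberately ignores the MitM/authentication issue raised in the comments.

```
# Sketch of the 2-round key exchange, with RSA-OAEP playing the role of the
# semantically secure scheme Pi = (Gen, Enc, Dec).
# Requires a recent version of the third-party "cryptography" package.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Setup: Alice runs Gen to obtain a key pair.
alice_sk = rsa.generate_private_key(public_exponent=65537, key_size=2048)
alice_pk = alice_sk.public_key()      # Round 1: Alice sends this to Bob.

# Round 2: Bob picks a random n-bit key and sends Enc_pk(k) to Alice.
k_bob = os.urandom(16)                # n = 128 bits
ciphertext = alice_pk.encrypt(k_bob, oaep)

# Alice recovers the shared key with Dec_sk.
k_alice = alice_sk.decrypt(ciphertext, oaep)
assert k_alice == k_bob               # both parties now hold the same key
```

As the comments stress, without some authentication of Alice's public key this is still open to an active man-in-the-middle.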
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9078702330589294, "perplexity_flag": "middle"}
http://terrytao.wordpress.com/tag/heat-flow/
What’s new Updates on my research and expository papers, discussion of open problems, and other maths-related topics. By Terence Tao # Tag Archive You are currently browsing the tag archive for the ‘heat flow’ tag. ## Random matrices: The Universality phenomenon for Wigner ensembles 2 February, 2012 in math.PR, paper | Tags: Four Moment Theorem, heat flow, survey, Van Vu, Wigner matrices | by Terence Tao | 2 comments Van Vu and I have just uploaded to the arXiv our paper “Random matrices: The Universality phenomenon for Wigner ensembles“. This survey is a longer version (58 pages) of a previous short survey we wrote up a few months ago. The survey focuses on recent progress in understanding the universality phenomenon for Hermitian Wigner ensembles, of which the Gaussian Unitary Ensemble (GUE) is the most well known. The one-sentence summary of this progress is that many of the asymptotic spectral statistics (e.g. correlation functions, eigenvalue gaps, determinants, etc.) that were previously known for GUE matrices, are now known for very large classes of Wigner ensembles as well. There are however a wide variety of results of this type, due to the large number of interesting spectral statistics, the varying hypotheses placed on the ensemble, and the different modes of convergence studied, and it is difficult to isolate a single such result currently as the definitive universality result. (In particular, there is at present a tradeoff between generality of ensemble and strength of convergence; the universality results that are available for the most general classes of ensemble are only presently able to demonstrate a rather weak sense of convergence to the universal distribution (involving an additional averaging in the energy parameter), which limits the applicability of such results to a number of interesting questions in which energy averaging is not permissible, such as the study of the least singular value of a Wigner matrix, or of related quantities such as the condition number or determinant. But it is conceivable that this tradeoff is a temporary phenomenon and may be eliminated by future work in this area; in the case of Hermitian matrices whose entries have the same second moments as that of the GUE ensemble, for instance, the need for energy averaging has already been removed.) Nevertheless, throughout the family of results that have been obtained recently, there are two main methods which have been fundamental to almost all of the recent progress in extending from special ensembles such as GUE to general ensembles. The first method, developed extensively by Erdos, Schlein, Yau, Yin, and others (and building on an initial breakthrough by Johansson), is the heat flow method, which exploits the rapid convergence to equilibrium of the spectral statistics of matrices undergoing Dyson-type flows towards GUE. (An important aspect to this method is the ability to accelerate the convergence to equilibrium by localising the Hamiltonian, in order to eliminate the slowest modes of the flow; this refinement of the method is known as the “local relaxation flow” method. Unfortunately, the translation mode is not accelerated by this process, which is the principal reason why results obtained by pure heat flow methods still require an energy averaging in the final conclusion; it would of interest to find a way around this difficulty.) 
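Here is a crude numerical illustration of the phenomenon itself, for the real symmetric analogue of the ensembles discussed here (it has nothing to do with the heat flow or swapping arguments, and the matrix size, normalisation and choice of statistic below are arbitrary): the normalised bulk eigenvalue gaps of a Gaussian Wigner matrix and of a Bernoulli ($\pm 1$) Wigner matrix come out essentially indistinguishable.

```
# Compare bulk eigenvalue-gap statistics of two real symmetric Wigner
# ensembles with different entry distributions (Gaussian vs +/-1).
import numpy as np

def wigner_eigs(sample_entries, n=1000, seed=0):
    rng = np.random.default_rng(seed)
    a = sample_entries(rng, (n, n))
    h = np.triu(a) + np.triu(a, 1).T           # symmetrise the upper triangle
    return np.linalg.eigvalsh(h) / np.sqrt(n)  # semicircle scaling

def bulk_gap_quartiles(eigs, frac=0.25):
    n = len(eigs)
    mid = eigs[int(n * (0.5 - frac / 2)):int(n * (0.5 + frac / 2))]
    gaps = np.diff(mid)
    return np.quantile(gaps / gaps.mean(), [0.25, 0.5, 0.75])

gauss = wigner_eigs(lambda rng, s: rng.standard_normal(s))
bern = wigner_eigs(lambda rng, s: rng.choice([-1.0, 1.0], size=s))

print("gaussian ", np.round(bulk_gap_quartiles(gauss), 3))
print("bernoulli", np.round(bulk_gap_quartiles(bern), 3))
```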
The other method, which goes all the way back to Lindeberg in his classical proof of the central limit theorem, and which was introduced to random matrix theory by Chatterjee and then developed for the universality problem by Van Vu and myself, is the swapping method, which is based on the observation that spectral statistics of Wigner matrices tend to be stable if one replaces just one or two entries of the matrix with another distribution, with the stability of the swapping process becoming stronger if one assumes that the old and new entries have many matching moments. The main formalisations of this observation are known as four moment theorems, because they require four matching moments between the entries, although there are some variant three moment theorems and two moment theorems in the literature as well. Our initial four moment theorems were focused on individual eigenvalues (and later also to eigenvectors), but it was later observed by Erdos, Yau, and Yin that simpler four moment theorems could also be established for aggregate spectral statistics, such as the coefficients of the Greens function, and Knowles and Yin also subsequently observed that these latter theorems could be used to recover a four moment theorem for eigenvalues and eigenvectors, giving an alternate approach to proving such theorems. Interestingly, it seems that the heat flow and swapping methods are complementary to each other; the heat flow methods are good at removing moment hypotheses on the coefficients, while the swapping methods are good at removing regularity hypotheses. To handle general ensembles with minimal moment or regularity hypotheses, it is thus necessary to combine the two methods (though perhaps in the future a third method, or a unification of the two existing methods, might emerge). Besides the heat flow and swapping methods, there are also a number of other basic tools that are also needed in these results, such as local semicircle laws and eigenvalue rigidity, which are also discussed in the survey. We also survey how universality has been established for wide variety of spectral statistics; the ${k}$-point correlation functions are the most well known of these statistics, but they do not tell the whole story (particularly if one can only control these functions after an averaging in the energy), and there are a number of other statistics, such as eigenvalue counting functions, determinants, or spectral gaps, for which the above methods can be applied. In order to prevent the survey from becoming too enormous, we decided to restrict attention to Hermitian matrix ensembles, whose entries off the diagonal are identically distributed, as this is the case in which the strongest results are available. There are several results that are applicable to more general ensembles than these which are briefly mentioned in the survey, but they are not covered in detail. We plan to submit this survey eventually to the proceedings of a workshop on random matrix theory, and will continue to update the references on the arXiv version until the time comes to actually submit the paper. Finally, in the survey we issue some errata for previous papers of Van and myself in this area, mostly centering around the three moment theorem (a variant of the more widely used four moment theorem), for which the original proof of Van and myself was incomplete. 
(Fortunately, as the three moment theorem had many fewer applications than the four moment theorem, and most of the applications that it did have ended up being superseded by subsequent papers, the actual impact of this issue was limited, but still an erratum is in order.) ## Recent progress on the Kakeya conjecture 11 May, 2009 in math.AG, math.AP, math.AT, math.CO, talk, travel | Tags: additive combinatorics, heat flow, incidence geometry, Kakeya conjecture, polynomial method | by Terence Tao | 22 comments Below the fold is a version of my talk “Recent progress on the Kakeya conjecture” that I gave at the Fefferman conference. Read the rest of this entry »
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 1, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.945852518081665, "perplexity_flag": "middle"}
http://mathhelpforum.com/differential-geometry/195066-converge-diverge-plus-show-why.html
# Thread: 1. ## Converge or diverge plus show why show why the con/diverge: $\sum_{r=1}^{\infty}\frac{r^{2}-1}{r^{2}+1}$ i think she diverges. because the terms get bigger (closer to 1 each) so you are just adding almost 1 + 1 +1 +1 ... it tails off to infinity... diverge... but i think i need to use a rigurous proof not just intuition. $\sum_{r=1}^{\infty}\frac{\sin(r^{2})}{5^{r}}$ i'd say she converges since the denom gets rapidly massive so the whole thing is getting rapidly tiny... i can't see what it converges too though? and how to show it? $\sum_{r=1}^{\infty}\frac{(3r)!}{(r!)^{3}}$ as above, i think its the same. but i don't know how to show these using the whole episolon thing. the epsilon thing really bugs me. do i need the epsilon thing? can someone help me to show these things rather than just use intuition... also am i right or wrong??? 2. ## Re: Converge or diverge plus show why Originally Posted by Shizaru show why the con/diverge: $\sum_{r=1}^{\infty}\frac{r^{2}-1}{r^{2}+1}$ i think she diverges. because the terms get bigger (closer to 1 each) so you are just adding almost 1 + 1 +1 +1 ... it tails off to infinity... diverge... but i think i need to use a rigurous proof not just intuition. $\sum_{r=1}^{\infty}\frac{\sin(r^{2})}{5^{r}}$ i'd say she converges since the denom gets rapidly massive so the whole thing is getting rapidly tiny... i can't see what it converges too though? and how to show it? $\sum_{r=1}^{\infty}\frac{(3r)!}{(r!)^{3}}$ as above, i think its the same. but i don't know how to show these using the whole episolon thing. the epsilon thing really bugs me. do i need the epsilon thing? can someone help me to show these things rather than just use intuition... also am i right or wrong??? In the first one $\lim_{r\to\infty} a_r=1$, so... ? The second one: $|\frac{\sin(r^{2})}{5^{r}}|<\frac{1}{|5^{r}|}$ In the third: use Ratio test - Wikipedia, the free encyclopedia 3. ## Re: Converge or diverge plus show why Originally Posted by Also sprach Zarathustra In the first one $\lim_{r\to\infty} a_r=1$, so... ? The second one: $|\frac{\sin(r^{2})}{5^{r}}|<\frac{1}{|5^{r}|}$ In the third: use Ratio test - Wikipedia, the free encyclopedia 1st - divergent by nonnull tesT?? 2nd - i still dont see enough info, i would try the comparison test, again i come up short? or is this better to use absolute convergence properties? 4. ## Re: Converge or diverge plus show why Originally Posted by Shizaru 2nd - i still dont see enough info, i would try the comparison test, again i come up short? or is this better to use absolute convergence properties? The point is if a series converges absolutely it converges period. 5. ## Re: Converge or diverge plus show why Originally Posted by Shizaru show why the con/diverge: $\sum_{r=1}^{\infty}\frac{r^{2}-1}{r^{2}+1}$ i think she diverges. because the terms get bigger (closer to 1 each) so you are just adding almost 1 + 1 +1 +1 ... it tails off to infinity... diverge... but i think i need to use a rigurous proof not just intuition. $\sum_{r=1}^{\infty}\frac{\sin(r^{2})}{5^{r}}$ i'd say she converges since the denom gets rapidly massive so the whole thing is getting rapidly tiny... i can't see what it converges too though? and how to show it? $\sum_{r=1}^{\infty}\frac{(3r)!}{(r!)^{3}}$ as above, i think its the same. but i don't know how to show these using the whole episolon thing. the epsilon thing really bugs me. do i need the epsilon thing? can someone help me to show these things rather than just use intuition... also am i right or wrong??? 
A necessary (but not sufficient) condition for a series to converge is that the individual terms have to tend to 0. Therefore, a valid way to show that a series diverges is to show that the individual terms do NOT tend to 0. For the first $\displaystyle \begin{align*} \lim_{r \to \infty}\frac{r^2 - 1}{r^2 + 1} &= \lim_{r \to \infty}\frac{r^2 + 1 - 2}{r^2 + 1} \\ &= \lim_{r \to \infty}1 - \frac{2}{r^2 + 1} \\ &= 1 - 0 \\ &= 1 \end{align*}$ Clearly, the terms do not tend to 0, so the series diverges. 6. ## Re: Converge or diverge plus show why Originally Posted by Shizaru 1st - divergent by nonnull tesT?? 2nd - i still dont see enough info, i would try the comparison test, again i come up short? or is this better to use absolute convergence properties? Think about it like this. Suppose you have some series. Since there may be some negative values in it, the sum will never be any greater than the sum of the absolute values of the terms (since they are all positive). Therefore, by the comparison test, if the "larger series" (the series of absolute values) converges, then so must the "smaller series" (the original series). So for your second series, by showing that $\displaystyle \begin{align*} \sum{\left| \frac{ \sin{\left(r^2\right)} }{ 5^r } \right|} \end{align*}$ converges, you show $\displaystyle \begin{align*} \sum{\frac{\sin{\left(r^2\right)}}{5^r}} \end{align*}$ also converges. 7. ## Re: Converge or diverge plus show why Originally Posted by Prove It A necessary (but not sufficient) condition for a series to converge is that the individual terms have to tend to 0. Therefore, a valid way to show that a series diverges is to show that the individual terms do NOT tend to 0. For the first $\displaystyle \begin{align*} \lim_{r \to \infty}\frac{r^2 - 1}{r^2 + 1} &= \lim_{r \to \infty}\frac{r^2 + 1 - 2}{r^2 + 1} \\ &= \lim_{r \to \infty}1 - \frac{2}{r^2 + 1} \\ &= 1 - 0 \\ &= 1 \end{align*}$ Clearly, the terms do not tend to 0, so the series diverges. thats what we call nonnull test. so am i right there (do you think i would need to prove the sequence of terms converges to 1 or can i just state it doesnt converge to 0 by intuition?) as for the 2nd one... ok i am using the comparison test, and the property of absolute convergence... yes this is one of the properties we are told. i think i get that one now but how do you know the example you gave is always greater (or equal) to the series in question? my main problem is knowing what needs to be shown and what can just be stated... :S see my intuition was correct but i dont always know how to show it 8. ## Re: Converge or diverge plus show why Originally Posted by Prove It A necessary (but not sufficient) condition for a series to converge is that the individual terms have to tend to 0. Why is that not sufficient? 9. ## Re: Converge or diverge plus show why Originally Posted by alexmahone Why is that not sufficient? What about the harmonic series? The terms tend to 0 but the series does NOT converge. 10. ## Re: Converge or diverge plus show why Originally Posted by Shizaru how do you know the example you gave is always greater (or equal) to the series in question? Because each term can never be any greater than its absolute value... 11. ## Re: Converge or diverge plus show why Originally Posted by alexmahone Why is that not sufficient? its not sufficient because it doesnt hold for vice versa. they can converge to 0 without the series converging. Prove It - i mean the 1/5^r being greater than the sin one? 12. 
## Re: Converge or diverge plus show why Originally Posted by Shizaru i mean the 1/5^r being greater than the sin one? That is a simple geometric series with ratio less that 1. 13. ## Re: Converge or diverge plus show why Originally Posted by Shizaru its not sufficient because it doesnt hold for vice versa. they can converge to 0 without the series converging. Prove It - i mean the 1/5^r being greater than the sin one? You should know that $\displaystyle \begin{align*} |\sin{X}| \leq 1 \end{align*}$ for all $\displaystyle \begin{align*} X \end{align*}$. Therefore $\displaystyle \begin{align*} \left| \sin{ \left( r^2 \right) } \right| &\leq 1 \\ \frac{ \left| \sin{ \left( r^2 \right) } \right|}{ \left| 5^r \right| } &\leq \frac{1}{ \left| 5^r \right| } \\ \left| \frac{\sin{\left(r^2\right)}}{5^r} \right| &\leq \left| \frac{1}{5^r} \right| \end{align*}$ 14. ## Re: Converge or diverge plus show why Originally Posted by Prove It You should know that $\displaystyle \begin{align*} |\sin{X}| \leq 1 \end{align*}$ for all $\displaystyle \begin{align*} X \end{align*}$. of course i know this and i am an idiot for not seeing this was neccessary. thanks for the pointer but wait doesnt $1 \geq |\sin x|$ imply $1 \geq \sin x$ anyway so why do i have to bother with using absolute convergence properties in the first place? can't i just directly use the comparison with the |1/5^r|? also we would need to show that 1/5^r converges, but how? (can someone just confirm my suspicion here also - that it is sinr^2 and not just sin r, actually makes no difference here? is this just an attempt to deceive?) 15. ## Re: Converge or diverge plus show why Originally Posted by Plato That is a simple geometric series with ratio less that 1. how is it? it's not (1/5)^r. If it was I could see that
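For anyone who wants a quick numerical sanity check of the three series discussed above (this is only evidence, not a proof — but it matches the arguments given: the first has terms tending to $1$, the second is dominated by a geometric series, and the third has ratio tending to $27>1$, so it diverges):

```
# Numerical sanity check of the three series in this thread (evidence only).
from math import sin, factorial

def partial(term, N):
    return sum(term(r) for r in range(1, N + 1))

a = lambda r: (r * r - 1.0) / (r * r + 1.0)         # terms -> 1, so diverges
b = lambda r: sin(r * r) / 5.0 ** r                 # |b_r| <= (1/5)**r
c = lambda r: factorial(3 * r) / factorial(r) ** 3  # ratio test candidate

print("a_r for large r:", a(10 ** 6))                                    # ~ 1, not 0
print("partial sums of b:", [round(partial(b, N), 10) for N in (5, 10, 20)])
print("c_{r+1}/c_r:", [round(c(r + 1) / c(r), 2) for r in (5, 10, 20)])  # -> 27
```

(And on the side question: $\frac{1}{5^r} = \left(\frac{1}{5}\right)^r$, so the comparison series really is geometric with ratio $\frac15$.)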
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 24, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.921157717704773, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/40979/proof-of-quantum-mechanical-position-uncertainty?answertab=active
# Proof of quantum mechanical position uncertainty How can you prove the uncertainty for position is: $$\Delta{x} =\sqrt{\langle x^2\rangle-\langle x\rangle^2}$$ $\Delta{x}$, taken to be the root mean square of x. $$\Delta{x} =\sqrt{\langle \left(x-\langle x\rangle\right)^2\rangle}$$ $$\Delta{x} =\sqrt{\langle \left(x-\langle x\rangle\right) \left(x-\langle{x}\rangle\right)\rangle}$$ $$\Delta{x} =\sqrt{\langle x^2-2x\langle x\rangle +\langle x \rangle^2\rangle}$$ This is the bit which I am not sure about and why I can do it (taking the outer braket and acting it on the inner x values: $$\Delta{x} =\sqrt{\langle x^2\rangle -2\langle x \rangle \langle x\rangle +\langle x \rangle^2}$$ $$\Delta{x} =\sqrt{\langle x^2\rangle -2\langle x\rangle^2 +\langle x \rangle^2}$$ $$\Delta{x} =\sqrt{\langle x^2\rangle - \langle x \rangle^2}$$ - – Qmechanic♦ Oct 16 '12 at 23:49 2 I'm afraid your step is incorrect (the last formula). Expanding $\langle(x-\langle x \rangle)^2\rangle$ you obtain $\langle x^2 -2x \langle x \rangle x - \langle x \rangle^2\rangle$. From here you only need to use that $\langle x \rangle$ is a number and that expectation value is linear. Since this looks like a homework, I won't work it all out for you (important part of the learning process in physics is to calculate things for yourself). But hopefully this is enough of a hint to get you to the right answer. – SMeznaric Oct 16 '12 at 23:55 @SMeznaric that could be a good answer – David Zaslavsky♦ Oct 16 '12 at 23:56 You're right, here goes. – SMeznaric Oct 16 '12 at 23:57 By the way, the title of your question seems to be in no relation to the body... – Fabian Oct 17 '12 at 10:08 show 1 more comment ## 1 Answer I'm afraid your step is incorrect (the last formula). Expanding $\langle(x−\langle x \rangle)^2\rangle$ you obtain $\langle x^2−2x\langle x\rangle + \langle x\rangle^2\rangle$. From here you only need to use that $\langle x\rangle$ is a number and that expectation value is linear. Since this looks like a homework, I won't work it all out for you (important part of the learning process in physics is to calculate things for yourself). But hopefully this is enough of a hint to get you to the right answer. - yeah sorry, that was a mistake. I will correct that bit. I had done that expansion but it did not make sense to me that it would work. Still doesn't. – Magpie Oct 17 '12 at 14:12 Now that you have the expansion you can use the linearity of the expectation value, i.e. $\langle a \hat{x} + b \hat{y} \rangle = a \langle \hat{x} \rangle + b \langle \hat{y} \rangle$, where I marked the operators with hats and $a,b$ are numbers. In your expression the operators are $x$ and $x^2$, and $\langle x \rangle$ is a number. You are very close to the answer. – SMeznaric Oct 18 '12 at 15:39 Also, as an aside, your last formula is still incorrect. You need $⟨\rangle$ around the value inside the square root, otherwise you are taking the square root of the position operator (not what you want) – SMeznaric Oct 18 '12 at 17:57
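For the record, the step being asked about needs nothing beyond linearity of the expectation value together with the fact that $\langle x\rangle$ is just a number (this is a worked version of the hint above, nothing more):
$$\langle x^2 - 2x\langle x\rangle + \langle x\rangle^2\rangle = \langle x^2\rangle - 2\langle x\rangle\langle x\rangle + \langle x\rangle^2 = \langle x^2\rangle - \langle x\rangle^2,$$
so that $\Delta x = \sqrt{\langle x^2\rangle - \langle x\rangle^2}$, with the expectation brackets kept inside the square root as the last comment points out.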
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 7, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9528135061264038, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/150270/preparing-for-exam-and-dont-understand-how-to-calculate-this-integral-via-integ
# Preparing for exam and don't understand how to calculate this integral via integrating by parts twice It seems to be a pretty simple integral but I am blanking out on how to find this integral: $$\int_0^1 e^{-x}\cos(n\pi x) \;dx$$ I tried Mathematica, but it seems to use some formula instead of integrating by parts. - ## 3 Answers I don't believe it matters which part you integrate and which part you take the derivative of. If you took the derivative of $e^{-x}$ the first time, do it for the second integration by parts as well. If you have done it correctly, you should have an equation with the same integral on each side, yet they won't cancel out. Simply combine like terms and you should have your answer. - Oh, I got it. You need to switch the parts for the second integration. Thank you! – dana May 27 '12 at 7:00 1 @dana Switch the parts? Sounds like the opposite of what I was saying. What did your equation look like after the second integration by parts? – Mike May 27 '12 at 7:07 ooops, this is not what I actually meant to say. My problem was not seing that after the second integration I had two identical (almost) integrals on the both sides of the equation. I guess it's a good time for me to go to bed. Thanks again! – dana May 27 '12 at 7:19 When you’ve a product of an exponential and a sine or cosine, you can split it either way. I’ll set $u=e^{-x}$ and $dv=\cos n\pi x\,dx$, so that $du=-e^{-x}dx$ and $v=\frac1{n\pi}\sin n\pi x$. Then $$\begin{align*} \int_0^1 e^{-x}\cos n\pi x\,dx&=\left[\frac1{n\pi}e^{-x}\sin n\pi x\right]_0^1-\int_0^1\frac1{n\pi}\sin n\pi x(-e^{-x})dx\\ &=\frac1{n\pi}\int_0^1 e^{-x}\sin n\pi x\,dx\;. \end{align*}$$ Now repeat, making sure to split the new integral the same way: $u=e^{-x}$ and $dv=\sin n\pi x\,dx$, so $du=-e^{-x}dx$, $v=-\frac1{n\pi}\cos n\pi x$, and the integral is $$\begin{align*} \frac1{n\pi}\int_0^1 e^{-x}\sin n\pi x\,dx&=\frac1{n\pi}\left(\left[-\frac1{n\pi}e^{-x}\cos n\pi x\right]_0^1-\frac1{n\pi}\int_0^1e^{-x}\cos n\pi x\,dx\right)\\ &=\frac1{n^2\pi^2}\left(\left[e^{-x}\cos n\pi x\right]_0^1-\int_0^1e^{-x}\cos n\pi x\,dx\right)\;. \end{align*}$$ Call the original integral $I$; then $$I=\frac1{n^2\pi^2}\left(\left[e^{-x}\cos n\pi x\right]_1^0-I\right)\;,$$ so $$\left(1+\frac1{n^2\pi^2}\right)I=\frac1{n^2\pi^2}\left[e^{-x}\cos n\pi x\right]_1^0=\frac1{n^2\pi^2}\left(1-\frac1e\cos n\pi\right)=\frac1{n^2\pi^2}\left(1-\frac{(-1)^n}e\right)\;,$$ since $\cos n\pi=(-1)^n$. Finally, $$I=\frac{\dfrac1{n^2\pi^2}\left(1-\dfrac{(-1)^n}e\right)}{1+\dfrac1{n^2\pi^2}}=\frac{e-(-1)^n}{e(n^2\pi^2+1)}\;,$$ barring possible careless errors. - Thank you! I think there is a small typo in the final equation: numerator should have $n \pi$ not $n^2\pi^2$. Otherwise I was almost there in my initial solution, I just didn't recognize that the integrals on the both sides were almost identical. Too much studying for today. – dana May 27 '12 at 7:22 @dana: Unless I’m asleep (which is possible), the $n^2\pi^2$ is right: I multiplied the four-story fraction by $\frac{en^2\pi^2}{en^2\pi^2}$. – Brian M. Scott May 27 '12 at 7:28 Actually, in the line where you have combined the $I$'s, the $\frac{1}{\pi^2n^2}$ is missing before $[e^{-x}\cos(n\pi x)]^1_0$. So, both of us were a bit off :) – dana May 27 '12 at 7:37 @dana: You are so right; thanks. – Brian M. Scott May 27 '12 at 7:44 Here is another method that applies if you know some complex numbers. 
Recall $$e^{i \theta} = \cos \theta + i \sin \theta$$ In this case we have a $\cos (n \pi x)$, which is the real part of $e^{i n \pi x}$ $$\mathscr{R}(e^{in \pi x}) = \cos(n \pi x)$$ Then we have $$\int_{0}^1 e^{-x} \cos (n \pi x)\,dx = \mathscr{R} \int_{0}^1 e^{-x}\cdot e^{i n \pi x}\,dx$$ - Interesting. Thanks for the tip. – dana May 27 '12 at 7:19
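A quick numerical check of the closed form $\frac{e-(-1)^n}{e(n^2\pi^2+1)}$ derived in the integration-by-parts answer above (purely a sanity check with scipy; the test values of $n$ are arbitrary):

```
# Check I(n) = int_0^1 exp(-x) cos(n pi x) dx against (e - (-1)^n) / (e (n^2 pi^2 + 1)).
import numpy as np
from scipy.integrate import quad

def closed_form(n):
    return (np.e - (-1) ** n) / (np.e * (n ** 2 * np.pi ** 2 + 1))

for n in (1, 2, 5, 10):
    numeric, _ = quad(lambda x: np.exp(-x) * np.cos(n * np.pi * x), 0, 1)
    print(n, round(numeric, 12), round(closed_form(n), 12))
```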
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 20, "mathjax_display_tex": 9, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9668003916740417, "perplexity_flag": "head"}
http://nrich.maths.org/265/clue
# Coke Machine ##### Stage: 4 Challenge Level: This is another tough nut and perhaps the diagram of the 50p piece will help. A 50 pence piece is a 7 sided polygon ABCDEFG with rounded edges, obtained by replacing the straight line DE with an arc centred at A and radius AE; replacing the straight line EF with an arc centred at B radius BF ...etc.. The 50p piece can roll in the same chute as a disc of radius $r$. Suppose the seven arcs forming the edge of the 50p piece (the arcs AB, BC etc. ) all have radius $R$ (where $R$=AD=AE=BE=BF...) then you need to find $R$ in terms of $r$. These seven arcs subtend angles of $2\pi /7$ at the centre of the disc and $2\pi /14$ at the opposite edge.
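A sketch of how the pieces fit together, on my reading of the clue (spoiler — stop here if you want to work it out yourself): each arc is centred at the opposite vertex, so it has radius $R$ and angular extent $2\pi/14$, and the width of the coin in every direction is the vertex-to-opposite-arc distance $R$. Rolling in the chute used by a disc of radius $r$ forces the widths to agree, giving
$$R = 2r, \qquad \text{perimeter} = 7 \times R \times \frac{2\pi}{14} = \pi R = 2\pi r,$$
the same as the circumference of the disc.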
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9057008624076843, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/37172/what-are-some-open-problems-in-algebraic-geometry/37203
## What are some open problems in algebraic geometry? ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) What are the open big problems in algebraic geometry and vector bundles? More specifically, I would like to know what are interesting problems related to moduli spaces of vector bundles over projective varieties/curves. - 8 Why don't you read some of the literature on these topics to find out? Usually recent ICM talks, survey articles in the bulletin, and recently published advanced textbooks are good places to start for this kind of thing. – Emerton Aug 30 2010 at 15:31 12 This seems a perfectly good question. I would be interested to see some of the answers. – Richard Borcherds Aug 30 2010 at 15:52 7 MO questions like the rest of us need luck. This question was lucky enough that Richard Borcherds offered a very nice answer and potentially there will be further answers that we can enjoy and ultimately this will be a useful source. Let's keep it open! – Gil Kalai Aug 30 2010 at 16:59 5 We've had many discussions over at meta about whether a sufficient condition to be a good question is that it generates good answers. The overall consensus (that's too strong a word ... plurality opinion?) seems to be "no". If "too broad/vague" were a criterion on the list of reasons to close, I would vote to close. As of my comment, this question currently has four votes to close as "off topic", but it's certainly not that, it's just too vague. I do think it should be improved, though, and I will go in to fix capitalization. – Theo Johnson-Freyd Aug 30 2010 at 18:39 5 Theo, this is not a correct characterization of the discussions on meta. This was an issue where there were different opinions. My opinion was that just like in "real world mathematics" (and science) attracting good answers is a merit of a question. The answers can give prople some clues for what to look for in the ICM talks and bulletin articles Mathew referred to. In fact, good answers can give useful links to specific such papers. In any case, I have voted to reopen. – Gil Kalai Aug 30 2010 at 21:39 show 12 more comments ## 8 Answers A few of the more obvious ones: * Resolution of singularities in characteristic p *Hodge conjecture * Standard conjectures on algebraic cycles (though these are not so urgent since Deligne proved the Weil conjectures). *Proving finite generation of the canonical ring for general type used to be open though I think it was recently solved; I'm not sure about the details. For vector bundles, a longstanding open problem is the classification of vector bundles over projective spaces. (Added later) A very old major problem is that of finding which moduli spaces of curves are unirational. It is classical that the moduli space is unirational for genus at most 10, and I think this has more recently been pushed to genus about 13. Mumford and Harris showed that it is of general type for genus at least 24. As far as I know most of the remaining cases are still open. - 1 You remember correctly, here's the paper: arxiv.org/abs/math/0610203 – Charles Siegel Aug 30 2010 at 15:55 1 At the end of her talk at the Hyderabad Congress, Claire Voisin was asked by someone whether she believed in the Hodge conjecture. Her answer was equivocal, if memory serves me right. – Chandan Singh Dalawat Aug 31 2010 at 3:13 1 Farkas proved that $\overline{M}_g$ is of general type for $g = 22$. 
– Moon Aug 31 2010 at 7:01 Let me mention a couple of problems related to vector bundles on projective spaces. 1. The Hartshorne conjecture. In its weak form it says that any rank 2 vector bundle on $\mathbf{P}^n_{\mathbf{C}},n>6$ is a direct sum of line bundles, which implies that any codimension 2 smooth subvariety whose canonical class is a multiple of the hyperplane section is a complete intersection. In a stronger form Hartshorne's conjecture says that any smooth subvariety of dimension $>\frac{2}{3}n$ in $\mathbf{P}^n_{k}$, $k$ an algebraically closed field, is a complete intersection. See Hartshorne, Varieties of small codimension in a projective space, Bull AMS 80, 1974. The weak conjecture fails for $n=3$ and $4$ -- there are examples (due to Horrocks and Mumford) of non-split vector bundles of rank 2 on $\mathbf{P}^4_{\mathbf{C}}$, but so far as I know the question of whether any such examples exist for $n>4$ is open. See here http://mathoverflow.net/questions/13990/evidences-on-hartshornes-conjecture-references for a discussion including some references. 2. The existence of non-algebraic topological vector bundles on $\mathbf{P}^n_{\mathbf{C}}$. It is a classical result that any topological complex vector bundle on $\mathbf{P}^n_{\mathbf{C}}, n\leq 3$ is algebraic, see e.g. Okonek, Schneider, Spindler, Vector bundles on complex projective spaces, chapter 1, \S 6. It is strongly suspected that for $n>3$ there are topological complex vector bundles that are not algebraic. Good candidates are nontrivial rank 2 vector bundles on $\mathbf{P}^n_{\mathbf{C}}, n\geq 5$ all of whose Chern classes vanish, which were constructed by E. Rees, see MR0517518. It is claimed there that these bundles do not admit a holomorphic structure, but later a gap was found in the proof. See here http://mathoverflow.net/questions/7304/complex-vector-bundles-that-are-not-holomorphic for some more information. - There are examples of indecomposable rank $2$ vector bundles on $\mathbb{P}^5$ in characteristic $2$ due to Tango and Kumar-Peterson-Rao (independently). – Mahdi Majidi-Zolbanin May 18 2012 at 19:33 Thanks, Mahdi, that's interesting. Does this generalize to projective spaces of higher dimensions? – algori Jun 1 at 21:24 - 2 This problem is (in)famous. I've lost track of the number of false claims regarding this on the arxiv and elsewhere. – Donu Arapura Aug 31 2010 at 14:24 For a good introduction to the subject, allow me to recommend the book Polynomial automorphisms and the Jacobian conjecture, by Arnoldus Richardus and Petrus van den Essen. Given the simplistic statement, how little is truly understood of that problem is simply shocking, and the first pages of the book really helped me dispel many misconceptions. – Thierry Zell Aug 31 2010 at 16:13 We can also mention two other major open problems : • The abundance conjecture, stating that if $K_X+\Delta$ is klt and nef, then it is semi-ample (a multiple has no base-point) • The Griffiths conjecture : if $E$ is an ample vector bundle over a compact complex manifold, then it is Griffiths-positive. (this is known for line bundles of course) - 1 Henri, can you add links?
– Gil Kalai Aug 30 2010 at 17:20 1 Well, Y.T Siu has recently claimed he had proved the abundance conjecture (in a version stating that the Kodaira dimension equals the numerical Kodaira dimension); here's the paper : arxiv.org/abs/0912.0576 – Henri Aug 30 2010 at 20:54 1 For some additional discussion of Siu's work, see the recent question mathoverflow.net/questions/31605/… – Karl Schwede Aug 31 2010 at 0:51 1 Griffiths conjecture is also known to be true for general vector bundles on curves! – diverietti Jan 11 2012 at 16:59 There's also the big open question (I think it's still open) about whether rationally connected varieties are always unirational. I think people believe the answer is NO, but they don't know an example. Joe Harris had some slides a few years ago with regards to this Seattle 2005 - There's also Fujita's conjecture. Conjecture: Suppose $X$ is a smooth projective dimensional complex algebraic variety with ample divisor $A$. Then 1. $H^0(X, \mathcal{O}_X(K_X + mA))$ is generated by global section when $m > \dim X$. 2. $K_X + mA$ is very ample for $m > \dim X + 1$ It's also often stated in the complex analytic world. Also there are many refinements (and generalizations) of this conjecture. For example, the assumption that $X$ is smooth is probably more than you need (something close to rational singularities should be ok). It also might even be true in characteristic $p > 0$. It's known in relatively low dimensions (up to 5 in case 1. I think?) - Only part 1 is known in low dimension - part 2 is open even in dimension 3. – ABayer May 21 2012 at 7:51 Linearization Conjecture. Every algebraic action of $\mathbb{C}^*$ on $\mathbb{C}^n$ is linear in some coordinates of $\mathbb{C}^n$. Open for $n>3$. Cancellation Conjecture. If $X\times \mathbb{C}\cong \mathbb{C}^{m+1}$ then $X\cong \mathbb{C}^m$. Open for $m>2$. Coolidge-Nagata Conjecture. A rational cuspidal curve in $\mathbb{P}^2$ is rectifiable, i.e. there exists a birational automorphism of $\mathbb{P}^2$ which transforms the curve into a line. - In connection to vector bundles over $\mathbb{P}^n$, Hartshorne's paper from 1979 provides a list of open problems. The paper is "Algebraic vector bundles on projective spaces: A problem list" Topology, 18:117–128, 1979. I don't know which of those problems are still open, but I would be interested in knowing how much progress has been made on those problems, since 1979. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 36, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9397674798965454, "perplexity_flag": "middle"}
http://nrich.maths.org/5816
# Napier's Location Arithmetic ##### Stage: 4 Challenge Level: Here is an interactivity for Napier's Location Arithmetic. Part One : Pick two numbers as multipliers (factors) to multiply together, say $37$ and $51$. Choose some of the values from $1, 2, 4, 8, 16, 32, 64$ and $128$ to make a sum equal to each of those numbers. For example $37 = 32 + 4 + 1$ and $51 = 32 + 16 + 2 + 1$ Incidentally, could $37$ or $51$ have been made in another way ? • Now select the side numbers needed to make the factors you've chosen and press the "press when ready" button to begin. • Next click the counters in the grid one by one to see them move towards the bottom line. • Finally click the counters remaining in the bottom line to compile an answer (product) for your multiplication question. Play with the application. Try different numbers. Why does the process work - why does this method always produce a correct answer? Continue playing. Perhaps try factor numbers with particular patterns of gaps and counters. Part Two (quite hard) : Are there multipliers (factors) that produce a product which has a counter in every position along the bottom line? Part Three (a real challenge) : Which multiplication question requires the most counters to represent both the multipliers (factors) and their product (that means the number of factor counters and product counters together making one single total). For example $37$ uses $3$ counters at the side, $51$ uses $4$ side counters, and their product uses $9$ counters in the bottom line, altogether a total of $16$ counters ($3 + 4 + 9$). Sending in solutions : we would love to hear your way of explaining why Napier's Location Arithmetic is a valid method for multiplication, or if you make some progress with parts $2$ or $3$, and can explain what you've done and what you've discovered, that would be wonderful to receive also. An addition : it's excellent for us to hear that a problem we have offered has stimulated a new question in someone else's mind - thank-you very much Sheldon for this: " I found this technique fascinating. Is there a way of inverting the process, i.e. Factorising. Starting with particular circles on the bottom line, and finding some process which could create the two factors you started with " ?
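A short Python sketch of why the method works, using the $37 \times 51$ example from Part One (the function names are my own): a side counter at position $i$ is worth $2^i$, a grid counter at $(i,j)$ is worth $2^{i+j}$, and sliding counters down to the bottom line never changes the total value they represent. It also answers the aside in Part One: a sum of distinct values from $1, 2, 4, 8, \dots$ is just the binary representation of the number, so there is only one way to make $37$ or $51$.

```
# Location arithmetic in miniature: a counter at position k is worth 2**k.
def powers(n):
    """Positions of the 1-bits of n, i.e. n written as a sum of distinct powers of 2."""
    return [i for i in range(n.bit_length()) if (n >> i) & 1]

def location_multiply(a, b):
    bottom = {}                                   # position -> number of counters
    for i in powers(a):
        for j in powers(b):
            bottom[i + j] = bottom.get(i + j, 0) + 1
    # The clicking/carrying moves in the interactivity rearrange counters but
    # preserve this total, which is why the method always gives the product.
    return sum(count * 2 ** k for k, count in bottom.items())

print(powers(37), powers(51))              # [0, 2, 5] and [0, 1, 4, 5]
print(location_multiply(37, 51), 37 * 51)  # both 1887
```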
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9303459525108337, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?t=281663
Physics Forums ## State of an electron, including spin In the quantum mechanics book I have, they first cover the mechanics of a generic particle (say an electron), without ever considering the spin. They encode the state of the electron as a function F in L2, where F^*F is the probability density for the location of the electron. Later, they discuss spin and immediately start talking about the state of an electron as a 2 dimensional vector (a linear combination of the "spin up" and "spin down" vectors about some axis). They never mention it, but obviously this is not really the state of the particle as two numbers cannot encode all the information stored in F. Conversely, F doesn't seem to encode any information about the spin. So I guess the state of a particle is some combination of the above two pieces of information, along with possibly some additional information? If S1 is the space in which F lives and S2 is the 2D spin space, how do we combine them to form a larger space representing the entire state? Tensor product, Cartesian product, ...? Is there any other information we need to include in the state (along with the spin and position density)? If not, how do we know there's nothing else to include? Some rationale that says the behavior of a particle is determined only by these pieces of information? Thanks Quote by msumm21 [the question above, quoted in full] Basically you need two functions $F_{\rm down}$ and $F_{\rm up}$ instead of just one function $F$ for a spinless particle. Often one can speak of the spin and the space parts of the wavefunction separately. This is why we bother studying the spin part "by itself". But, of course, as you say, there is always the space part of the wavefunction to deal with as well. For pedagogical reasons some texts treat the spin-only problem first. But press on in your book (or switch to a different book--I like Messiah's text) and I'm sure you will see how to include the spin and the spatial parts of the wavefunctions at once and learn what sort of space this wavefunction lives in. Cheers.
Blog Entries: 27 | Recognitions: Gold Member, Homework Help, Science Advisor

Quote by msumm21: So I guess the state of a particle is some combination of the above two pieces of information, along with possibly some additional information? If S1 is the space in which F lives and S2 is the 2D spin space, how do we combine them to form a larger space representing the entire state? Tensor product, Cartesian product, ...?

Hi msumm21!

The wave-function for a scalar spin-0 particle is $Ae^{i\psi(t,x)}$, where A is an ordinary number; the wave-function for an electron is $Se^{i\psi(t,x)}$, where S is a spinor (in other words, the amplitude of an electron is really a spinor, but elementary books tend not to tell you that). It's just a direct (Cartesian) product.

Quote by msumm21: Is there any other information we need to include in the state (along with the spin and position density)? If not, how do we know there's nothing else to include?

No … and we know that from experiments … if experiments showed there was something else, we'd add it in (and we probably wouldn't call it an electron). Alternative answer … yes … there are things like lepton number.
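For reference, a minimal sketch of the standard construction the replies are describing (notation here is generic, not quoted from either post): the full state space is the tensor product of the spatial and spin spaces,

$\mathcal{H} = L^2(\mathbb{R}^3) \otimes \mathbb{C}^2$,

and a general state is a two-component ("spinor") wavefunction

$\Psi(\mathbf{x}) = \begin{pmatrix} F_{\uparrow}(\mathbf{x}) \\ F_{\downarrow}(\mathbf{x}) \end{pmatrix}$, normalized so that $\int \left( |F_{\uparrow}|^2 + |F_{\downarrow}|^2 \right) d^3x = 1$.

Here $|F_{\uparrow}(\mathbf{x})|^2$ is the probability density for finding the electron at $\mathbf{x}$ with spin up along the chosen axis, and likewise for $F_{\downarrow}$; a product state $F(\mathbf{x})\,\chi$ with a fixed spinor $\chi$ is a special case, but in general the two component functions are independent.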
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9377792477607727, "perplexity_flag": "middle"}
http://blog.drskippy.com/2012/05/14/dimension-reduction-for-machine-learning-simple-example-of-svd-pca-pathology/
Of the fast and loose practices in machine learning that I cringe at, one of the worst is throwing one's favorite tool at high-dimensional data and expecting algorithms to learn to generalize well. Part of the problem is that we don't always know whether the features we chose are meaningful to the learning problem. Another part of the problem is that a useful regression requires that our sample fills the space adequately. (An n-sphere has a volume that scales as $V \approx R^n$. This means that if you feel comfortable with 10 samples for a 1-d linear regression, you should have in your mind a million samples for a 6-d regression.)

One strategy for dealing with high dimensionality is to rotate and scale the data set along axes of high sample variation. Two closely related mathematical tools are available to help with this: Singular Value Decomposition (SVD) and Principal Component Analysis (PCA). I won't add anything to the mathematics described in these articles. The purpose here is to:

1. Explore some questions with a simple example.
2. Explore the link between SVD and PCA with some simple data, to understand how it works and to build a little bit of intuition about what a linear transformation can do for machine learning.
3. Demonstrate how this strategy could work to reduce the dimensionality of a problem. (This really turns out to be reducing the dimensionality of the representation of the problem.)
4. Show how this strategy can easily go wrong, in order to build intuition about when and how this might work for a real machine learning problem.

There is a link to all of the code at the bottom. To get this output, install IPython and the matplotlib, scipy, numpy, and scikits.learn packages, then paste the code into a session. Or just edit the script to include the output you want and run it from the command line.

Here we go… first, SVD.

```
In [23]: ########################
In [24]: # Demo Part 1
In [25]: ########################
In [26]: # SVD: decompose a matrix X into the product of a unitary matrix (W), a diagonal
In [27]: # matrix (S), and a rotation (V_t) (if it helps, think:
In [28]: # basis vectors, scaling, rotation) such that X = W x S x V_t.
In [29]: #
In [30]: # In machine learning, X will be a matrix of samples (4, rows)
In [31]: # and features (3, columns) for our learning examples.
In [32]: X_t = mat([ [  1,  3, -10 ],
   ....:            [  0,  2,   1 ],
   ....:            [ -1, -3,   9 ],
   ....:            [  0, -2,   0 ] ])
In [33]: X = X_t.T
In [34]: # The scipy library makes this step easy
In [35]: W, s, V_t = linalg.svd( X )
In [36]: S = linalg.diagsvd(s, len(X), len(V_t))
In [37]: recon = dot( dot( W, S), V_t)
In [38]: # Are these equal (to within rounding)?
In [39]: abs(X - recon)
Out[39]:
matrix([[  4.44089210e-16,   1.73472348e-16,   6.66133815e-16,   1.59594560e-16],
        [  8.88178420e-16,   2.22044605e-16,   4.44089210e-16,   1.33226763e-15],
        [  1.77635684e-15,   4.44089210e-16,   5.32907052e-15,   6.73940070e-16]])
In [40]: # maximum error
In [41]: np.max(abs(X - recon))
Out[41]: 5.3290705182007514e-15
In [42]: # One key to understanding the link to PCA is
In [43]: # understanding the diagonal matrix, S
In [44]: S
Out[44]:
array([[ 14.19266482,   0.        ,   0.        ,   0.        ],
       [  0.        ,   2.9255743 ,   0.        ,   0.        ],
       [  0.        ,   0.        ,   0.09633518,   0.        ]])
In [45]: # Given that the features have zero-mean:
In [46]: [np.mean(i) for i in X]
Out[46]: [0.0, 0.0, 0.0]
In [47]: # s is an ordered vector that tells us the "significance" of each dimension
In [48]: # in the rotated space. (Yes, I arranged it that way.
In [49]: # To be clear, you can do SVD on a matrix without zero-mean features, but the
In [50]: # dimension reduction part we are about to do requires it.)
In [52]: # We can selectively set lower numbers to
In [53]: # zero to reduce dimension.
In [54]: s_red = s
In [55]: s_red[2] = 0
In [56]: # Approximately reconstruct our original matrix, but with
In [57]: # a reduced-dimension representation
In [58]: S_red = linalg.diagsvd(s_red, len(X), len(V_t))
In [59]: S_red
Out[59]:
array([[ 14.19266482,   0.        ,   0.        ,   0.        ],
       [  0.        ,   2.9255743 ,   0.        ,   0.        ],
       [  0.        ,   0.        ,   0.        ,   0.        ]])
In [60]: recon_red = dot( dot( W, S_red), V_t)
In [61]: abs(X - recon_red)
Out[61]:
matrix([[ 0.04297068,  0.04061026,  0.05215936,  0.05451977],
        [ 0.00118308,  0.00111809,  0.00143606,  0.00150105],
        [ 0.00412864,  0.00390185,  0.00501149,  0.00523828]])
In [62]: # maximum error
In [63]: np.max(abs(X - recon_red))
Out[63]: 0.054519772460470559
In [64]: # ratio of errors
In [65]: np.max(abs(X - recon))/np.max(abs(X - recon_red))
Out[65]: 9.7745648554652029e-14
In [66]: # We "lost" roughly 13 orders of magnitude in the precision of the reconstruction, but this turns
In [67]: # out to be okay for some machine learning problems.
```

So that is a very simple SVD example. Now use the same representation for a simple machine learning problem: classify two groups of data in a two-dimensional space using logistic regression. (Logistic regression is very good at solving problems just like this without SVD or PCA, so this is merely to get at how it all works together.)

``` In [68]: ######################## In [69]: # Demo Part 2 In [70]: ######################## In [71]: from scikits.learn import linear_model In [72]: import scipy In [73]: from copy import copy In [74]: # Classification problem where points are linearly classifiable In [75]: # in 2 dim.
In [76]: N=200 In [77]: x = np.random.normal(0,4,N) In [78]: y1 = 3*x + np.random.normal(3,1,N) In [79]: y2 = 3*x + np.random.normal(-3,1,N) In [80]: y_avg = np.mean(np.append(y1, y2, 1)) In [81]: figure() In [82]: plot(x,y1 - y_avg, 'bo'); In [83]: plot(x,y2 - y_avg, 'ro'); In [84]: title("Original data sets (0-average)") ``` Boundary between groups at $y(x) \approx 3x$ ```In [85]: # features x and y are rows in this matrix In [86]: X = np.append([x,y1 - y_avg],[x,y2 - y_avg],1) In [87]: X_t = X.T In [88]: # y1 group 0; y2 group 1 In [89]: truth = np.append(scipy.ones([1, N]), scipy.zeros([1, N]), 1) In [90]: # 2d model works very well In [91]: lr = linear_model.LogisticRegression() In [92]: lr.fit(np.asarray(X_t),truth[0]) Out[92]: LogisticRegression(C=1.0, intercept_scaling=1, dual=False, fit_intercept=True, penalty='l2', tol=0.0001) In [93]: lr.score(np.asarray(X_t),truth[0]) Out[93]: 1.0 In [94]: lr.predict(np.asarray(X_t)) Out[94]: array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=int32) In [95]: # Now try dimension reduciton with SVD In [96]: W, s, V_t = linalg.svd( X ) In [97]: s Out[97]: array([ 273.98689602, 19.47579928]) In [98]: # both transformed representations are the same before trimming: In [99]: S = linalg.diagsvd(s, len(X), len(V_t)) In [100]: np.max(abs(X.T*matrix(W) - matrix(V_t).T*matrix(S).T)) Out[100]: 6.7501559897209518e-14 In [101]: # Now work with the transformed coordinates. It might not have been clear In [102]: # from above what the transformed coordinate system was. We can get there In [103]: # by either the product of the first two or last two terms. In [104]: X_prime = matrix(V_t).T*matrix(S).T In [105]: x_prime = np.asarray(X_prime.T[0]) In [106]: y_prime = np.asarray(X_prime.T[1]) In [107]: figure() In [108]: plot(x_prime, y_prime, 'go'); In [109]: title("Features after SVD Transformation") Out[109]: <matplotlib.text.Text at 0x11b956890> ``` Boundary between groups at $y(x) \approx 0$ ```In [110]: # Linearly classifiable in 1-d? 
Try all new basis directions (extremes of variation) In [111]: # Min variation - Training along y-dim nearly perfect In [112]: ypt = np.asarray(y_prime.T) In [113]: lr.fit(ypt, truth[0]) Out[113]: LogisticRegression(C=1.0, intercept_scaling=1, dual=False, fit_intercept=True, penalty='l2', tol=0.0001) In [114]: lr.score(ypt, truth[0]) Out[114]: 0.99750000000000005 In [115]: lr.predict(ypt) Out[115]: array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=int32) In [116]: # Max variation - Nothing here In [117]: xpt = np.asarray(x_prime.T) In [118]: lr.fit(xpt, truth[0]) Out[118]: LogisticRegression(C=1.0, intercept_scaling=1, dual=False, fit_intercept=True, penalty='l2', tol=0.0001) In [119]: lr.score(xpt, truth[0]) Out[119]: 0.58250000000000002 In [120]: lr.predict(xpt) Out[120]: array([1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0], dtype=int32) ``` Notice that the transformation made the problem “easier” in that it was solved with 1-d instead of 2-d machine learning and that the two groups appear even more separated after the transformation than before. Lesson 1: Look at all of the dimensions–in this case the smallest variation axis rather than the largest variation axis solves the problem.  
This is going to catch anyone who blindly applies PCA for machine learning.  See Part 3. ``` In [121]: ######################## In [122]: # Demo Part 3 In [123]: ######################## In [124]: # Use PCA idea to reduce to 1-D In [125]: s_red = copy(s) In [126]: s_red[1] = 0 In [127]: S_red = linalg.diagsvd(s_red, len(X), len(V_t)) In [128]: X_prime = matrix(V_t).T*matrix(S_red).T In [129]: x_prime = np.asarray(X_prime.T[0]) In [130]: y_prime = np.asarray(X_prime.T[1]) In [131]: figure() Out[131]: <matplotlib.figure.Figure at 0x1193e7450> In [132]: plot(x_prime, y_prime, 'yo'); In [133]: title("Reduce S by removing s[1] = %2.5f"%s[1]) ``` All original group information lost. ```In [134]: # Try all new basis directions (not just greatest variations) In [135]: # 1-D: Max variation - Training along x-dim performs poorly In [136]: ypt = np.asarray(y_prime.T) In [137]: lr.fit(ypt, truth[0]) Out[137]: LogisticRegression(C=1.0, intercept_scaling=1, dual=False, fit_intercept=True, penalty='l2', tol=0.0001) In [138]: lr.score(ypt, truth[0]) Out[138]: 0.5 In [139]: lr.predict(ypt) Out[139]: array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], dtype=int32) In [140]: # This is the other extrema to "principal" components In [141]: s_red = copy(s) In [142]: s_red[0] = 0 In [143]: S_red = linalg.diagsvd(s_red, len(X), len(V_t)) In [144]: X_prime = matrix(V_t).T*matrix(S_red).T In [145]: x_prime = np.asarray(X_prime.T[0]) In [146]: y_prime = np.asarray(X_prime.T[1]) In [147]: figure() Out[147]: <matplotlib.figure.Figure at 0x11c74f6d0> In [148]: plot(x_prime, y_prime, 'mo'); In [149]: title("Reduce S by removing value s[0] = %2.5f"%s[0]) ``` Trivial classification, now in 1-d. 
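Why does the least-significant direction carry all of the class information here? A rough back-of-envelope check (my own aside, not from the original post, using the simulation parameters from Part 2): the pooled data have $x \sim N(0, 4^2)$ and $y = 3x + \epsilon$, where $\epsilon$ has mean $\pm 3$ (depending on the class) and unit variance, so the pooled covariance matrix is approximately

$\Sigma \approx \begin{pmatrix} 16 & 48 \\ 48 & 9 \cdot 16 + 10 \end{pmatrix} = \begin{pmatrix} 16 & 48 \\ 48 & 154 \end{pmatrix}$.

Its dominant eigenvector points roughly along $(1, 3)$ (the shared trend $y \approx 3x$), while the minor eigenvector points roughly along $(3, -1)$. The class label depends only on the sign of $y - 3x$, i.e. on the projection onto $(-3, 1)/\sqrt{10}$, which is (up to sign) the minor direction. So zeroing $s[1]$ erases the class signal entirely, while zeroing $s[0]$ leaves a trivially separable 1-d problem.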
```In [150]: # 1-D: Min variation - Training along y-dim nearly perfect In [151]: ypt = np.asarray(y_prime.T) In [152]: lr.fit(ypt, truth[0]) Out[152]: LogisticRegression(C=1.0, intercept_scaling=1, dual=False, fit_intercept=True, penalty='l2', tol=0.0001) In [153]: lr.score(ypt, truth[0]) Out[153]: 0.99750000000000005 In [154]: lr.predict(ypt) Out[154]: array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=int32) ``` So our 1-d model performs great. Quoting from Wikipedia, “PCA essentially rotates the set of points around their mean in order to align with the principal components. This moves as much of the variance as possible (using an orthogonal transformation) into the first few dimensions. The values in the remaining dimensions, therefore, tend to be small and may be dropped with minimal loss of information.” The problem with following this without skepticism is that, according to the guidelines of PCA, our problem depends on using the “wrong” dimension.  (The properties are not quite as arbitrary as they may seem. For example, race data for long distances that includes both men and women have this quality because the variation within gender of total race times can be greater than the variation between gender times.) Code on Github
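As a small follow-up to "Lesson 1" (my own sketch, not part of the original post; it assumes the modern sklearn package layout rather than the scikits.learn import used in the session above): instead of keeping only the directions with the largest singular values, score a simple 1-d classifier along every direction and keep whichever one actually separates the classes.

```
import numpy as np
from sklearn.linear_model import LogisticRegression

# Two classes that share a strong common trend (y ~ 3x) and differ only
# along the low-variance direction, in the same spirit as the post above.
rng = np.random.default_rng(0)
N = 200
x = rng.normal(0, 4, 2 * N)
offset = np.where(np.arange(2 * N) < N, 3.0, -3.0)   # class-dependent offset
y = 3 * x + offset + rng.normal(0, 1, 2 * N)
labels = (np.arange(2 * N) < N).astype(int)

# Samples as rows, zero-mean features as columns.
X = np.column_stack([x, y])
X = X - X.mean(axis=0)

# SVD of the sample matrix: X = U S Vt; the rows of Vt are the principal directions.
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# "Lesson 1": score a 1-d logistic regression along *every* direction,
# not just the one with the largest singular value.
for k, sigma in enumerate(s):
    z = (X @ Vt[k]).reshape(-1, 1)   # projection of every sample onto direction k
    acc = LogisticRegression().fit(z, labels).score(z, labels)
    print("direction %d: singular value %8.2f, 1-d accuracy %.3f" % (k, sigma, acc))

# For data like this the direction with the *smallest* singular value is the one
# that separates the classes, so expect accuracy near 1.0 for k = 1.
```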
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 3, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9193114042282104, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/general-relativity?page=3&sort=unanswered&pagesize=15
# Tagged Questions A theory that describes how matter produces and responds to the geometry of space and time. It was first published by Einstein in 1915 and is currently used to study the structure and evolution of the universe, as well as having practical applications like GPS. 2answers 459 views ### Does a Weak Energy Condition Violation Typically Lead to Causality Violation? In the answer to this question: ergosphere treadmills Lubos Motl suggested a straightforward argument, based on the special theory of relativity, to argue that light passing through a strong ... 2answers 353 views ### Why is Mendel Sachs's work not taken seriously? Or is it? Back in college I remember coming across a few books in the physics library by Mendel Sachs. Examples are: General Relativity and Matter Quantum Mechanics and Gravity Quantum Mechanics from General ... 2answers 114 views ### Does gravitational redshift imply gravitation time dilation? The EEP is used to justify that if an observer on the ground shoots a beam of light towards a tower, then when the light reaches the tower, it will be red shifted. This is because of what happens in ... 2answers 89 views ### Does the distance to the cosmic horizon Lorentz-contract? Does the universe Lorentz-contract? Our universe has a finite size. It is often called the "radius of the universe", or "distance of the cosmic horizon". If we would fly with relativistic speed at the position of our Earth, would this ... 2answers 141 views ### What's the relationship between quantum entanglement and the relativity of time? Apologies in advance for what may be a stupid question from a layman. In reading recently about quantum entanglement, I understood there to be a direct link between entangled particles, even at ... 2answers 137 views ### General Relativity & Kepler's law According to Kepler's law of planetary motion, the earth revolves around the sun in an elliptical path with sun at one of its focus. However, according to general theory of relativity, the earth ... 2answers 99 views ### The relativistic mechanics of a battery that is being charged and accelerated at the same time This might be an interesting question: Let's attach a battery into one end of an electric cable. Then we rotate the battery around, with accelerating speed, using 100 Watts of power, while the ... 2answers 191 views ### Is the Graveyard Really so Serious? Calculations in relation to black holes are solely in consideration of spacetime curvature and its effects. They are in total alienation with respect to the action of inertial agents[external ... 1answer 180 views ### What does the equivalence principle mean in quantum cases? We know that electron trapped by nuclear, like the hydrogen system, is described by quantum state,and never fall to the nuclear.So is there any similar situation in the case of electron near the ... 1answer 275 views ### Do spacelike junctions in the Thin-Shell Formalism imply energy nonconservation and counterintuitive wormholes? The Thin Shell Formalism (MTW 1973 p.551ff) is used to properly paste together different vacuum solutions to the Einstein equations. At the junction of the two solutions is a hypersurface of matter – ... 1answer 87 views ### “Redshifting” of forces in stationary space - times Here's the problem statement: Let $(M,g_{ab})$ be a stationary spacetime with timelike killing field $\xi ^{a}$. Let $V^{2} = -\xi _{a}\xi ^{a}$ ($V$ is called the redshift factor). (a) Show that the ... 
1answer 92 views ### Liouville's theorem and gravitationally deflected lightpaths It is customary in gravitational lensing problems, to project both the background source and the deflecting mass (e.g. a background quasar, and a foreground galaxy acting as a lens) in a plane. Then, ... 1answer 68 views ### Could a bipolar nebula be produced by a time gradient? M2-9 is an example of a bipolar nebula that resembles two back-to-back rocket nozzles. Is it possible that this shape (somewhat unusual for an explosion) is the result of a time gradient? A rotating ... 1answer 106 views ### Rotation of Spacetime => Change in orbit/path Along the idea of frame-dragging; Will the rotation of a black hole, which has some velocity v and angular momentum, influence its path in 3D space? I've seen the fact that depending on the ... 1answer 252 views ### Gravitational Redshift around a Schwarzschild Black Hole Let's say that I'm hovering in a rocket at constant spatial coordinates outside a Schwarzschild black hole. I drop a bulb into the black hole, and it emits some light at a distance of $r_e$ from the ... 1answer 237 views ### Why dynamic Casimir effect does not appear in static gravity field? Dynamic Casimir effect tells us that a constantly-accelerated mirror should emit radiation due to interaction with vacuum. Following principle of equivalence, a similar mirror placed in static ... 1answer 14 views ### Can we build a synthetic event horizon? If we imagine ourselves to be a civilization capable of manipulating very heavy masses in arbitrary spatial and momentum configurations (because we have access to large amounts of motive force, for ... 1answer 51 views ### Local inertial coordinates It is said that we can introduce local inertial coordinates for any timelike geodesic. But why only for timelike geodesics? What about null geodesics? Perhaps it has to do with invertibility or ... 1answer 86 views ### How can the derivative of this trace be constrained? I am studying for my exam on relativity and I am going through some problems sets including ones where I was not very successful in so I want to know how to do this problem. (Convergence of ... 1answer 55 views ### Gravitational time delay and contraction of matter How can any matter contract to its Schwarzschild radius if gravitational time dilation clearly states that all clocks stop at that point. So any contraction any movement would stop. If that is so why ... 1answer 55 views ### A physical sense of an Inertial frame Definition clarification needed, please: I am hoping to get physical sense of an "inertial frame". Do inertial reference frames all have zero curvature for their spacetime? So is an inertial frame ... 1answer 122 views ### Why does weak equivalence principle say gravity is equivalent to acceleration? I am told that the weak equivalent principle, that $m_i=m_g$ (inertial and gravitational masses are equivalent) is equivalent to the statement that in a small system you can't tell whether you are in ... 1answer 248 views ### warp drive with gravitational waves in the nonlinear regime gravitational waves are strictly transversal (in the linear regime at least), also their amplitudes are tiny even for cosmic scale events like supernovas or binary black holes (at least far away, ... 0answers 254 views ### Gravitation and the QFT vacuum I'm asking this to get yet another lessson in the inability of QFT and GR to cohabit. Many people believe GR must yield to quantization. The question here is as to why the activity of the vacuum ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9268411993980408, "perplexity_flag": "middle"}