http://mathoverflow.net/questions/91604/does-regularity-of-the-boundary-imply-interior-sphere-condition
## Does regularity of the boundary imply interior sphere condition

In the article of Massari presented here there is a trace inequality which is said to be true for domains which satisfy the interior sphere condition: There exists $\rho>0$ such that for every $x \in \Omega$ there is a ball $B_\rho$ of radius $\rho$ such that $x \in B_\rho \subset \Omega$. This roughly means that the curvature of the domain is bounded from above. In another article, Anzellotti and Giaquinta prove a similar trace inequality for bounded domains with $C^1$ boundary.

My question is: If a bounded open set $\Omega$ has $C^1$ boundary, is it true that it satisfies the interior sphere condition mentioned above? If the answer is negative for $C^1$ boundary, is it possible that for a $C^k$ boundary with $k \geq 2$, or a $C^\infty$ boundary, the result becomes true?

- No. Consider the domain $y>x^2\sqrt{\sin(1/x)^2+x^{100}}$ near $0$ (in other words, make a sequence of "almost angles" flattened by some factor to ensure that you stay $C^1$ at the limiting point). – fedja Mar 19 2012 at 10:23
- Also, $\Omega:=\{ y > x^2\log|x| \}$ has $C^1$ boundary, and for $h > 0$ the maximal $\rho$ such that $(0,h)\in B_\rho\subset \Omega$ is $o(1)$ as $h\to 0$. – Pietro Majer Mar 20 2012 at 10:11
- Thank you for your examples. – Beni Bogosel Mar 20 2012 at 13:34

## 2 Answers

I think the remark on the curvature of the boundary of $\Omega$ might give some insight into this problem. Assume that $\Omega \subseteq \mathbb{R}^2$ has a $C^2$ boundary curve. Then its curvature is bounded from above by some $\varepsilon > 0$. This implies that for any point of the curve, there is an osculating circle of radius $R \geq 1/\varepsilon$. The tube lemma should imply that there is some sort of $\delta$-collar around the boundary curve (using the normal bundle of the curve). Outside the $\delta$-collar, every point $x\in \Omega$ is contained in $B(x,\delta)$. After taking $\delta < 1/\varepsilon$, inside the collar, every point is contained in an osculating circle of radius $\delta$. I assume this argument should work (after some refinement) for more complicated boundaries (say, if $\Omega$ is an annulus) and in higher dimensions using the Riemannian curvature.

Elaborating on Malte's answer: it's not the Riemann curvature that matters, it's the second fundamental form of the boundary and, specifically, the reciprocals of its eigenvalues, which are known as the principal radii. If the boundary is $C^2$, then given any point $x$ on the boundary, there is a positive lower bound $\rho$ for all of the principal radii for points on the boundary within distance $1$ of $x$. Then the ball of radius $\rho$ that is tangent to the boundary at $x$ is contained fully inside the domain.
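Pietro Majer's example can be probed numerically. Here is a minimal Python sketch (my own illustration, not part of the original thread): it evaluates the curvature of the boundary curve $y = x^2\log|x|$ near the origin. The curvature blows up as $x \to 0$, so the osculating radius shrinks to $0$ there and no uniform interior sphere radius $\rho$ can exist, even though the boundary is $C^1$.

```python
import numpy as np

# Boundary curve of Pietro Majer's example: y = x^2 * log|x| (x > 0).
# Curvature kappa = |y''| / (1 + y'^2)^(3/2); here y'' = 2*log|x| + 3,
# which is unbounded as x -> 0, so the osculating radius 1/kappa -> 0.
for x in [1e-1, 1e-2, 1e-4, 1e-8]:
    y1 = 2 * x * np.log(abs(x)) + x        # y'
    y2 = 2 * np.log(abs(x)) + 3            # y''
    kappa = abs(y2) / (1 + y1**2) ** 1.5
    print(f"x = {x:.0e}: curvature = {kappa:.2e}, osculating radius = {1/kappa:.2e}")
```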
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 40, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9414495825767517, "perplexity_flag": "head"}
http://logiciansdoitwithmodels.com/2010/09/
# Logicians do it with Models

A self-guided jaunt in philosophical logic

## Archive for September, 2010

### Determination and Definability: Newtonian Mechanics & Statistical Dynamics

September 30, 2010

The last determination-without-reduction example I'll talk about from Hellman is the one he gives regarding classical particle mechanics and statistical mechanics. A process is reversible if it could proceed forward in time as well as backwards (think of playing, in reverse, a video of a model of the orbits of the planets around the sun). A process is irreversible when it can only proceed in one direction of time and would violate physical law proceeding in the opposite time direction (think of playing, in reverse, a video of a gas escaping a bottle).

Newtonian laws governing motion are time-symmetric and reversible: forward motions of Newtonian systems are on a par with motions backwards in time. Statistical mechanics, on the other hand, attempts to explain irreversible behavior of the higher-level observable phenomena of thermodynamics, like temperature, diffusion, pressure and entropy. The macroscopic properties of thermodynamics are defined in terms of the phase quantities of Newtonian mechanics with the addition of a measure-theoretic probability density function, as well as some a priori assumptions about distribution (e.g., equiprobability of equal volumes of phase space). For macroscopic properties like entropy, more complex probabilistic concepts come into play: dividing phase space into cells and replacing the mechanical density by its average over each cell; then entropy increases and the distribution density tends uniformly to equilibrium. This is what the Ehrenfests (Paul and Tanya) called coarse-graining, and it's a method for converting a probability density in phase space into a piecewise constant function by density averaging in cells of phase space (a toy numerical illustration appears at the end of this post). Coarse-grained densities are needed to avoid paradoxical results concerning how irreversible processes of thermodynamics arise from completely reversible mechanical interactions.

The question Hellman poses is: can the higher-level concepts of thermodynamics be explicitly defined in the language of Newtonian mechanics? Determination, at least, holds: having fixed any two closed particle systems that are identical at the level of Newtonian mechanics, their higher-level behavior (studied by thermodynamics) will be identical. Each of these systems will be represented by the same trajectory in phase space. Hellman gives the example that if one system enters higher entropy regions at a given time, then the same entropy regions will be entered by the other system.

Definability, on the other hand, is more difficult to establish, since the language of classical Newtonian mechanics is not as mathematically robust as that of statistical dynamics. And ultimately, definability in this case requires a significant change to the language of mechanics: additional vocabulary for measure theory, or even set theory, to speak of mathematical objects more generally.
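The coarse-graining idea is easy to see in a toy computation. The following Python sketch is my own illustration (the grid size, cell size, and random density are arbitrary choices): it block-averages a density over cells and checks that total probability is preserved while the Gibbs-style entropy $-\sum \rho \log \rho$ does not decrease under the averaging.

```python
import numpy as np

def coarse_grain(rho, cell):
    """Replace a density on a square grid by its average over cell-by-cell blocks."""
    n = rho.shape[0]
    blocks = rho.reshape(n // cell, cell, n // cell, cell)
    return blocks.mean(axis=(1, 3)).repeat(cell, axis=0).repeat(cell, axis=1)

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

rng = np.random.default_rng(0)
rho = rng.random((64, 64))
rho /= rho.sum()                     # normalize: a toy probability density on phase space
cg = coarse_grain(rho, cell=8)

print(np.isclose(cg.sum(), 1.0))                    # True: coarse-graining conserves probability
print(entropy(cg.ravel()) >= entropy(rho.ravel()))  # True: averaging never lowers entropy
```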
We've covered some examples of determination without reduction: (1) no explicit definition of the truth predicate $\mathsf{Tr}(x)$ in $\textup{L}$, in spite of $\textup{L}$-truth and $\textup{L}$-reference determining $\mathsf{Tr}$-truth and $\mathsf{Tr}$-reference, respectively, in $\alpha$-structures; (2) in $\alpha^{\textup{C}}$-structures, $\textup{L}$-reference determines $\textup{DF}arith$-reference, but within this same class of structures, $\textup{L} + \textup{G}(x)$ does not inductively define $\textup{DF}arith$; (3) the mechanical properties of two fixed, closed particle systems determine their macro-level thermodynamic properties without thereby establishing definability, as the language of Newtonian mechanics would have to undergo significant changes, incorporating the language, at least, of measure theory.

Next, I'm going to cover an important distinction Hellman makes between the ontological and ideological status of properties, relations, attributes, etc. beyond predicates and sets. After that I'll go into a detailed, critical evaluation of some of the anti-materialist claims David Chalmers makes in The Conscious Mind: In Search of a Fundamental Theory.

### Hellman's Second Definability Example

September 27, 2010

Tarski's theorem shows that the set $\mathsf{Th} (\Omega)^\#$ of code numbers of sentences true in $\Omega$ is not definable. This has negative consequences for the prospects of arithmetically defining a truth predicate for arithmetic. Nevertheless, Tarski showed how to define truth in terms of satisfaction, by giving an inductive definition of satisfaction beginning with atomic sentences and working up through sentences of higher and higher complexity in terms of the satisfaction of their parts. While a stronger set theory or higher-order logic is still required to convert the inductive definition into an explicit one, Hellman investigates how Det-T and Det-R measure up against this weaker type of definability.

Addison's theorem establishes that the class of arithmetically definable sets of numbers is itself not an arithmetically definable class of sets. This means that in the language $\textup{L}$ of arithmetic extended by a one-place predicate $\textup{G}(x)$, no formula $\textup{S}$ is true in $\Omega$ under an assignment of a set $\textup{A}$ of numbers to $\textup{G}(x)$ exactly when $\textup{A}$ is definable over $\Omega$. The proof is involved and is clinched by contradiction on the existence of a generic arithmetical set (I may, or may not, get around to explaining what this is in the next post or so, since it involves explanation of the technique of forcing).

The example turns on this: Addison's theorem shows that the predicate $\textup{DF}arith$ = 'set of numbers definable in arithmetic' is not inductively definable in arithmetic. Nevertheless, $\textup{DF}arith$ is determined by the primitive predicates of $\textup{L}$. Set out the following: $\alpha$ is the set of standard $\omega$-models of arithmetic. Now extend each model $m \in \alpha$ by adding the class $\textup{C}$ of all sets $\textup{X}$ of natural numbers from the domain of $m$ such that $\textup{X}$ is the extension in $m$ of a formula $\textup{B}(x)$ of $\textup{L}$ with one free variable. Let $\alpha^{\textup{C}}$ be the class of $\alpha$-structures extended in this way. So $\alpha^{\textup{C}}$ contains all the standard models of arithmetic that have standard interpretations of $\textup{DF}arith$.
This means that in $\alpha^{\textup{C}}$-structures, $\textup{L}$-reference determines $\textup{DF}arith$-reference, but within this same class of structures, $\textup{L} + \textup{G}(x)$ does not inductively define $\textup{DF}arith$. In spite of this lack of definability, Det-R still holds, since (and this is all up to isomorphism) any two structures that assign the same interpretations to the primitives of $\textup{L}$ must also assign the same extensions to the well-formed formulas of $\textup{L}$ with only one free variable. So, up to isomorphism, the same sets of natural numbers are assigned to the distinguished elements of $\textup{C}$.

The next example from Hellman is not mathematical, but from classical particle mechanics. After that I will go into Hellman's clarification of the difference between the ontological and ideological status of attributes, properties and relations, before moving into constructive work on the mental.

### Fun With Robinson Arithmetic

September 17, 2010

In yesterday's notes I mentioned Robinson Arithmetic ($\mathsf{Q}$) as a subsystem of the axiom system $\textup{T}$ in Hellman's example. Just because it's so much fun, let's talk about $\mathsf{Q}$. The axioms are listed below. The dot notation distinguishes between the symbol used and the relation it represents (this notation was used earlier without explanation here and here). $\dot{\mathsf{S}}x$ indicates the successor function that returns the successor of $x$.

## Robinson Arithmetic

$(\textup{S1}) \ \forall x (\neg \dot{0} \ \dot{=} \ \dot{\mathsf{S}}x)$

$(\textup{S2}) \ \forall x \forall y (\dot{\mathsf{S}}x \ \dot{=} \ \dot{\mathsf{S}} y \rightarrow x \dot{=} y)$

$(\textup{L1}) \ \forall x (\neg x \dot{<} \dot{0})$

$(\textup{L2}) \ \forall x \forall y [x \dot{<} \dot{\mathsf{S}}y \leftrightarrow (x \dot{<} y \vee x \dot{=} y)]$

$(\textup{L3}) \ \forall x \forall y [(x \dot{<} y) \vee (x \dot{=} y) \vee (y \dot{<} x)]$

$(\textup{A1}) \ \forall x (x \dot{+} \dot{0} \dot{=} x)$

$(\textup{A2}) \ \forall x \forall y [x \dot{+} \dot{\mathsf{S}}y \dot{=} \dot{\mathsf{S}}(x \dot{+} y)]$

$(\textup{M1}) \ \forall x (x \dot{\times} \dot{0} \dot{=} \dot {0})$

$(\textup{M2}) \ \forall x \forall y [(x \dot{\times} \dot{\mathsf{S}}y) \dot{=} (x \dot{\times} y) \dot{+} x]$

These axioms are from Hinman and correspond to what Boolos, Burgess and Jeffrey (in chapter 16 of Computability and Logic) call "minimal arithmetic". Since the Boolos et al book is so widely studied, here is a map linking both axiomatizations: $(\textup{S1}) \mapsto 1$, $(\textup{S2}) \mapsto 2$, $(\textup{L1}) \mapsto 7$, $(\textup{L2}) \mapsto 8$, $(\textup{L3}) \mapsto 9$, $(\textup{A1}) \mapsto 3$, $(\textup{A2}) \mapsto 4$, $(\textup{M1}) \mapsto 5$, $(\textup{M2}) \mapsto 6$.

This is going to be as confusing as keeping track of names in One Hundred Years of Solitude. (A quick machine check that these axioms hold in the standard model appears below.)
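Since all of $\mathsf{Q}$'s axioms are universal sentences, we can at least sanity-check them on an initial segment of the standard model. A small Python sketch, my own illustration (the bound $N$ is arbitrary, and of course a finite check is not a proof):

```python
from itertools import product

N = 50                      # finite sanity check only -- not a proof
S = lambda n: n + 1         # the successor function

for x, y in product(range(N), repeat=2):
    assert S(x) != 0                              # S1: 0 is not a successor
    assert (S(x) != S(y)) or (x == y)             # S2: successor is injective
    assert not (x < 0)                            # L1
    assert (x < S(y)) == (x < y or x == y)        # L2
    assert (x < y) or (x == y) or (y < x)         # L3
    assert x + 0 == x                             # A1
    assert x + S(y) == S(x + y)                   # A2
    assert x * 0 == 0                             # M1
    assert x * S(y) == x * y + x                  # M2

print(f"Q's axioms hold on {{0, ..., {N - 1}}}")
```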
Boolos et al compare $\mathsf{Q}$/minimal arithmetic to the system $\mathsf{R}$, which has historically been called "Robinson Arithmetic". $\mathsf{R}$ differs from $\mathsf{Q}$ in that it contains

$(\textup{Q0}) \ \forall x [x \dot{=} \dot{0} \vee \exists y (x \dot{=} \dot{\mathsf{S}}y)]$

and it replaces $\textup{L1}$–$\textup{L3}$ with

$(\textup{Q10}) \ \forall x \forall y [x \dot{<} y \leftrightarrow \exists z (\dot{\mathsf{S}}z \dot{+} x \dot{=} y)]$

In their comparison, Boolos et al conclude that $\mathsf{Q}$ and $\mathsf{R}$ have a lot in common (e.g., some of the same theorems are provable), but in some cases $\mathsf{R}$ is stronger than $\mathsf{Q}$: e.g., in the non-standard ordinal model of $\mathsf{Q}$, some simple laws (like $\dot{1} \dot{+} x \dot{=} x \dot{+} \dot{1}$) fail to hold, in addition to $(\textup{Q0})$ and $(\textup{Q10})$. At the same time, however, $\mathsf{Q}$ is in some cases stronger than $\mathsf{R}$. For example, there is a model of $\mathsf{R}$ (with domain $\omega$ together with non-standard elements $a$ and $b$, and natural interpretations of $\dot{0}$, $\dot{+}$, $\dot{\times}$, $\dot{\mathsf{S}}$) where basic laws like $x \dot{<} \dot{\mathsf{S}}x$ fail to hold.

While it's easier to represent all recursive functions in $\mathsf{Q}$ than in $\mathsf{R}$, any one of these systems will do for Hellman's example. What's interesting is that even in theories like $\mathsf{Q}$ and $\mathsf{R}$, which lack an induction schema (and thus fall just short of Peano Arithmetic), the truth predicate is undefinable.

Hopefully in the next update I'll be able to get to Hellman's second definability example.

### Hellman's First Definability Example

September 15, 2010

What Tarski's Theorem shows is that interpreted formal languages that are interesting (i.e., with enough expressive machinery to represent arithmetic or fragments thereof) cannot contain a predicate whose extension is the set of code numbers (e.g., $\mathsf{Th}(\Omega)^\#$) of sentences true in the interpretation. The extension of any proposed truth predicate in such a system escapes the definitional machinery of the system. Of course, the truth predicate for first-order arithmetic can be defined with appeal to a stronger system, like second-order arithmetic in the case of the Peano Axioms, etc.

Hellman's first example is the following. It is a corollary of Tarski's theorem that when a theory in the language $\textup{L}$ of arithmetic (e.g., an axiom system $\textup{T}$ containing Robinson Arithmetic ($\mathsf{Q}$)), with symbols for zero, successor, addition, and multiplication, is extended with a one-place predicate $\mathsf{Tr}(x)$ (read "true in arithmetic") such that for each closed sentence $\textup{S}$ in $\textup{L}$ there is a new axiom of the form $\ulcorner \mathsf{Tr}(n) \leftrightarrow \textup{S}\urcorner$ (where $n$ is the numeral for a code number for the sentence $\textup{S}$), the resulting theory $\textup{T}^{*}$ contains no explicit definition of $\mathsf{Tr}(x)$ in $\textup{L}$.

Connecting this to our ongoing discussion of determination of truth and reference in special collections of models, suppose that $\alpha$ is the class of standard $\omega$-models of $\textup{T}^{*}$. Then we have:

• In $\alpha$-structures, $\textup{L}$-truth determines $\mathsf{Tr}$-truth.
• In $\alpha$-structures, $\textup{L}$-reference determines $\mathsf{Tr}$-reference.

This means that once you have fixed the arithmetical truths in the class $\alpha$, the 'true-in-arithmetic' truths are fixed as well, and the same goes for the reference of the vocabularies.
To avoid collapsing into reductionism via Beth's theorem, note that no first-order theory (like those under discussion) in a language with finitely many non-logical symbols has as its models just the models in $\alpha$. If you extend $\alpha$ to $\alpha^{*}$ containing all models of $\textup{T}^{*}$, then you do get reductionism, since determination of reference in $\alpha^{*}$ amounts to implicit definability in $\textup{T}^{*}$; this, in turn, shows that there exist non-standard models of arithmetic.

This is a good example because it is clear, rests on popular, well-established results, and firmly shows how determination of truth and reference in one core theory carries over to its extension, without thereby reducing the extension to the core.

In the next update I'll discuss Hellman's second definability example.

Posted in Robinson Arithmetic, Supervenience and Determination, Tarski's Theorem, Undefinability of Truth

### Tarski's Theorem

September 14, 2010

Today we're going to follow Hinman's proof of Tarski's theorem. The proof is by contradiction, and employs diagonalization. First, assuming clauses (i)–(iii) on definability, a universal $\textup{U}$-section for the class of $\Omega$-definable sets is established, and its diagonal is shown not to be definable over $\Omega$.

A $\textup{U}$-section is a set defined for any set $\textup{A}$ and any relation $\textup{U} \subseteq \textup{A} \times \textup{A}$ such that for each $a \in \textup{A}$, $\textup{A}_{a} := \{b : \textup{U}(a, b)\}$. $\textup{U}$ is universal for a class $\mathcal{C} \subseteq \wp (\textup{A})$ if, and only if, every member of the class $\mathcal{C}$ is a $\textup{U}$-section. The diagonal set of $\textup{U}$ is $\textup{D}_{\textup{U}} := \{a : \textup{U}(a, a)\}$.

Second, assuming the definability of $\mathsf{Th}(\Omega)^{\#}$, there is a formula $\phi (y)$ such that for all $p$, $p \in \mathsf{Th}(\Omega)^{\#} \Longleftrightarrow \phi (\dot{p}) \in \mathsf{Th}(\Omega)$. But since the function $\mathsf{Sb}$ is effectively computable, it is definable, and so $\mathsf{Sb}(m) = p \Longleftrightarrow \psi (\dot{m}, \dot{p}) \in \mathsf{Th}(\Omega)$, for some $\psi (x, y)$. But this means that

$m \in \textup{D}_{\textup{U}} \Longleftrightarrow \exists p [\phi(\dot{p}) \in \mathsf{Th}(\Omega)$ and $\psi(\dot{m}, \dot{p}) \in \mathsf{Th}(\Omega)]$

$m \in \textup{D}_{\textup{U}} \Longleftrightarrow \exists y [\phi(y) \wedge \psi(\dot{m}, y)] \in \mathsf{Th}(\Omega)$,

which means that $\textup{D}_{\textup{U}}$ is definable. This is a contradiction. So $\mathsf{Th}(\Omega)^{\#}$ is not definable and not effectively countable. Nor is $\mathsf{Th}(\Omega)$ effectively countable.

In the next update (probably on a plane!) I'll discuss this theorem and get into Hellman's example.
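The combinatorial heart of that diagonalization can be checked by brute force in Python. This is my own toy illustration on a finite set (the theorem proper is about $\omega$ and definability, not finite sets): for any relation $\textup{U}$ on a set $\textup{A}$, the complement of the diagonal, $\{a : \neg\textup{U}(a,a)\}$, differs from every section $\textup{A}_a$ at the element $a$ itself, so it is never a section.

```python
from itertools import product

A = range(4)
# Enumerate every relation U on A x A (2^16 of them) ...
for bits in product([False, True], repeat=len(A) ** 2):
    U = {(a, b): bits[len(A) * a + b] for a in A for b in A}
    anti_diagonal = {a for a in A if not U[(a, a)]}
    sections = [{b for b in A if U[(a, b)]} for a in A]
    # ... and verify that the anti-diagonal is never among the sections:
    assert anti_diagonal not in sections

print("no relation on a 4-element set has the complement of its diagonal as a section")
```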
### Statement of Tarski's Theorem

September 13, 2010

The theorem showing that the theory determined by the standard model of arithmetic, $\mathsf{Th} (\Omega)$, is undecidable, and consequently not decidably axiomatizable (this means that $\mathsf{Th} (\Omega) \not= \mathsf{Th} (\Gamma)$ for any decidable $\Gamma$ consisting of axioms of arithmetic, like those of Peano Arithmetic or one of its extensions), can be strengthened by showing that $\mathsf{Th}(\Omega)^{\#} := \{p: \chi_{p} \in \mathsf{Th}(\Omega)\}$, the set of indices of sentences that are true in $\Omega$, is not definable over $\Omega$, which means that $\mathsf{Th}(\Omega)$ is not effectively enumerable. This is known as Tarski's Theorem on the undefinability of truth.

Now set up the (provably effectively computable) function $\mathsf{Sb}(m):= \#(\chi_{m}(\dot{m}))$, which, when $\chi_{m}$ is a formula of $\textup{L}_{\Omega}$ with one free variable, returns the code number of the result of substituting the numeral $\dot{m}$ into $\chi_{m}$.

In the next update I'll discuss the proof of Tarski's Theorem and move into the corollary that Hellman uses to give an example of determination without reduction.

### Assumptions On Definability Over The Standard Model Ω, Part 2

September 11, 2010

So we set out the basic assumptions about definability over $\Omega$ that factor into the proof of Tarski's theorem. Let's go over each one to get clear on all the definitions.

(i) Every effectively computable function $\textup{F}: \omega \rightarrow \omega$ is definable over $\Omega$.

A function $\textup{F}: \textup{X} \rightarrow \textup{Z}$ is effectively computable if, and only if, there is an effective procedure such that, for any $x \in \textup{X}$, the procedure calculates the value $\textup{F}(x)$. The claim is that every such function defined on the natural numbers is definable in the standard model.

(ii) Every decidable set $\textup{X} \subseteq \omega$ is definable over $\Omega$.

A subset $\textup{A} \subseteq \textup{X}$ is decidable if, and only if, the property $\mathcal{P}$ defined as $(\mathcal{P}(x): \Longleftrightarrow x \in \textup{A})$ is decidable in $\textup{X}$. And a property $\mathcal{P}$ defined over a set $\textup{X}$ is decidable in $\textup{X}$ if, and only if, there is an effective (or decision) procedure for deciding, for any $x \in \textup{X}$, whether or not $\mathcal{P}(x)$ holds. Here the claim is straightforward: every such set of natural numbers is definable in the standard model.

(iii) Every effectively countable set $\textup{X} \subseteq \omega$ is definable over $\Omega$.

Finally, a set $\textup{X}$ is effectively countable if, and only if, $\textup{X}$ is empty or there is an effectively computable function $\textup{F}: \omega \rightarrow \textup{X}$ that counts $\textup{X}$ (i.e., $\textup{X}$ is the image of $\textup{F}$: $\textup{X} = \mathsf{Im}(\textup{F}) := \{\textup{F}(n): n \in \omega\}$). So what's being assumed here is that the effectively countable subsets of the natural numbers are definable in the standard model.

Just a few more definitions and we can get into Tarski's theorem. In the next update I'll define the set of indices for the theorems of $\Omega$ as well as a diagonalization function that will play a role in the proof.

Posted in Metatheory of First Order Logic, Models of Arithmetic, Undefinability of Truth

### Assumptions On Definability Over The Standard Model Ω, Part 1

September 10, 2010

In the late 1930s Tarski proved that arithmetical truth cannot be defined in arithmetic. In the next few updates I'm going to be discussing Tarski's undefinability theorem, following chapter 4 of Hinman's Fundamentals. Check this earlier note if you want to get clear on the notion of definability we're talking about.

Below are some of the basic assumptions about definability and the standard model of arithmetic that will factor into the proof of Tarski's theorem.

(i) Every effectively computable function $\textup{F}: \omega \rightarrow \omega$ is definable over $\Omega$.

(ii) Every decidable set $\textup{X} \subseteq \omega$ is definable over $\Omega$.

(iii) Every effectively countable set $\textup{X} \subseteq \omega$ is definable over $\Omega$.
In the next update I'll break down each of these assumptions and hopefully move into the theorem itself. The point of all this is just to be able to get through Hellman's example of determination without reduction, using a corollary of the undefinability theorem.

Posted in Metatheory of First Order Logic, Models of Arithmetic, Undefinability of Truth

### Hellman's Physicalist Materialism

September 8, 2010

I'm moving on to Hellman's "Physicalist Materialism", a paper whose aim is to apply the physicalist materialist position he developed in "Physicalism" to problems in philosophy and philosophy of science. I want to focus on just three parts of this paper. The first is the examples Hellman gives of determination without reduction. The second is the distinction he clarifies between the ontological and ideological status of attributes, properties and relations. The third is the section on the mental. I really want to get into the section on theoretical equivalence, but I will only do so if I can somehow fit it into the evaluation of the anti-materialism of Chalmers.

### "Physicalism" Concluding Summary

September 7, 2010

It took me a good while to get through this paper (more than I expected), but here we are. What has Hellman accomplished?

First he showed us how to build the ontological principle of physical exhaustion, PE: $(\forall x)(\exists \alpha) (x \in \textup{R}(\alpha))$. PE allows us to say that everything is exhausted by the physical without (embarrassingly) implying that everything is in the extension of a basic physical predicate.

Then he introduced the identity of physical indiscernibles, IPI and IPI': $(\forall u)(\forall v)((\forall \phi) (\phi u \leftrightarrow \phi v) \rightarrow (u = v))$ and $(\forall \psi) (\forall u) (\forall v) (\exists \phi) (\psi u \wedge \lnot \psi v \rightarrow \phi u \wedge \lnot \phi v)$, respectively. IPI says that if two objects have the same physical properties, then they are the same thing, while IPI' says that no two objects are distinct with respect to a $\psi$ property without being distinct with respect to a $\phi$ property.

These three principles neither independently nor in conjunction imply or require reduction to the physical, but they are also too weak to express the physicalist thesis that physical phenomena determine all phenomena. He then turns his attention to principles of determination.

In addition to PE and IPI/IPI', Hellman introduces the determination of truth, Det-T: in $\alpha$-structures, $\phi$-truth determines $\psi$-truth if, and only if, $(\forall m)(\forall m')((m, m' \in \alpha \wedge m \vert \phi \equiv m' \vert \phi) \rightarrow m \vert \psi \equiv m' \vert \psi)$, and Det-R: in $\alpha$-structures, $\phi$-reference determines $\psi$-reference if, and only if, $(\forall m)(\forall m') ((m, m' \in \alpha \wedge m \vert \phi = m' \vert \phi) \rightarrow m \vert \psi = m' \vert \psi)$. Det-T says that once you have given a complete description of things in $\phi$ terms, there is only one correct way to describe them in $\psi$ terms. Det-R says that if two $\alpha$-structures agree in what they assign to the $\phi$ terms, then they agree on what they assign to the $\psi$ terms.

Given a notion of definability, Hellman is able to state the thesis of physical reduction, PR: in $\alpha$-structures, $\phi$ reduces $\psi$ if, and only if, $(\forall \textup{P})(\textup{P} \in \psi \rightarrow \textup{P}$ is definable in terms of $\phi$ in $\alpha$-structures$)$.
While assumptions about the mathematical-physical determination of all truths (and, maybe, reference) are regulative principles of scientific theory construction, the assumption that all terms and all theories are in general reducible to mathematical-physical terms is probably false. Hellman's physicalist materialism is instead composed of PE, Det-T and Det-R, with PE independent of PR and the determination principles.

Because there are non-standard models of the laws of science, our formal systems do not model scientific possibility in a way that permits the move from physical determination to physical reduction; thus Beth's definability theorem poses no threat to physicalist materialism. And any way you cut it, the link between theories as syntactic entities and reductionism doesn't carry over to determination of reference, ruling out even accidental co-extensiveness between terms.

This sets up the theoretical background for an evaluation of the applications of physicalist materialism across disciplines and problems in philosophy of science, mind and social theory. In the next updates I will be covering Hellman's 1977 "Physicalist Materialism" (Noûs), as well as giving a detailed, critical evaluation of some of the anti-materialist claims David Chalmers makes in the early chapters of his book The Conscious Mind: In Search of a Fundamental Theory.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 227, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8959549069404602, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2012/07/17/the-higgs-mechanism-part-2-examples-of-lagrangian-field-equations/?like=1&source=post_flair&_wpnonce=ae4d6bb289
# The Unapologetic Mathematician

## The Higgs Mechanism part 2: Examples of Lagrangian Field Equations

This is part two of a four-part discussion of the idea behind how the Higgs field does its thing. Read Part 1 first.

Okay, now that we're sold on the Lagrangian formalism you can rest easy: I'm not going to go through the gory details of any more variational calculus. I do want to clear a couple of notational things out of the way, though. They might not all matter for the purposes of our discussion, but better safe than sorry.

First off, I'm going to use a coordinate system where the speed of light is 1. That is, if my unit of time is seconds, my unit of distance is light-seconds. Mostly this helps keep annoying constants out of the way of the equations; physicists do this basically all the time.

The other thing is that I'm going to work in four-dimensional spacetime, meaning we've got four coordinates: $x_0$, $x_1$, $x_2$, and $x_3$. We calculate dot products by writing $v\cdot w=v_1w_1+v_2w_2+v_3w_3-v_0w_0$. Yes, that minus sign is weird, but that's just how spacetime works. Also, instead of writing spacetime vectors, I'm going to write down their components, indexed by a subscript that's meant to run from 0 to 3. Usually this will be a Greek letter from the middle of the alphabet like $\mu$ or $\nu$. Similarly, instead of writing $\nabla$ for the vector composed of the four spacetime derivatives of a field I'll just write down the derivatives, and I'll write $\partial_\mu f$ instead of $\frac{\partial f}{\partial x_\mu}$.

Along with writing down components instead of vectors I won't be writing dot products explicitly. Instead I'll use the common convention that when the same index appears twice we're supposed to sum over it, remembering that the zero component gets a minus sign. That is, $v_\mu w_\mu$ is the dot product from above. Similarly, we can multiply a matrix with entries $A_{\mu\nu}$ by a vector $v_\nu$ to get $w_\mu=A_{\mu\nu}v_\nu$; notice how the summed index $\nu$ gets "eaten up" in the process.

Okay, now even without going through the details there's a fair bit we can infer from general rules of thumb. Any term in the Lagrangian that contains a derivative of the field we're varying is almost always going to be the squared-length of that derivative, and the resulting term in the variational equations will be the negative of a second derivative of the field. For any term that involves the plain field we basically take its derivative as if the field were a variable. Any term that doesn't involve the field at all just goes away. And since we prefer positive second-derivative terms to negative ones, we usually flip the sign of the resulting equation; since the other side is zero this doesn't matter.

So if, for instance, we have the following Lagrangian of a complex scalar field $\phi$:

$\displaystyle L=\partial_\mu\phi^*\partial_\mu\phi-m^2\phi^*\phi$

we get two equations by varying the field $\phi$ and its complex conjugate $\phi^*$ separately:

$\displaystyle\begin{aligned}\partial_\mu\partial_\mu\phi^*+m^2\phi^*&=0\\\partial_\mu\partial_\mu\phi+m^2\phi&=0\end{aligned}$

It may not seem to make sense to vary the field and its complex conjugate separately, but the two equations we get at the end are basically the same anyway, so we'll let this slide for now. Anyway, what we get is a second derivative of $\phi$ set equal to $m^2$ times $\phi$ itself, which we call the "Klein-Gordon wave equation" for $\phi$.
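As a sanity check on that variational bookkeeping, here is a short sympy sketch (my own illustration, not part of the original post) that derives the field equations from this Lagrangian in 1+1 dimensions, treating $\phi$ and $\phi^*$ as independent fields ($v$ and $u$ below are placeholder names) and using the sign convention above, where the time component of the dot product carries a minus sign:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t, x, m = sp.symbols('t x m')
u = sp.Function('u')(t, x)   # plays the role of phi^*
v = sp.Function('v')(t, x)   # plays the role of phi

# L = d_mu phi^* d_mu phi - m^2 phi^* phi, with the 0 (time) component
# of the implicit dot product carrying a minus sign:
L = sp.diff(u, x) * sp.diff(v, x) - sp.diff(u, t) * sp.diff(v, t) - m**2 * u * v

# Varying each field independently yields its Euler-Lagrange equation:
for eq in euler_equations(L, [u, v], [t, x]):
    print(eq)
# Each printed equation rearranges to d_mu d_mu(field) + m^2 field = 0,
# i.e. the Klein-Gordon equation from the post.
```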
Since the term $m^2\phi^*\phi$ gives rise to the term $m^2\phi$ in the field equations, we call this the "mass term".

In the case of electromagnetism in a vacuum we just have the electromagnetic fields and no charge or current distribution. We use the Faraday field $F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu$ to write down the Lagrangian

$\displaystyle L=-\frac{1}{4}F_{\mu\nu}F_{\mu\nu}$

which gives rise to the field equations

$\displaystyle\partial_\mu F_{\mu\nu}=0$

or, equivalently, in terms of the potential field $A$:

$\displaystyle\begin{aligned}\partial_\mu\partial_\mu A_\nu&=0\\\partial_\nu A_\nu&=0\end{aligned}$

The second equation just expresses a choice we can make to always consider divergence-free potentials without affecting the predictions of electromagnetism; the first equation looks like the Klein-Gordon equation again, except there's no mass term. Indeed, we know that photons — the particles associated to the electromagnetic field — have no rest mass!

Turning back to the complex scalar field, we notice that there's a certain symmetry to this Lagrangian. Specifically, if we replace $\phi(x)$ and $\phi^*(x)$ by

$\displaystyle\begin{aligned}\phi'(x)&=e^{i\alpha}\phi(x)\\\phi'^*(x)&=e^{-i\alpha}\phi^*(x)\end{aligned}$

for any constant $\alpha$, we get the same result. This is important, and it turns out to be a clue that leads us — I won't go into the details — to consider the quantity

$\displaystyle j_\mu=-i(\phi^*\partial_\mu\phi-\phi\partial_\mu\phi^*)$

This is interesting because we can calculate

$\displaystyle\begin{aligned}\partial_\mu j_\mu&=-i\partial_\mu(\phi^*\partial_\mu\phi-\phi\partial_\mu\phi^*)\\&=-i(\partial_\mu\phi^*\partial_\mu\phi+\phi^*\partial_\mu\partial_\mu\phi-\partial_\mu\phi\partial_\mu\phi^*-\phi\partial_\mu\partial_\mu\phi^*)\\&=-i(\phi^*\partial_\mu\partial_\mu\phi-\phi\partial_\mu\partial_\mu\phi^*)\\&=-i(-m^2\phi^*\phi+m^2\phi\phi^*)\\&=0\end{aligned}$

where we've used the results of the Klein-Gordon equations. Since $\partial_\mu j_\mu=0$, this is a suitable vector field to use as a charge-current distribution; the equation just says that charge is conserved! That is, we can write down a Lagrangian involving both electromagnetism — that is, our "massless vector field" $A_\mu$ — and our scalar field:

$\displaystyle L=-\frac{1}{4}F_{\mu\nu}F_{\mu\nu}-ej_\mu A_\mu$

where $e$ is a "coupling constant" that tells us how important the "interaction term" involving both $j_\mu$ and $A_\mu$ is. If it's zero, then the fields don't actually interact at all, but if it's large then they affect each other very strongly.
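That cancellation is easy to verify symbolically. A small sympy sketch (my own illustration, with the same 1+1-dimensional setup and placeholder names $u$, $v$ as in the previous snippet) checks that the cross terms drop out and that imposing the Klein-Gordon equations kills what remains:

```python
import sympy as sp

t, x, m = sp.symbols('t x m')
u = sp.Function('u')(t, x)   # phi^*
v = sp.Function('v')(t, x)   # phi

# j_mu = -i (phi^* d_mu phi - phi d_mu phi^*), in 1+1 dimensions
jt = -sp.I * (u * sp.diff(v, t) - v * sp.diff(u, t))
jx = -sp.I * (u * sp.diff(v, x) - v * sp.diff(u, x))

# d_mu j_mu with this convention: the time component gets a minus sign
div_j = sp.diff(jx, x) - sp.diff(jt, t)

# Impose Klein-Gordon, d_mu d_mu(field) = -m^2 field, written as
# (d/dx)^2 field = (d/dt)^2 field - m^2 field, then check the divergence vanishes:
kg = {sp.Derivative(v, (x, 2)): sp.Derivative(v, (t, 2)) - m**2 * v,
      sp.Derivative(u, (x, 2)): sp.Derivative(u, (t, 2)) - m**2 * u}
assert sp.simplify(div_j.subs(kg)) == 0
print("charge conservation: d_mu j_mu = 0 on Klein-Gordon solutions")
```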
## Comments

1. This is a bit disorienting to try to read when you don't follow the standard convention of matching upper and lower indexes. Also confusing when you say that $\partial_\mu$ is used for $\partial/\partial x_\mu$ whereas it's normally got the other sign ($\partial/\partial x^\mu$). Note also that there's a missing equality in the equation just after "which gives rise to the field equations". — July 17, 2012

2. Well, physicists themselves often disregard the idea of covariant-vs.-contravariant indices, especially when the metric is standard; I didn't feel like going into all that mess of an explanation here. As for the typo, thanks; fixed. — July 18, 2012

3. Question to physicists: Is it really that hard to put a big capital sigma to the left of your equation and put all of the summed indices below it separated by commas? I realise who are we to quibble with Einstein over notation, but it's pretty frustrating. — July 18, 2012

4. You'd have to do it on a term-by-term basis, since indices in different terms have little to do with each other. — July 18, 2012
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 43, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9241878986358643, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/280178/approximate-a-function-over-the-interval-0-1-by-a-polynomial-of-degree-n?answertab=oldest
# Approximate a function over the interval $[0, 1]$ by a polynomial of degree $n$ (or less)

To approximate a function $G$ over the interval $[0,1]$ by a polynomial $P$ of degree $n$ (or less), we minimize the function $F:\mathbb{R}^{n+1} \to \mathbb{R}$ given by $F(a) = \int_0^1 (G(x) - P_a(x))^2\,dx$, where $P_a(x) = a_nx^n+a_{n-1}x^{n-1}+\cdots+a_0$ and $a = (a_0,a_1,\ldots,a_n)$. Find the equation satisfied by the optimal coefficients $a_*$ (necessary condition). Show that this equation can be written as a linear equation of the form $Ma_* = B$ for some vector $B$ in $\mathbb{R}^{n+1}$ and some symmetric matrix $M$ in $\mathbb{R}^{(n+1)\times(n+1)}$.

To be honest, I have never seen a question of this type and don't know where to start; any help in regards to the thought process would be appreciated. Thanks.

## 4 Answers

Note that $P_{a+tb}=P_a+tP_b$, hence $$F(a+tb)=F(a)-2t\int_0^1 (G-P_a)P_b+t^2\int_0^1 P_b^2.$$ A necessary condition for $F(a+tb)\geqslant F(a)$ to hold for every $t$ in a neighborhood of $0$ is that $$\int_0^1 (G-P_a)P_b=0.$$ Using this for each $P_b(x)=x^k$ with $0\leqslant k\leqslant n$ yields the conditions $$\sum_{i=0}^na_i\int_0^1 x^{i+k}\mathrm dx=\int_0^1 G(x)x^{k}\mathrm dx.$$ You might want to deduce $M$ and $B$ from here...

- Did, by what reasoning did you derive the necessary condition mentioned in your post? – Stopwatch Jan 19 at 19:11
- By the fact that the only functions $t\mapsto At^2+Bt+C$ with a (local) minimum at $t=0$ are such that $B=0$ (and that $A\gt0$, but here this condition always holds). – Did Jan 19 at 23:00

Expand the square $$(G(x)-P_a(x))^2 = G(x)^2 + 2\sum_{0 \leq i < j \leq n}a_ia_jx^{i+j} + \sum_{i=0}^n(a_ix^i)^2 - 2G(x)\sum_{i=0}^na_ix^i.$$ Integrating, we get $$F(a) = F(0) + 2\sum_{i<j}\frac{a_ia_j}{i+j+1} + \sum_{i=0}^n \frac{a_i^2}{2i+1} - 2\sum_{i=0}^n a_i\int_0^1x^iG(x)dx.$$ This is a polynomial (thus $\mathscr{C}^\infty$) function of the coefficients. The optimal coefficients (they exist, according to the orthogonal projection theorem) must satisfy the equation $\nabla F(a)=0$: $$\sum_{j\neq i}\frac{a_j}{i+j+1} + \frac{a_i}{2i+1} - \int_0^1 x^iG(x)dx = 0,\qquad 0 \leq i \leq n.$$

$M$ is equal to the Hilbert matrix $M_{i,k}=\int_0^1 x^{i+k}dx$.

This is an optimization problem (you can actually find it in Luenberger's book "Linear and Nonlinear Programming", in the exercises of Chapter 7 (3rd edition)). The necessary optimality conditions state that in order for $a_*$ to be optimal, it must satisfy $\nabla_a F(a_*)=0$; in other words, we must have $\frac{\partial F}{\partial a_i}(a_*)=0$ for all $i=0,\ldots,n$.
The partial derivatives of $F$ with respect to the $a_i$ are given by: \begin{align*} \frac{\partial F}{\partial a_i}(a)&=\frac{\partial }{\partial a_i}\left(\int_0^1 (G(x)-P_a(x))^2 dx\right)\\ &=\int_0^1 \frac{\partial }{\partial a_i}(G(x)-P_a(x))^2 dx\\ &=-2\int_0^1 x^i (G(x)-P_a(x)) dx \end{align*} Setting these derivatives to 0 yields: \begin{align*} \forall i=0,\ldots,n\quad \int_0^1 x^i G(x)dx &=\int_0^1 x^i P_a(x)dx \\ &=\sum_{j=0}^n a_j \left(\int_0^1 x^{j+i}dx\right)\\ &=\sum_{j=0}^n a_j\times \frac{1}{1+i+j} \end{align*} We can rewrite these last equalities in matrix form as: \begin{equation*} \begin{pmatrix} 1 & \frac{1}{2} & \frac{1}{3} & \ldots & \frac{1}{n+1}\\ \frac{1}{2} & \frac{1}{3} & \frac{1}{4} & \ldots & \frac{1}{n+2}\\ \frac{1}{3} & \frac{1}{4} & \frac{1}{5} & \ldots & \frac{1}{n+3}\\ \vdots & & & &\vdots\\ \frac{1}{n+1}& \frac{1}{n+2} & \frac{1}{n+3} & \ldots & \frac{1}{2n+1} \end{pmatrix} \begin{pmatrix} a_0\\a_1\\a_2\\ \vdots \\ a_n \end{pmatrix}= \begin{pmatrix} \int_0^1 G(x)dx\\ \int_0^1 x G(x)dx\\ \int_0^1 x^2 G(x)dx\\ \vdots\\ \int_0^1 x^n G(x)dx \end{pmatrix} \end{equation*} whence you can identify $M$, $a$ and $B$. Note that as Ana pointed out, $M$ is the Hilbert matrix.
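For the record, these normal equations are easy to use numerically. A minimal Python sketch (my own illustration; the choice $G = \exp$ and degree $n = 3$ are arbitrary), building $M$ and $B$ and solving $Ma = B$:

```python
import numpy as np
from scipy.integrate import quad

def l2_poly_coeffs(G, n):
    """Least-squares polynomial of degree n on [0,1]: solve M a = B, where
    M[i,j] = 1/(i+j+1) (the Hilbert matrix) and B[i] = int_0^1 x^i G(x) dx."""
    M = np.array([[1.0 / (i + j + 1) for j in range(n + 1)] for i in range(n + 1)])
    B = np.array([quad(lambda x, i=i: x**i * G(x), 0, 1)[0] for i in range(n + 1)])
    return np.linalg.solve(M, B)    # coefficients a_0, ..., a_n

a = l2_poly_coeffs(np.exp, n=3)
xs = np.linspace(0, 1, 200)
Pa = sum(c * xs**k for k, c in enumerate(a))
print("max |G - P_a| on [0,1]:", np.abs(np.exp(xs) - Pa).max())
```

One practical caveat: the Hilbert matrix is notoriously ill-conditioned, so for larger $n$ one would expand in an orthogonal basis (e.g., shifted Legendre polynomials) rather than solving against the monomials directly.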
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 41, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8236395716667175, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/278935/can-a-sum-of-square-roots-be-an-integer
# Can a sum of square roots be an integer?

Can a sum of a finite number of square roots of integers be an integer? If yes, can a sum of two square roots of integers be an integer? The square roots need to be irrational.

- Like $\sqrt{9} + \sqrt{16} = 7$? – Jonathan Christensen Jan 14 at 23:07
- If you choose only integers that are squares of other integers this will always work, since their square roots are integers and the sum of a finite number of integers is an integer. – Fred Jan 14 at 23:10
- Ok, I forgot to say I wanted both square roots to be irrational; I know the sum of integers is an integer. – Jorge Fernández Jan 14 at 23:18

## 4 Answers

I think this link is a pretty good answer to your question. However, it might be at a level which is too advanced for you, since this is a pretty natural question to ask relatively early on in life, but it takes some significantly more difficult mathematics to prove. The direct yes/no answer to the question is "Yes, but only if the numbers inside the square roots were already perfect squares," or equivalently, "If you've already done all the simplifying that you can do, then no."

At least there's an elementary way to see that if $\sqrt{a} + \sqrt{b}$ is an integer, then $a$ and $b$ are perfect squares. Suppose $\sqrt{a} + \sqrt{b} = c\in\mathbb{Z}.$ If $c=0$ the result is trivial. Otherwise, squaring both sides we get that $$a + b + 2\sqrt{ab} = c^2$$ and therefore $ab$ must be a perfect square. Let's say $ab = d^2$. Then $a=\frac{d^2}{b}$ and \begin{align*}\frac{d}{\sqrt{b}} + \sqrt{b} &= c\\ d + b &= c\sqrt{b}, \end{align*} so $b$ is a perfect square, and $a$ must be as well.

- Nice, this proves that it can't be an integer. But what about the second problem? – Jorge Fernández Jan 14 at 23:20
- Sadly this argument doesn't generalize in an obvious way, since squaring $\sqrt{a}+\sqrt{b}+\sqrt{c}$ doesn't help. See Eric's answer for the fully general proof. – user7530 Jan 14 at 23:29
- Squaring will work for up to 4 square roots. For example, take squares on both sides of $\sqrt{a} + \sqrt{b} = n - \sqrt{c}$. – Calvin Lin Jan 15 at 0:25
- $b$ is a perfect square... or $c$ is zero. – Hurkyl Jan 15 at 1:11
- Sure. I'll add a remark. – user7530 Jan 15 at 2:10

Yes. For instance, $8$ has two distinct square roots: $\sqrt 8$ and $-\sqrt 8$. These add to zero, which is an integer. The same thing happens with higher-order roots in the complex plane. When we add the roots of a number together, we get zero. This is because they form equally distributed points on the unit circle in the complex plane, and so, if we regard them as vectors, we can readily see that they cancel out under addition.
- I didn't downvote, but your answer does not address the question. The OP asked whether the statement "$a_1, \dots , a_n \in \mathbb{Z} \implies \sum_{i=1}^n \sqrt{a_i} \notin \mathbb{Z}$" is true or not. Since $-\sqrt{8}$ is never the square root of an integer, your answer sheds no light on the situation. – JavaMan Jan 15 at 23:23
- @JavaMan That is false. $-\sqrt 8$ is one of the two square roots of $8$, which is an integer. $(-\sqrt 8)\times(-\sqrt 8) = 8$. The question, in its present form, does not rule out negative roots. – Kaz Jan 15 at 23:27

Suppose that $a,b,\sqrt a+\sqrt b\in\mathbb Z$. Then $(\sqrt a+\sqrt b)(\sqrt a-\sqrt b)=a-b\in\mathbb Z$, so $\sqrt a-\sqrt b=\frac{a-b}{\sqrt a+\sqrt b}\in\mathbb Q$. Therefore, $\sqrt a-\sqrt b$ is an algebraic integer and rational; thus, $\sqrt a-\sqrt b\in\mathbb Z$. Next, $(\sqrt a+\sqrt b)+(\sqrt a-\sqrt b)=2\sqrt a\in\mathbb Z$ and $(\sqrt a+\sqrt b)-(\sqrt a-\sqrt b)=2\sqrt b\in\mathbb Z$. Thus, $\sqrt a$ and $\sqrt b$ are algebraic integers and rational, therefore $\sqrt a,\sqrt b\in\mathbb Z$. Thus, $a,b,\sqrt a+\sqrt b\in\mathbb Z\Rightarrow\sqrt a,\sqrt b\in\mathbb Z$.
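The two-square-root case is easy to check exhaustively with exact integer arithmetic. A small Python sketch (my own illustration; the search bound is arbitrary) uses the equivalence $\sqrt a + \sqrt b = c \iff 4ab = (c^2-a-b)^2$ with $c^2 \geq a+b$, which follows from squaring as in the answers above:

```python
import math

def sqrt_sum_is_integer(a, b):
    """Exact test for sqrt(a) + sqrt(b) being a (nonnegative) integer, a, b >= 0."""
    c = round(math.sqrt(a) + math.sqrt(b))   # the only candidate integer value
    return c * c >= a + b and 4 * a * b == (c * c - a - b) ** 2

def is_square(n):
    return math.isqrt(n) ** 2 == n

# The sum is an integer exactly when both a and b are perfect squares:
for a in range(200):
    for b in range(200):
        assert sqrt_sum_is_integer(a, b) == (is_square(a) and is_square(b))
print("verified for 0 <= a, b < 200")
```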
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 31, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9207521677017212, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/50066/list
## Return to Answer

This is not exactly what you wanted, but in algebraic geometry it is often easier to prove something for a particular object by considering the moduli space parametrizing such objects. The example I have in mind is the following: Suppose you picked some random elliptic curve over $\mathbb{Q}$ and were wondering if it has a rational point of order $11$. It is possible to answer this for any particular curve with some computational facility, but we don't have to! We have Mazur's Theorem, which says that the answer is "no, it doesn't." Mazur does this essentially by showing that the corresponding moduli space of elliptic curves with a choice of $11$-torsion point (which is a nice modular curve) has no rational points: so you can never have an elliptic curve over $\mathbb{Q}$ with an $11$-torsion point.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9695377349853516, "perplexity_flag": "head"}
http://mathoverflow.net/questions/66650?sort=votes
## Connectifications?

Like many of my questions, this question is actually aimed at $p$-adic analysis. One of the main obstacles in doing analysis $p$-adically is that $\mathbb{Q}_p$ is totally disconnected. From previous answers and reading I learned that one tries to circumvent these problems, and the results are things like "rigid analytic spaces" or "Berkovich spaces".

Then I recently thought: OK, if $\mathbb{Q}_p$ is not connected, why not do what is often done when some property does not hold: incomplete -> take the completion; not compact -> compactify; not algebraically closed -> take the algebraic closure. So why not just connectify $\mathbb{Q}_p$? Now my question is twofold.

1. Searching for "connectify" or "connectification" I only got a few results, which seem to be from point-set topology without applications in other areas of mathematics. Why is that so? Is connectification somehow badly behaved, or does it not exist in general? By connectification I would understand something defined by a universal property, as with the algebraic closure, the completion, or the Stone–Čech compactification. Does it make sense at all to define connectification like that?

2. Coming back to the case of $\mathbb{Q}_p$: I guess that one also wants the connectification of $\mathbb{Q}_p$ to be a field. Is this maybe not satisfied, or are there other reasons not to consider the connectification of $\mathbb{Q}_p$?

- A connectification of a space $X$ is just a space $Y$ that is connected and contains $X$ as a dense subspace. It's not minimal in a sense that would imply some universal property like a completion. If $X$ has compact open proper subsets, it cannot have a Hausdorff connectification. This applies to the Cantor set, e.g. – Henno Brandsma Jun 1 2011 at 12:45
- But what if we defined it by such a universal property? – wood Jun 1 2011 at 13:15
- @wood: please see my elaboration on Qiaochu's answer. – Todd Trimble Jun 1 2011 at 13:46

## 3 Answers

After seeing wood's last comment (comment #2 under his question), I've decided to add a few words (a bit too many for a comment) which hopefully make clear the force of Qiaochu's answer. Generally speaking, the categorical meaning of "completion" refers to taking a left adjoint of a full inclusion of categories; in our situation we are considering the full subcategory $$\text{Conn} \hookrightarrow \text{Top}$$ from connected spaces to general topological spaces. Examples where such completions exist are: the inclusion of complete metric spaces into the category of metric spaces and continuous maps with Lipschitz constant 1 (Cauchy completion), the inclusion of compact Hausdorff spaces into the category of all spaces (Stone–Čech compactification), and the inclusion of fields into the category of integral domains and injective ring maps (field of fractions construction).

(There's a bit of fine print here: sometimes one also demands that the unit of the adjunction, here the universal map of an object to its completion, be injective. For example, the inclusion of abelian groups into the category of groups does have a left adjoint (the abelianization), but this isn't injective. Similarly, to get the map from a space to its Stone–Čech compactification to be injective, one should really consider the inclusion of compact Hausdorff spaces in the category of completely regular spaces.
Sometimes the suffix -ization or -ification is used in cases where the unit is not injective.)

The salient point behind Qiaochu's answer is that a left adjoint, if it exists, must preserve coproducts (or in fact colimits generally). Now, supposing that the left adjoint to the inclusion $\text{Conn} \to \text{Top}$ exists, it would first of all take a one-point space to a one-point space (the proof is easy), and it would take a coproduct of two one-point spaces in $\text{Top}$, viz. a two-point discrete space, to a coproduct of two one-point spaces in $\text{Conn}$. But Qiaochu's example shows this cannot possibly exist.

The only remedy that I can think of in this situation is to change things up a bit, in a way that I don't think will be at all useful to the OP. There are for example situations where an algebraic structure on a space forces it to be connected (and then some), where one can construct the corresponding free algebraic structures to get a left adjoint to the (non-full) forgetful functor mapping to $\text{Top}$. The most obvious example might be to consider spaces equipped with a contraction: consider spaces $X$ equipped with a basepoint $x_0: 1 \to X$ and with an action $\alpha: [0, 1] \times X \to X$ of the multiplicative monoid $[0, 1]$, such that $\alpha(0, x) = x_0$ for all $x \in X$. Here the free algebra on a general space $Y$ is just the cone $CY$ with the obvious algebraic structure. But this isn't likely to be useful to the OP.

- Thanks. I really like that answer. – wood Jun 3 2011 at 9:15

The two-point discrete space already doesn't have a (universal) connectification, in the sense that two points don't have a coproduct in the category of connected spaces. If $X$ were such a coproduct, then given any pair of points in a connected space $C$ there would have to be a unique compatible map $X \to C$. But letting $C = [0, 1]$ and the points be $0, 1$, we can easily twist any such map by a homeomorphism $C \to C$ fixing $0$ and $1$.

- Indeed, this $f: X \to [0, 1]$ would have to be surjective if the points are $0, 1$, using the fact that the image of a connected space is connected. – Todd Trimble Jun 1 2011 at 12:37
- You could try the space $\{0,\#,1\}$ with the topology whose open sets are $\{\#\}$, $\{0,\#\}$, $\{\#,1\}$, and $\{0,\#,1\}$ (together with $\emptyset$). That's a connected space. It's the quotient of $[0,1]$ by the equivalence relation that collapses all the points of $(0,1)$ to a single point $\#$... – André Henriques Jun 1 2011 at 16:00
- @Andre: But of course it can't have the universal property. In fact, the map $\{0, 1\} \to [0, 1]$ does not have a continuous extension to a map $\{0, \#, 1\} \to [0, 1]$. – Todd Trimble Jun 1 2011 at 16:13

They do not always exist (I believe the Sorgenfrey line does not have one, e.g.), and if they exist they are not very well-behaved. This might be a relevant paper.

- What? $\mathbb Q_p$ means the completion of $\mathbb Q$ in the $p$-adic metric, right? So it is locally compact, but the irrationals aren't. – Gerald Edgar Jun 1 2011 at 13:12
- I thought it was nowhere locally compact. – Henno Brandsma Jun 1 2011 at 13:17
- Well, Henno, I'm afraid you thought wrong. See en.wikipedia.org/wiki/P-adic_number#Properties – Todd Trimble Jun 1 2011 at 13:54
- Thanks, I removed the remarks. – Henno Brandsma Jun 1 2011 at 15:20
- A comment to Henno's comment: Adam Emeryk, Władysław Kulpa,
The Sorgenfrey line has no connected compactification, Comm. Math. Univ. Carolinae 18 (1977), pp. 483–487. However, it seems that its square may have one. – Tomek Kania Jun 1 2011 at 18:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 39, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9286079406738281, "perplexity_flag": "head"}
http://mathoverflow.net/questions/71137/regular-conditional-probability-given-a-natural-filtration-of-a-stochastic-proces/71141
## Regular Conditional Probability given a natural filtration of a stochastic process

OK, this is kind of a re-post, but I think I can clarify the question more, so it's worth a shot.

Consider a real-valued process $(X_t)_{t \leq T}$, càdlàg on a probability space $(\Omega, (\mathcal{F}^\circ_t)_{t \leq T}, \mathbb{P})$, where $\mathcal{F}^\circ_t=\sigma(X_s;\,s\leq t)$ is the uncompleted natural filtration generated by $X_t$. Unfortunately $X_t$ neither has independent increments, nor is it Markov. Since $\Omega$ is a Polish space, $\mathcal{F}^\circ_T$ and also $\mathcal{F}^\circ_t$ are countably generated, so we know there exists a regular version of the conditional probability of $\mathbb{P}$ for any fixed $t$ for $\mathbb{P}$-a.a. $\omega$, i.e. for fixed $t$, $\mathbb{P}(\cdot|\mathcal{F}_t)(\omega)$ is a probability measure for a.a. $\omega$. Hence we know that for all $t\in [0,T]\cap \mathbb{Q}$ we can find a regular conditional probability for a.a. $\omega$, depending on $t$. In words: given almost any path of the process up to time $t$, we can deduce the probability of events, taking that information into account. On the remaining $\omega$'s, define some meaningless measure, so we have a measure for all $\omega$.

How can I extend this to all $t$ in a reasonable way? Reasonable means: there is one null set $N$ such that for all $t$, $\mathbb{P}(\cdot|\mathcal{F}_t)(\omega)$, $\omega\in N^c$, is a measure.

Anybody seen anything like this? I have read something like this only for Markov and Feller processes using infinitesimal generators, but this cannot be carried over one to one, because we do not have a transition semigroup. Maybe I have a deep misunderstanding here. Grateful for any objections, hints and comments.

-

## 1 Answer

Let's assume that we are working with the canonical probability space $\Omega = D(\mathbb R)$ of càdlàg functions, and $\mathbb P$ is the law of the process. I would doubt that there is a satisfactory answer at the level of maximal generality you've stated. At the very least, the measure $\mathbb P$ should be Radon. There are extremely general results on the existence of RCPs for Radon measures (cf. Leão, Fragoso and Ruffino, Regular conditional probability, disintegration of probability and Radon spaces). The RCP is a measure-valued function $P : [0,T] \times \Omega \to \mathcal M(\Omega)$ such that for $\mathbb P$-almost every $\omega$, the measure $P(t,\omega, \cdot)$ is a version of $\mathbb P(\cdot|\mathcal F_t)$.

Do you want the function $(t, \omega) \mapsto P(t,\omega,\cdot)$ to simply exist and be measurable? If so, this can be done in the wide generality stated above; see Leão et al.

Recently, I have needed more regularity properties for RCPs, namely, continuity. Consider the space $\mathcal M(\Omega)$ of Radon measures on $\Omega$ equipped with the topology of weak convergence of measures. We say that the RCP is a continuous disintegration (or continuous RCP) when it satisfies the following property: $$\mbox{if $\omega_n \to \omega$, then the measures $P(t,\omega_n,\cdot)$ converge weakly to $P(t,\omega,\cdot)$.}$$ If the law is Gaussian, then my preprint Continuous Disintegrations of Gaussian Processes gives a necessary and sufficient condition for the law $\mathbb P$ to have a continuous disintegration. I haven't thought about this in the case of càdlàg functions, but I'm pretty sure that this will extend easily. Note that this is just for fixed $t$.
To show that the map $(t, \omega) \mapsto P(t,\omega,\cdot)$ is jointly continuous, a little more work is needed. As part of a larger project, Janek Wehr and I have general results in this direction for stationary, Gaussian processes. If this is what you need, I'm happy to discuss this with you further.

Open Question: If the law $\mathbb P$ is not Gaussian but at least is log-Sobolev, then all the same results should hold. This is because log-Sobolev measures satisfy very strong concentration-of-measure properties. I have some ideas how to do this, but I haven't worked out the details because I've been busy with other projects. If anybody is interested in collaborating on extending this work to the log-Sobolev case, please contact me.

-

I was not as precise as I hoped I was: the situation is indeed much simpler. $\mathbb{P}$ is just the law of $X_T$. The question is what I can say about the outcome of $X$ at time $T$, given the path up to $t$. I want to compute $\mathbb{P}(\{f(X_T)\in \cdot \}|F_t)$ using $(\mathbb{P}|F_t)(\cdot,\omega)$ as a RCP. One may not need the path space $D$ here, though it might be the way to prove it. What I worry about is that you always have to declare some nonsense measure on a null set of $\Omega$ for each $t$, and you have no control over which $\omega$ that concerns for some $t$. – Pierre Jul 24 2011 at 21:20

You have control on a dense subset of $[0,T]$, because then it is only on a null set of $\Omega$ that you have to declare the nonsense measure. So I thought some kind of continuity (from the right?) might help, because we have càdlàg paths. – Pierre Jul 24 2011 at 21:30
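The simplest example of a continuous disintegration is the Gaussian conditioning formula: for a jointly Gaussian vector, the conditional law given one coordinate is again Gaussian, with parameters depending continuously on the conditioning value. A minimal numerical sketch (my own illustration, assuming numpy is available; the covariance matrix is an arbitrary choice, not taken from the thread):

```python
import numpy as np

# Jointly Gaussian (X, Y) with mean zero and covariance C; condition Y on X = x.
# The conditional law is N(m(x), s) with m(x) = C_yx / C_xx * x and
# s = C_yy - C_yx**2 / C_xx, so x -> N(m(x), s) is a continuous disintegration.
C = np.array([[2.0, 0.8],
              [0.8, 1.0]])

def conditional_law(x):
    m = C[1, 0] / C[0, 0] * x              # conditional mean, continuous (linear) in x
    s = C[1, 1] - C[1, 0]**2 / C[0, 0]     # conditional variance, independent of x
    return m, s

for x in (0.0, 0.1, 0.1001):
    print(x, conditional_law(x))           # nearby conditioning values give nearby laws
```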
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 51, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9461624026298523, "perplexity_flag": "head"}
http://mathoverflow.net/questions/93462/the-facial-structure-of-the-convex-hull-of-a-family-of-characteristic-functions
## The facial structure of the convex hull of a family of characteristic functions

Let $S$ be a finite set and let $\mathcal{A} \subset\mathcal{P}(S)$ be a family of subsets of $S$. Consider the convex polytope spanned by the characteristic functions of members of $\mathcal{A}$:
$$C=C_\mathcal{A}:=\operatorname{co}\{ \mathbf{1}_A \ : \ A\in\mathcal{A} \}\ .$$
It's easy to see that every $\mathbf{1}_A$, for $A\in\mathcal{A}$, is extremal in $C_\mathcal{A}$ (indeed, if we have a convex combination $\mathbf{1}_A= \sum_{B\in\mathcal{A}}\lambda_B\ \mathbf{1}_B$, then $B\subset A$ for any $B$ corresponding to a coefficient $\lambda_B > 0$, so $\sum_{B\in\mathcal{A}}\lambda_B\left(|A|-|B| \right )=0$, whence $\lambda_A =1$ is the only non-zero coefficient of the convex combination). Therefore, the vertex set of $C_\mathcal{A}$ is exactly $\{ \mathbf{1}_A \ : \ A\in\mathcal{A} \}$, which we may identify abstractly with $\mathcal{A}$ itself.

Question 1. How can one describe the complete abstract facial structure of $C$ in terms of the combinatorics of $\mathcal{A}$?

I suspect that for a general family $\mathcal{A}$ this task may prove to be quite hard. If so, I'd like to see known examples of polytopes obtained this way, especially when $\mathcal{A}$ enjoys special regularity properties, such that the skeleton of $C_\mathcal{A}$ admits a simple description. For instance: let $S$ be the $r$-th Cartesian power of the set $[n]:=\{1,2,\dots,n \}$, $S=[n]^r$, and let
$$\mathcal{A}:=\{B^{r} \ : \ B\subset [n] \}\ .$$

Question 2. Which polytope is the corresponding $C_n^r:=C_\mathcal{A}$?

The present problem, especially in the latter example, has been suggested to me by a recent interesting question, which is related to the case $r=2$ (the analogue of the problem described there, where one considers all intersections of $r$ sets extracted from a given family of $n$, leads to the above polytope $C_n^r\subset \mathbb{R}^{n^r}$).

Update, April 13, 2012. Thanks to the very interesting references given so far, I see that my naive suspicions about the difficulty of question 1 were after all right. So I would like to focus the attention on question 2: what can be said about $C_n^r$, at least for $r=2$? Can we at least count the number $f_k$ of $k$-dimensional faces of $C_n^2$: which polynomial sequence do they define, $P_n(x):=\sum_{k\ge 0} f_k x^k$?

-

The combinatorics of $0/1$ polytopes is not as simple as you'd expect from looking at low dimensional examples. It would be nice if you had a specific question in mind for $C^r_n$. – Gjergji Zaimi Apr 7 2012 at 21:37

Here is a gentle introduction: arxiv.org/abs/math/9909177 – Gjergji Zaimi Apr 7 2012 at 21:38

Thank you for your reference! – Pietro Majer Apr 7 2012 at 22:39

## 1 Answer

In addition to the notes by Ziegler referenced in the comment above, there are a few general classes of 0/1 polytopes where one can say something about the facial structure. One example is that of the independent set polytope $P_{I(M)}$ for a matroid $M$, where one uses the independent sets of the matroid to define the characteristic vectors. In this case, there is a well-known hyperplane description of $P_{I(M)}$ using the rank function of the matroid. Also, one can describe the facets of the polytope using the combinatorial structure of the matroid.
See for example Jon Lee's book titled A First Course in Combinatorial Optimization, section 1.7. Another example of polytopes of this type is given by permutation polytopes, where one considers the convex hull of a collection of permutation matrices arising as the representation of some finite group. The paper On Permutation Polytopes, by Baumeister, Haase, Nill, and Paffenholz, http://arxiv.org/abs/0709.1615, contains a nice introduction to this topic and investigations of the type you suggest.

-
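The extremality argument at the top of the question is easy to test numerically in small cases. A hedged sketch (my own illustration, assuming numpy and scipy are available): for $n = 3$, $r = 2$ it builds the characteristic vectors of $\mathcal{A} = \{B^2 : B \subseteq [3]\}$ in $\mathbb{R}^9$ and certifies each one as a vertex of $C_3^2$ by checking, via a feasibility LP, that it is not a convex combination of the other points.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

n, r = 3, 2
S = list(itertools.product(range(n), repeat=r))      # the ground set [n]^r

def char_vec(B):
    cells = set(itertools.product(B, repeat=r))      # B^r as a subset of S
    return np.array([1.0 if s in cells else 0.0 for s in S])

subsets = [B for k in range(n + 1) for B in itertools.combinations(range(n), k)]
V = [char_vec(B) for B in subsets]                   # the points 1_{B^r}

for i, v in enumerate(V):
    others = np.array([w for j, w in enumerate(V) if j != i]).T   # columns: the other points
    A_eq = np.vstack([others, np.ones(others.shape[1])])          # sum_j lam_j w_j = v, sum_j lam_j = 1
    b_eq = np.append(v, 1.0)
    res = linprog(np.zeros(others.shape[1]), A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    print(subsets[i], "vertex" if not res.success else "NOT a vertex")
```

Infeasibility of the LP means the point cannot be written as a convex combination of the remaining points, i.e. it is a genuine vertex, in line with the counting argument in the question.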
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 41, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9202255606651306, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/40134-applications-calculus.html
# Thread:

1. ## Applications of Calculus

Question: The acceleration of a particle is given by a = 3x+1 (where a is acceleration, x is displacement from the origin). If the particle is initially at the origin and moving at a velocity of $15ms^{-1}$, find its velocity after 3 s.

What I've done:

$a = v\frac{dv}{dx} = 3x -1$

$\frac{1}{2} v^2 = \frac{3}{2}x^2 + x + c$

when t=0, x=0, v = 15

$v^2 = 3x^2 + 2x + 225$

I'm not sure where to go on from here. I know if I find t in terms of x, I can substitute it into that equation. I've tried $\frac{dx}{dt} = \sqrt{3x^2 + 2x + 225} \Rightarrow \frac{dt}{dx} = \frac{1}{\sqrt{3x^2 + 2x + 225}}$ but it doesn't work well.

2. Originally Posted by Gusbob
Question: The acceleration of a particle is given by a = 3x+1 (where a is acceleration, x is displacement from the origin). If the particle is initially at the origin and moving at a velocity of $15ms^{-1}$, find its velocity after 3 s. [snip]
I think you are overcomplicating things. Consider $v(t)=\int{a(t)\,dt}+v_0$

3. Thank you. I've tried that before, but didn't derive the $v^2$ equation first. But now that I have it, it works.

4. Originally Posted by Mathstud28
I think you are overcomplicating things. Consider $v(t)=\int{a(t)\,dt}+v_0$
Originally Posted by Gusbob
Thank you. I've tried that before, but didn't derive the $v^2$ equation first. But now that I have it, it works.
Just to make sure I wasn't ambiguous: $v_0$ is the constant of integration for the integration of a(t), so don't add an extra c; the c is $v_0$.

5. Originally Posted by Gusbob
Question: The acceleration of a particle is given by a = 3x+1 (where a is acceleration, x is displacement from the origin). [snip]
$a=\frac{dv}{dt}=3x-1$

so differentiate this again:

$\frac{d^2v}{dt^2}=3v$

which is a constant coefficient homogeneous ODE, which you should be able to solve, to get:

$v(t)=Ae^{\sqrt{3}t}+Be^{-\sqrt{3}t}$

and so:

$x(t)=\frac{A}{\sqrt{3}}e^{\sqrt{3}t}-\frac{B}{\sqrt{3}}e^{-\sqrt{3}t}+C$

Now putting both of these back into the equation $\frac{dv}{dt}=3x-1$ shows that $C=\frac{1}{3}$. Now apply the initial conditions to find $A$ and $B$, and then it's simple.

RonL

6. Originally Posted by Mathstud28
I think you are overcomplicating things. Consider $v(t)=\int{a(t)\,dt}+v_0$
What you mean is: $v(t)=\int_0^t {a(\tau)~d\tau}+v_0$, and how do you proceed from this point?

RonL

7. Originally Posted by CaptainBlack
$a=\frac{dv}{dt}=3x-1$ so differentiate this again: $\frac{d^2v}{dt^2}=3v$ which is a constant coefficient homogeneous ODE, which you should be able to solve, to get: $v(t)=Ae^{\sqrt{3}t}+Be^{-\sqrt{3}t}$
Thank you for that, but I have no idea what a "constant coefficient homogeneous ODE" is.
And for Mathstud28's method, I integrated a = 3x + 1 with respect to t, treating 3x as a constant. I then substituted t = 3 to get a relation between v and x at that time, then solved simultaneously with the equation I derived. Is that a valid method, or is it an incredible coincidence that I got the right answer to 1 d.p. (16.1)?

8. Originally Posted by Gusbob
Thank you for that, but I have no idea what a "constant coefficient homogeneous ODE" means.
I explain this in my Differential Equations Tutorial. Read the section on Undetermined Coefficients (Post #6), and see if this helps out.

9. Originally Posted by Gusbob
[snip] And for Mathstud28's method, I integrated a = 3x + 1 with respect to t, treating 3x as a constant. [snip]
Sorry, but that's completely wrong. x is the displacement of the object. I assume that the object is actually moving, and so x is changing. Therefore ....... x is NOT a constant and CANNOT be treated as one.

This is not the only error. Correct me if I'm wrong, but doesn't the original question say a = 3x + 1? Why then do you say a = 3x - 1 in your very first line of working?

No offence, but this whole thread is a comedy of errors as a consequence of post #2 (and to a lesser extent the confusion over what a is actually meant to be). CaptainB's approach is correct and efficient. But you don't have the background to use this approach (not your fault). So:

The bottom line is that you must first use one of the expressions $a = \frac{d}{dx} \left( \frac{v^2}{2} \right)$ or $v \frac{dv}{dx}$ to get v = v(x). You did this but then lost your way. You then have to do the following: dx/dt = v(x) => dt/dx = 1/v(x). Integrate to get x = x(t). dx/dt gives you v as a function of time. Substitute t = 3 and you're done.

10. Originally Posted by Gusbob
Thank you for that, but I have no idea what a "constant coefficient homogeneous ODE" is. And for Mathstud28's method, I integrated a = 3x + 1 with respect to t, treating 3x as a constant. I then substituted t = 3 to get a relation between v and x at that time, then solved simultaneously with the equation I derived. Is that a valid method, or is it an incredible coincidence that I got the right answer to 1 d.p. (16.1)?
x is not a constant, so no, it's not valid.

RonL
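For what it's worth, the route prescribed in post #9 can be checked numerically. Below is a minimal sketch (my own illustration, assuming the stated a = 3x + 1 with x(0) = 0 and v(0) = 15), integrating the system dx/dt = v, dv/dt = 3x + 1 up to t = 3 with RK4. Since the general solution contains $e^{\sqrt{3}t}$, v(3) comes out of order $10^3$; on the other hand $\sqrt{3(3)^2 + 2(3) + 225} \approx 16.06$, so the quoted answer 16.1 is the speed when the displacement x reaches 3, which suggests the "3 s" in the question and "x = 3" were conflated somewhere along the line.

```python
# RK4 for x' = v, v' = 3x + 1, with x(0) = 0, v(0) = 15.
def deriv(x, v):
    return v, 3.0 * x + 1.0

x, v, h = 0.0, 15.0, 1e-4
for _ in range(30000):        # 30000 steps of size 1e-4 -> t = 3
    k1x, k1v = deriv(x, v)
    k2x, k2v = deriv(x + h/2 * k1x, v + h/2 * k1v)
    k3x, k3v = deriv(x + h/2 * k2x, v + h/2 * k2v)
    k4x, k4v = deriv(x + h * k3x, v + h * k3v)
    x += h/6 * (k1x + 2*k2x + 2*k3x + k4x)
    v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)

print(v)   # roughly 1.4e3, matching the closed-form solution of the ODE
```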
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 34, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9604511857032776, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/112657/maximal-subgroups-of-finite-simple-groups/112713
## maximal subgroups of finite simple groups

Is it possible to determine the structure of maximal subgroups of finite simple groups? (Even in special cases such as minimal simple groups, alternating groups, ...)

-

What do you mean by "the structure of" them? – Ricky Demer Nov 17 at 9:23

If you take the minimal permutation representation of a finite simple group, the stabilizer of a point will be a maximal size subgroup. See mathoverflow.net/questions/16858/… – Agol Nov 29 at 23:51

## 7 Answers

The maximal subgroups of $A_n$ are given by the O'Nan-Scott Theorem. They lie in one of the following classes:

1) $A_n \cap (S_{n-k} \times S_k)$, that is the stabiliser of a $k$-set.

2) $A_n \cap (S_a \wr S_b)$ where $n=ab$, that is the stabiliser of a partition.

3) $A_n\cap AGL(d,p)$ where $n=p^d$ for some prime $p$.

4) $A_n \cap (S_m \wr S_k)$ where $n=m^k$, that is the stabiliser of a Cartesian power.

5) $A_n \cap (T^{k+1}.(Out(T) \times S_{k+1}))$ where $n=|T|^k$ and $T$ is a finite nonabelian simple group.

6) an almost simple group acting primitively on $n$ points.

For classical groups the main result is Aschbacher's Theorem in the paper pointed out by Rivin. More details of the structure of the subgroups are given in the book by Kleidman and Liebeck. For exceptional groups of Lie type there are papers by Liebeck and Seitz as noted by Barnea. Both these results and Aschbacher's Theorem have the same philosophy as the O'Nan-Scott Theorem, namely that a maximal subgroup is either one of a small number of natural families that are usually stabilisers of some geometric structure, or is almost simple.

For sporadic simple groups, all the information is in the online Atlas. They are all known except for the monster. In this case there are a couple of possibilities for maximal subgroups where it is not known if they are actually subgroups.

-

Thanks a lot for your comprehensive answer. – liobei Jan 13 at 21:23

In general the question is too ambitious, but more can be said than might be expected. For example, it follows from results of J.G. Thompson that if $M$ is a maximal subgroup of a non-Abelian finite simple group $G$ and $M$ is nilpotent, then $M$ is a Sylow $2$-subgroup of $G$ and is non-Abelian (this does occur "in nature", for example for $G = {\rm PSL}(2,17)$). The proof by Feit and Thompson of the solvability of finite groups of odd order proceeds by analyzing the structure of maximal subgroups of a minimal non-Abelian simple group of odd order, and the interplay between them.

Perhaps the person who has exploited the structure of maximal subgroups of finite simple groups most is H. Bender, and this has given rise to the term "the Bender method". A remarkable general result of Bender, which has had many extensions and applications, is that if $G$ is a non-Abelian finite simple group with distinct maximal subgroups $A$ and $B$ such that the generalized Fitting subgroups of $A$ and $B$ satisfy $F^{*}(A) \leq B$ and $F^{*}(B) \leq A$, then there is a prime $p$ such that $F^{*}(A)$ and $F^{*}(B)$ are both $p$-groups (again, the exceptional situation does occur in nature, for example if $G$ is a simple group of Lie type of characteristic $p$ and rank greater than $1$).
However, if $A$ and $B$ are both solvable, then $p = 2$ or $3$. It does, however, appear that there are situations in the study of finite simple groups where the Bender method is not as easily applicable as the method of signalizer functors.

-

Thank you so much! – liobei Nov 17 at 14:15

I think the basic paper is actually this one: Aschbacher, Michael. "On the maximal subgroups of the finite classical groups." Inventiones mathematicae 76.3 (1984): 469-514.

-

I am not an expert, but I believe lots of work was done on the case of groups of Lie type by Seitz, Liebeck and others. I think the basic paper is "The maximal subgroups of classical algebraic groups" by Seitz. These results are of great importance in the development of many of the random generation results that were found in the last 20 years. But I will leave it to the experts to add more information.

-

The Atlas of Finite Group Representations has information on the maximal subgroups of some particular finite simple groups. This is at least a handy place to start. See http://brauer.maths.qmul.ac.uk/Atlas/v3/lin/L253/#maxes for an example. The front page is at http://brauer.maths.qmul.ac.uk/Atlas/v3/

-

The recent book of Malle and Testerman, Linear algebraic groups and finite groups of Lie type, has several chapters on the subject. From the MR review: some important recent developments are treated here for the first time. For instance, the authors describe the classification of the maximal subgroups of simple algebraic groups, and this is used in their subsequent analysis of the subgroup structure of finite groups of Lie type. (etc. The review goes on to describe the Aschbacher/Liebeck-Seitz results mentioned in other replies.)

-

Every finite group can be broken down into elements consisting of prime cycle permutations. Just go up the lattice. Every nontrivial subgroup has to contain at least one of these prime cycle permutations: http://oeis.org/A186202

-

This OEIS link is essentially about the number of minimal subgroups of the symmetric groups. The question, however, is about the structure (not just the number) of maximal (not minimal) subgroups of finite simple groups (not the symmetric groups). – Andreas Blass Nov 30 at 0:03
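Agol's comment about minimal permutation representations can be made concrete in the smallest case. A sketch assuming sympy is available: in the natural degree-5 action of $A_5$, a point stabiliser has order 12, i.e. it is a copy of $A_4$; this is the $k=1$ instance of class 1) in the first answer, and it is indeed a maximal subgroup of $A_5$.

```python
from sympy.combinatorics.named_groups import AlternatingGroup

G = AlternatingGroup(5)     # the smallest nonabelian simple group
H = G.stabilizer(0)         # stabiliser of a point in the natural action on {0,...,4}

print(G.order(), H.order())   # 60 12: H has index 5, the minimal permutation degree
# H is isomorphic to A_4, the stabiliser of a 1-set; it is maximal in A_5.
```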
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 36, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9141642451286316, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/180441/product-of-right-cosets-equals-right-coset-implies-normality-of-subgroup
# Product of right cosets equals right coset implies normality of subgroup

I cannot seem to find a way to prove that if $H$ is a subgroup of $G$ such that the product of two right cosets of $H$ is also a right coset of $H,$ then $H$ is normal in $G.$ (This is from Herstein, by the way.) Thank you.

-

There might be something missing from your question. What is your definition of the product of two cosets? Isn't it always a coset? – M Turgeon Aug 8 '12 at 22:27

For $a,b \in G,$ $HaHb = \{(h_1a)(h_2b) \mid h_1,h_2 \in H\},$ i.e. we want to show that if for any $a,b \in G$ we have $HaHb=Hc$ for some $c \in G,$ then $H$ is normal. – limac246 Aug 8 '12 at 22:31

@MTurgeon $G/H$ is a group only when $H$ is normal. If $H$ is not normal, multiplication of cosets is not well-defined. – Code-Guru Aug 8 '12 at 22:58

@limac246 What have you tried? I suggest starting with the definitions of a normal subgroup and coset multiplication. – Code-Guru Aug 8 '12 at 22:59

@Code-Guru I know this. My point is that the usual way to define multiplication shows that the product of two cosets is a "coset", e.g. $(aH)(bH)=(ab)H$. The point is the well-definedness; I was wondering if this is what the OP was trying to show. – M Turgeon Aug 8 '12 at 23:15

## 2 Answers

Hint. If $Ha^{-1}Ha$ is a right coset it must be $H$ because it contains the identity.

-

I'm not sure how this, as it is written, answers the OP's question: we know $H$ is a subgroup and we want to show that if a product of right cosets (the pointwise product, I guess, stemming from the group operation) is again a right coset then this subgroup is in fact normal. Now, this follows exactly from the above: $$\forall a\in G\,\forall x,y\in H\,\exists h\in H\,\,s.t.\,\,xa^{-1}ya=h\Longrightarrow a^{-1}ya=x^{-1}h\in H$$ and we're done. But how does the identity in $H$ help here? – DonAntonio Aug 9 '12 at 0:05

Hmmm... and still the above isn't complete (in my mind, of course), as the rightmost right coset doesn't have to be $H$; it could be $Hb$, say... – DonAntonio Aug 9 '12 at 0:07

@DonAntonio: this is a complete proof. If $Ha^{-1}Ha=H$, then $a^{-1}Ha=H$ and $H$ is normal. In particular, it is not necessary to assume that $HaHb=Hc$ for all pairs $(a,b)$, but only for pairs where $b=a^{-1}$. The reason we need the identity for $c$ is so that you get your $xa^{-1}ya = hc$ to actually be in $H$. If $c$ were not in $H$, then we would not conclude $H$ is normal, but rather reach a contradiction. – Jack Schmidt Aug 9 '12 at 0:17

Exactly my point, @JackSchmidt! How can we know a priori that $Ha^{-1}Ha=H$? This is the whole point. Of course, we can argue that since $Ha^{-1}Ha=Hc$, then for all $x,y\in H$ we have $xa^{-1}ya=hc$; in particular, if we choose $y=1\in H$ we get $xa^{-1}a=x=hc$, which forces $c\in H$ and thus $Hc=H$, as we know right cosets are a partition of $G$... hmm, perhaps this is what anon meant... yes, I think it is, and I didn't see clearly his hint, though I knew that taking $Ha\,,\,Ha^{-1}$ is the way to prove the claim... damn, what a nice though ethereal hint! +1 – DonAntonio Aug 9 '12 at 2:07

@DonAntonio It is a fact that the only coset of a given subgroup containing the identity is the subgroup itself. Another fact is that for any subset $S\subseteq G$ and subgroup $H\le G$, we have $SH\subseteq H\iff S\subseteq H$. (Same for $HS$.) In my opinion these should be standard exercises.
– anon Aug 9 '12 at 2:23

A (not quite as) short alternate proof: If $HaHb=Hc$, then $HaHb=Hab$ (both contain $ab$, and the right cosets of $H$ partition $G$). @anon's short proof chooses $b=a^{-1}$, but you can also choose $b=1$, since $$HaH = Ha \iff 1aH \subseteq Ha$$ Of course, to get equality we also have to use $$Ha^{-1}H =Ha^{-1} \iff a^{-1} H \subseteq Ha^{-1} \iff Ha \subseteq aH$$ In general, $HaHb=Hab \iff aHb \subseteq Hab$, so if we want $aH=Ha$ we choose $b=1$, and if we want $aHa^{-1}= H$ we choose $b=a^{-1}$. If the groups are finite, we don't even have to pay attention to $\subseteq$ versus $=$.

-
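As a sanity check of the statement being proved, the hypothesis and the conclusion can be compared by brute force in $S_3$, the smallest group with a non-normal subgroup. A small sketch in plain Python (my own illustration, not from the thread):

```python
from itertools import permutations

# Elements of S_3 as tuples; mul(p, q) means "apply q first, then p".
G = list(permutations(range(3)))

def mul(p, q):
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    q = [0] * 3
    for i, pi in enumerate(p):
        q[pi] = i
    return tuple(q)

def coset_condition(H):
    """True iff the product HaHb is a right coset of H for all a, b in G."""
    cosets = {frozenset(mul(h, a) for h in H) for a in G}
    return all(
        frozenset(mul(mul(h1, a), mul(h2, b)) for h1 in H for h2 in H) in cosets
        for a in G for b in G
    )

def is_normal(H):
    Hs = frozenset(H)
    return all(frozenset(mul(mul(g, h), inv(g)) for h in H) == Hs for g in G)

e = (0, 1, 2)
A3 = [e, (1, 2, 0), (2, 0, 1)]   # the alternating subgroup, normal
H2 = [e, (1, 0, 2)]              # generated by a transposition, not normal

for H in (A3, H2):
    print(coset_condition(H), is_normal(H))   # True True, then False False
```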
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 56, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9438112378120422, "perplexity_flag": "head"}
http://mathhelpforum.com/algebra/203941-what-do-i-need-do-equation.html
# Thread: 1. ## What do I need to do with this equation? 20x -10 = 6+12x Maths was never my best subject! Thanks! 2. ## Re: What do I need to do with this equation? 20x -10 = 6+12x You have to solve for x, so you need to isolate x, to do that, you can subtract 12x from both sides: $20x-10-12x =6+12x-12x$ and you will get -> $8x-10=6$, now add 10 to both sides and you get rid of that -10 on the left side and you will have -> $8x=6+10=16$, now you have 8x=16, you can divide both sides by 8 to get rid of that 8 next to x and you will get -> $x=\frac{16}{8}$ -> $x=2$. 3. ## Re: What do I need to do with this equation? Originally Posted by Sb2011 20x -10 = 6+12x add 10 and subtract 12x from both sides ... 4. ## Re: What do I need to do with this equation? I don't know why I didn't see it! More practice is needed I suppose. Thanks again guys!
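If you want to double-check an answer like this, a computer algebra system will solve the equation in one line. A sketch assuming sympy is installed:

```python
from sympy import Eq, solve, symbols

x = symbols('x')
print(solve(Eq(20*x - 10, 6 + 12*x), x))   # [2], matching the step-by-step solution above
```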
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9673953056335449, "perplexity_flag": "middle"}
http://www.mathplanet.com/education/algebra-1/discovering-expressions,-equations-and-functions/representing-functions-as-rules-and-graphs
# Representing functions as rules and graphs

Let's begin by looking at an example: at a store the carrots cost \$2.50/lb. The price the customer pays is dependent on how many pounds of carrots he buys. Another way to say this is to say that the total cost is a function of the pounds bought. We can write this as an equation.

$$\text{total cost} = \text{price per lb} \cdot \text{weight bought}$$

or

$$y=2.50\cdot x$$

A function is an equation which shows the relationship between the input x and the output y and where there is exactly one output for each input. Another word for input is domain, and for output, the range.

As we stated earlier, the price the customer has to pay, y, is dependent on how many pounds of carrots, x, the customer buys. The number of pounds bought is called the independent variable since that's what we're changing, whereas the total price is called the dependent variable since it is dependent on how many pounds we actually buy.

Input variable = Independent variable = Domain
Output variable = Dependent variable = Range

Functions are usually represented by a function rule where you express the dependent variable, y, in terms of the independent variable, x.

$$y=2.50\cdot x$$

You can represent your function by making it into a graph. The easiest way to make a graph is to begin by making a table containing inputs and their corresponding outputs. Again we use the example with the carrots.

| Input, x (lb) | Output, y (\$) |
| --- | --- |
| 0 | 0 |
| 1 | 2.50 |
| 2 | 5.00 |
| 3 | 7.50 |

A pair of an input value and its corresponding output value is called an ordered pair and can be written as (a, b). In an ordered pair the first number, the input a, corresponds to the horizontal axis and the second number, the output b, corresponds to the vertical axis. We can thus write our values as ordered pairs:

(0, 0) - This ordered pair is also referred to as the origin
(1, 2.5)
(2, 5)
(3, 7.5)

These ordered pairs can then be plotted into a graph.

A pairing of any set of inputs with their corresponding outputs is called a relation. Every function is a relation, but not all relations are functions. In the example above with the carrots, every input gives exactly one output, which qualifies it as a function. If you are unsure whether your relation is a function or not, you can draw a vertical line right through your graph. If the relation is not a function, the graph contains at least two points with the same x-coordinate but with different y-coordinates.

The relation portrayed in the graph to the left shows a function, whereas the relation in the graph to the right is not a function, since the vertical line is crossing the graph in two points.

Video lesson: Write a rule for the function:

| Input | 0 | 1 | 2 | 4 | 5 |
| --- | --- | --- | --- | --- | --- |
| Output | 4 | 3 | 2 | 0 | -1 |
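The rule, the table of ordered pairs and the "exactly one output for each input" test can all be made concrete in a few lines of code. A small sketch in Python (the names are just illustrative):

```python
# The function rule y = 2.50 * x from the carrot example.
def total_cost(pounds):
    return 2.50 * pounds

# The table of inputs and outputs, as a list of ordered pairs (x, y).
pairs = [(x, total_cost(x)) for x in (0, 1, 2, 3)]
print(pairs)   # [(0, 0.0), (1, 2.5), (2, 5.0), (3, 7.5)]

def is_function(relation):
    """The vertical line test in list form: a relation is a function
    iff no input x appears with two different outputs y."""
    seen = {}
    for x, y in relation:
        if x in seen and seen[x] != y:
            return False
        seen[x] = y
    return True

print(is_function(pairs))             # True
print(is_function([(1, 2), (1, 3)]))  # False: the input 1 has two outputs
```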
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 3, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9232446551322937, "perplexity_flag": "head"}
http://mathoverflow.net/questions/116797/constructive-proof-of-projective-implies-proper/116805
Constructive proof of “Projective implies proper”

For every ring $A$, the structural morphism of schemes $\pi_A : {\bf P}^n_{A} \to {\rm Spec}\,{A}$ is a closed map. The usual proof of this fact is not constructive: given equations of a closed subset $Z$ of ${\bf P}^n_{A}$, it doesn't produce equations for $\pi_A(Z)$.

In the case $A$ is a polynomial ring over an algebraically closed field $k$, this result is none other than the fundamental theorem of elimination theory: the image of a Zariski-closed subset of ${\bf P}^n(k) \times k^m$ under the second projection is a Zariski-closed subset of $k^m$. The first proofs of this theorem (Cayley, Kronecker, Sylvester) used resultants and thus were constructive.

In fact, the proof using elimination theory is universal in the following sense. Given integers $n,r \geq 1$, $d_1,\ldots,d_r \geq 1$, consider the universal homogeneous polynomials $P_1,\ldots,P_r$ of degree $d_1,\ldots,d_r$ in the indeterminates $T_0,\ldots,T_n$, having coefficients in the polynomial ring $\widetilde{A} = \mathbf{Z}[Y_{i,\alpha} : 1 \leq i \leq r]$, where the indeterminates $Y_{i,\alpha}$ are the coefficients of $P_i$. Then there exists an explicit "resultant system" $R_1,\ldots,R_s \in \widetilde{A}$ such that $\pi_{\widetilde{A}}(V_+(P_1,\ldots,P_r))=V(R_1,\ldots,R_s)$. This means that specializations of $P_1,\ldots,P_r$ in some algebraically closed field $k$ have a common root in ${\bf P}^n(k)$ if and only if the corresponding specializations of $R_1,\ldots,R_s$ all vanish. Of course $s$ has to depend on $n,r,d_i$, but everything is explicit (at least from a theoretical point of view).

Now let $A$ be any ring and let $I=(f_1,\ldots,f_r)$ be a homogeneous ideal of finite type of $A[T_0,\ldots,T_n]$. Then the resultant system above specialized at $f_1,\ldots,f_r$ provides explicit equations for $\pi_A(V_+(I))$ (this can be seen by studying the geometric fibers of $\pi_A$). In particular if $A$ is noetherian, then the map $\pi_A$ is closed, and we have a constructive proof of that. But in general, a closed subset of ${\bf P}^n_A$ need not be defined by finitely many equations. This raises the following questions:

1. Is there a way to prove that the map $\pi_A$ is closed for every ring $A$, by some clever reduction to the noetherian case?

2. If $Z$ is a closed subset of ${\bf P}^n_A$, given to us by infinitely many explicit equations $(f_i)_{i \in I}$, is there a way to produce explicit equations for $\pi_A(Z)$? In other words, is there a constructive proof of the fact that $\pi_A$ is closed?

3. Regarding question 2, an obvious thing to do is to look at all finite subfamilies $(f_i)_{i \in J}$, where $J$ is a finite subset of $I$, and to consider the associated resultant systems. Are all these equations sufficient to define $\pi_A(Z)$?

EDIT. Will Sawin has proved that the answer to all these questions is yes. Following Daniel Litt's comment, we can also consider $\pi_A(Z)$ as a closed subscheme of $\operatorname{Spec} A$, namely the closed subscheme defined by the kernel of the morphism $A \to \mathcal{O}_Z(Z)$. Do the resultant systems generate this ideal of $A$?

-

1 Do you want equations cutting out $\pi_A(Z)$ set-theoretically (which suffices for the claim of the title)? Or do you want generators for the ideal of the scheme-theoretic image $\pi_A(Z)$, namely the kernel of $A\to \pi_*(\mathcal{O}_Z)$?
It seems to me that Will's answer meets the first requirement, but it's not obvious to me that it meets the second requirement... – Daniel Litt Dec 19 at 22:34

Dear Daniel, thanks for your comment. Indeed, this is something I also wanted to clarify. It's already not obvious to me in the noetherian case. Possibly it's enough to treat the case of the ring $\widetilde{A}$, but how would one go about proving that? – François Brunault Dec 20 at 8:02

I edited the post to incorporate Daniel's refined question. – François Brunault Dec 20 at 8:17

1 Answer

3, and thus 2 and 1: yes. By checking equality between the two sets at each point, we reduce to the case where the base is a point. But points are always Noetherian schemes, and the statement is obviously true for Noetherian schemes.

-

Dear Will, thanks for your answer. Your argument convinces me! Following Daniel's comment, do you know if it also works scheme-theoretically? – François Brunault Dec 20 at 8:01

No, but I'll think about it. – Will Sawin Dec 20 at 18:03
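For the smallest nontrivial case $n = 1$, $r = 2$, the resultant system consists of a single polynomial, the classical resultant of the two binary forms, and its vanishing criterion can be checked directly in a computer algebra system. A sketch assuming sympy (the polynomials are arbitrary illustrative specializations, written in the affine coordinate $t = T_1/T_0$):

```python
from sympy import resultant, symbols

t = symbols('t')

f = t**2 - 3*t + 2   # roots 1 and 2
g = t**2 - 1         # roots 1 and -1: shares the root t = 1 with f
h = t**2 + 1         # roots +i and -i: no common root with f

print(resultant(f, g, t))   # 0: the specialized forms have a common projective zero
print(resultant(f, h, t))   # 10: no common zero, so the resultant does not vanish
```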
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 53, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9449558854103088, "perplexity_flag": "head"}
http://www.nag.com/numeric/cl/nagdoc_cl23/html/X02/x02intro.html
# NAG Library Chapter Introduction x02 – Machine Constants

## 1  Scope of the Chapter

This chapter is concerned with parameters which characterise certain aspects of the computing environment in which the NAG C Library is implemented. They relate primarily to floating point arithmetic, but also to integer arithmetic, the elementary functions and exception handling. The values of the parameters vary from one implementation of the Library to another, but within the context of a single implementation they are constants.

The parameters are intended for use primarily by other functions in the Library, but users of the Library may sometimes need to refer to them directly. Most of these constants are not functions, but they are defined in the header file <nagx02.h>. Defined constant names are specified in upper case characters, and functions in lower case. Those machine constants which are defined as functions have also been given upper case names using #define in <nagx02.h>.

## 2  Background to the Problems

### 2.1  Floating-point Arithmetic

#### 2.1.1  A model of floating point arithmetic

In order to characterise the important properties of floating point arithmetic by means of a small number of parameters, NAG uses a simplified model of floating point arithmetic. The parameters of the model can be chosen to provide a sufficiently close description of the behaviour of actual implementations of floating point arithmetic, but not, in general, an exact description; actual implementations vary too much in the details of how numbers are represented or arithmetic operations are performed. The model is based on that developed by Brown (1981), but differs in some respects. The essential features are summarized here.

The model is characterised by four integer arguments:

- $b$: the base
- $p$: the precision (i.e., the number of significant base-$b$ digits)
- $e_{\mathrm{min}}$: the minimum exponent
- $e_{\mathrm{max}}$: the maximum exponent

These parameters define a set of numerical values of the form $f\times b^e$, where the exponent $e$ must lie in the range $[e_{\mathrm{min}},e_{\mathrm{max}}]$, and the fraction $f$ (also called the mantissa or significand) lies in the range $[1/b,1)$, and may be written $$f=0.f_1f_2\cdots f_p.$$ Thus $f$ is a $p$-digit fraction to the base $b$; the $f_i$ are the base-$b$ digits of the fraction: they are integers in the range $0$ to $b-1$, and the leading digit $f_1$ must not be zero. The set of values so defined (together with zero) are called model numbers. For example, if $b=10$, $p=5$, $e_{\mathrm{min}}=-99$ and $e_{\mathrm{max}}=+99$, then a typical model number is $0.12345\times 10^{67}$.

The model numbers must obey certain rules for the computed results of the following basic arithmetic operations: addition, subtraction, multiplication, negation, absolute value, and comparisons: the computed result must be the nearest model number to the exact result (assuming that overflow or underflow does not occur); if the exact result is midway between two model numbers, then it may be rounded either way. For division and square root, this latter rule is relaxed: the computed result may also be one of the next adjacent model numbers on either side of the permitted values just stated.

On many machines, the full set of representable floating point numbers conforms to the rules of the model with appropriate values of $b$, $p$, $e_{\mathrm{min}}$ and $e_{\mathrm{max}}$.
For machines supporting IEEE binary double precision arithmetic: $$b = 2,\quad p = 53,\quad e_{\mathrm{min}} = -1021,\quad e_{\mathrm{max}} = 1024.$$ (Note: the model used here differs from that described in Brown (1981) in the following respect: square-root is treated, like division, as a weakly supported operator.)

#### 2.1.2  Derived arguments of floating point arithmetic

Most numerical algorithms require access, not to the basic parameters of the model, but to certain derived values, of which the most important are:

- the machine precision $\epsilon = \frac{1}{2}\times b^{1-p}$
- the smallest positive model number $b^{e_{\mathrm{min}}-1}$
- the largest positive model number $(1-b^{-p})\times b^{e_{\mathrm{max}}}$

It is important to note that the machine precision defined here differs from that defined by ISO (1997).

Two additional derived values are used in the NAG C Library. Their definitions depend not only on the properties of the basic arithmetic operations just considered, but also on properties of some of the elementary functions. We define the safe range parameter to be the smallest positive model number $z$ such that for any $x$ in the range $[z,1/z]$ the following can be computed without undue loss of accuracy, overflow, underflow or other error:

- $-x$
- $1/x$
- $-1/x$
- $\sqrt{x}$
- $\mathrm{log}(x)$
- $\mathrm{exp}(\mathrm{log}(x))$
- $y^{(\mathrm{log}(x)/\mathrm{log}(y))}$ for any $y$

In a similar fashion we define the safe range argument for complex arithmetic as the smallest positive model number $z$ such that for any $x$ in the range $[z,1/z]$ the following can be computed without any undue loss of accuracy, overflow, underflow or other error:

- $-w$
- $1/w$
- $-1/w$
- $\sqrt{w}$
- $\mathrm{log}(w)$
- $\mathrm{exp}(\mathrm{log}(w))$
- $y^{(\mathrm{log}(w)/\mathrm{log}(y))}$ for any $y$
- $\left|w\right|$

where $w$ is any of $x$, $ix$, $x+ix$, $1/x$, $i/x$, $1/x+i/x$, and $i$ is the square root of $-1$.

### 2.2  Other Aspects of the Computing Environment

No attempt has been made to characterise comprehensively any other aspects of the computing environment. The other functions in this chapter provide specific information that is occasionally required by functions in the Library.

## 3  Recommendations on Choice and Use of Available Functions

Derived parameters of the model of floating point arithmetic:

- largest positive model number: nag_real_largest_number (X02ALC)
- machine precision: nag_machine_precision (X02AJC)
- safe range: nag_real_safe_small_number (X02AMC)
- safe range of complex floating point arithmetic: nag_complex_safe_small_number (X02ANC)
- smallest positive model number: nag_real_smallest_number (X02AKC)

- Largest permissible argument for SIN and COS: nag_max_sine_argument (X02AHC)
- Largest representable integer: nag_max_integer (X02BBC)
- Maximum number of decimal digits that can be represented: nag_decimal_digits (X02BEC)

Parameters of the model of floating point arithmetic: None.

## 5  References

Brown W S (1981) A simple but realistic model of floating-point computation ACM Trans. Math. Software 7 445–480

ISO (1997) ISO Fortran 95 programming language (ISO/IEC 1539–1:1997)
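For a concrete check of the derived values in Section 2.1.2 under the IEEE double precision model above, one can compare them against the constants a language exposes. A minimal sketch in Python rather than C (Python floats are IEEE binary doubles on common platforms; this is an illustration, not part of the NAG Library):

```python
import sys

b, p, e_min, e_max = 2, 53, -1021, 1024                 # IEEE binary double, as above

eps_nag = 0.5 * b ** (1 - p)                            # (1/2) * b**(1-p) = 2**-53
tiny = 2.0 ** (e_min - 1)                               # b**(e_min - 1)   = 2**-1022
huge = (1.0 - 2.0 ** -p) * 2.0 ** (e_max - 1) * 2.0     # (1 - b**-p) * b**e_max, split to avoid overflow

print(eps_nag, sys.float_info.epsilon)  # 2**-53 vs 2**-52: the NAG eps is half the ISO/C value
print(tiny == sys.float_info.min)       # True
print(huge == sys.float_info.max)       # True
```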
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 65, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.7464576363563538, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-statistics/60148-consistent-unbiased-estimator.html
# Thread:

1. ## Consistent and unbiased estimator.

Let $Y_1, Y_2, \ldots, Y_n$ denote a random sample from the uniform distribution on the interval $(\theta,\ \theta+1)$. Let $\hat{\theta_1} = \overline{Y}-\frac{1}{2}$ and $\hat{\theta_2} = Y_{(n)} -\frac{n}{n+1}$

a) show that both $\hat{\theta_1}$ and $\hat{\theta_2}$ are unbiased estimators for $\theta$

b) show that both $\hat{\theta_1}$ and $\hat{\theta_2}$ are consistent estimators for $\theta$

Attempt

a) I figure

$f(y) = \left\{ \begin{array}{rcl} \frac{1}{(\theta+1)-\theta} & \mbox{for} & \theta \leq y \leq \theta+1 \\ 0 & \mbox{otherwise} & \end{array}\right.$

simplifying I get:

$f(y) = \left\{ \begin{array}{rcl} 1 & \mbox{for} & \theta \leq y \leq \theta+1 \\ 0 & \mbox{otherwise} & \end{array}\right.$

so for the estimator $\hat{\theta_1}$:

$E[Y] = \int_\theta^{\theta+1} y \ dy = \frac{y^2}{2}\bigg{|}^{\theta+1}_{\theta} = \frac{1}{2}\bigg{(}(\theta+1)^2-\theta^2\bigg{)} = \frac{1}{2}(2\theta+1) = \theta +\frac{1}{2}$

I know I'm near the end for $\hat{\theta_1}$ but I don't know how to get it into the above form.

For $\hat{\theta_2}$, which is an order statistic, my $F(y) = y$; the density of the maximum is $n[F(y)]^{n-1}f(y)$, so for the estimator:

$\int_\theta^{\theta+1} n[y]^{n-1} \cdot 1 \cdot y \ dy =\int_\theta^{\theta+1} n[y]^n \ dy =y^{n+1}\frac{n}{n+1} \bigg{|}_\theta^{\theta+1} =\frac{n}{n+1} \bigg{(}(\theta+1)^{n+1} - \theta^{n+1}\bigg{)}$

at this point I get stuck.

b) Looking at the definition $\lim_{n\rightarrow \infty} \ V(\hat{\theta}) = 0$, so for $\hat{\theta_1}$:

$E[Y^2]= \int_\theta^{\theta+1} y^2 \ dy = \frac{y^3}{3}\bigg{|}^{\theta+1}_{\theta} = \frac{1}{3}\bigg{(}(\theta+1)^3-\theta^3\bigg{)} =\theta^2+\theta+\frac{1}{3}$

thus $V(Y)= \theta^2+\theta+\frac{1}{3} - (\theta +\frac{1}{2})^2= \frac{1}{12}$

This next step I'm not sure about, but it looks similar to the example I have in the book:

$V(\hat{\theta_1})=V \left[ \overline{Y}-\frac{1}{2}\right] = V(\overline{Y})+\frac{1}{4}$

at which point I don't know how to proceed.

For $\hat{\theta_2}$:

$E[Y_{(n)}^2]=\int_\theta^{\theta+1} n[y]^{n-1} \cdot 1 \cdot y^2 \ dy =\int_\theta^{\theta+1} n[y]^{n+1} \ dy =y^{n+2}\frac{n}{n+2} \bigg{|}_\theta^{\theta+1} =\frac{n}{n+2} \bigg{(}(\theta+1)^{n+2} - \theta^{n+2}\bigg{)}$

for the variance

$V[Y_{(n)}]= \frac{n}{n+2} \bigg{(}(\theta+1)^{n+2} - \theta^{n+2}\bigg{)} - \left( \frac{n}{n+1} \bigg{(}(\theta+1)^{n+1} - \theta^{n+1}\bigg{)} \right)^2$

$V[Y_{(n)}]= \frac{n}{n+2} \bigg{(}(\theta+1)^{n+2} - \theta^{n+2}\bigg{)} - \left( \frac{n}{n+1} \right)^2 \bigg{(}[(\theta+1)^{n+1}]^2 -2[\theta(\theta+1)]^{n+1} + [\theta^{n+1}]^2\bigg{)}$

I can't seem to go further from here.

2. Originally Posted by lllll
Let $Y_1, Y_2, \ldots, Y_n$ denote a random sample from the uniform distribution on the interval $(\theta,\ \theta+1)$.
Let $\hat{\theta_1} = \overline{Y}-\frac{1}{2}$ and $\hat{\theta_2} = Y_{(n)} -\frac{n}{n+1}$

a) show that both $\hat{\theta_1}$ and $\hat{\theta_2}$ are unbiased estimators for $\theta$

[snip]

$E(\hat{\theta_1}) = E\left(\overline{Y} - \frac{1}{2}\right)$

$= E\left(\frac{Y_1 + Y_2 + \, .... \, + Y_n}{n} - \frac{1}{2}\right)$

$= E\left(\frac{Y_1}{n}\right) + E\left(\frac{Y_2}{n}\right) + \, .... \, + E\left(\frac{Y_n}{n}\right) - E\left(\frac{1}{2}\right)$

$= \frac{\theta + \frac{1}{2}}{n} + \frac{\theta + \frac{1}{2}}{n} + \, .... \, + \frac{\theta + \frac{1}{2}}{n} - \frac{1}{2}$

$= n \, \left( \frac{\theta + \frac{1}{2}}{n}\right) - \frac{1}{2} = \theta$.

--------------------------------------------------------------------------------

$E(\hat{\theta_2}) = E\left(Y_{(n)} - \frac{n}{n+1}\right) = E\left(Y_{(n)} \right) - E\left(\frac{n}{n+1}\right) = E\left(Y_{(n)} \right) - \frac{n}{n+1}$.

$g(u) = n [F(u)]^{n-1} f(u) = n (u - \theta)^{n-1} (1) = n (u - \theta)^{n-1}$.

Note: Since the distribution of Y is uniform, no integration is necessary to get $F(u) = u - \theta$.

Therefore $E\left(Y_{(n)} \right) = \int_{\theta}^{\theta + 1} u \, g(u) \, du = n \int_{\theta}^{\theta + 1} u \, (u - \theta)^{n-1} \, du$

Substitute $w = u - \theta$:

$= n \int_{0}^{1} (w + \theta) \, w^{n-1} \, dw = n \int_{0}^{1} w^{n} \, dw + n \theta \int_{0}^{1} w^{n-1} \, dw = \frac{n}{n+1} + \theta$.

3. Originally Posted by lllll
Let $Y_1, Y_2, \ldots, Y_n$ denote a random sample from the uniform distribution on the interval $(\theta,\ \theta+1)$. Let $\hat{\theta_1} = \overline{Y}-\frac{1}{2}$ and $\hat{\theta_2} = Y_{(n)} -\frac{n}{n+1}$

a) show that both $\hat{\theta_1}$ and $\hat{\theta_2}$ are unbiased estimators for $\theta$

b) show that both $\hat{\theta_1}$ and $\hat{\theta_2}$ are consistent estimators for $\theta$

[snip]

b) Looking at the definition $\lim_{n\rightarrow \infty} \ V(\hat{\theta}) = 0$, so for $\hat{\theta_1}$:

$E[Y^2]= \int_\theta^{\theta+1} y^2 \ dy = \frac{1}{3}\bigg{(}(\theta+1)^3-\theta^3\bigg{)} =\theta^2+\theta+\frac{1}{3}$

thus $V(Y)= \theta^2+\theta+\frac{1}{3} - (\theta +\frac{1}{2})^2= \frac{1}{12}$

This next step I'm not sure about, but it looks similar to the example I have in the book:

$V(\hat{\theta_1})=V \left[ \overline{Y}-\frac{1}{2}\right] = V(\overline{Y})+\frac{1}{4}$

Mr F says: That's not right. It's just ${\color{red}V\left(\overline{Y}\right)}$.
at which point I don't know how to proceed.

For $\hat{\theta_2}$:

$E[Y_{(n)}^2]=\int_\theta^{\theta+1} n[y]^{n-1} \cdot 1 \cdot y^2 \ dy =\frac{n}{n+2} \bigg{(}(\theta+1)^{n+2} - \theta^{n+2}\bigg{)}$

for the variance

$V[Y_{(n)}]= \frac{n}{n+2} \bigg{(}(\theta+1)^{n+2} - \theta^{n+2}\bigg{)} - \left( \frac{n}{n+1} \bigg{(}(\theta+1)^{n+1} - \theta^{n+1}\bigg{)} \right)^2$

I can't seem to go further from here.

Since $\hat{\theta_1}$ and $\hat{\theta_2}$ are unbiased estimators, it's sufficient to show $\lim_{n \rightarrow \infty} Var(\hat{\theta_1}) = 0$ and $\lim_{n \rightarrow \infty} Var(\hat{\theta_2}) = 0$.

$Var(\hat{\theta_1}) = Var\left(\overline{Y} - \frac{1}{2}\right) = Var(\overline{Y}) = Var\left( \frac{Y_1 + Y_2 + \, .... \, + Y_n}{n}\right) = \frac{1}{n^2} \, Var\left( Y_1 + Y_2 + \, .... \, + Y_n \right) = \frac{n}{n^2} \, Var(Y_i) = \frac{1}{n} \, \left(\frac{1}{12}\right)$.

----------------------------------------------------------------------------------------------------------------------------

$Var(\hat{\theta_2}) = Var\left(Y_{(n)} - \frac{n}{n+1}\right) = Var\left(Y_{(n)}\right) = E(Y_{(n)}^2) - [E(Y_{(n)})]^2$.

You've already got $E(Y_{(n)})$ from part (a).

$E\left(Y_{(n)}^2 \right) = \int_{\theta}^{\theta + 1} u^2 \, g(u) \, du = n \int_{\theta}^{\theta + 1} u^2 \, (u - \theta)^{n-1} \, du = \, ....$

Then $Var(\hat{\theta_2}) = \frac{n}{n+2} - \frac{n^2}{(n+1)^2} = \frac{2n+1}{n^2 + 2n + 1} - \frac{2}{n+2}$.
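Both conclusions are easy to see in simulation. A quick Monte Carlo sketch (my own illustration, assuming numpy; the sample sizes and replication count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 2.0

for n in (10, 100, 1000):
    Y = rng.uniform(theta, theta + 1, size=(5000, n))   # 5000 replications of a size-n sample
    t1 = Y.mean(axis=1) - 0.5                           # theta_1 hat
    t2 = Y.max(axis=1) - n / (n + 1)                    # theta_2 hat
    print(n, t1.mean(), t2.mean(), t1.var(), t2.var())
    # the means stay near theta = 2 (unbiasedness), and the variances shrink
    # like 1/(12n) and n/(n+2) - n**2/(n+1)**2 (consistency)
```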
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 113, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9236921668052673, "perplexity_flag": "head"}
http://mathoverflow.net/questions/96651/the-approximation-to-perturbed-kdv-equation/101617
The approximation to perturbed KdV Equation Consider the perturbed KdV equation $$u_t-6uu_x+u_{xxx}=\epsilon u.$$ I want to use a perturbative expansion to construct the solution in the form $$u=u(x,t;\epsilon)=\sum_{n=0}^\infty\epsilon^n u_n(x,t).$$ The following is my question: 1. Does there exist a solution in that form? How can one prove that it converges to the exact solution? 2. If so, we have $$u_{0t}-6u_0u_{0x}+u_{0xxx}=0,$$ which is the KdV equation and can be solved by the inverse scattering method. And $$u_{1t}-6(u_0u_{1x}+u_1u_{0x})+u_{1xxx}=u_0;$$ can anyone help me to prove that there exists a solution $u_1$ of this equation? - 2 Answers There are techniques other than the inverse scattering method to solve the KdV equation: energy estimates, semigroup theory etc. These techniques can be used to prove differentiability with respect to the parameter $\epsilon$. Since $\epsilon$ can be complex, this can be used to justify power series expansions. - This is covered in Ablowitz and Segur (book) for soliton initial conditions. An alternative method is given in a paper by Alan Newell. Neither uses inverse scattering. Here multiple scales are used, and much of the interest was on tails which trail behind the soliton. Although this doesn't exactly answer your question, since these methods are concerned less with convergence than with accurate approximations. -
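To see where the two equations in question 2 come from, one can substitute the expansion and let a computer algebra system collect powers of $\epsilon$; a minimal sketch in Python/SymPy (the two-term ansatz, truncated at $O(\epsilon^2)$, is my own illustration, not from the thread):

```python
import sympy as sp

x, t, eps = sp.symbols('x t epsilon')
u0 = sp.Function('u0')(x, t)
u1 = sp.Function('u1')(x, t)

# Two-term ansatz u = u0 + eps*u1 substituted into u_t - 6*u*u_x + u_xxx - eps*u = 0
u = u0 + eps * u1
residual = sp.expand(sp.diff(u, t) - 6 * u * sp.diff(u, x) + sp.diff(u, x, 3) - eps * u)

print(residual.coeff(eps, 0))  # O(1): the KdV equation for u0
print(residual.coeff(eps, 1))  # O(eps): linearized KdV for u1, forced by u0
```

The $O(\epsilon)$ coefficient is $u_{1t}-6(u_0u_{1x}+u_1u_{0x})+u_{1xxx}-u_0$, i.e. exactly the linearized, forced KdV equation quoted in the question.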
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.940417468547821, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/tagged/celestial-mechanics+reference-request
# Tagged Questions 2answers 127 views ### How to learn celestial mechanics? I'm a PhD student in math and am really excited about celestial mechanics. I was wondering if anyone could give me a roadmap for learning this subject. The amount of information about it on the ... 0answers 169 views ### Simple model of the solar system. Parameters? Accuracy? I was thinking of making a simple 2D model of the solar system, with planets moving along ellipses like $$x(t) = k_x \sin(t + k_t) (\sin(k_\phi) + \cos(k_\phi))$$ $$y(t) = k_y \cos(t + k_t) \,\ldots$$
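The second excerpt describes a parametric-ellipse model of planetary positions; here is a minimal sketch of that idea (the constants are placeholders of my own, and uniform angular motion deliberately ignores Kepler's second law, so orbital speeds are not physical):

```python
import math

def planet_xy(t, a, b, phase=0.0, tilt=0.0):
    """Point on an ellipse with semi-axes a, b, traversed at a uniform angular
    rate and then rotated by `tilt` -- a rough stand-in for the k-parameters."""
    x0 = a * math.sin(t + phase)
    y0 = b * math.cos(t + phase)
    return (x0 * math.cos(tilt) - y0 * math.sin(tilt),
            x0 * math.sin(tilt) + y0 * math.cos(tilt))

print(planet_xy(0.5, a=1.0, b=0.97, phase=0.2, tilt=0.1))
```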
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9299609661102295, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/20604/are-rings-really-more-fundamental-objects-than-semi-rings
## Are rings really more fundamental objects than semi-rings? The discovery (or invention) of negatives, which happened several centuries ago by the Chinese, Indians and Arabs, has of course been of fundamental importance to mathematics. From then on, it seems that mathematicians have always striven to "put the negatives" into whatever algebraic structure they came across, in analogy with the usual "numerical" structure, $\mathbb{Z}$. But perhaps there are cases in which the notion of a semiring seems more natural than the notion of a ring (I will be very very sloppy!): 1) The Cardinals. They have a natural structure of semiring, and the usual construction that allows one to pass from $\mathbb{N}$ to $\mathbb{Z}$ cannot be performed in this case without great loss of information. 2) Vector bundles over a space; and notice that in the infinite rank case the Grothendieck ring is trivial just because negatives are allowed. 3) Tropical geometry. 4) The notion of semiring, as opposed to that of a ring, seems to be the most natural for "categorification", in two separate senses: (i) For example, the set of isomorphism classes of objects in a category with direct sums and tensor products (e.g. finitely-generated projective modules over a commutative ring) is naturally a semiring. When one constructs the Grothendieck ring of a category, one usually adds formal negatives, but this can be a very lossy operation, as in the case of vector bundles. (ii) A category with finite biproducts (products and coproducts, and a natural isomorphism between these) is automatically enriched over commutative monoids, but not automatically enriched over abelian groups. As such, it's naturally a "many object semiring", but not a "many object ring". Do you have any examples of contexts in which semirings (which are not rings) arise naturally in mathematics? - 2 I like the question but not the title: it seems unnecessarily argumentative. You're not really trying to decide whether rings are "better" than semirings, are you? – Pete L. Clark Apr 7 2010 at 13:36 Two comments: (1) Since example 1 is a special case of 2, I guess it should follow that adjoining negatives to cardinal arithmetic would make it trivial? (2) I'm a bit puzzled by example 4. An abelian category looks to me-- and not just me -- like a ring with many objects since you have additive inverses. Was there some other example that you had in mind? – Donu Arapura Apr 7 2010 at 14:54 It's been many years since I looked at the precise definitions. But I understood the sum to be the cardinality of the disjoint union, which would be commutative. Is there something I'm missing? – Donu Arapura Apr 7 2010 at 15:25 @GE: I thought the cardinals had commutative addition, given by disjoint union of sets. In particular, for infinite cardinals, addition is just "max", at least if we have axiom of choice (so that any two cardinals are comparable). The noncommutative addition, I thought, was for ordinals, which are (isomorphism classes of) sets along with well-orderings. The ordered disjoint union is definitely noncommutative. – Theo Johnson-Freyd Apr 7 2010 at 15:56 With DA, I think you should amend statement 4. Heck, this is a CW question, so I feel no compunction about editing it myself.
– Theo Johnson-Freyd Apr 7 2010 at 15:58 ## 5 Answers Of course the real question is whether abelian groups are really more fundamental objects than commutative monoids. In a sense, the answer is obviously no: the definition of commutative monoid is simpler and admits alternative descriptions such as the one I give here. The latter description can be adapted to other settings, such as to the 2-category of locally presentable categories, which shares many formal properties with the category of commutative monoids (such as being closed symmetric monoidal, having a zero object, having biproducts). As such I would claim that any locally presentable closed symmetric monoidal category is itself a categorified version of a semiring, not in the sense you describe, but in that it is an algebra object in a closed symmetric monoidal category, so we may talk of modules over it, etc. However, it is undeniable that there is a large qualitative difference between the theories of abelian groups and commutative monoids. Observe that an abelian group is just a commutative monoid which is a module over $\mathbb{Z}$ (more precisely, a commutative monoid has a unique structure of $\mathbb{Z}$-module if it has additive inverses, and no structure of $\mathbb{Z}$-module otherwise). The situation is analogous to the (smaller) difference between abelian groups and $\mathbb{Q}$-vector spaces. I do not know of a characterization of $\mathbb{Z}$ as a commutative monoid that can be transported to other settings. It seems that there is something deep about the fact that $\mathbb{Z}$-modules are so much nicer than commutative monoids, which often is taken for granted. - Semirings are pervasive throughout computer science: every notion of resource lacking a corresponding notion of debt gives rise to semiring structure in a standard way. 1. First, you formalize resource as a (partial) commutative monoid. That is, you have a set representing resources (for example, time bounds or memory usage of a computer program), and the monoidal structure has the unit representing "no resource", and the concatenation representing "combine these two resources". 2. Then, you can generate a quantale from this monoid by taking the powerset of the monoid. This forms a quantale, where the ordering is set inclusion, meet and join are set intersection and union, with monoidal structure $A \otimes B = \{ a \cdot b \;|\; a \in A \land b \in B \}$, and $I = \{e\}$ (For partial monoids, we can just consider the defined pairs.) This quantale can be interpreted as "propositions about resources". 3. Note that $(I, \otimes, \bot, \vee)$ forms a semiring. As an aside, this fact is very useful for reasoning about programs. Some further observations: 1. If you have a notion of "debt" corresponding to your notion of resource, then you can start with a group structure in step 1, and repeat the construction to get a ring. 2. Mariano's example fits into this framework, too, if you relax the commutativity restriction. Then you can view words as elements of a free monoid over an alphabet, and then you get languages as forming a noncommutative quantale. 3. Tropical algebra is an excellent framework for modelling optimization problems (ie, minimizing a cost function).
You can often derive algorithms just by twiddling Galois connections between the tropical semiring and a semiring of data. When this works, the process is so transparent it feels like magic! - I like your observation about modelling optimization problems, can you provide some reference? – Diego de Estrada Apr 26 2010 at 7:04 1 Roland Backhouse has written some nice papers on this subject -- I particularly like "Regular Algebra Applied to Language Problems". (cs.nott.ac.uk/~rcb/MPC/RegAlgLangProblems.ps.gz) Warning: his notation gets black-hole dense, since he likes to do everything by algebraic manipulations, including logic and quantifier manipulations. His techniques are pretty enough that it is worth persevering, though. – Neel Krishnaswami Apr 26 2010 at 8:11 The algebraic treatment of formal language theory systematically uses semirings of power series. - Although not a ring, the renormalisation group of quantum field theory is really a semigroup. Moreover, there is no compelling physical reason to add inverses, since in fact physically inverses need not exist. Indeed, the process of renormalisation often loses information, admits fixed points,... - Nikolai Durov showed that a commutative algebraic monad with 0 "is" a semiring if and only if b(x,0)=x for all x, where b is a binary operation with b(x,y) not identically equal to x. So semirings are in some sense easy to get. On the other hand, the commutative algebraic monads that seem to be his motivating examples, the unit ball in a commutative Banach algebra, are not semirings. -
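As a concrete illustration of the tropical-algebra observation in the computer-science answer above (my own example, not from the answers): in the $(\min,+)$ semiring, "matrix multiplication" of an edge-weight matrix computes shortest paths, so repeated tropical squaring yields all-pairs shortest distances.

```python
import math

INF = math.inf  # the tropical "zero": identity element for min

def tropical_matmul(A, B):
    """(min, +) matrix product: semiring addition is min, multiplication is +."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Edge weights of a small made-up digraph; INF marks a missing edge, 0 the diagonal.
W = [[0, 3, INF],
     [INF, 0, 1],
     [7, INF, 0]]

D = W
for _ in range(2):          # squaring twice covers paths of up to 4 edges
    D = tropical_matmul(D, D)
print(D)                    # all-pairs shortest-path distances
```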
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9379346966743469, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-algebra/165335-please-help-clarify-galois-proof.html
# Thread: 1. ## Please help clarify this Galois proof I more or less copied the following notes from my professor: Find the Galois group of $x^4-2$ over $\mathbb{Q}$. Solution: $x^4-2$ has roots $\pm\sqrt[4]{2},\pm i\sqrt[4]{2}$. Note that since $G$ acts on the four roots, $G$ is isomorphic to a subgroup of $S_4$. The splitting field is $\mathbb{Q}(\sqrt[4]{2},i)$, so $|G|=8$. By Sylow, any two subgroups of $S_4$ of order $8$ must be isomorphic. So $G\cong D_8$. Everything about this proof makes perfect sense to me except the part in bold: Why is $G$ isomorphic to a subgroup of $S_4$? It cannot be simply because $G$ acts on the four roots (because for instance any group of any order acts on any set trivially). All I can tell from that is that there is a homomorphism from $G$ into $S_4$. So, how do I conclude that $G$ is isomorphic to a subgroup of $S_4$? Also, I'm a bit confused as to how it can be that $|G|=8$. Each $\alpha\in G$ is completely determined by $\alpha(i)$ and $\alpha(\sqrt[4]{2})$. Now, $1=\alpha(1)=\alpha(-i^2)=-\alpha(i)^2$, which seems to imply $\alpha(i)=i$. But then $\alpha(\sqrt[4]{2})$ must be one of the four roots. So $|G|\leq 4$, a contradiction. Where did I go wrong, I wonder? Any help would be much appreciated! 2. Originally Posted by hatsoff I more or less copied the following notes from my professor: Everything about this proof makes perfect sense to me except the part in bold: Why is $G$ isomorphic to a subgroup of $S_4$? It cannot be simply because $G$ acts on the four roots (because for instance any group of any order acts on any set trivially). All I can tell from that is that there is a homomorphism from $G$ into $S_4$. So, how do I conclude that $G$ is isomorphic to a subgroup of $S_4$? Any help would be much appreciated! As you remark, the part in bold is not completely correct. For it to be correct, it should say that the Galois group acts faithfully on the four roots (which it does, since the roots generate the splitting field), and then it follows that the group is isomorphic to a subgroup of $S_4$. There's another way to show that $G\cong D_8$: first, it's easy to see that $G$ cannot be abelian and that its order is divisible by four and divides 8, so it is either $D_8$ or $Q_8$. Nevertheless, this extension has a non-normal subextension, $\mathbb{Q}(\sqrt[4]{2})/\mathbb{Q}$, from which it follows that there is a non-normal subgroup of $G$ of index 4, and since all the subgroups of $Q_8$ are normal, we're done. Tonio 3. Originally Posted by hatsoff I more or less copied the following notes from my professor: Everything about this proof makes perfect sense to me except the part in bold: Why is $G$ isomorphic to a subgroup of $S_4$? It cannot be simply because $G$ acts on the four roots (because for instance any group of any order acts on any set trivially). All I can tell from that is that there is a homomorphism from $G$ into $S_4$. So, how do I conclude that $G$ is isomorphic to a subgroup of $S_4$? Also, I'm a bit confused as to how it can be that $|G|=8$. Each $\alpha\in G$ is completely determined by $\alpha(i)$ and $\alpha(\sqrt[4]{2})$. Now, $1=\alpha(1)=\alpha(-i^2)=-\alpha(i)^2$, which seems to imply $\alpha(i)=i$. But then $\alpha(\sqrt[4]{2})$ must be one of the four roots. So $|G|\leq 4$, a contradiction. Where did I go wrong, I wonder? Any help would be much appreciated! $\alpha(i)^2=-1\Longrightarrow \alpha(i)=\pm i$, since $i^2=(-i)^2=-1$ Tonio 4. Okay, I see now what is going on. Thanks a bunch for the help!
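The dihedral structure can also be verified computationally by writing the two generating automorphisms as permutations of the four roots; a small sketch with SymPy's combinatorics module (the root labelling is my own choice, not from the thread):

```python
from sympy.combinatorics import Permutation, PermutationGroup

# Label the roots of x^4 - 2 as 0: 2^(1/4), 1: i*2^(1/4), 2: -2^(1/4), 3: -i*2^(1/4).
sigma = Permutation([1, 2, 3, 0])  # automorphism 2^(1/4) -> i*2^(1/4), i -> i (a 4-cycle)
tau = Permutation([0, 3, 2, 1])    # complex conjugation: fixes the two real roots

G = PermutationGroup([sigma, tau])
print(G.order())                        # 8
print(G.is_abelian)                     # False
print(tau * sigma * tau == sigma**-1)   # True: the dihedral relation (tau is its own inverse)
```

A non-abelian group of order 8 generated by a 4-cycle and an order-2 element satisfying this relation is dihedral, matching the conclusion $G\cong D_8$.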
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 58, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.959017813205719, "perplexity_flag": "head"}
http://en.wikisource.org/wiki/On_the_Electrodynamics_of_Moving_Systems_II
# On the Electrodynamics of Moving Systems II From Wikisource On the Electrodynamics of Moving Systems II (1904) by Emil Cohn, translated by Wikisource. In German: Zur Elektrodynamik bewegter Systeme II, Sitzungsberichte der Königlich Preußischen Akademie der Wissenschaften, Zweiter Halbband (43): 1404–1416. (Session of December 15, 1904.) On the Electrodynamics of Moving Systems II. By Prof. Emil Cohn in Straßburg i. E. Submitted by Mr. Warburg. Some years ago, I gave an extension of Maxwell's equations for moving bodies, which was in agreement with all then known facts.[1] This approach was found in an inductive way, and has proven itself with respect to subsequent experiences. The decisive test concerns the case of uniform translational velocity; the special case of the equations which corresponds to this case seems unquestionable to me. Up to now, the general equations were not subject to an equally strict experimental test. In the course of their development, I was led by the principle of "scientific economy";[2] nevertheless, it may be that there can be found a simpler way which is in agreement with facts as well. Meanwhile, it may be allowed for me to develop here the characteristic features of that kind of electrodynamics which follows from my equations. The resulting theorems are in full factual agreement with the content of my older treatise, as far as the properties of the electromagnetic field are concerned; with respect to mechanical forces, they partly deviate from it. That the determination of these forces was arbitrary to some extent was particularly emphasized by me at that time. § 1. The fundamental equations. They read: $-\int\limits _{\circ}\mathsf{E}_{s}ds=\frac{d}{dt}\int\mathfrak{M}_{N}dS$ I [ 1405 ] $\int\limits _{\circ}\mathsf{M}_{s}ds=\frac{d}{dt}\int\mathfrak{E}_{N}dS+\int\Lambda_{N}dS$ II $\begin{cases} \mathfrak{E}=\epsilon\mathsf{E}-[u\mathsf{M}]\\ \\\mathfrak{M}=\mu\mathsf{M}+[u\mathsf{E}]\\ \\\Lambda=\lambda(\mathsf{E-K})\end{cases}$ III $\Sigma=[\mathsf{EM}]$ IV $w=\frac{1}{2}(\mathsf{E}\cdot\mathfrak{E})+\frac{1}{2}(\mathsf{M}\cdot\mathfrak{M})+(u\cdot\Sigma)$ V Here, E and M denote the two field intensities; $\epsilon, \mu, \lambda$ scalar constants, K a constant vector; $u$ the velocity of matter; $\Sigma$ the radiation relative to matter; $w$ the electromagnetic energy in unit volume; $S$ a surface which continuously goes through the same material particle, $s$ its boundary curve, $N$ the perpendicular of $dS$. In vacuum, the values apply: $u = 0,\ \lambda = 0,\ \epsilon = 1,\ \mu = 1.$ By that it is said, that the speed of light in vacuum is chosen as unity. In the preceding, the entirety of our presuppositions is contained. The equations claim applicability for arbitrary velocities $u$ as far as Maxwell's equations apply for $u=0$. From our equations it follows, for example, that in case $u=0$, the radiation propagates with the same velocity in all directions. Thus they presuppose a reference system to which this actually applies. That such a reference system exists with respect to the fixed stars is without question. To what extent it is defined by our equations shall be examined later. Equations I and II, related to the unit of area (supposed as infinitely small), shall be written: $-\mathsf{P(E)}=\frac{\overline{d\mathfrak{M}}}{dt}$ I' $\mathsf{P(M)}=\frac{\overline{d\mathfrak{E}}}{dt}+\Lambda$.
II' Consequently, the meaning of the newly introduced symbols is: $\frac{\overline{dA}}{dt}=\frac{dA}{dt}+\Gamma(u)A-(A\cdot\nabla)u$ (1) $=\frac{\partial A}{\partial t}+\Gamma(A)u-\mathsf{P}[uA]$ (2) [ 1406 ] where $\tfrac{d}{dt}\left(\tfrac{\partial}{\partial t}\right)$ denotes the differentiation with respect to a fixed material point (space point). Furthermore P is rotation, $\Gamma$ is divergence, $\nabla$ is gradient, $(A\cdot\nabla)=A_{x}\cdot\frac{\partial}{\partial x}+A_{y}\cdot\frac{\partial}{\partial y}+A_{z}\cdot\frac{\partial}{\partial z}$. § 2. Transformation to a moving coordinate system and local time. We decompose the velocities $u$ into a common translational velocity $p$ (which is constant with respect to time) of the whole system, and the "relative" velocity $v$: $u=p+v,\ p=const.,$ (3) and we denote a differentiation with respect to time, in relation to a relatively stationary point, by $\tfrac{\delta}{\delta t}$: $\frac{\delta}{\delta t}=\frac{\partial}{\partial t}+(p\cdot\nabla)=\frac{d}{dt}-(v\cdot\nabla)$ (4) Then it is given $\frac{\overline{dA}}{dt}=\frac{\delta A}{\delta t}+\Gamma(v)A-(A\cdot\nabla)v+(v\cdot\nabla)A$ $=\frac{\delta A}{\delta t}+\Gamma(A)v-\mathsf{P}[vA]$. (5) Simultaneously, instead of the "general time" $t$ we introduce the "local time" $t'$. It is defined at a point whose radius vector is $r$, by: $t'=t-(p\cdot r)$ (6) Differentiations with respect to relative coordinates, in which local time is assumed as the fourth independent variable, shall be denoted by an upper index prime. Then it is $\frac{\delta}{\delta t}=\frac{\delta}{\delta t'}$ $\mathsf{P}=\mathsf{P}'-\left[p\cdot\frac{\delta}{\delta t'}\right]$ $\Gamma=\Gamma'-\left(p\cdot\frac{\delta}{\delta t'}\right)$ (7) Eventually, we decompose $\mathfrak{E}$ and $\mathfrak{M}$: $\begin{matrix}\mathfrak{E}=\mathfrak{E}_{0}-[p\mathsf{M}]\\ \\\mathfrak{M}=\mathfrak{M}_{0}+[p\mathsf{E}]\end{matrix}$ (8)
Only by those equations, however, the field E, M and thus also the radiation relative to matter $\Sigma$ is determined, as far as certain quantities – "electric and magnetic quantities" – which are invariable according to those very equations, are required.[3] Thus for a while we postpone the consideration of these processes in which electromagnetic energy goes over into other forms of energy, especially mechanical work,[4] and then we can say: [ 1408 ] the electrodynamics of the moving system appears (with respect to a co-moving observer) to be influenced by motion only in so far, as the observer is able to distinguish local time $t'$ from general time $t$. The difference of both quantities is, according to (6), a fraction of the light propagation-time corresponding to vector $r$, which in the worst case ($r$ parallel to $p$) is equal to the ratio of the translational velocity to the speed of light. Let us apply this to the motion of Earth: Everywhere, where the propagation of radiation is not the object of measurement, we define identical moments of time at different points of Earth's surface, by treating the propagation of light as timeless. In optics, however, we define these identical moments of time by assuming, that the propagation takes place in spherical waves for every relatively resting and isotropic medium.[5] This means: the "time" which actually serves us for the representation of terrestrial process, is the "local time" $t'$, for which the equations I'b to IVb hold, – not the "general time" $t$. What is required to experimentally distinguish $t'$ from $t$, can be shown by a proposal which recently was made by W. Wien "for the decision of the question, as to whether the luminiferous aether is moving with Earth or not."[6] Through the gaps of two gears whose common axis has the direction of Earth's motion, light of the same intensity shall be sent through in both directions. Then both gears shall be set in rotation with equal angular velocity. Wien concludes: If the aether is at rest, then the propagation of light is different for the two paths; – the arriving light hits the gear at the end of the path in different locations upon both stations; – the intensities must have become different. Now it is clear that for this experiment, not the same angular velocity is required as Wien thinks, but equal collective rotation[7] from the moment of observation at rest until the moment of observation at rotation. If the two collective rotations are equal for equal "general times" t of both stations, then one obtains a difference in brightness in the case of "dragged aether" [ 1409 ] (but no difference in the case of "stationary aether"). Whether one or the other kind of rotation actually has taken place, there can be (due to logical reasons) no optical, or more general, no electric means of testing. A material (mechanical or acoustical) protection or control is rather required. The scheme would be as follows: the two gears are located at the same shaft, which will be driven in the middle; then we must guarantee the phase equality of both ends up to 1⁄10000 of the propagation time of light, which corresponds to the length of the axis.[8] What we have to understand under "electric and magnetic quantities", still needs a clarification. Those aren't concepts which (besides our equations or independently from them) must be introduced into electrodynamics. The are rather given from these equations as "integration constants". 
Equation I says, that for every closed surface $S$ which goes through invariable material particles, the surface integral of $\mathfrak{M}$ is a time-independent quantity; we will call this quantity the magnetic quantity within $S$. Equation II says the same with respect to the surface integral of $\mathfrak{E}$ for a surface extended in isolators, and it connects for an arbitrary surface the temporal change of this magnitude with the electric current through $S$ in the same way, as the content of a fluid is connected with the current of a fluid. We call this magnitude the quantity of electricity within $S$. In the definitions of both magnitudes, it is implicitly presupposed, that we can know the identical moments of time in the various points of the closed surface. From the preceding it follows: When we define identical times at different places, so that the propagation of light becomes uniform with respect to the fixed stars (time $t$), then electricity and magnetism express themselves as surface integrals of $\mathfrak{E}$ and $\mathfrak{M}$. When we define identical times at different places, so that the propagation of light becomes uniform with respect to Earth (time $t'$), then they express themselves as surface integrals of $\mathsf{E}\mathsf{E}$ and $\mu\mathsf{M}$. From the equations, into which I', II', III pass for $u=p=const.$, it is given by means of (7): $\begin{matrix}\Gamma(\mathfrak{M})=\Gamma'(\mu\mathsf{M})\\ \\\Gamma(\mathfrak{E})=\Gamma'(\mathsf{E}\mathsf{E})+(p\cdot\Lambda)\end{matrix}$. (10) [ 1410 ] If we at first presuppose, that the field is static, then firstly it is $\tfrac{d}{dt}=0$ and consequently $\Gamma=\Gamma'$, and secondly $\Lambda=0$, thus $\Gamma(\mathfrak{E})=\Gamma(\mathsf{E}\boldsymbol{E})$ and thus for an arbitrarily closed surface: ${N}dS=\int\limits _{t'=const.}\mathsf{E}\mathsf{E}_{N}dS$ If $S$ is extended in isolators, then the first integral is generally of the special value $t$, and the last integral is independent of the special value of $t'$. The once existing equality of both expressions thus remains during all changes of the field; that is: $\Gamma(\mathfrak{E})=\Gamma'(\mathsf{E}\mathsf{E}),$ in the isolator (11) $\int\limits _{t=const.}\mathfrak{E}_{N}dS=\int\limits _{t'=const.}\mathsf{E}\mathsf{E}_{N}dS$ for every conductor surface. (12) (10), (11) and (12) say, that the magnitudes which are to be denoted as magnetic density ($\rho_0$), electric density in the isolator ($\rho_e$) and total quantity of electricity of a conductor ($e$), have in general the same value in both representations. Thus the result is: identical data $\rho_m$, $\rho_e$, $e$ determine identical fields E, M, independent of the values of $p$. Everything said in this paragraph applies to media, which are in relative rest with respect to a reference system that itself has a uniform translational velocity. By assuming this references system as being fixed in Earth, we neglect the rotation of axis. Theoretically spoken, the requirement of uniform propagation of light in all directions relative to earth cannot be satisfied by any "local time", because the velocity of diurnal motion has no potential. Namely, this has the consequence that the change which the propagation time of light suffers due to motion, depends on the light path, not on its start- and endpoint. 
However, if one considers that the diurnal motion (per meter of distance from the axis) varies by less than 1⁄100 cm⁄sec, then it becomes clear that no interference experiment can detect these local velocity differences. (One thinks of an interferometer whose two light paths are the halves of a square of one meter side length; let one side-couple be parallel to the direction of motion; Na-light shall be used. A rotation of the instrument by 180° would shift the interference pattern by a millionth of a fringe width.) Also, that [ 1411 ] the direction of velocity is changing with time is without observable influence. The proof shall be omitted here. Thus, we are practically allowed to consider the diurnal motion as a pure translation as well, which, at every place of Earth's surface and in every moment, is superimposed on the motion along the annual path. § 4. Relative motions. Now we consider the general case of relative motions, but we presuppose that the product of the common translational velocity and the relative velocity is a vanishing magnitude with respect to the square of the speed of light. This condition, which is formulated in (9), has led us to the equations I'a to IVa. They are formally in agreement with I' to IV. The difference only lies in the fact that in place of the "absolutely resting" spatial reference system, we use a "relatively resting" one, and in place of the "general time" we use "local time". Thus this means, when applied to Earth: as far as the product of the Earth velocity assumed by us and the actually given relative velocity with respect to Earth can be neglected with respect to the square of the speed of light, it is irrelevant whether we relate our equations to a coordinate system at rest with respect to Earth and to "terrestrial time" $t'$, – or to another arbitrary coordinate system, which has the uniform velocity ($-p$) against Earth, together with a time defined by (6). What was stated as a condition here is actually valid for all observations, when we understand under $p$ the velocity of Earth against the fixed stars (ca. $10^{-4}$). We can distinguish two fields of application: 1. Astrophysics. Here, it is either $v=-p$ (fixed stars), or it is at most of the order of magnitude $p$. Thus, the neglectable magnitudes are at most of order $10^{-8}$, while the measurement of the aberration angle and the corresponding change of wave lengths do not nearly reach this precision. 2. Motions of extended bodies at Earth's surface. Here, $v$ remains small compared with $p$, and $p\cdot v$ is negligible for any observation. Thus everything strictly derived in § 3 for relatively resting systems applies with practically sufficient precision also to relatively moving systems. Summarized: the (thus far) known facts of electrodynamics give us the choice for the representation: using the stationary Earth and [ 1412 ] terrestrial time, or the stationary celestial fixed stars and celestial time. That our equations, interpreted in the one or the other form, correctly represent the influence of relative motions was demonstrated partly by me l.c., and partly by others. A summary and a comparison with other theories is planned by me to be given soon. § 5. Energy conservation and mechanical forces. To obtain the energy equation, we decompose the magnitude $\tfrac{\overline{dA}}{dt}$ of equation (1) in two parts: $\frac{\overline{dA}}{dt}=\frac{d'A}{dt}+A_{def}$.
(13) Here, $\tfrac{d'A}{dt}$ shall denote the change of vector $A$ relative to moving matter, or in other words, the change that $A$ suffers in a fixed space-point caused by translation and by rotation. This would be the total value of $\tfrac{\overline{dA}}{dt}$ when matter is not deformed. $A_{def}$, therefore, is the contribution that stems from the deformation. In the notation: $\frac{d'A}{dt}=\frac{dA}{dt}+\frac{1}{2}[A\cdot\mathsf{P}(u)]$ (14) $(A_{def})_{x}=A_{x}\left(\frac{\partial u_{y}}{\partial y}+\frac{\partial u_{z}}{\partial z}\right)-A_{y}\frac{1}{2}\left(\frac{\partial u_{y}}{\partial x}+\frac{\partial u_{x}}{\partial y}\right)-A_{z}\frac{1}{2}\left(\frac{\partial u_{z}}{\partial x}+\frac{\partial u_{x}}{\partial z}\right)$; etc. (15) or $A_{def}=A\cdot\Gamma(u)+A_{\delta}\,$ (16) where $(A_{\delta})_{x}=-A_{x}\frac{\partial u_{x}}{\partial x}-A_{y}\frac{1}{2}\left(\frac{\partial u_{y}}{\partial x}+\frac{\partial u_{x}}{\partial y}\right)-A_{z}\frac{1}{2}\left(\frac{\partial u_{z}}{\partial x}+\frac{\partial u_{x}}{\partial z}\right)$; etc. (17) One can easily convince oneself that $\tfrac{\overline{dA}}{dt}$, calculated by (13), (14), (15), gives the value in (1). From the defining equations (14) it follows: $\left(\frac{d'A}{dt}\cdot B\right)+\left(A\cdot\frac{d'B}{dt}\right)=\frac{d}{dt}(A\cdot B)$ (18) $\left[\frac{d'A}{dt}\cdot B\right]+\left[A\cdot\frac{d'B}{dt}\right]=\frac{d'}{dt}[A\cdot B]$ (19) [ 1413 ] Now, we multiply I' by M, II' by E, and sum up; then it follows: $-\Gamma(\Sigma)-(\Lambda\cdot\mathsf{E})=\left(\mathsf{E}\cdot\frac{\overline{d\mathfrak{E}}}{dt}\right)+\left(\mathsf{M}\cdot\frac{\overline{d\mathfrak{M}}}{dt}\right)$, or by (13): $=\left(\mathsf{E}\cdot\frac{d'\mathfrak{E}}{dt}\right)+\left(\mathsf{M}\cdot\frac{d'\mathfrak{M}}{dt}\right)+(\mathsf{E}\cdot\mathfrak{E}_{def})+(\mathsf{M}\cdot\mathfrak{M}_{def})$. (20) We neglect the change that will be suffered by $\epsilon$ and $\mu$ due to the deformation, thus we put $\tfrac{d\epsilon}{dt}=\tfrac{d\mu}{dt}=0$; then by III it becomes: $\left(\mathsf{E}\cdot\frac{d'\mathfrak{E}}{dt}\right)+\left(\mathsf{M}\cdot\frac{d'\mathfrak{M}}{dt}\right)=\frac{d}{dt}\left\{ \frac{1}{2}(\epsilon\mathsf{E}^{2}+\mu\mathsf{M}^{2})\right\} +2\left(\Sigma\cdot\frac{d'u}{dt}\right)+\left(\frac{d'\Sigma}{dt}\cdot u\right)$, or by V: $=\frac{dw}{dt}-\left(u\cdot\frac{d'\Sigma}{dt}\right)$. (21) Furthermore it is by (16): $(\mathsf{E}\cdot\mathfrak{E}_{def})+(\mathsf{M}\cdot\mathfrak{M}_{def})=\left((\mathsf{E}\cdot\mathfrak{E})+(\mathsf{M}\cdot\mathfrak{M})\right)\Gamma(u)+(\mathsf{E}\cdot\mathfrak{E}_{\delta})+(\mathsf{M}\cdot\mathfrak{M}_{\delta})$ $=w\Gamma(u)+\frac{1}{2}(\epsilon\mathsf{E}^{2}+\mu\mathsf{M}^{2})\Gamma(u)+\epsilon(\mathsf{E}\cdot\mathsf{E}_{\delta})$ $+\mu(\mathsf{M}\cdot\mathsf{M}_{\delta})-(\mathsf{E}\cdot[u\mathsf{M}]_{\delta})+(\mathsf{M}\cdot[u\mathsf{E}]_{\delta})$. (22) Eventually, it is given from (17) by arranging with respect to the components of $u$: $-(\mathsf{E}\cdot[u\mathsf{M}]_{\delta})+(\mathsf{M}\cdot[u\mathsf{E}]_{\delta})=-(u\cdot\Sigma_{def})=-\left(u\cdot\frac{\overline{d\Sigma}}{dt}\right)+\left(u\cdot\frac{d'\Sigma}{dt}\right)$.
(23) We include (21), (22), (23) in (20), and denote by $\tau$ a material element of volume, so that $\frac{dw}{dt}+w\Gamma(u)=\frac{1}{\tau}\frac{d}{dt}(w\cdot\tau)$ Then it follows: $-\frac{1}{\tau}\frac{d(w\tau)}{dt}=\Gamma(\Sigma)+(\Lambda\cdot\mathsf{E})+A$, (24) where $A=-\left(u\cdot\frac{\overline{d\Sigma}}{dt}\right)+\frac{1}{2}\left(\epsilon\mathsf{E}^{2}+\mu\mathsf{M}^{2}\right)\Gamma(u)-S_{i,k}\left\{ \left(\epsilon\mathsf{E}_{i}\mathsf{E}_{k}+\mu\mathsf{M}_{i}\mathsf{M}_{k}\right)\frac{\partial u_{i}}{\partial k}\right\} \qquad\left.{i\atop k}\right\} =x,y,z.$ (25) In (24), the left-hand side is the decrease of electromagnetic energy, the first member of the right-hand side the radiation, the second member the chemical-thermal energy spent; $A$ is thus the work spent (always calculated per unit of time and of material volume). [ 1414 ] The forces which exert this work consist of the translational force $f_{1}=-\frac{\overline{d\Sigma}}{dt}$ (26) and of a system of deformation forces, which are entirely in agreement with Maxwell's tensions. They can be decomposed into a universal tension $q=-\frac{1}{2}(\epsilon\mathsf{E}^{2}+\mu\mathsf{M}^{2})$ (27a) besides the tensions $q_{ik}=+(\epsilon\mathsf{E}_{i}\mathsf{E}_{k}+\mu\mathsf{M}_{i}\mathsf{M}_{k})$. (27b) The motions of the material particles are thus determined by the equivalent system of translational forces $f$, whose components are: $f_{x}=f_{1x}+\frac{\partial q}{\partial x}+\frac{\partial q_{xx}}{\partial x}+\frac{\partial q_{xy}}{\partial y}+\frac{\partial q_{xz}}{\partial z}$; etc.[9] (28) If one substitutes the values of (26) and (27), then one obtains $\begin{matrix}f=-\frac{d\Sigma}{dt}-[\epsilon\mathsf{E}\cdot\mathsf{P(E)}]+\Gamma(\epsilon\mathsf{E})\cdot\mathsf{E}-\frac{1}{2}\mathsf{E}^{2}\cdot\nabla\epsilon\\ \\-[\mu\mathsf{M}\cdot\mathsf{P(M)}]+\Gamma(\mu\mathsf{M})\cdot\mathsf{M}-\frac{1}{2}\mathsf{M}^{2}\cdot\nabla\mu\end{matrix}$ (29) This is the most general expression for the forces. We notice at first that it applies for vacuum: $u=0$, $\lambda=0$, $\epsilon=\mu=1$ and thus $\mathfrak{E}=\mathsf{E},\ \mathfrak{M}=\mathsf{M},\ \Lambda=0$; furthermore $\Gamma\mathsf{E}=\Gamma\mathsf{M}=0$. Thus the last four terms in (29) individually vanish; the three first ones, however, give zero according to I' and II'. Force $f$ is thus identical with zero at all space locations where a material substrate of forces is unknown to us. This theorem is a logical postulate, as long as one doesn't ad hoc substitute (into vacuum) a medium with properties of matter. On the other side, this follows from our equations only by means of presupposing $u=0$. Thus one can conceptually define the reference system, for which the fundamental equations are valid, so that it is at rest with respect to empty space. By that, however, not the least is gained for the representation of experience. [ 1415 ] Since we measure electromagnetic forces with respect to bodies which are at rest with respect to Earth or at least moving slowly, the value which $f$ assumes for $u=p=const.$ is most important. We obtain it in the most vivid form by again introducing local time $t'$ by means of (6). Then, it follows from (29) by means of (7), or more simply by means of: $\frac{\partial}{\partial x}=\frac{\partial}{\partial x'}-p_{x}\frac{d}{dt'}$ etc.
and $\frac{\bar{d}}{dt}=\frac{d}{dt}=\frac{d}{dt'}$ directly from (28): $f_{x}=-\frac{d\Sigma_{x}}{dt'}-[\epsilon\mathsf{E}\cdot\mathsf{P'(E)}]_{x}+\Gamma'(\epsilon\mathsf{E})\cdot\mathsf{E}_{x}-\frac{1}{2}\mathsf{E}^{2}\frac{\partial\epsilon}{\partial x'}$ $-[\mu\mathsf{M\cdot P'(M)}]_{x}+\Gamma'(\mu\mathsf{M})\cdot\mathsf{M}_{x}-\frac{1}{2}\mathsf{M}^{2}\frac{\partial\mu}{\partial x'}$ $-p_{x}\frac{d}{dt'}(q+q_{xx})-p_{y}\frac{d}{dt'}(q_{xy})-p_{z}\frac{d}{dt'}(q_{xz})$. Here, it is by I'b to IIIb: $-\mathsf{P'(E)}=\mu\frac{d\mathsf{M}}{dt'}$ $\mathsf{P'(M)}=\epsilon\frac{d\mathsf{E}}{dt'}+\Lambda$; thus it follows: $\begin{matrix}f_{x}=f_{0x}+\frac{d}{dt'}\left\{ (\epsilon\mu-1)\Sigma_{x}+p_{x}\left(\frac{1}{2}\left(\epsilon\mathsf{E}^{2}+\mu\mathsf{M}^{2}\right)-\left(\epsilon\mathsf{E}_{x}^{2}+\mu\mathsf{M}_{x}^{2}\right)\right)\right.\\ \\\left.-p_{y}(\epsilon\mathsf{E}_{x}\mathsf{E}_{y}+\mu\mathsf{M}_{x}\mathsf{M}_{y})-p_{z}(\epsilon\mathsf{E}_{x}\mathsf{E}_{z}+\mu\mathsf{M}_{x}\mathsf{M}_{z})\right\} \end{matrix}$ (30) where $f_{0}=\Gamma'(\epsilon\mathsf{E})\cdot\mathsf{E}-\frac{1}{2}\mathsf{E}^{2}\cdot\nabla'\epsilon+\Gamma'(\mu\mathsf{M})\cdot\mathsf{M}-\frac{1}{2}\mathsf{M}^{2}\cdot\nabla'\mu+[\Lambda\cdot\mu\mathsf{M}]$ (31) The value in (31), considered as a function of relative coordinates and local time, doesn't explicitly depend on $p$; but also not implicitly, since according to § 3, also E, M and $\Lambda$ are functions (independent of $p$) of the same four variables. With respect to stationary states it becomes $f=f_0$. Furthermore, it is irrelevant for the representation of these states whether we use local time or general time. Thus the result follows: the forces of the stationary field in relatively resting bodies are in all rigor independent of Earth's motion. It additionally gives the amount of these forces in the well-known form, which is the expression of all established experience. [ 1416 ] In the case of variable states, a number of terms are added to $f_0$, which are all represented as complete derivatives with respect to time. This circumstance excludes the possibility that momentary actions of periodic processes are arbitrarily summed up. On the other hand, every single term in the {}-brace is a very small quantity: $\Sigma$ as well as the quantities in the () are of order $w$; however, $\Sigma$ is connected with a factor that is very small for all easily movable bodies (gases), and the () have as factors the components of $p=10^{-4}$, if we assume that the reference system of our equations is fixed with respect to the fixed stars. What is added to $f_0$ is thus imperceptible; however, $f_0$ itself is independent of $p$, as soon as one chooses the terrestrial local time as fourth variable. Thus, also the consideration of mechanical forces verifies our earlier result: no experience prevents us from relating our fundamental equations either to a spatial reference system that is at rest in Earth, or to a system that has an arbitrary uniform velocity whose order is the relative velocity of Earth - fixed stars. We only have to adapt the temporal reference system to the freely chosen spatial system. 1. Göttinger Nachrichten 1901, Heft 1; Ann. der Physik 7, p. 29, 1902. 2. See in this respect, below p. 1409 f. 3. See below § 5. 4. The use of this definition presupposes, that bodies exist which can be rotated under all circumstances without changing their dimensions. The presupposition is the basis of our whole geometry.
It is nevertheless not superfluous to mention it; since the theory of electrons negates the existence of such bodies. 5. Phys. Zeitschr. 5, p. 585, 1904. 6. "up to whole multiples of the angular distance of the gear", would be a valid but unessential generalization. 7. Also this procedure of course has only a meaning, as soon as we can be sure that the laws of mechanics for the "general time" are strictly correct. 8. As regards the recent developments, see Lorentz, Math. Enc. V, p. 251 ff. However, it is to be noticed, that Lorentz has neglected terms with $u^2$ throughout.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 174, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.941665530204773, "perplexity_flag": "middle"}
http://nrich.maths.org/4835/note?nomenu=1
## 'Squares in Rectangles' printed from http://nrich.maths.org/ ### Why do this problem? This problem is a context for systematic number work, geometrical thinking and problem solving. It is an excellent example of a situation where the thinking involved in analysing one rectangle can be applied directly to other rectangles. These transferable insights about the structure of the problem can then be expressed as algebraic statements about all rectangles. ### Possible approach As students enter, display the $20$ and $50$ diagram, asking how many squares there are. It may be appropriate to give the answers, then ask pairs to come and explain - one to talk, and one to write/draw/record on the board. Well laid out number work will help with the algebra later, so the students' boardwork should prompt more suggestions about how to record working for this problem. Present the problem, and give students an opportunity to share first ideas. Several approaches (working backwards, trial and error, building up from smaller ones, systematic searching) might be suggested, and advantages/disadvantages discussed. Encourage students to compare results with peers, and to resolve discrepancies by mathematical argument, rather than relying on the teacher's spreadsheet (see below). It might be useful to gather the results of the students as they work, to help them to see patterns and encourage them to conjecture the results for other rectangles. With a group who have not moved towards algebra, a final plenary could ask for observations about the rectangles, and discuss how each can be expressed algebraically. ### Key questions Is there an obvious rectangle which contains $100$ squares? How might you organise a search for rectangles with exactly $100$ squares? Is what you're describing specific to this rectangle? How does it generalise? ### Possible extension Prove that you have all the rectangles. Can you find an algebraic rule for the number of squares contained in an '$m \times m$' square? An '$m \times n$' rectangle? For a given area, which rectangle gives the largest total number of squares? Can you show this in general? If the original question didn't say $100$, what other numbers (under $100$) would give a problem with non-trivial solutions? Is there a pattern to these? Set up a spreadsheet to calculate numerical solutions to these problems. ### Possible support Struggling students could shade squares on worksheets (2nd sheet) with lots of small copies of the rectangles. Encourage them to work systematically, in order to observe the structure, and then make conjectures about the numbers of the next size of square, or in the next rectangle. Once they, either individually or as a group, have worked out the full counting they could be asked to do this activity: In a pair, each guess a rectangle which might have $40$ squares. Swap rectangles and each work out the total. Whose guess was closest to $40$? Between you, can you come up with a better guess? This spreadsheet contains a calculator to work out the totals and also lists all of the possibilities; you might find this helpful for checking students' work quickly.
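For teachers who prefer code to a spreadsheet, here is a short sketch of the underlying count (an $m \times n$ grid contains $\sum_{k}(m-k+1)(n-k+1)$ axis-aligned squares) and of the search for rectangles with exactly $100$ squares; the search bound of $100$ is an arbitrary choice of mine:

```python
def squares_in_rectangle(m, n):
    """Total number of axis-aligned squares in an m x n grid of unit cells."""
    return sum((m - k + 1) * (n - k + 1) for k in range(1, min(m, n) + 1))

# All m x n rectangles (m <= n, both at most 100) containing exactly 100 squares.
hits = [(m, n) for m in range(1, 101) for n in range(m, 101)
        if squares_in_rectangle(m, n) == 100]
print(hits)
```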
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9292147159576416, "perplexity_flag": "middle"}
http://mathhelpforum.com/number-theory/41324-help-algebraic-number-theory.html
# Thread: 1. ## Help with Algebraic Number Theory Hello I'm currently working through a text titled 'Introductory Algebraic Number Theory' by Saban Alaca and Kenneth S. Williams, and I'm having trouble with some of the practice exercises throughout the text. The first problem I've been trying to figure out is the following: Consider the integral domain A = Z + Z((1 + sqrt(m))/2) where m ≡ 1 (mod 4) and m < -3. Prove that the set of units U(Z + Z((1 + sqrt(m))/2)) = {+1, -1}. Can anyone help me out with this? 2. Originally Posted by jamix Consider the integral domain A = Z + Z((1 + sqrt(m))/2) where m ≡ 1 (mod 4) and m < -3. Prove that the set of units U(Z + Z((1 + sqrt(m))/2)) = {+1, -1}. I did not work on this problem, but did you try doing it by definition? Meaning $\alpha$ is a unit iff there is $\beta$ such that $\alpha \beta = 1$. Now if $\alpha \in A$ then $\alpha = a + b\cdot \tfrac{1+\sqrt{m}}{2}$. Thus, we are saying we can find $\beta = c + d \cdot \tfrac{1+\sqrt{m}}{2}$ so that $\left( a + b\cdot \tfrac{1+\sqrt{m}}{2} \right) \cdot \left( c + d\cdot \tfrac{1+\sqrt{m}}{2} \right) = 1$. Now multiply out, compare coefficients with the RHS, and argue that $b=d=0$ and $ac=1$. Thus, $a=c=1$ or $a=c=-1$, which tells you the units are $\pm 1$. 3. Originally Posted by jamix Hello I'm currently working through a text titled 'Introductory Algebraic Number Theory' by Saban Alaca and Kenneth S. Williams, and I'm having trouble with some of the practice exercises throughout the text. The first problem I've been trying to figure out is the following: Consider the integral domain A = Z + Z((1 + sqrt(m))/2) where m ≡ 1 (mod 4) and m < -3. Prove that the set of units U(Z + Z((1 + sqrt(m))/2)) = {+1, -1}. Can anyone help me out with this? let $x=a+\frac{b}{2}(1 + \sqrt{m}) \in A.$ then: $N(x)=x \bar{x}=\left(a + \frac{b}{2} + \frac{b\sqrt{m}}{2} \right)\left(a + \frac{b}{2} - \frac{b\sqrt{m}}{2}\right)=\left(a + \frac{b}{2}\right)^2 - \frac{b^2m}{4}.$ now $x$ is a unit iff $N(x)=1,$ i.e. iff: $\left(a + \frac{b}{2}\right)^2-\frac{b^2m}{4}=1.$ call this (1). so $1+\frac{b^2m}{4} = \left(a+\frac{b}{2} \right)^2 \geq 0.$ hence $b^2m \geq -4.$ call this (2). now we're given that $m < -3.$ suppose that $b \neq 0.$ then $b^2 \geq 1,$ because $b$ is an integer. thus $b^2 m < - 3.$ so by (2) we'll have: $-4 \leq b^2m < -3,$ which is possible only when $m=-4, \ b^2 = 1.$ but since $m \equiv 1 \mod 4,$ we cannot have $m=-4.$ so our assumption that $b \neq 0$ is false and hence $b = 0.$ thus by (1) we will have $a^2=1$, which gives us: $x=a=\pm 1. \ \ \ \square$ 4. Thanks NonCommAlg I had to do the proof of the theorem you presented that N(x)=1 iff x is a unit. Is this theorem true for all integral domains? More specifically, are there integral domains out there for which N(x) = ±1 iff x is a unit? 5. Originally Posted by jamix Thanks NonCommAlg I had to do the proof of the theorem you presented that N(x)=1 iff x is a unit. Is this theorem true for all integral domains? More specifically, are there integral domains out there for which N(x) = ±1 iff x is a unit? If $N$ is such a function so that $N(ab) = N(a)N(b)$ then $N(x) = 1$ if $x$ is a unit. To prove this we first show $N(1) = 1$. Since $1\cdot 1 = 1$ it means $N(1)N(1) = N(1)$ and so $N(1) = 1$ (since it is non-zero). Now if $a$ is a unit it means $ab=1$ for some $b$. Thus, $N(a)N(b) = 1\implies N(a) = 1$. I am not sure about the converse direction, I would imagine no.
However, I cannot think of a good counterexample because all the known integral domains I worked with (integers, Gaussian, Eisenstein) have this property. 6. ThePerfectHacker You said that N(a)N(b) = 1 implies that N(a) = 1. What if N(a) = N(b) = -1? With regards to the converse (ie that x is a unit if N(x) = 1), this should follow immediately since the element y such that xy=1 is just the conjugate of x. 7. Originally Posted by jamix ThePerfectHacker You said that N(a)N(b) = 1 implies that N(a) = 1. What if N(a) = N(b) = -1? With regards to the converse (ie that x is a unit if N(x) = 1), this should follow immediately since the element y such that xy=1 is just the conjugate of x. It cannot happen that N(a) = -1, because the norms here are non-negative. I think I have an answer to your question about converses. It is necessary for us to define what an Euclidean domain is; since you are studying algebraic number theory you will definitely run into these. A Euclidean domain is an integral domain $D$ so that there exists a (non-negative) function $N: D^{\times} \to \mathbb{N}$ which satisfies two conditions. The first condition is the "division algorithm", that is, given $a,b\in D$ with $b\not = 0$ we can divide $a$ by $b$ 'with a remainder' i.e. write $a = qb + r$, where $r=0$ or otherwise $N(r) < N(b)$. The second condition is that for all $a,b\in D^{\times}$ we have $N(a) \leq N(ab)$. The function $N$ is called an Euclidean norm. Just a warning: an Euclidean domain does not need to satisfy $N(ab) = N(a)N(b)$. It turns out that in the classic examples I will give below this Euclidean norm is multiplicative. The first example is the ordinary integers. Let $D = \mathbb{Z}$; this is an integral domain. We define $N(x) = |x|$, and it follows that this is an Euclidean domain; also, $N(xy) = N(x)N(y)$. The second example is the Gaussian integers, $D = \mathbb{Z}[i] = \{a+bi \mid a,b\in \mathbb{Z} \}$. We define $N(\alpha) = |\alpha|^2 = a^2+b^2$ if $\alpha = a+bi$. Furthermore, it is multiplicative. It takes work showing that it satisfies the division algorithm; the way we show this is: let $\beta \not = 0$ and $\alpha$ be arbitrary, divide $\alpha / \beta = q+ir$ in $\mathbb{Q}(i)$ and pick $n,m$ so that $|n-q|,|r-m|\leq \tfrac{1}{2}$; now argue that this gives the quotient and remainder you are looking for. Another example is the Eisenstein integers, $\mathbb{Z}[\omega] = \{a+b\omega \mid a,b\in \mathbb{Z}\}$ where $\omega = e^{2\pi i/3}$. Here we define $N(\alpha) = \alpha \bar \alpha = a^2 - ab+b^2$. And again $N(\alpha \beta) = N(\alpha)N(\beta)$. It satisfies the division algorithm by a similar argument as above. A useful fact here, if you are planning to do some computations with them, is that $\bar \omega = \omega^2 = -1-\omega$. And finally $D = F[x]$, the polynomials over a field, is an Euclidean domain. Define $N(f(x)) = \deg f(x)$ for non-zero polynomials. And this is the standard division algorithm you are used to. Here $N$ is additive rather than multiplicative: $\deg(fg) = \deg f + \deg g$. There are two really useful theorems about Euclidean domains. The first one is that Euclidean domains are principal ideal domains, meaning if $I$ is an ideal then it is principal i.e. $I = \left< a \right>$ for some $a\in D$. The proof is really simple: assume that $I$ is a non-trivial ideal and consider $\{ N(x) \mid x\in I^{\times} \}$; this set is a set of non-negative integers so there is a minimal element.
Argue that if $\alpha \in I$ is what gives the minimal norm then $I = \left< \alpha \right>$ (note we never use condition #2 of Euclidean domains!). The second useful theorem is what you asked: $N(x) = N(1)$ iff $x$ is a unit. The proof is simple also. Note, $N(1) \leq N(1a) = N(a)$ for all $a\in D^{\times}$; thus $N(1)$ is a lower bound for all the norms. While if $a$ is a unit we can write $N(a) \leq N(aa^{-1}) = N(1)$, and so it means $N(a) = N(1)$. For the converse, let $a\in D^{\times}$ be such that $N(a) = N(1)$, then write $1 = qa+r$; it cannot be that $N(r) < N(a)=N(1)$, for $N(1)$ is minimal by the above, so $r=0$, which means $1=qa$ and therefore it is a unit. Note, this is not exactly what you asked; you rather asked whether $N(x) = 1$ iff $x$ is a unit. But as I said, in almost all examples you will see that will be the case. In fact, my number theory book defines Euclidean domain differently from how I posted above, so I was not even familiar with the most general definition. 8. Hi PerfectHacker Thanks for the useful info. Euclidean domains are part of chapter 2, but hopefully I'll get there soon. For now I just want to make sure I can get through all the exercises in this first chapter. Thanks for the help (I'll post more questions in this same thread if I get stuck again).
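As a quick numerical sanity check of the norm argument above, here is a minimal Python sketch (the coefficient search bound of 50 is an arbitrary illustrative choice) that brute-forces the unit equation $\left(a+\frac{b}{2}\right)^2 - \frac{b^2 m}{4} = 1$ for a few valid values of m:

```python
# Brute-force the units of A = Z + Z((1 + sqrt(m))/2) for m = 1 (mod 4), m < -3.
# An element a + b*(1 + sqrt(m))/2 is a unit iff its norm (a + b/2)^2 - (b^2)m/4 equals 1.
from fractions import Fraction

def units(m, bound=50):
    assert m % 4 == 1 and m < -3
    hits = []
    for a in range(-bound, bound + 1):
        for b in range(-bound, bound + 1):
            norm = (Fraction(a) + Fraction(b, 2)) ** 2 - Fraction(b * b * m, 4)
            if norm == 1:
                hits.append((a, b))
    return hits

for m in (-7, -11, -15, -19):
    print(m, units(m))  # each prints [(-1, 0), (1, 0)]: the only units are -1 and +1
```

Exact rational arithmetic via Fraction avoids any floating-point issues in the norm test.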
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 101, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9321718811988831, "perplexity_flag": "head"}
http://www.impan.pl/cgi-bin/dict?pick
## pick Pick the first arc of length 1 in this sequence. We can continue to pick elements of $B$ as above. But there are only finitely many such, a contradiction.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8765395283699036, "perplexity_flag": "head"}
http://www.openwetware.org/index.php?title=User:Pranav_Rathi/Notebook/OT/2010/08/18/CrystaLaser_specifications&diff=prev&oldid=674782
# User:Pranav Rathi/Notebook/OT/2010/08/18/CrystaLaser specifications

## Specifications

We are expecting our laser any time. To get to know the laser better we will investigate a number of things. These specifications are already given by the maker, but we will verify them.

### Polarization

The laser is TM (transverse magnetic), i.e. P or horizontally linearly polarized (in the specimen plane the laser is still TM polarized when looking into the sample plane from the front of the microscope). We investigated this in two ways: 1) by putting a glass interface at Brewster's angle and measuring the reflected and transmitted power; at this angle all the light is transmitted because the laser is P-polarized; 2) by putting in a polarizing beam splitter, which uses birefringence to separate the two polarizations; P is reflected and S is transmitted, and by measuring and comparing the powers the desired polarization is determined. We performed the experiment at 1.8 W, where P is 1.77 W and S is less than .03 W*

### Beam waist at the output window

We used the knife edge method (this method determines the beam waist (not the beam diameter) directly); we measured the 86.5% and 13.5% points of the 1.86 W input power at the laser head (15 mm). This gave a beam waist (Wo) of .82 mm (beam diameter = 1.64 mm).

### Possible power fluctuations if any

The power supply temperature is really critical. The laser starts at roughly 1.8 W, but if the temperature of the power supply is controlled very well it reaches 2 W in a few minutes and stays there. It's really stupid of the manufacturer that they do not have any fans inside, so we put two chopper fans on top of it to keep it cool. If no fans are used then within an hour the power supply reaches above 50 degrees Celsius, and then not only does the laser output fall, but the power supply also turns itself off every few minutes.

### Mode Profile

Higher order modes had been a serious problem in our old laser, which compelled us to buy this one. The success of our experiments depends on a TEM00 profile; the efficiency and stiffness of the trap are functions of the profile. So mode profiling is critical; we want our laser to be in TEM00. I am not going to discuss the technique of mode profiling; it can be learned from these links: [1] [2]. As a result it's confirmed that this laser is TEM00 mode.
Check out the pics: A LabVIEW program was written to show a 3D Gaussian profile; it also contains MatLab code[3].

## Specs by the Manufacturer

All the laser specs and the manual are in the document: [Specs[4]]

## Beam Profile

The original beam waist of the laser is .2 mm, but since we requested the 4x beam expansion option, the resultant beam waist is .84 mm at the output aperture of the laser. As is the nature of a Gaussian beam, it still converges in the far field, so there is a beam waist somewhere in the far field; we do not know where. There are two ways to solve the problem. One is to use the Gaussian beam formula, but for that we need the beam parameters before the expansion optics and information about the expansion optics, which we do not have. So the only way we have is to experimentally measure the beam waist along the z-axis at many points and locate the minimum. Once this is found we put the AOM there. So the experimental data gives us the beam waist and its distance from the laser in the z-direction. We use the scanning knife edge method to measure the beam waist.

### Method

• In this method we used a knife blade on a translation stage with 10 micron accuracy. The blade is moved transverse to the beam and the power of the uneclipsed portion is recorded with a power meter. The cross section of a Gaussian beam is given by: $I(r)=I_0 \exp\left(\frac{-2r^2}{w_L^2}\right)$ where I(r) is the intensity as a function of radius (distance in the transverse direction), I0 is the input intensity at r = 0, and wL is the beam radius. Here the beam radius is defined as the radius where the intensity is reduced to 1/e^2 of the value at r = 0. This can be seen by letting r = wL. (Figures: setup; power profile.) The experimental data is obtained by gradually moving the blade across from point A to B, and recording the power. Without going into the math, the transmitted fractions at the two points can be obtained. For starting point A: $\mathbf{I_A}=I_0(1-e^{-2})=I_0*.865$ For stopping point B: $\mathbf{I_B}=I_0 e^{-2}=I_0*.135$ By measuring the distance between these two blade positions the beam waist can be measured, and the beam diameter is just twice it: $\mathbf{\omega_0}=r_{.135}-r_{.865}$ This is the method we used below.

• The beam waist can also be measured the same way in terms of the power. The power transmitted past a partially occluding knife edge is: $\mathbf {p(r)}=\frac{P_0}{\omega_0} \sqrt{\frac{2}{\pi}} \int\limits_r^\infty \exp\left(-\frac{2t^2}{\omega_0^2}\right) dt$ After integrating for the transmitted power: $\mathbf {p(r)}=\frac{P_0}{2}\operatorname{erfc}\left(2^{1/2}\frac{r}{\omega_0}\right)$ Now the positions where the power is 10% and 90% are measured, and the values of the points are substituted here: $\mathbf{\omega_0}=.783(r_{.1} - r_{.9})$ The difference between the methods is that the first method measures a value a little higher than the second (power) method, but the difference is still under 13%. So either method is GOOD, but the second is more accurate. Here is a link to a LabVIEW code to calculate the beam waist with the knife edge method[5].

#### Data

We measured the beam waist every 12.5, 15 and 25 mm, over a range of 2000 mm from the output aperture of the laser head. The measurement is minimum at 612.5 mm from the laser; thus the beam waist is at 612.5±12.5 mm from the laser, and the beam diameter there is 1.26±.1 mm.

#### Analysis

Here the plot of beam diameter vs. z is presented. Experimental data is presented in blue and the model in red. As can be seen, the model does not fit the data. The experimental beam expands much faster than the model; this shows that the beam waist before the expansion optics must be relatively smaller.
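As an aside, the 10%/90% recipe above is easy to automate. Here is a minimal Python sketch (my own illustration; the knife positions are generated from the erfc model rather than taken from this notebook's data), which also recovers the 0.783 prefactor numerically:

```python
# Knife-edge beam-waist estimate from the 10% and 90% transmitted-power points.
# Model: p(r) = (P0/2) * erfc(sqrt(2) * r / w0), a decreasing function of r.
from math import erfc, sqrt

def waist_from_clip_points(r10, r90):
    """w0 from the knife positions at 10% and 90% transmitted power."""
    return 0.783 * (r10 - r90)

def clip_position(fraction, w0=1.0):
    """Solve (1/2) * erfc(sqrt(2) * r / w0) = fraction for r by bisection."""
    lo, hi = -5.0 * w0, 5.0 * w0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if 0.5 * erfc(sqrt(2.0) * mid / w0) > fraction:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

r90, r10 = clip_position(0.9), clip_position(0.1)
print(1.0 / (r10 - r90))                 # ~0.7803: the prefactor quoted above
print(waist_from_clip_points(r10, r90))  # ~1.0: recovers the true w0
```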
Returning to the analysis: we are also missing an important characterization parameter. Real-world lasers behave differently, in that their beams do not follow the regular Gaussian formula for large propagation lengths (more than the Rayleigh range). That is why we have to introduce a beam propagation factor called M2.

##### Beam propagation factor M2

The beam propagation factor M2 was specifically introduced to enable accurate calculation of the properties of laser beams which depart from the theoretically perfect TEM00 beam. This is important because it is quite literally impossible to construct a real-world laser that achieves this theoretically ideal performance level. M2 is defined as the ratio of a beam's actual divergence to the divergence of an ideal, diffraction limited, Gaussian, TEM00 beam having the same waist size and location. Specifically, beam divergence for an ideal, diffraction limited beam is given by: $\theta_{0}=\frac{\lambda}{\pi w_0}$ This is the theoretical half divergence angle in radians. $\theta_{R}=M^2\frac{\lambda} {\pi w_0}$ so $M^2=\frac{\theta_{R}} {\theta_{0}}$ Where:

• λ is the laser wavelength
• θR is the far field divergence angle of the real beam.
• w0 is the beam waist radius and θ0 is the far field divergence angle of the theoretical beam.
• M2 is the beam propagation factor

This definition of M2 allows us to make a simple change to the optical formulas, taking the M2 factor as a multiplier, to account for the actual beam divergence. This is the reason why M2 is also sometimes referred to as the "times diffraction limit number". More information about M2 is available in these links:[6][7] My experimental beam waist is: wR=.63mm with a theoretical divergence of .69mrad. The data suggests the real far field divergence angle to be 1mrad (half angle) (wR/z at that z). This gives: M2≈1.4 Now using the beam propagation formula with the M2 correction: $w_R(z) = w_0 \, \sqrt{ 1+ {\left( \frac{z M^2}{z_\mathrm{R}} \right)}^2 } \ .$ instead of: $w_R(z) = w_0 \, \sqrt{ 1+ {\left( \frac{z}{z_\mathrm{R}} \right)}^2 } \ .$ The result is obvious: the plot shows the real experimental data against the theoretical fit with and without the M2 correction. M2 is an important parameter, and it is good to know it to complete the characterization of a laser.
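To see how these formulas fit together numerically, here is a minimal Python sketch. The inputs at the bottom are placeholders only (in particular the wavelength is an assumption, since it is not stated on this page), so the printed M2 is illustrative rather than this laser's actual value:

```python
# M^2 from a measured waist radius and a measured far-field half-divergence,
# plus beam propagation w(z) with the M^2 correction.
from math import pi, sqrt

def m_squared(wavelength, w0, theta_measured):
    """M^2 = theta_measured / theta_0, with theta_0 = wavelength / (pi * w0)."""
    return theta_measured * pi * w0 / wavelength

def beam_radius(z, w0, m2, wavelength):
    """Beam radius w(z), with z measured from the waist."""
    z_r = pi * w0 ** 2 / wavelength      # Rayleigh range of the ideal beam
    return w0 * sqrt(1.0 + (z * m2 / z_r) ** 2)

# Placeholder inputs (assumed wavelength; waist and divergence as quoted above):
lam, w0, theta = 1.064e-6, 0.63e-3, 1.0e-3
m2 = m_squared(lam, w0, theta)
print(m2)                                # M^2 for these assumed inputs
print(beam_radius(1.0, w0, m2, lam))     # beam radius 1 m from the waist
```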
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 12, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9207056164741516, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/35332?sort=newest
## Does there exist a potential which realizes this strange quantum mechanical system?

I have done some courses on quantum mechanics and statistical mechanics in the past. Since I also do math, I wonder about convergence issues which are usually not such a problem in physics. One of those questions is the following. I will describe the background, but in the end it boils down to a question about ordinary differential equations. In quantum mechanics on the real line, we start with a potential $V: \mathbb{R} \to \mathbb{R}$ and try to solve the Schrödinger equation $i\hbar \frac{\partial}{\partial t}\Psi(x,t) = - \frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2}\Psi(x,t)+V(x)\Psi(x,t)$. In many cases this can be accomplished by separating variables, in which case we obtain the equation $E\Psi(x,t) = - \frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2}\Psi(x,t)+V(x)\Psi(x,t)$ which we try to solve for $E$ and $\Psi$ to obtain a basis for our space of states together with an associated energy spectrum. For example, if we have a harmonic oscillator, $V(x) = \frac{1}{2}m\omega^2x^2$ and we get $E_n = \hbar \omega (n+\frac{1}{2})$ and $\Psi_n$ a certain product of exponentials and Hermite polynomials. We assume that the energy is normalized such that the lowest energy state has energy $0$. If the states of our system are non-degenerate, i.e. there is only one state for each energy level in the spectrum, then the partition function in statistical mechanics for this system is given by the sum $Z(\beta) = \sum_n \exp(-\beta E_n)$, where $\beta$ is the inverse temperature $\frac{1}{k_B T}$. It is clear that this sum can be divergent; in fact for a free particle ($V = 0$), it is not even well defined since the spectrum is a continuum. However, I was wondering about the following question: Is there a system such that $Z(\beta)$ diverges for $\beta < \alpha$ and converges for $\beta > \alpha$ for some $\alpha \in \mathbb{R}_{> 0}$? Am I correct in thinking that such a system is most likely an approximation of another system, which undergoes a phase transition at $\beta = \alpha$? Anyway, an obvious candidate would be a potential $V$ such that the spectrum is given by $E_n = C \log(n+1)$ for $n \geq 0$ and $C > 0$. This gets me to my main mathematical question: Does such a potential (or one with an asymptotically similar spectrum) exist? If so, can you give it explicitly? On the circle, the theory of Sturm-Liouville equations tells us that the eigenvalues must go asymptotically as $C n^2$, so in this case such problems can't occur. I don't know much about spectral theory for Sturm-Liouville equations on the real line though. The second question is therefore: What is known about the asymptotics of the spectrum of a Sturm-Liouville operator on the real line? - 1 Presumably you mean converges for $\beta>\alpha$. The closest thing I've seen to the spectrum you describe is this: en.wikipedia.org/wiki/Primon_gas – jc Aug 12 2010 at 11:30 And as usual, John Baez has a good writeup with plenty of references math.ucr.edu/home/baez/week199.html – jc Aug 12 2010 at 11:43 2 The divergence that skupers describes is known more generally as a Hagedorn temperature - more details here: en.wikipedia.org/wiki/Hagedorn_temperature – jc Aug 12 2010 at 11:51 @jc Thank you, I fixed that mistake. The Primon gas indeed seems to behave in the way I want, but is there a quantum mechanical realization on the line of that system?
– skupers Aug 12 2010 at 12:13 @skupers I would check the reference by Bost and Connes cited on Baez's page - I don't have online access to it though. – jc Aug 12 2010 at 12:25 show 3 more comments

## 2 Answers

If I understand your first question correctly, then the answer is yes. In fact, all physical matter exhibits this behavior. Allow me to answer in the following mathematically nonrigorous way: Consider that even in a lone hydrogen atom, the Hamiltonian operator for the nonrelativistic electron $H = - \frac 1 2 \nabla^2 - \frac{1}{r}$ has a discrete spectrum of bound states corresponding to the 1s, 2s, 2p, 3s, ... atomic orbitals and a continuous spectrum of unbound states corresponding to an electron that is unbound for all practical purposes. Thus at sufficiently high temperature (probably at $\beta^{-1}$ = kT ~ 0.5) there will be significant population of the continuous spectrum and you would have to deal with counting the continuous spectrum in the partition function. The same phenomenon exists for all atoms and collections of atoms, even when the nuclear and interaction terms are turned on. I am not 100% confident that the same thing holds in the relativistic case too, but I would be surprised if it did not. Regarding your discussion of the harmonic oscillator, and the comment that "such a system [exhibiting such divergence at a critical temperature] is most likely an approximation of another system", I would go so far as to say that it is the other way round: almost all the time "nice" systems like the harmonic oscillator are in fact derived as asymptotic approximations to messier Hamiltonians. For example, you could write down the molecular Hamiltonian $H = \sum_i -\frac 1 2 \nabla_i^2 + \sum_{ij} \frac 1 {r_{ij}} - \sum_{Ki} \frac {Z_K} {r_{iK}} + \sum_K -\frac 1 2 \nabla_K^2$ which as mentioned above has both a discrete part and a continuous part to its spectrum, and assume that we are interested only in the regime where we care about slow atomic nuclear motions, and that they move very little, and from there derive an effective lattice Hamiltonian of coupled harmonic oscillators. While the phase transition can be observed in the original molecular Hamiltonian, it would not be possible to see this occur in the simplified Hamiltonian, since the discrete spectrum of the harmonic oscillators would go on forever without becoming continuous. -

The answer is yes, such a system exists. Here is how to construct it: Denote by $H = - \frac{d^2}{dx^2} + V$ the operator on $L^2(\mathbb{R})$ and by $H_{\pm}$ the operators on $L^2(\mathbb{R}_{\pm})$ obtained by restricting $H$ to the corresponding half lines. It is then known that $$H = H_{-} \oplus H_{+} + \text{rank one}.$$ This implies that if $H_{+}$ and $H_{-}$ both have discrete spectrum, then $H$ also has discrete spectrum. Furthermore, one obtains that the sets $$\sigma(H) \quad \text{and} \quad \sigma(H_+) \cup \sigma(H_-)$$ interlace. This implies that if you prescribe the spectra of $H_{+}$ and $H_{-}$ to satisfy an asymptotic formula like $$E_{\pm,n} = \alpha \log(n + 1)$$ then the spectrum of $H$ will too. (Prescribing the spectra of $H_+$ and $H_-$ can be done by standard inverse spectral theory.) Of course this does not give you an exact description of the spectrum of $H$ but it is good enough for your purposes. -
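For what it is worth, the candidate spectrum proposed in the question does produce exactly the requested threshold, by a computation not spelled out above: $$Z(\beta) = \sum_{n \geq 0} e^{-\beta C \log(n+1)} = \sum_{n \geq 1} n^{-C\beta} = \zeta(C\beta),$$ which diverges for $\beta \leq 1/C$ and converges for $\beta > 1/C$, so $\alpha = 1/C$. This is precisely the primon gas/Hagedorn behaviour mentioned in the comments.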
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 41, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9415024518966675, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/55348/what-are-the-ux-and-vx-y
What are the U(x) and V(x-y) Srednicki, in his QFT book, introduces the Hamiltonian operator of the quantum field theory (equation 1.32). What are $U(\mathbf{x})$ and $V(\mathbf{x}-\mathbf{y})$ here? - 2 He explains what they are just above Eq. (1.30). He is describing a set of particles, each of which is subject to some potential $U(x)$ and interacting through $V(x-y)$. You probably have seen these objects in a normal quantum mechanics class. – Vibert Feb 27 at 20:03
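For context, the Hamiltonian being asked about has the schematic many-body form (quoted from memory, so it should be checked against Srednicki's Eq. (1.32)): $$H = \int d^3x \; a^\dagger(\mathbf{x}) \left( -\frac{\hbar^2}{2m} \nabla^2 + U(\mathbf{x}) \right) a(\mathbf{x}) + \frac{1}{2} \int d^3x \, d^3y \; V(\mathbf{x}-\mathbf{y}) \, a^\dagger(\mathbf{x}) a^\dagger(\mathbf{y}) a(\mathbf{y}) a(\mathbf{x}),$$ where $U$ is the external one-body potential felt by each particle and $V$ is the pairwise interaction, exactly as the comment says.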
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9479904770851135, "perplexity_flag": "head"}
http://mathoverflow.net/questions/116860/spectrum-of-the-normal-operator-associated-to-compact-supported-spectral-measures
## Spectrum of the Normal Operator associated to compactly supported spectral measures

Let $\mathcal{H}$ be a Hilbert space and $E:\Sigma\to\mathcal{L}(\mathcal{H})$ be a compactly supported spectral measure on the Borel $\sigma$-algebra $\Sigma$ of $\mathbb{C}$. Then we can form the bounded, normal operator $$A=\int \operatorname{id}_\mathbb{C}\;dE\in\mathcal{L}(\mathcal{H})$$ Do you know a proof of the fact that $E(\operatorname{spec} A)=\operatorname{id}_{\mathcal{H}}$? - By "spectral" do you mean a projection-valued measure? Because if so, then doesn't this hold just by definition? – Branimir Ćaćić Dec 20 at 12:32 Yes, I mean a projection-valued measure. By definition, $E(\mathbb{C})=\operatorname{id}_{\mathcal{H}}$, and also if $E$ is defined as the spectral measure associated to a given normal operator $A$, this is essentially true by definition. However, in our situation we start with a generic spectral measure $E$ of which it is only known that there is some compact $K\subset\mathbb{C}$ such that $E(K)=\operatorname{id}_{\mathcal{H}}$, and define $A$ by the above formula. One can then show that the spectral measure associated to this $A$ is exactly $E$, but this is not obvious, and actually the crucial point is to show that $E(\operatorname{spec} A)=\operatorname{id}_{\mathcal{H}}$ – Robert Rauch Dec 20 at 13:23 1 Yes, you're absolutely right. I imagine that the result you want is precisely the theorem on Page 7 of math.uchicago.edu/~may/VIGRE/VIGRE2006/PAPERS/…, namely, that the support of a compactly-supported projection-valued measure is one and the same as the spectrum of the associated normal operator. – Branimir Ćaćić Dec 20 at 14:15 1 Although I had already looked over your reference a couple of days ago, I did not realize that a nice answer to my question is hidden therein. Thanks a lot for pointing that out! – Robert Rauch Dec 20 at 15:26
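A sketch of how the cited theorem settles the question (my gloss, not from the thread): write $K = \operatorname{supp} E$ for the support of the projection-valued measure. Since $\mathbb{C}$ is second countable, $\sigma$-additivity gives $E(\mathbb{C} \setminus K) = 0$, hence $E(K) = \operatorname{id}_{\mathcal{H}}$; and the theorem identifies $K$ with $\operatorname{spec} A$, so $$E(\operatorname{spec} A) = E(\operatorname{supp} E) = \operatorname{id}_{\mathcal{H}}.$$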
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9466935992240906, "perplexity_flag": "head"}
http://mathhelpforum.com/differential-geometry/157692-trigonometric-sum-identity.html
# Thread: 1. ## trigonometric sum identity I'm looking for a hint on how to prove that $\sum \limits _{k=1}^n \cos \left[(2k-1)\theta\right]=\dfrac{\sin \left(2n \theta\right) }{2\sin \theta}$ I know I'm on here a lot but I really have no one to go to for guidance and am trying to learn this material on my own from the text book. Thank you for being patient with me so far and for being so helpful. 2. Originally Posted by magus I'm looking for a hint on how to prove that $\sum \limits _{k=1}^n \cos \left[(2k-1)\theta\right]=\dfrac{\sin \left(2n \theta\right) }{2\sin \theta}$ I know I'm on here a lot but I really have no one to go to for guidance and am trying to learn this material on my own from the text book. Thank you for being patient with me so far and for being so helpful. You might consider using the formula for the sum of a geometric series to find $\sum \limits _{k=1}^n e^{i\left[(2k-1)\theta\right]}$ and then getting the real part of the result. Proof by induction might also work. 3. I think induction might be the way to go here. Base step: $n= 1$... $LHS = \cos{\theta}$. $RHS = \frac{\sin{2\theta}}{2\sin{\theta}}$ $= \frac{2\sin{\theta}\cos{\theta}}{2\sin{\theta}}$ $= \cos{\theta}$ $= LHS$. Inductive step: Assume that the statement is true for $n = r$, in other words, that $\displaystyle{\sum_{k = 1}^r\cos{[(2k-1)\theta]} = \frac{\sin{(2r\theta)}}{2\sin{\theta}}}$. We need to show that it's true for $n = r+1$. $\displaystyle{LHS = \sum_{k = 1}^{r + 1}\cos{[(2k-1)\theta]}}$ $\displaystyle{= \sum_{k=1}^r\cos{[(2k-1)\theta]} + \cos\{[2(r+1)-1]\theta\}}$ $\displaystyle{ = \frac{\sin{(2r\theta)}}{2\sin{\theta}} + \cos\{[2(r+1)-1]\theta\}}$ See if you can go from here... 4. Thanks guys. I was able to do it via induction, but I think it was intended that I use some more direct method. It says "obtain the formula". I tried complexification as mr. fantastic recommended and obtained some interesting results, but nothing I could use trig identities to simplify into the required form. Also I was wondering if there is a closed form of $\sum \limits _{k=1}^{n}\cos(k)$ or $\sum \limits _{k=1}^{n}\sin(k)$. I figured if there is such a formula it might come in handy here. I've also tried using the infinite series definition of cosine and sine and again came up with interesting results, but nothing that would help me unless I had a formula for the series I just mentioned. Thank you two for your help thus far. I really just wish my book had more examples that I could build on for these problems. Does anyone know of some catalogue of proofs that I could look at to get some more inspiration? 5. Originally Posted by magus Thanks guys. I was able to do it via induction, but I think it was intended that I use some more direct method. It says "obtain the formula". I tried complexification as mr. fantastic recommended and obtained some interesting results, but nothing I could use trig identities to simplify into the required form. [snip] I get $\displaystyle \frac{1 - e^{i(2n) \theta}}{e^{-i \theta} - e^{i \theta}}$ and the real part of this is $\displaystyle \frac{\sin (2n \theta) }{2 \sin (\theta)}$. Your job is to get these two expressions. Originally Posted by magus [snip] Also I was wondering if there is a closed form of $\sum \limits _{k=1}^{n}\cos(k)$ or $\sum \limits _{k=1}^{n}\sin(k)$. [snip] Yes there are. But they are not relevant to the proof you are working on. 6. Oh yeah the formula for a geometric sum.
But what I get is $\sum \limits _{k=1} ^n e^{(2k-1)\theta} = \sum \limits _{k=1} ^n e^{2k\theta}e^{-\theta} = \dfrac{\sum \limits _{k=1} ^ne^{(2k)\theta}}{e^{\theta}} = \dfrac{\dfrac{1-e^{2n\theta+1}}{1-e^{2n\theta}}}{e^{\theta}} = \dfrac{1-e^{2n\theta+1}}{e^{\theta}-e^{2n\theta-1+\theta}}=\dfrac{1-\cos(2n\theta+1)}{\cos(\theta)-\cos(2n\theta-1+\theta)}$ I've looked over the identities and I can't see how to get to the form you got. Did I take a wrong step? 7. Originally Posted by magus Oh yeah the formula for a geometric sum. But what I get is $\sum \limits _{k=1} ^n e^{(2k-1)\theta} = \sum \limits _{k=1} ^n e^{2k\theta}e^{-\theta} = \dfrac{\sum \limits _{k=1} ^ne^{(2k)\theta}}{e^{\theta}} = \dfrac{\dfrac{1-e^{2n\theta+1}}{1-e^{2n\theta}}}{e^{\theta}} = \dfrac{1-e^{2n\theta+1}}{e^{\theta}-e^{2n\theta-1+\theta}}=\dfrac{1-\cos(2n\theta+1)}{\cos(\theta)-\cos(2n\theta-1+\theta)}$ I've looked over the identities and I can't see how to get to the form you got. Did I take a wrong step? You're missing the i's (I was tempted to say you have no i-dea, open your i's etc. ....) Anyway, it's meant to be $\sum \limits _{k=1} ^n e^{i(2k-1)\theta}$. Do you see the i? It is important (each word starts with i ....) Now use the formula for the sum of a geometric series where the first term is $e^{i \theta}$, $\displaystyle r = e^{i 2 \theta}$ and there are n terms in the series. Then get the real part of the result. Refer to my previous post. 8. Darn it! I always make mistakes like those. Thank you for that, so *I* have $\sum \limits _{k=1} ^n e^{i(2k-1)\theta} = \sum \limits _{k=1} ^n e^{i2k\theta}e^{-i\theta} = \dfrac{\sum \limits _{k=1} ^ne^{i(2k)\theta}}{e^{i\theta}} =\dfrac{\dfrac{1-e^{i 2n\theta+1}}{1-e^{i2n\theta}}}{e^{i\theta}}=\dfrac{1-e^{i 2n\theta+1}}{e^{i\theta}-e^{i2n\theta-i\theta}}$ Is this right so far? Because what comes after, when I apply Euler's identity again, is a god forsaken mess when I try to get the real part. Just to show I'm doing the work though: $= \dfrac{1-\cos(2n\theta+1)+i\sin(2n\theta+1)}{\cos\theta+i\sin\theta+(\cos\theta-i\sin\theta)(\cos(2n\theta)+i\sin(2n\theta))}=$ $\dfrac{1-\cos(2n\theta+1)+i\sin(2n\theta+1)}{\cos\theta+i\sin\theta+\cos\theta\cos(2n\theta-1)+i\cos\theta\sin(2n\theta-1)-i\sin\theta\cos(2n\theta-1)-\sin\theta\sin(2n\theta-1)}$ $=\dfrac{1-\cos(2n\theta+1)+i\sin(2n\theta+1)}{\cos\theta+\cos\theta\cos(2n\theta-1)-\sin\theta\sin(2n\theta-1)+i(\sin\theta+\cos\theta\sin(2n\theta-1)-\sin\theta\cos(2n\theta-1))}$ Then comes the process of making the denominator real by multiplying by a conjugate over a conjugate. I'd show you the work I've done for that, but unfortunately the size limit for the LaTeX images created prohibits me from doing so. How does this reduce? 9. Originally Posted by magus Darn it! I always make mistakes like those. Thank you for that, so *I* have $\sum \limits _{k=1} ^n e^{i(2k-1)\theta} = \sum \limits _{k=1} ^n e^{i2k\theta}e^{-i\theta} = \dfrac{\sum \limits _{k=1} ^ne^{i(2k)\theta}}{e^{i\theta}} =\dfrac{\dfrac{1-e^{i 2n\theta+1}}{1-e^{i2n\theta}}}{e^{i\theta}}=\dfrac{1-e^{i 2n\theta+1}}{e^{i\theta}-e^{i2n\theta-i\theta}}$ Is this right so far? Because what comes after, when I apply Euler's identity again, is a god forsaken mess when I try to get the real part.
Just to show I'm doing the work though: $= \dfrac{1-\cos(2n\theta+1)+i\sin(2n\theta+1)}{\cos\theta+i\sin\theta+(\cos\theta-i\sin\theta)(\cos(2n\theta)+i\sin(2n\theta))}=$ $\dfrac{1-\cos(2n\theta+1)+i\sin(2n\theta+1)}{\cos\theta+i\sin\theta+\cos\theta\cos(2n\theta-1)+i\cos\theta\sin(2n\theta-1)-i\sin\theta\cos(2n\theta-1)-\sin\theta\sin(2n\theta-1)}$ $=\dfrac{1-\cos(2n\theta+1)+i\sin(2n\theta+1)}{\cos\theta+\cos\theta\cos(2n\theta-1)-\sin\theta\sin(2n\theta-1)+i(\sin\theta+\cos\theta\sin(2n\theta-1)-\sin\theta\cos(2n\theta-1))}$ Then comes the process of making the denominator real by multiplying by a conjugate over a conjugate. I'd show you the work I've done for that, but unfortunately the size limit for the LaTeX images created prohibits me from doing so. How does this reduce? $\displaystyle \sum_{k = 1}^{n}a r^{k-1} = \frac{a(1 - r^n)}{1 - r}$ (think carefully because this is what you essentially have) and I have told you what a and r are. After substitution you get $\displaystyle \frac{e^{i \theta} (1 - (e^{i2\theta})^n)}{1 - e^{i\theta}} = \frac{1 - e^{i2n\theta}}{e^{-i\theta} - e^{i\theta}}$. Getting the real part of this expression should be trivial at this level (but making careless mistakes will make it seem a lot harder I suppose). 10. Originally Posted by mr fantastic $\displaystyle \sum_{k = 1}^{n}a r^{k-1} = \frac{a(1 - r^n)}{1 - r}$ (think carefully because this is what you essentially have) and I have told you what a and r are. After substitution you get $\displaystyle \frac{e^{i \theta} (1 - (e^{i2\theta})^n)}{1 - e^{i\theta}} = \frac{1 - e^{i2n\theta}}{e^{-i\theta} - e^{i\theta}}$. Getting the real part of this expression should be trivial at this level (but making careless mistakes will make it seem a lot harder I suppose). That's my problem: I was using $\sum_{k = 1}^{n} r^{k} = \frac{1 - r^{n+1}}{1 - r}$ and never thought of futzing with that. So for $a=e^{i\theta}$, $r=e^{i2\theta}$, and we obviously get $\displaystyle \frac{e^{i \theta} (1 - (e^{i2\theta})^n)}{1 - e^{i\theta}}$ Now what I don't see is how you get $\displaystyle \frac{e^{i \theta} (1 - (e^{i2\theta})^n)}{1 - e^{i\theta}} = \frac{1 - e^{i2n\theta}}{e^{-i\theta} - e^{i\theta}}$ because for me if I bring the $e^{-i\theta}$ into the denominator I get $\displaystyle \frac{ (1 - (e^{i2\theta})^n)}{e^{-i \theta}(1 - e^{i\theta})}=\frac{1 - e^{i2n\theta}}{e^{-i\theta} - e^{-i\theta}e^{i\theta}}=\frac{1 - e^{i2n\theta}}{e^{-i\theta} - 1}$ 11. Originally Posted by magus That's my problem: I was using $\sum_{k = 1}^{n} r^{k} = \frac{1 - r^{n+1}}{1 - r}$ and never thought of futzing with that. So for $a=e^{i\theta}$, $r=e^{i2\theta}$, and we obviously get $\displaystyle \frac{e^{i \theta} (1 - (e^{i2\theta})^n)}{1 - e^{i\theta}}$ Now what I don't see is how you get $\displaystyle \frac{e^{i \theta} (1 - (e^{i2\theta})^n)}{1 - e^{i\theta}} = \frac{1 - e^{i2n\theta}}{e^{-i\theta} - e^{i\theta}}$ because for me if I bring the $e^{-i\theta}$ into the denominator I get $\displaystyle \frac{ (1 - (e^{i2\theta})^n)}{e^{-i \theta}(1 - e^{i\theta})}=\frac{1 - e^{i2n\theta}}{e^{-i\theta} - e^{-i\theta}e^{i\theta}}=\frac{1 - e^{i2n\theta}}{e^{-i\theta} - 1}$ If you are following closely you will see that I made a typo in the denominator of $\displaystyle \frac{e^{i \theta} (1 - (e^{i2\theta})^n)}{1 - e^{i\theta}}$. Recall what r is .... 12.
Ahhh So what we really have is $\displaystyle \frac{e^{i \theta} (1 - (e^{i2\theta})^n)}{1 - e^{i2\theta}}$ Which then leaves an $e^{i\theta}$ left in that term when we bring down the $e^{-i\theta}$, so we have $\displaystyle \frac{ (1 - (e^{i2\theta})^n)}{e^{-i \theta}(1 - e^{2i\theta})}=\frac{ (1 - (e^{i2\theta})^n)}{(e^{-i \theta} - e^{-i \theta}e^{2i\theta})}=\frac{ 1 - e^{i2n\theta}}{e^{-i \theta} - e^{i\theta}}$ Using Euler again $\displaystyle \frac{1-\cos(2n\theta)-i\sin(2n\theta)}{\cos\theta-i\sin\theta-\cos\theta-i\sin\theta}=\frac{1-\cos(2n\theta)-i\sin(2n\theta)}{-2i\sin\theta}=\frac{1-\cos(2n\theta)}{-2i\sin\theta}-\dfrac{i\sin \left(2n \theta\right) }{-2i\sin \theta}=i\frac{1-\cos(2n\theta)}{2\sin\theta}+\dfrac{\sin \left(2n \theta\right) }{2\sin \theta}$ Taking the real part of which leaves me $\dfrac{\sin \left(2n \theta\right) }{2\sin \theta}$ Wow it's finally done. Thank you so so very much for your help. Thank goodness it's finally done!
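A quick numerical spot check of the identity (a small Python sketch, with arbitrary test values of n and theta):

```python
# Check: sum_{k=1}^{n} cos((2k-1)*theta) == sin(2*n*theta) / (2*sin(theta)).
from math import cos, sin, isclose

for n in (1, 2, 5, 17):
    for theta in (0.3, 1.1, 2.7):
        lhs = sum(cos((2 * k - 1) * theta) for k in range(1, n + 1))
        rhs = sin(2 * n * theta) / (2 * sin(theta))
        assert isclose(lhs, rhs, abs_tol=1e-12), (n, theta, lhs, rhs)
print("identity verified for all test values")
```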
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 59, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9683142304420471, "perplexity_flag": "head"}
http://mathoverflow.net/questions/122728/abelian-group-objects-category/122742
## abelian group objects category

Suppose that $\mathcal{C}$ is a cartesian closed category. When is the category of abelian group objects $\mathcal{Ab}(\mathcal{C})$ symmetric monoidal closed with respect to something substituting for the tensor product in the classical case $\mathcal{C}=\mathcal{Set}$? I need a reference or a sketch of an argument in the case when $\mathcal{C}$ is a topos. -

## 1 Answer

I previously indicated how to construct the tensor product for $\textbf{Ab}(\mathcal{V})$ when $\mathcal{V}$ is a sufficiently nice cartesian closed category. (The comments are probably more helpful than the post itself.) A Grothendieck topos is certainly nice enough, but so is an elementary topos with a natural numbers object. (See Theorem D5.3.5 in Sketches of an elephant.) Basically, we mimic the standard construction of tensor products by generators and relations; the only stumbling block is the construction of free internal abelian groups. Once this is done, $\textbf{Ab}(\mathcal{V}) \to \mathcal{V}$ will be monadic, say with left adjoint $F : \mathcal{V} \to \textbf{Ab}(\mathcal{V})$, and $\textbf{Ab}(\mathcal{V})$ will have coequalisers for reflexive pairs. (See Lemma D5.3.2 in Sketches of an elephant.) One can then define a certain object $I$ in $\mathcal{V}$ for each pair of internal abelian groups $A$ and $B$ and a pair of morphisms $I \rightrightarrows F (A \times B)$ in $\mathcal{V}$ such that $$F I \rightrightarrows F (A \times B) \to A \otimes B$$ presents $A \otimes B$ as a coequaliser of a reflexive pair in $\textbf{Ab}(\mathcal{V})$. (We could take $$I = (A \times A \times B) \amalg (A \times B \times B) \amalg (F1 \times A \times B) \amalg (A \times F1 \times B)$$ so that we can code the four equations \begin{align} (a + a') \otimes b & = a \otimes b + a' \otimes b \newline a \otimes (b + b') & = a \otimes b + a \otimes b' \newline (r a) \otimes b & = r (a \otimes b) \newline a \otimes (r b) & = r (a \otimes b) \end{align} but one can probably get away with less.) It turns out to be much easier to construct internal homs for $\textbf{Ab}(\mathcal{V})$: you just need $\mathcal{V}$ to be cartesian closed with equalisers. Of course, once you have both the tensor product and internal homs, then they make $\textbf{Ab}(\mathcal{V})$ into a symmetric monoidal closed category. One can also carry out this construction in much greater generality for any commutative monad on any sufficiently nice symmetric monoidal closed category $\mathcal{V}$. This, I think, is an old result of Kock [1971, 1972] generalising Linton [1966]. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9023613929748535, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2010/04/02/regular-outer-measures/?like=1&source=post_flair&_wpnonce=8cb43e0c56
# The Unapologetic Mathematician

## Regular Outer Measures

As usual, let $\mathcal{R}$ be a ring of sets, $\mathcal{S}(\mathcal{R})$ be the smallest $\sigma$-algebra containing $\mathcal{R}$, and $\mathcal{H}(\mathcal{R})$ be the smallest hereditary $\sigma$-algebra containing $\mathcal{R}$. We've asked about the relation between a measure $\mu$ on $\mathcal{R}$, the outer measure $\mu^*$ it induces on $\mathcal{H}(\mathcal{R})$, and the measure $\bar{\mu}$ we get by restricting $\mu^*$ to $\mathcal{S}(\mathcal{R})$. But for now, let's consider what happens when we start with an outer measure on $\mathcal{H}(\mathcal{R})$. Okay, so we've got an outer measure $\mu^*$ on a hereditary $\sigma$-ring $\mathcal{H}$ — like $\mathcal{H}(\mathcal{R})$. We can define the $\sigma$-ring $\overline{\mathcal{S}}$ of $\mu^*$-measurable sets and restrict $\mu^*$ to a measure $\bar{\mu}$ on $\overline{\mathcal{S}}$. And then we can turn around and induce an outer measure $\bar{\mu}^*$ on the hereditary $\sigma$-ring $\mathcal{H}(\overline{\mathcal{S}})$. Now, in general there's no reason that these two should be related. But we have seen that if $\mu^*$ came from a measure $\mu$ (as described at the top of this post), then $\mathcal{H}(\overline{\mathcal{S}})=\mathcal{H}(\mathcal{R})$, and the measure $\bar{\mu}^*$ induced by $\bar{\mu}$ is just $\mu^*$ back again! When this happens, we say that $\mu^*$ is a "regular" outer measure. And so we've seen that any outer measure induced from a measure on a ring is regular. The converse is true as well: if we have a regular outer measure $\mu^*=\bar{\mu}^*$, then it is induced from the measure $\bar{\mu}$ on $\overline{\mathcal{S}}$. Induced and regular outer measures are the same. Doesn't this start to look a bit like a Galois connection?

Posted by John Armstrong | Analysis, Measure Theory
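A concrete restatement may be useful here (my gloss, following the standard textbook treatment): $\mu^*$ is regular exactly when every set $E$ in the hereditary $\sigma$-ring admits a "measurable cover", that is, a set $F\in\overline{\mathcal{S}}$ with $E\subset F$ and $\mu^*(E)=\bar{\mu}(F)$. For an induced outer measure such an $F$ can be produced as a countable intersection of countable unions of sets of $\mathcal{R}$, which is one way of seeing that induced outer measures are regular.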
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 38, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9304927587509155, "perplexity_flag": "head"}
http://polymathprojects.org/2012/07/12/minipolymath4-project-imo-2012-q3/?like=1&_wpnonce=76a2d9cd23
# The polymath blog ## July 12, 2012 ### Minipolymath4 project: IMO 2012 Q3 Filed under: research — Terence Tao @ 10:00 pm Tags: mini-polymath4 This post marks the official opening of the mini-polymath4 project to solve a problem from the 2012 IMO.  This time, I have selected Q3, which has an interesting game-theoretic flavour to it. Problem 3.   The liar's guessing game is a game played between two players $A$ and $B$.  The rules of the game depend on two positive integers $k$ and $n$ which are known to both players. At the start of the game, $A$ chooses two integers $x$ and $N$ with $1 \leq x \leq N$.  Player $A$ keeps $x$ secret, and truthfully tells $N$ to player $B$.  Player $B$ now tries to obtain information about $x$ by asking player A questions as follows.  Each question consists of $B$ specifying an arbitrary set $S$ of positive integers (possibly one specified in a previous question), and asking $A$ whether $x$ belongs to $S$.  Player $B$ may ask as many such questions as he wishes.  After each question, player $A$ must immediately answer it with yes or no, but is allowed to lie as many times as she wishes; the only restriction is that, among any $k+1$ consecutive answers, at least one answer must be truthful. After $B$ has asked as many questions as he wants, he must specify a set $X$ of at most $n$ positive integers.  If $x$ belongs to $X$, then $B$ wins; otherwise, he loses.  Prove that: 1. If $n \geq 2^k$, then $B$ can guarantee a win. 2. For all sufficiently large $k$, there exists an integer $n \geq 1.99^k$ such that $B$ cannot guarantee a win. The comments to this post shall serve as the research thread for the project, in which participants are encouraged to post their thoughts and comments on the problem, even if (or especially if) they are only partially conclusive.  Participants are also encouraged to visit the discussion thread for this project, and also to visit and work on the wiki page to organise the progress made so far. This project will follow the general polymath rules.  In particular: 1. All are welcome. Everyone (regardless of mathematical level) is welcome to participate.  Even very simple or "obvious" comments, or comments that help clarify a previous observation, can be valuable. 2. No spoilers! It is inevitable that solutions to this problem will become available on the internet very shortly.  If you are intending to participate in this project, I ask that you refrain from looking up these solutions, and that those of you who have already seen a solution to the problem refrain from giving out spoilers, until at least one solution has already been obtained organically from the project. 3. Not a race. This is not intended to be a race between individuals; the purpose of the polymath experiment is to solve problems collaboratively rather than individually, by proceeding via a multitude of small observations and steps shared between all participants.   If you find yourself tempted to work out the entire problem by yourself in isolation, I would request that you refrain from revealing any solutions you obtain in this manner until after the main project has reached at least one solution on its own. 4. Update the wiki. Once the number of comments here becomes too large to easily digest at once, participants are encouraged to work on the wiki page to summarise the progress made so far, to help others get up to speed on the status of the project. 5. Metacomments go in the discussion thread. Any non-research discussions regarding the project (e.g.
organisational suggestions, or commentary on the current progress) should be made at the discussion thread. 6. Be polite and constructive, and make your comments as easy to understand as possible. Bear in mind that the mathematical level and background of participants may vary widely. Have fun! ## 87 Comments » 1. [...] just opened the research thread for the mini-polymath4 project over at the polymath blog to collaboratively solve one of the six [...] Pingback by — July 12, 2012 @ 10:01 pm 2. Obvious observations: It seems for part 1 we have to provide a strategy for B to always win, and for part 2 to give a strategy for A to win at least sometimes. But it is not clear if A can ALWAYS win in case 2; finding a counterexample would be a good first step Comment by Bob — July 12, 2012 @ 10:11 pm 3. The fact that player A has to choose the number N at the beginning of the game is intriguing. The number of possibilities for x is originally N, so it would seem like large N would make the game harder for B. I suspect that B can counteract the difficulty by asking many more questions for large N than small N. Comment by — July 12, 2012 @ 10:17 pm 4. Are there any results from Ramsey theory or related which might be useful for part one? Comment by Jaakko — July 12, 2012 @ 10:19 pm 5. Obvious: If we choose sets S_p to be of the form "numbers less than N with a 1 in the pth place of their binary expansion," then we can get at least 1/(k+1) of the binary digits of x by asking about the p's in turn. Comment by Jon — July 12, 2012 @ 10:22 pm • More generally, for any partition of some set of numbers into 2^(k+1) parts, one round of questioning allows us to rule out one of the parts. Comment by — July 12, 2012 @ 10:34 pm • …hence, while we can partition N into 2^(k+1) non-empty parts, we can rule out something each round of k+1 questions. We can go on ruling out numbers until no more than 2^(k+1) possible answers remain. After that, we need to somehow cut that in half. Comment by robotact — July 12, 2012 @ 10:48 pm 6. Could part 1 be proved inductively with respect to N? Comment by Olli — July 12, 2012 @ 10:22 pm • It seems to me that if we could ask a series of questions to guarantee that x falls inside, say, [0, N/2], then we could reduce to a previous case, but once we find such a series of questions we have more or less solved the problem. Comment by Jon — July 12, 2012 @ 10:24 pm • My idea was that since the solution is obvious if N is inside [1,n], we could simply prove the case n+1 and the rest would follow inductively. Comment by Olli — July 12, 2012 @ 10:27 pm • It suffices to prove it for $N=n+1$. See comment 12. Comment by Kreiser — July 12, 2012 @ 10:44 pm 7. Since there is a possibility that B would win the game simply by guessing, there is no "always win" for A. Comment by S — July 12, 2012 @ 10:25 pm • Good point, so it seems part 2 is somewhat harder than part 1. So it might be wise to completely focus on part 1 first. Comment by Bob — July 12, 2012 @ 10:29 pm • I agree – this relates back to comment 2 Comment by Alison — July 12, 2012 @ 10:31 pm 8. Some observations. For the first part, proving it for $n=2^k$ suffices. The first approach that comes to my mind is to induct on $k$. Comment by — July 12, 2012 @ 10:26 pm 9. B can as well ask questions in "rounds" of k+1 questions. Then, each round is guaranteed to have at least 1 correct answer. Comment by robotact — July 12, 2012 @ 10:27 pm • While this is true, it is not very constructive.
Player A can just answer with about half truths and about half lies, making this strategy hard to implement. Comment by Kreiser — July 12, 2012 @ 10:43 pm 10. So for k=0 any version of binary search works. The next step should be to find the strategy for k=1, n=2. I first thought I had found the strategy, but it doesn't work. Comment by Florian — July 12, 2012 @ 10:29 pm • I am working on this case too. Here player A can never tell two lies in a row. Here is a little observation I have made. Let Q1 and Q2 be questions that player B can ask, and I will use notation like: Q's: Q1 Q2 … A's: L T … to denote that we asked Q1, then Q2, and we received a lie and a truth respectively (of course, B doesn't know which). Here is a cute little lemma: Lemma: "If B asks Q1 Q2 Q1, then A must give the same answer for Q1 both times it is asked, or else tell the truth for Q2" Proof: There are 5 possible ways A can answer: LTL, LTT, TLT, TTL, TTT. From here we see that if the answers to Q1 are different, then the only possibilities are LTT and TTL; in either case the answer to Q2 must be true. $\Box$ Comment by — July 12, 2012 @ 10:38 pm • Two more little lemmas: Lemma: "If Player B asks the same question twice in a row and the answer is the same both times, then it must have been true both times" Proof: True since k=1. Lemma: "Let Q1,Q2 be questions. If player B asks the sequence of questions Q1 Q2 Q1 Q1 and gets answers A1 A2 A3 A4 (each Ai is either an L (lie) or a T (truth)), then player A is forced to reveal one of the following pieces of information to player B (i.e. player B will know which of them is true): i) A2 = T ii) A3 = A4 = T iii) A2 = A4" Proof: By the last lemma, for the sequence of questions Q1 Q2 Q1, player B knows that either A2=T or the answers to the first three questions are LTL, TLT, or TTT. In the former case we have i); in the latter case we know that the possible answers for all four questions are LTLT, TLTT, TLTL, TTTL, or TTTT. If A3=A4 then player B knows that they are both truths, so we have ii). If not, then the possibilities are LTLT, TLTL, which gives iii). Comment by — July 12, 2012 @ 10:59 pm • I think the second lemma can be used to make a binary search by making Q1 = half the numbers, Q2 = the other half of the numbers Comment by — July 12, 2012 @ 11:06 pm • The second lemma solved the case k=1, see comment 15. Comment by Olli — July 12, 2012 @ 11:35 pm 11. Here is an idea. Let's first assume that $N$ is a power of 2. Say $N = 2^r$. Suppose that $r \geq k+1$. Think of the numbers from $1$ to $N$ as vertices of the $r$-dimensional Boolean cube. Then let $x_i\in\{0,1\}$ be the $i$-th coordinate of $x$. First, $B$ asks if $x_1 = 0$ (formally, B gives the set $\{x : x_1 = 0\}$), then he asks if $x_2 =0$, and so on. Finally, he asks if $x_{k+1} = 0$. He gets "yes" and "no" answers. In other words, he gets $b_1, \dots, b_{k+1}$ such that one of the statements "$x_i = b_i$" is true. Therefore, the first $k+1$ bits of $x$ cannot be equal to $1-b_1,1-b_2, \dots, 1-b_{k+1}$. Thus B can exclude some values of $x$, and essentially reduces $N$. He proceeds until $N \leq n$. Then he outputs the remaining numbers. Comment by Yury — July 12, 2012 @ 10:35 pm • I thought about something similar, but it doesn't seem to work since A could possibly answer the questions in a way so that no number gets excluded if $N$ is not a power of 2. Comment by Olli — July 12, 2012 @ 10:42 pm • If N is not a power of 2 but N > 2^{r+1}, B groups all numbers in 2^{r+1} non-empty clusters.
Then he labels each cluster, and all numbers in it, with a unique binary vector of length r+1. He then asks if the first bit of the label of x is 0; then if the second bit is 0, and so on. Finally, he finds a label L = (1 – b_1, …, 1 – b_{r+1}) such that x is not labeled with L. Now he can exclude all numbers with label L and iterate. Comment by — July 12, 2012 @ 10:50 pm • Without loss of generality N can be assumed to be very large (at least for the first problem) because smaller Ns only make life easier for B. But the problem is that you can only exclude one number, which A can choose by answering appropriately. Comment by Florian — July 12, 2012 @ 11:03 pm • You can assume that N = 2^k + 1, n = 2^k Comment by — July 12, 2012 @ 10:51 pm • In this case x will have at most k + 1 binary digits, and the only case with k + 1 digits is (100…0), plus all the combinations of k digits; how to exclude one number from there? Comment by — July 12, 2012 @ 11:11 pm • What we can do in this case is keep asking if the first digit is 1. There are three possibilities: if k + 1 answers are YES, then the number is 10…0; if k + 1 answers are NO, then we can exclude 10…0; if some answers are YES and some are NO, then we can choose the convenient one, NO, and continue to ask for the other digits. After we are done we can exclude a number whose first digit is 0 (because of the NO answer). Comment by — July 12, 2012 @ 11:22 pm • The method will handle the case when N \geq 2^{k+1}; in that case you can just take any subset of size 2^{k+1} and enumerate it as 1, …, 2^{k+1}, then exclude from that set, but what to do when the range gets less than 2^{k+1}? Comment by — July 12, 2012 @ 11:00 pm • You need to ask at least k + 1 questions to make a conclusion. Comment by — July 12, 2012 @ 11:01 pm 12. Reduction for part 1. (Assume all integers are from $\{1,\cdots,N\}$.) For part 1, it suffices to produce a winning strategy for $N=n+1$. In other words, player $B$ can win for all $N$ iff he can win for $N=n+1$. $\Rightarrow$ is trivial. $\Leftarrow$ is as follows: Go by induction. Suppose $N>n+1$ and a winning strategy is known for all $n+1\leq N'< N$. Partition $N$ into $n+1$ nonempty sets $G_1,\cdots,G_{n+1}$. Then instead of asking "is it in $S\subset \{1,\cdots,n+1\}$", he asks "is it in $\cup_{s\in S} G_s$?" By using the winning strategy for $n+1$, we can throw out one of the $G_i$ and we now have a winning strategy by assumption. Comment by Kreiser — July 12, 2012 @ 10:41 pm 13. Let's say we get answer $A_i$ for set $S_i$, for $i = 1, \dots, k + 1$. If we take the complements of the answers then they can't all be true, but all of them being true corresponds to an intersection of the $S_i$'s or their complements, so in the case the intersection is non-empty we can exclude the intersection points from the range for $x$. Comment by — July 12, 2012 @ 10:42 pm 14. [...] Check out (and join in on) the discussion here. This entry was posted in Event, Math, On the Web, Something Extra and tagged Game On!, IMO, International Math Olympiad, mini-polymath, Polymath by U. of Oklahoma Math Club. Bookmark the permalink. [...] Pingback by — July 12, 2012 @ 11:31 pm 15. Case $k = 1$: By comment 12, it suffices to show this when $n = 2$ and $N = 3$. Let Q1 and Q2 be the questions "is it in $\{1, 2 \}$" and "is it in $\{2, 3\}$".
Then after asking them in the order Q1 Q2 Q1 Q1, we know by comment 10 either the truth value of Q1 or Q2, in which case we are done, or, using notation as in comment 10:
1) If A2 and A4 were both "yes" or both "no", then $x = 2$.
2) If one of A2 and A4 was "yes" and the other was "no", then $x \in \{ 1, 3\}$. Comment by Olli — July 12, 2012 @ 11:32 pm

• This doesn't quite seem correct. For instance, if x=1, then A can answer "yes" to all questions and only have to lie once. Comment by — July 12, 2012 @ 11:38 pm

• But if A answers "yes" to all questions, B knows that x must be either 1 or 2, because A cannot lie twice in a row. Comment by Olli — July 12, 2012 @ 11:41 pm

• Well, ok, but A can instead answer "yes yes yes no" and this seems consistent with all three possibilities x=1, x=2, x=3, so B has been unable to eliminate anything. Comment by — July 12, 2012 @ 11:52 pm

• Oh yes, true. My bad. Comment by Olli — July 12, 2012 @ 11:58 pm

16. We can assume N = 2^k + 1, n = 2^k. It means that x has at most k+1 binary digits (k+1 digits only for n = 2^k): $x = b_1 b_2\dots b_{k+1}$. Then we can keep asking if b_1 is 1. There are two possibilities: (a) k+1 times we get the answer NO, then we exclude the number 10…0; (b) there is a YES answer. Then we stop asking about b_1 and ask b_2 = 1, b_3 = 1, …, b_{k+1} = 1. After we are done we can exclude the number, with first digit 0 (because of the YES answer), for which all of the last k+1 answers would have been lies. Comment by — July 12, 2012 @ 11:43 pm

• Another way (which seems to solve the first question). We ask the sequence of questions $Q_i$: "Does $b_i = 1$?" in a row. That makes k+1 questions. Then we must have at least one of the digits right. In particular, let $y = c_1 \dots c_{k+1}$ be such that $c_i = 0$ if the answer to $Q_i$ is Yes, and $c_i=1$ if the answer to $Q_i$ is No. Then $x \neq y$. We have excluded a possibility, which by the reduction of comment 12 is enough. Comment by Garf — July 13, 2012 @ 12:13 am

• Which number will you exclude in that case? (It might not be in the range.) Comment by — July 13, 2012 @ 12:17 am

• When c_1 = 1, the number might be out of the range. Comment by — July 13, 2012 @ 12:25 am

• I'm not sure I totally understand your argument, but it led me towards the following: Change the indexing to be $\{0,\cdots,N-1\}$ to make things easier. Let $B_i$ be the subset of $\{0,\cdots,N-1\}$ with 0 as the $i^{th}$ digit in their binary expansion (note we're leaving out one member). Let B ask $B_1,\cdots,B_k$ in that order, and let $a_i$ be 0 if A says yes to $B_i$ and 1 else. Then let $s_i$ be the number with binary expansion $a_0a_1\cdots a_i a_{i+1}' \cdots a_k'$ where $a_j'=1-a_j$. Now ask $\{s_0\},\cdots,\{s_k\}$ in order. Suppose A answers at least once that $x\neq s_i$, and pick the first such instance of this. Then if $x=s_i$, A will have lied for the last $k+1$ questions, i.e. $B_i,B_{i+1},\cdots,B_k,\{s_0\},\cdots,\{s_i\}$. So $x$ cannot be $s_i$ and we have the required win. On the other hand, if A always says that $x=s_i$ for every $i$, then if $x$ was the one member we didn't manipulate, A lied $k+1$ times (all $\{s_i\}$ questions). So if A says that $x=s_i$ for all $i$, then the one member we didn't manipulate is actually not $x$, so we've discarded one member, and B wins. Comment by Kreiser — July 13, 2012 @ 2:40 am
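The exclusion step in comment 16 and in Garf's reply can be illustrated in a few lines. This is my own sketch of the idea, not code from the thread: whatever pattern of answers A gives to the k+1 bit questions, the number whose bits contradict every answer is ruled out.

```python
# Sketch of the bit-exclusion step: B asks "is bit i of x equal to 1?" for
# i = 1..k+1; the number y whose bits all contradict the answers cannot be x,
# since otherwise A would have lied k+1 times in a row.
from itertools import product

def excluded_number(answers):
    """answers[i] is True if A said yes to 'is bit i of x equal to 1?'."""
    return tuple(0 if a else 1 for a in answers)

k = 3
for answers in product([True, False], repeat=k + 1):
    y = excluded_number(answers)
    lies_if_x_is_y = sum(1 for a, bit in zip(answers, y) if a != (bit == 1))
    assert lies_if_x_is_y == k + 1   # all k+1 answers would be lies, so x != y
print("every answer pattern contradicts some y in all k+1 answers")
```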
• I think you are missing one case. You wrote: "Suppose A answers at least once that $x \neq s_i$, and pick the first such instance of this. Then if $x = s_i$, A will have lied for the last $k+1$ questions … So $x$ cannot be $s_i$ and we have the required win." What if A answers at least once that $x \neq s_i$, and indeed $x \neq s_i$, which means that he didn't lie? Comment by — July 13, 2012 @ 3:32 am

• Then $x\neq s_i$ and you guess everything except for $s_i$. Comment by Kreiser — July 13, 2012 @ 5:57 am

• When you ask "if b_i = 1, …", do you mean "if b_i is the only digit that has the value 1, …"? Comment by abellong — July 13, 2012 @ 8:23 am

• Example for k=1 to see if I am following your logic. Change the indexing to (0, N−1) to make it easier. Let N=3, k=1 and n=2. Ask about 2; if all answers are No, x is either 0 or 1 and we are done. If we get a Yes, then ask about 1. If A answers Yes, then we have had two Yes's in a row and x is in (1,2) and we are done. If we got a No, then A has answered 2: yes, 1: no. The only possibilities compatible with this set of answers are 2: true, 1: false, or 2: false, 1: false. Therefore we can eliminate 1, and x is either 0 or 2. It would be nice if someone writes out the same for N=5 and k=2. Comment by Jeff — July 13, 2012 @ 2:40 pm

17. [...] 4 has started. It is based on question 3 of the IMO. The research thread is here. There is a wiki here. [...] Pingback by — July 13, 2012 @ 12:06 am

18. Isn't there something missing in the statement of the question? If I am B, then all my questions are of the form "is x \in { j }?", where j = 1, 2, …, N, and I repeat each question k+1 times. Since A cannot lie k+1 consecutive times, and x must be in { x }, the answer must be 'yes' after at most (k+1)x questions. Comment by dd — July 13, 2012 @ 12:11 am

• Sorry! It is not as simple as I first thought, because if there are mixed answers for a set, I may still not know which answer is correct. For instance, A can alternate between yes and no, and no matter how many times I ask the same question I still can't figure out which way is the truth. Comment by dd — July 13, 2012 @ 12:15 am

19. For part 2, we can again assume N = n + 1, n = 1.99^k (the ceiling). Comment by — July 13, 2012 @ 12:38 am

• And A shouldn't give B the possibility to exclude any number from the range [1, N]. Comment by — July 13, 2012 @ 12:42 am

• For each number in the range [1, N], at least one of every k+1 answers in a row should be satisfied, so that the number remains a valid value for x. A's goal is to choose YES's and NO's in such a way. Comment by — July 13, 2012 @ 4:34 am

20. Can't B just keep asking questions with a singleton {y} for 2k consecutive times, and then B can be sure of whether y=x? Comment by Ronny — July 13, 2012 @ 1:35 am

• No. This has been discussed at least two other times; please read the thread before contributing. The standard counterexample is what happens if the answers are half yes and half no. Comment by Kreiser — July 13, 2012 @ 2:00 am

21. I think it goes along the following lines: suppose that you have built a sequence of k+1 nested sets X_0 <= X_1 <= … <= X_k such that the answers of A indicate that none of the X_i contain the element x. Then we have confirmation that x is not an element of X_0. Indeed, if that were the case, then x would also be in X_1, X_2, …, X_k and all the answers A gave were lies. Try to build such a sequence where each nested set is at least half as large as its parent. Comment by dd — July 13, 2012 @ 2:18 am
• [SPOILER: don't read if you don't want to see the solution] To get the first part of the problem, proceed as follows. Suppose that we have determined that a set of t >= 0 values from [N] *cannot* be the element x. Then let us eliminate one more element, provided that N − t >= 2^k + 1. In this case, pick an arbitrary element y that you are still unsure could be x. Ask A repeatedly whether y is x. If the answer you get k+1 times is NO, then it is the truth and you have eliminated y as well. Otherwise, at some point A answers YES, which is equivalent to saying that there is a set X_k of size N − t − 1 >= 2^k which does not contain x. Now ask A whether an arbitrary half of X_k contains x; either answer determines a set X_{k−1} \subset X_k with at least 2^{k−1} elements that does not contain x (according to A's answer). Iterating yields a non-empty set X_0 for which you know x is not in X_0 (see the parent comment). You have thus eliminated |X_0| >= 1 elements from the set of candidates for x. Doing this until N − t = 2^k yields the set with the desired property. Comment by dd — July 13, 2012 @ 2:36 am

22. I think Comment 16 (and also Comment 21, which seems to be basically the same argument) has resolved the first part of the problem. Looks like the second part is still almost untouched, though (except for the short observation in Comment 19)… Comment by — July 13, 2012 @ 4:08 am

23. You can try to solve both parts for the "truth", which is harder but doable (and was the original problem). I won't (yet) mention the optimal $n$ so as to not risk giving any hints for part (b). Comment by — July 13, 2012 @ 5:45 am

• What do you mean by truth? In the above we have solved for the set containing x. Comment by jbergmanster — July 13, 2012 @ 2:45 pm

• I think Ralph is using "the truth" to refer to the true threshold for n (i.e. the least n for which B has a winning strategy, or equivalently the first n for which A no longer has a counterstrategy). The problem as stated places this threshold below or equal to $2^k$, and above or equal to $(1.99)^k$ for sufficiently large $k$, but it is presumably somewhere in between. (e.g. it might be something like $\binom{k}{\lfloor k/2\rfloor}$, which sometimes shows up as a threshold in some other extremal combinatorics problems.) Comment by — July 13, 2012 @ 4:07 pm

24. Some ideas for the second part: 1) Could it work for any $n < 2^k$? In that case, (b) follows from the fact that there exists an integer $n \in [1.99^k , 2^k)$ for sufficiently large $k$. 2) If the above is true, we would only need to show that for $N = 2^k$ and $n < N$, B does not have a winning strategy. Comment by Olli — July 13, 2012 @ 7:12 am

• Possibly some variant of a greedy algorithm would work? Comment by Olli — July 13, 2012 @ 7:33 am

• So should k=6 work, with ceil(1.99^6) = 63 and N=64? Or do we need a larger gap, of order k? Comment by jbergmanster — July 13, 2012 @ 3:01 pm

25. I guess we can amend the rules of the game: instead of A not being able to lie for k+1 moves in a row, he is able to lie for k+1 moves in a row, but he will lose at the end of the game if he ever does that. Looks like one possible approach might be probabilistic deduction: something like A's strategy is random and B's strategy is deterministic. Can we say that it is enough for B to ask only finitely many questions, by some reasoning similar to how some problems need to be checked only in finitely many cases by a compactness argument? Comment by Jaakko — July 13, 2012 @ 8:51 am

26. [...] Edit: As of Friday morning (7/13/2012), the problem still has not been completely solved, so there's time to chime in on the discussion thread! [...] Pingback by — July 13, 2012 @ 1:21 pm
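Here is a runnable sketch (my own toy code, not from the thread) of the elimination procedure in comment 21 and the spoiler above. The answering function is an arbitrary coin flip rather than a "legal" adversary; that is fine, because the assertion checks B's certificate directly: every discarded candidate z is contradicted by all of the last k+1 answers, so the deduction is sound no matter how A answers.

```python
import random

def eliminate(candidates, k, answer):
    """One elimination round of dd's procedure: returns a nonempty set that B
    may safely discard, plus the transcript of (set, claimed-membership) pairs
    the certificate is based on.  `answer(S)` is A's claim "x is in S"."""
    y = min(candidates)
    transcript = []
    for _ in range(k + 1):
        says = answer({y})
        transcript.append(({y}, says))
        if says:
            break
    else:
        return {y}, transcript            # k+1 NOs in a row must be truthful
    X = set(candidates) - {y}             # the YES asserts x = y, i.e. x not in X
    for _ in range(k):                    # halve k times, keeping the x-free half
        half = set(list(X)[: len(X) // 2])
        says = answer(half)
        transcript.append((half, says))
        X = X - half if says else half
    return X, transcript

def certificate_ok(discarded, transcript, k):
    # If x were any discarded z, the last k+1 answers would all be lies.
    last = transcript[-(k + 1):]
    return all(all((z in S) != says for S, says in last) for z in discarded)

k, N = 3, 40
candidates = set(range(N))
rng = random.Random(0)
while len(candidates) > 2 ** k:
    gone, tr = eliminate(candidates, k, lambda S: rng.random() < 0.5)
    assert gone and certificate_ok(gone, tr, k)
    candidates -= gone
print("candidates narrowed to", len(candidates))
```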
27. Some observations: we can again assume N = n + 1, n = 1.99^k (the ceiling). The strategy for A will be, for each value of x in the range [1, N], to have at least one correct answer in each k+1 consecutive queries. Each time A picks an answer he can choose YES or NO in such a way that it will be true for at least half of the numbers in {1, 2, …, N}. Comment by Anonymous — July 13, 2012 @ 3:25 pm

28. It would be enough to show that there is a set S of size $M$, where $1.99^k + 1 <= M <= 2^k$, such that for all choices of $x$ in $S$, A could give the same answers to all of B's questions while still maintaining the required conditions. In this case it would seem that B cannot win if n = M − 1. Comment by Nirman — July 13, 2012 @ 3:26 pm

29. To approach part 2, I guess the easiest idea is to try to give a strategy for A. From part 1, we learned that we can as well start with a set of size N = n+1, and B tries to exclude some element i in {1,…,n+1} as being x. Thus, B will specify sets S_1, S_2, … and A needs to answer in turn. At any point in time, A wants *each* value i in {1,…,n+1} to still be a possibility for x. Thus, suppose we answered the previous questions S_1,…,S_k somehow, and we are given S_{k+1} which we want to answer. We now build a 0/1-matrix, with columns 1,…,k and rows 1,…,n+1. For each (i,j) we put a 1 in the entry in case we answered question j (about S_j) in such a way that x = i is a possibility. Thus, this now looks like the following game: B provides a subset S of the rows. A then adds a column to the matrix, where either exactly the entries in S have a 1, or exactly the entries in {1,..,n+1} \ S have a 1. If at any point in time some row has only zeros in the last k+1 columns, A loses. We can now try to give strategies which can handle as many rows as possible (even if it is way less than 2^k). Comment by Anonymous — July 13, 2012 @ 3:29 pm

30. This is a very vague idea, but part b) reminds me of the sort of problems I encountered in an Information Theory class a long while ago. It was especially common to prove inequalities in the limit like that in part b). So perhaps the problem could be phrased in information-theoretic language? e.g. "The answers provided by A can only reduce the entropy so much". But as I hardly remember the details of information theory I may be way off base… Comment by letmeitellyou — July 13, 2012 @ 3:57 pm

31. Since the ratio of 1.99^n to 2^n becomes arbitrarily small, the following method could possibly work. Start with any example that doesn't work and find a way to double it, adding one more lie, any number of times. Then eventually it will be between 1.99^n and 2^n and it will be the desired counterexample. This has the virtue of allowing us to look at small counterexamples first. Do we have a list of counterexamples where the number is less than 2*n and A wins? Perhaps that would be useful. I would be interested in what information A can hide and how. Comment by kristalcantwell — July 13, 2012 @ 5:29 pm

• So let's take a concrete factor. Suppose you face half the number of questions which guarantee success: is there a strategy that works? The number 2 seems to be intimately connected to the problem, which suggests that half might be a good proportion to try. Comment by Mark Bennet — July 13, 2012 @ 6:38 pm
32. In part (b), we need to develop a strategy for the 1st player. If I were him, I would take N=n+1 and not choose x at all. Then I would try to answer in such a way that for all n+1 possible choices of x there are no k+1 successive lies. If this can be done, then after any n-element guess of the 2nd player, I can claim that x is the (n+1)-st element. This is a bit of cheating, but it works :) Comment by Anonymous — July 13, 2012 @ 5:36 pm

33. Now, let a(i), i=1,…,n+1 be the number of successive lies up to now, if x=i. Then, at every step, I have the vector (a(1),a(2),…,a(n+1)). For example, with n+1=4 I start with the vector (0,0,0,0). After a question about the set {1,2,3}, I can say "yes" to get the vector (0,0,0,1) or "no" to get (1,1,1,0). Obviously, it is better to say "yes". In this case, if the next question is about the set {1,4}, I can say "yes" to get (0,1,1,0) or "no" to get (1,0,0,2). In any case I can reset some entries to 0 but increase the other entries by 1. I lose if I get k+1 somewhere. Comment by Bogdan — July 13, 2012 @ 5:49 pm

• In case there are two entries which contain k, we have also already lost, because we can't guarantee that neither of them goes to k+1. Analogously if there are 4 entries which contain k−1, or 8 entries containing k−2. Comment by Anonymous — July 13, 2012 @ 6:38 pm

34. We can think of the game in the following way. B asks S_1, S_2, …, S_k, …, and A should make a choice (by answering YES or NO) of S_i or its complement each time, and we will get D_1, D_2, …, D_k, …, where D_i is either S_i or the complement of S_i (A has the option to choose). A wants to do this in such a way that for each m, all the numbers from the range [1, N] appear at least once in the sequence D_m, D_{m+1}, …, D_{m+k}. Comment by — July 13, 2012 @ 7:23 pm

• We use a greedy approach to choose the D_i. We choose D_1 = S_1 if |S_1| > N/2, otherwise D_1 = Complement(S_1). To pick D_2 we check whether S_2 or Complement(S_2) covers at least half of [1, N] \ D_1. In general, we pick D_{i+1} so that it covers at least half of [1, N] \ (D_1 ∪ D_2 ∪ … ∪ D_i). We can claim that within at most p = log_2 N steps, D_1 ∪ D_2 ∪ … ∪ D_p = [1, N], where p = k log_2 (1.99). It means that for each of the numbers in [1, N], A gave at least one correct answer in the first p steps. Because p > k/2, this will not imply part b); probably some modifications are needed. Comment by — July 13, 2012 @ 8:04 pm
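The greedy choice in the reply above is easy to verify in code. A small sketch of my own, under the assumption that the questions are arbitrary subsets of [1, N]: whichever of S_i or its complement covers at least half of what is still uncovered exists by counting, so the uncovered part at least halves each round.

```python
import math, random

def greedy_rounds(N, questions):
    """A's greedy choice from comment 34: assert the set D_i (the question or
    its complement) that covers at least half of the still-uncovered numbers."""
    uncovered = set(range(N))
    rounds = 0
    for S in questions:
        if not uncovered:
            break
        comp = set(range(N)) - S
        D = S if len(S & uncovered) >= len(comp & uncovered) else comp
        uncovered -= D
        rounds += 1
    return rounds, uncovered

rng = random.Random(1)
N = 1000
questions = [{x for x in range(N) if rng.random() < 0.5} for _ in range(50)]
rounds, left = greedy_rounds(N, questions)
assert not left and rounds <= math.ceil(math.log2(N))
print("every number received a true answer within", rounds, "rounds")
```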
35. Since this is a cooperative effort, let me blurt out some weaker result for part 2 which may be refined to give the desired result. Suppose that we just want to prove that for $n < 2^{k/2}$ there is no winning strategy for B. We let N = n + 1. Before describing A's strategy, let us look at what B must do. After a finite number of rounds, B provides a set of n elements which he claims must contain x. In other words, B is stating that precisely one element y from [1, N] should not be x. What we have to show is that all of A's answers are compatible with y being equal to x. More precisely, we have to show that for x = y, the sequence of answers does not contain any k+1 consecutive lies. A's answers are encoded as a sequence of sets $S_0, S_1, \dots, S_M$ such that each answer is of the form "x does not belong to $S_i$". If the set given by B on the i-th round is $T_i$, then the set $S_i$ is either $T_i$ or its complement. A's choice for $S_0$ is arbitrary (say $S_0 = T_0$). For $i = 1,\dots, k/2$, pick $S_i \in \{T_i, T_i^c\}$ such that |intersection of S_0, S_1, …, S_i| <= |intersection of S_0, …, S_{i – 1}| / 2. In particular, the intersection of $S_0, \dots, S_{k/2}$ is empty. After this, A starts afresh with $S_{k/2 + 1} = T_{k/2 + 1}$ and repeats the same process again. Now for any choice of y that B selects, all the answers of A are compatible, since among any k+1 consecutive sets in the sequence $S_0, \dots, S_M$ there must be a subsequence of k/2 + 1 terms $S_j, S_{j + 1}, \dots, S_{j + k/2}$ whose common intersection is empty. In particular, y cannot be in all of the sets $S_j, \dots, S_{j + k/2}$, so one of those answers was truthful. Comment by dd — July 13, 2012 @ 7:39 pm

36. The game can be reformulated as an equivalent one: the player $A$ chooses an element $x$ from the set $S$ (with $|S|=N$) and the player $B$ asks a sequence of questions. The $j$-th question consists of $B$ choosing a set $D_j\subseteq S$ and player $A$ selecting a set $P_j\in\left\{ D_j,D_j^C\right\}$. For every $j\geq 1$ the following relation holds: $x\in P_j\cup P_{j+1}\cup\cdots \cup P_{j+k}$. The player $B$ wins if after a finite number of steps he can choose a set $X$ with $|X|\leq n$ such that $x\in X$.

a) It suffices to prove that if $N\geq 2^k+1$ then the player $B$ can determine a set $S^{\prime}\subseteq S$ with $|S^{\prime}|\leq N-1$ such that $x\in S^{\prime}$. Assume that $N\geq 2^k+1$. In the first move $B$ selects any set $D_1\subseteq S$ such that $|D_1|\geq 2^{k-1}$ and $|D_1^C|\geq 2^{k-1}$. After receiving the set $P_1$ from $A$, $B$ makes the second move. The player $B$ selects a set $D_2\subseteq S$ such that $| D_2\cap P_1^C|\geq 2^{k-2}$ and $|D_2^C\cap P_1^C|\geq 2^{k-2}$. The player $B$ continues this way: in move $j$ he/she chooses a set $D_j$ such that $| D_j\cap \left(P_1\cup\cdots\cup P_{j-1}\right)^C|\geq 2^{k-j}$ and $|D_j^C\cap \left(P_1\cup\cdots\cup P_{j-1}\right)^C|\geq 2^{k-j}$. In this way the player $B$ obtains sets $P_1$, $P_2$, $\dots$, $P_k$ such that $\left|\left(P_1\cup \cdots \cup P_k\right)^C\right|\geq 1$. Then $B$ chooses the set $D_{k+1}$ to be a singleton containing any element of $\left(P_1\cup\cdots \cup P_k\right)^C$. There are two cases now:

$1^{\circ}$ The player $A$ selects $P_{k+1}=D_{k+1}^C$. Then $B$ can take $S^{\prime}=S\setminus D_{k+1}$ and the statement is proved.

$2^{\circ}$ The player $A$ selects $P_{k+1}=D_{k+1}$. Now the player $B$ repeats the previous procedure on the set $S_1=S\setminus D_{k+1}$ to obtain the sequence of sets $P_{k+2}$, $P_{k+3}$, $\dots$, $P_{2k+1}$. The following inequality holds: $\left|S_1\setminus \left(P_{k+2}\cup\cdots\cup P_{2k+1}\right)\right|\geq 1,$ since $|S_1|\geq 2^k$. However, now we have $\left|\left(P_{k+1}\cup P_{k+2}\cup\cdots\cup P_{2k+1}\right)^C\right|\geq 1,$ and we may take $S^{\prime}=P_{k+1}\cup \cdots \cup P_{2k+1}$.

(b) Let $p$ and $q$ be two real numbers such that $1.99 < p < q < 2$. Let us choose $k_0$ such that $\left(\frac{p}{q}\right)^{k_0}\leq 2\cdot \left(1-\frac{q}2\right)$ and $p^{k_0}-1.99^{k_0} > 1$. We will prove that for every $k\geq k_0$, if $|S|\in\left(1.99^k, p^k\right)$ then there is a strategy for the player $A$ to select sets $P_1$, $P_2$, $\dots$ (based on the sets $D_1$, $D_2$, $\dots$ provided by $B$) such that for each $j$ the following relation holds: $P_j\cup P_{j+1}\cup\cdots\cup P_{j+k}=S.$

Assuming that $S=\{1,2,\dots, N\}$, the player $A$ will maintain a sequence of $N$-tuples $\left(\mathbf{x}^j\right)_{j=0}^{\infty}$, $\mathbf x^j=\left(x_1^j, x_2^j, \dots, x_N^j\right)$. Initially we set $x_1^0=x_2^0=\cdots =x_N^0=1$. After the set $P_{j+1}$ is selected, we define $\mathbf x^{j+1}$ based on $\mathbf x^j$ as follows:

$x_i^{j+1}=\left\{\begin{array}{rl} 1,&\mbox{ if } i\in P_{j+1}\\ q\cdot x_i^j, &\mbox{ if } i\not\in P_{j+1}. \end{array}\right.$

The player $A$ can keep $B$ from winning if $x_i^j\leq q^k$ for each pair $(i,j)$. For a sequence $\mathbf x$, let us define $T(\mathbf x)=\sum_{i=1}^N x_i$. It suffices for player $A$ to make sure that $T\left(\mathbf x^j\right)\leq q^{k}$ for each $j$. Notice that $T\left(\mathbf x^0\right)=N\leq p^k < q^k$. We will now prove that given $\mathbf x^j$ such that $T\left(\mathbf x^j\right)\leq q^k$, and a set $D_{j+1}$, the player $A$ can choose $P_{j+1}\in\left\{D_{j+1},D_{j+1}^C\right\}$ such that $T\left(\mathbf x^{j+1}\right)\leq q^k$. Let $\mathbf y$ be the tuple that would be obtained if $P_{j+1}=D_{j+1}$, and let $\mathbf z$ be the tuple that would be obtained if $P_{j+1}=D_{j+1}^C$. Then we have

$T\left(\mathbf y\right)=\sum_{i\in D_{j+1}^C} qx_i^j+\left|D_{j+1}\right|$

$T\left(\mathbf z\right)=\sum_{i\in D_{j+1}} qx_i^j+\left|D_{j+1}^C\right|.$

Summing the previous two equalities gives

$T\left(\mathbf y\right)+T\left(\mathbf z\right)= q\cdot T\left(\mathbf x^j\right)+ N\leq q^{k+1}+ p^k, \mbox{ hence}$

$\min\left\{T\left(\mathbf y\right),T\left(\mathbf z\right)\right\}\leq \frac{q}2\cdot q^k+\frac{p^k}2\leq q^k,$

because of our choice of $k_0$. Comment by akash chayan — July 13, 2012 @ 7:53 pm

• I think it's a correct solution; just in the definition of $x_i^{j+1}$ it should be $P_{j+1}$ instead of $S$. [Corrected, -T.] Comment by — July 13, 2012 @ 8:14 pm

37. Dear all, as this thread is becoming quite full, I am opening a fresh thread at http://polymathprojects.org/2012/07/13/minipolymath4-project-second-research-thread/ to refocus the discussion. I'll leave this thread open for responses to existing comments here, but if you could put all new comments in the new thread, that would be great. (Now would also be a good time to re-summarise some of the observations made in this thread onto the fresh thread, to make it easier to catch up.) Comment by — July 13, 2012 @ 7:56 pm

38. [...] the previous research thread is getting quite lengthy (and is mostly full of attacks on the first part of the problem, which is [...] Pingback by — July 13, 2012 @ 8:04 pm

39. [...] (Rubinstein would say that this is true of most real-life applications of game theory as well.) Try your hand, or look at the comments, which surely have spoilers by now as it has been up for about a day: [...] Pingback by — July 13, 2012 @ 8:47 pm

40. Reblogged this on Wikipedia Afficianado and commented: The IMO is the Mecca of young mathematicians battling it out in this divine field, of which I was oblivious until now. Whenever I try to study mathematics it is with the notion of solving a problem, and that problem is hard enough for veterans to try; but what I have come to know from those who do "research" is that they don't do it just to solve the problem, but firstly to understand it well and secondly to find out why the problem is tougher than it seems. Terence Tao, as you all know, is a known child prodigy who developed the ability to solve problems involving numbers at a very young age. He is the youngest ever to have received a Fields Medal. This re-blogged post concerns a question which appeared in this year's IMO (International Mathematical Olympiad, in case you are not familiar with it) and a good thread to discuss what comes to your mind while approaching it. Comment by — July 30, 2012 @ 10:58 pm
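The weighting argument of comment 36 can be watched in action on toy parameters. The sketch below is my own code, not part of the solution; it uses q = 20/11 and an N chosen to satisfy the same inequality N <= (2 − q)q^k that drives the proof (the actual argument needs q close to 2 and an astronomically large N near 1.99^k, far beyond simulation). A picks P_{j+1} greedily to minimize the potential T, and no candidate ever collects k+1 consecutive wrong answers.

```python
import random

k = 10
q = 20 / 11                    # 2k/(k+1), which maximizes (2 - q) * q**k
N = int((2 - q) * q ** k)      # the invariant T <= q^k needs N <= (2-q)*q^k
x = [1.0] * N                  # x[i] = q ** (current run of wrong answers for i)
runs = [0] * N

rng = random.Random(2)
universe = set(range(N))
for _ in range(5000):
    D = {i for i in range(N) if rng.random() < 0.5}
    comp = universe - D

    def T(P):                  # potential after asserting "x is in P"
        return sum(1.0 if i in P else q * x[i] for i in range(N))

    P = D if T(D) <= T(comp) else comp
    for i in range(N):
        if i in P:
            x[i], runs[i] = 1.0, 0
        else:
            x[i], runs[i] = q * x[i], runs[i] + 1
        assert runs[i] <= k    # every window of k+1 answers is true for each i
print("no candidate ever suffered", k + 1, "consecutive wrong answers")
```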
41. 1) For the case of k+1 consecutive responses one of which is true: suppose k = 0, so all answers are true, so we can know the exact number x; so n = 1 ≥ 2^0. I now propose a case that will help me in my demonstration. 2) For k+2 consecutive responses of which one is true and one is false: assume that k = 0, so any two consecutive responses contain one true and one false. So, thanks to the given 1 ≤ x ≤ N, by asking this series of questions: x = 0?, 1 ≤ x ≤ N?, x = 1?, 1 ≤ x ≤ N?, … I will know the exact number x, so in this case n = 1 ≥ 2^0. Now assume that for k+1 consecutive responses of which one is true, n must satisfy n ≥ 2^k; one must show that for k+2 responses, n must satisfy n ≥ 2^(k+1). And assume that for k+2 consecutive responses of which one is true and one is false, n must satisfy n ≥ 2^k; we must show that for k+3 consecutive responses, n must satisfy n ≥ 2^(k+1). 1*) The case of k+2 consecutive responses of which one is true divides into two cases: either the k+2 answers are all true, and then n ≥ 1; or the k+2 answers contain a true one and a false one, so n ≥ 2^k; so it is sufficient to take n ≥ 2^(k+1). 2*) The case of k+3 consecutive responses of which one is true and one is false divides into two cases: either k+2 answers are wrong and only a single one is true; in this case the position of the true answer is the same after each k+2 answers, so to find its position just ask the same question k+2 times, and the correct answer will be the one that differs among the answers; in this way we can know the exact number x, so n ≥ 1. Or the k+3 answers contain at least two true answers and one false; this case is a special case of the general case (k+2 consecutive responses one of which is true), so n ≥ 2^(k+1). It's finished. Comment by Anonymous — August 22, 2012 @ 3:27 pm
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 266, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9543529152870178, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-algebra/83479-irreduclible-polynomials.html
# Thread:

1. ## Irreducible polynomials

Show that $f(x)=x^4+bx^2+d$ is irreducible over $\mathbb Q(\sqrt{d(b^2-4d)})[x]$

2. Originally Posted by ZetaX: Show that $f(x)=x^4+bx^2+d$ is irreducible over $\mathbb Q(\sqrt{d(b^2-4d)})[x]$
This can't be right! Did you mean reducible?

3. No, this is correct, it is irreducible; that is what I am supposed to show.

4. Originally Posted by ZetaX: No this is correct, it is irreducible, that is what I am supposed to show.
The question doesn't make sense! For example, choose d = 1 and b = 3. Then $\sqrt{d(b^2 - 4d)}=\sqrt{5}.$ Now in $\mathbb{Q}(\sqrt{5})[x]$ we have: $x^4+3x^2 + 1 = \left(x^2 + \frac{3 + \sqrt{5}}{2} \right) \left(x^2 + \frac{3 - \sqrt{5}}{2} \right).$

5. I have that $f(x)=x^4+bx^2+d$ is irreducible over the rationals, and its Galois group is $D_4$. And M is the splitting field of $f(x)$, and I have that $\mathbb Q(\sqrt{d(b^2-4d)})$ is a quadratic subfield of M.

6. I think that ZetaX is referring to this thread. In that thread he asks that, given that $x^4 + bx^2 + d$ is irreducible over $\mathbb{Q}$ with Galois group $D_4$, show that the three quadratic extensions (we know there have to be three since $D_4$ has three subgroups of index $2$) are: $\mathbb{Q}(\sqrt{b^2-4d}),\mathbb{Q}(\sqrt{d}),\mathbb{Q}(\sqrt{d(b^2-4d)})$. He is asking to complete that solution.

7. I am not asking to complete this, but the question I am asking is a continuation of it.
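NonCommAlg's factorization in post 4 is easy to machine-check. Here is a small SymPy snippet (my own, not from the thread) that expands the product back to f and asks SymPy to factor f over Q(√5):

```python
# Quick check of the factorization in post 4 over Q(sqrt(5)).
from sympy import symbols, sqrt, expand, factor

x = symbols('x')
f = x**4 + 3*x**2 + 1
g = (x**2 + (3 + sqrt(5)) / 2) * (x**2 + (3 - sqrt(5)) / 2)
assert expand(g - f) == 0           # the two quadratics multiply back to f

# f is irreducible over Q but splits into quadratics once sqrt(5) is adjoined:
print(factor(f))                    # stays x**4 + 3*x**2 + 1 over Q
print(factor(f, extension=sqrt(5)))
```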
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 24, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9700907468795776, "perplexity_flag": "head"}
http://mathoverflow.net/questions/99736/beautiful-descriptions-of-exceptional-groups/99744
## Beautiful descriptions of exceptional groups

I'm curious about beautiful descriptions of the exceptional simple complex Lie groups and algebras (and maybe their compact forms). By beautiful I mean simple (not complicated: we should not need many words for the description).

For $G_2$ we know the automorphisms of the octonions and the rolling distribution (and also the intersection of three $Spin_7$'s in $Spin_8$).

For $F_4$ we know the automorphisms of the Jordan algebra $H_3(\mathbb O)$ and the Lie algebra of commutators of right multiplications in this algebra (see the Chevalley-Schafer paper for details).

For $E_6$ we know the automorphisms of the determinant on $H_3(\mathbb O)$ and the Lie algebra linearly spanned by the right multiplications and $\mathfrak f_4$.

For $\mathfrak f_4$, $\mathfrak e_6$, $\mathfrak e_7$, $\mathfrak e_8$ we know the Vinberg-Freudenthal Magic Square.

What do we know (expressed in a simple form) about $E_7$ and $E_8$?

- You're presumably aware of Baez's paper on the octonions, and of math.ucr.edu/home/baez/octonions – André Henriques Jun 15 at 19:25
- @Andre: of course I read this – zroslav Jun 15 at 19:26
- A related question is «what are ugly descriptions of exceptional groups?» – Mariano Suárez-Alvarez Jul 6 at 13:17
- @Mariano: I think that "Freudenthal Space Representation" is a very ugly description of $E_7$ – zroslav Jul 8 at 13:27
- For me the descriptions in Adams' book "Exceptional Lie Groups" are ugly. He uses arguments that something exists but does not give an explicit construction. – Marek Mitros Jul 30 at 10:13

## 4 Answers

It is not always clear what one means by 'the simplest description' of one of the exceptional Lie groups. In the examples you've given above, you quote descriptions of these groups as automorphisms of algebraic structures, and that's certainly a good way to do it, but that's not the only way, and one can argue that they are not the simplest in terms of a very natural criterion, which I'll now describe:

Say that you want to describe a subgroup $G\subset \text{GL}(V)$ where $V$ is a vector space (let's not worry too much about the ground field, but, if you like, take it to be $\mathbb{R}$ or $\mathbb{C}$ for the purposes of this discussion). One would like to be able to describe $G$ as the stabilizer of some element $\Phi\in\text{T}(V{\oplus}V^\ast)$, where $\mathsf{T}(W)$ is the tensor algebra of $W$. The tensor algebra $\mathsf{T}(V{\oplus}V^\ast)$ is reducible under $\text{GL}(V)$, of course, and, ideally, one would like to be able to choose a 'simple' defining $\Phi$, i.e., one that lies in some $\text{GL}(V)$-irreducible submodule $\mathsf{S}(V)\subset\mathsf{T}(V{\oplus}V^\ast)$.

Now, all of the classical groups are defined in this way, and, in some sense, these descriptions are as simple as possible. For example, if $V$ with $\dim V = 2m$ has a symplectic structure $\omega\in \Lambda^2(V^\ast)$, then the classical group $\text{Sp}(\omega)\subset\text{GL}(V)$ has codimension $m(2m{-}1)$ in $\text{GL}(V)$, which is exactly the dimension of the space $\Lambda^2(V^\ast)$. Thus, the condition of stabilizing $\omega$ provides exactly the number of equations one needs to cut out $\text{Sp}(\omega)$ in $\text{GL}(V)$. Similarly, the standard definitions of the other classical groups as subgroups of linear transformations that stabilize an element in a $\text{GL}(V)$-irreducible subspace of $\mathsf{T}(V{\oplus}V^\ast)$ are as 'efficient' as possible.
In another direction, if $V$ has the structure of an algebra, one can regard the multiplication as an element $\mu\in \text{Hom}\bigl(V\otimes V,V\bigr)= V^\ast\otimes V^\ast \otimes V$, and the automorphisms of the algebra $A = (V,\mu)$ are, by definition, the elements of $\text{GL}(V)$ whose extensions to $V^\ast\otimes V^\ast \otimes V$ fix the element $\mu$. Sometimes, if one knows that the multiplication is symmetric or skew-symmetric and/or traceless, one can regard $\mu$ as an element of a smaller vector space, such as $\Lambda^2(V^\ast)\otimes V$ or even the $\text{GL}(V)$-irreducible module $\bigl[\Lambda^2(V^\ast)\otimes V\bigr]_0$, i.e., the kernel of the natural contraction mapping $\Lambda^2(V^\ast)\otimes V\to V^\ast$.

This is the now-traditional definition of $G_2$, the simple Lie group of dimension $14$: One takes $V = \text{Im}\mathbb{O}\simeq \mathbb{R}^7$ and defines $G_2\subset \text{GL}(V)$ as the stabilizer of the vector cross-product $\mu\in \bigl[\Lambda^2(V^\ast)\otimes V\bigr]_0\simeq \mathbb{R}^{140}$. Note that the condition of stabilizing $\mu$ is essentially $140$ equations on elements of $\text{GL}(V)$ (which has dimension $49$), so this is many more equations than one would really need. (If you don't throw away the subspace defined by the identity element in $\mathbb{O}$, the excess of equations needed to define $G_2$ as a subgroup of $\text{GL}(\mathbb{O})$ is even greater.)

However, as was discovered by Engel and Reichel more than 100 years ago, one can define $G_2$ over $\mathbb{R}$ much more efficiently: Taking $V$ to have dimension $7$, there is an element $\phi\in \Lambda^3(V^\ast)$ such that $G_2$ is the stabilizer of $\phi$. In fact, since $G_2$ has codimension $35$ in $\text{GL}(V)$, which is exactly the dimension of $\Lambda^3(V^\ast)$, one sees that this definition of $G_2$ is the most efficient that it can possibly be. (Over $\mathbb{C}$, the stabilizer of the generic element of $\Lambda^3(V^\ast)$ turns out to be $G_2$ crossed with the cube roots of unity, so the identity component is still the right group; you just have to require in addition that it fix a volume form on $V$, so that you wind up with $36$ equations to define the subgroup of codimension $35$.)

For the other exceptional groups, there are similarly more efficient descriptions than as automorphisms of algebras. Cartan himself described $F_4$, $E_6$, and $E_7$ in their representations of minimal dimension as stabilizers of homogeneous polynomials (which he wrote down explicitly) on vector spaces of dimension $26$, $27$, and $56$ of degrees $3$, $3$, and $4$, respectively. There is no doubt that, in the case of $F_4$, this is much more efficient (in the above sense) than the traditional definition as automorphisms of the exceptional Jordan algebra. In the $E_6$ case, this is the standard definition. I think that, even in the $E_7$ case, it's better than the one provided by the 'magic square' construction.

In the case of $E_8\subset\text{GL}(248)$, it turns out that $E_8$ is the stabilizer of a certain element $\mu\in \Lambda^3\bigl((\mathbb{R}^{248})^\ast\bigr)$, which is essentially the Cartan $3$-form on the Lie algebra of $E_8$. I have a feeling that this is the most 'efficient' description of $E_8$ there is (in the above sense).
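For concreteness, one frequently quoted coordinate expression for a $3$-form on $\mathbb{R}^7$ whose stabilizer is $G_2$ is the following (this normalization is an illustrative choice added here; sign conventions differ between authors, and it is not necessarily the one Engel and Reichel used):
$$\phi = e^{123} + e^{145} + e^{167} + e^{246} - e^{257} - e^{347} - e^{356},$$
where $e^{ijk} = e^i\wedge e^j\wedge e^k$ for a basis $e^1,\dots,e^7$ of $V^\ast$.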
This last remark is a special case of a more general phenomenon that seems to have been observed by many different people, but I don't know where it is explicitly written down in the literature: If $G$ is a simple Lie group of dimension bigger than $3$, then $G\subset\text{GL}({\frak{g}})$ is the identity component of the stabilizer of the Cartan $3$-form $\mu_{\frak{g}}\in\Lambda^3({\frak{g}}^\ast)$. Thus, you can recover the Lie algebra of $G$ from knowledge of its Cartan $3$-form alone.

On 'rolling distributions': You mentioned the description of $G_2$ in terms of 'rolling distributions', which is, of course, the very first description (1894), by Cartan and Engel (independently), of this group. They show that the Lie algebra of vector fields in dimension $5$ whose flows preserve the $2$-plane field defined by
$$dx_1 - x_2\ dx_0 = dx_2 - x_3\ dx_0 = dx_4 - {x_3}^2\ dx_0 = 0$$
is a $14$-dimensional Lie algebra of type $G_2$. (If the coefficients are $\mathbb{R}$, this is the split $G_2$.) It is hard to imagine a simpler definition than this. However, I'm inclined not to regard it as all that 'simple', just because it's not so easy to get the defining equations from this and, moreover, the vector fields aren't complete. In order to get complete vector fields, you have to take this $5$-dimensional affine space as a chart on a $5$-dimensional compact manifold. (Cartan actually did this step in 1894, as well, but that would take a bit more description.) Since $G_2$ does not have any homogeneous spaces of dimension less than $5$, there is, in some sense, no 'simpler' way for $G_2$ to appear.

What doesn't seem to be often mentioned is that Cartan also described the other exceptional groups as automorphisms of plane fields in this way as well. For example, he shows that the Lie algebra of $F_4$ is realized as the vector fields whose flows preserve a certain 8-plane field in 15-dimensional space. There are corresponding descriptions of the other exceptional algebras as stabilizers of plane fields in other dimensions. K. Yamaguchi has classified these examples and, in each case, writing down explicit formulae turns out to be not difficult at all. Certainly, in each case, writing down the defining equations in this way takes less time and space than any of the algebraic methods known.

Further remark: Just so this won't seem too mysterious, let me describe how this goes in general: Let $G$ be a simple Lie group, and let $P\subset G$ be a parabolic subgroup. Let $M = G/P$. Then the action of $P$ on the tangent space of $M$ at $[e] = eP\in M$ will generally preserve a filtration
$$(0) = V_0 \subset V_1\subset V_2\subset \cdots \subset V_{k-1} \subset V_k = T_{[e]}M$$
such that each of the quotients $V_{i+1}/V_i$ is an irreducible representation of $P$. Corresponding to this will be a set of $G$-invariant plane fields $D_i\subset TM$ with the property that $D_i\bigl([e]\bigr) = V_i$. What Yamaguchi shows is that, in many cases (he determines the exact conditions, which I won't write down here), the group of diffeomorphisms of $M$ that preserve $D_1$ is $G$ or else has $G$ as its identity component. What Cartan does is choose $P$ carefully so that the dimension of $G/P$ is minimal among those that satisfy these conditions to have a nontrivial $D_1$.
He then takes a nilpotent subgroup $N\subset G$ such that $T_eG = T_eP \oplus T_eN$ and uses the natural immersion $N\to G/P$ to pull back the plane field $D_1$ to be a left-invariant plane field on $N$ that can be described very simply in terms of the multiplication in the nilpotent group $N$ (which is diffeomorphic to some $\mathbb{R}^n$). Then he verifies that the Lie algebra of vector fields on $N$ that preserve this left-invariant plane field is isomorphic to the Lie algebra of $G$. This plane field on $N$ is bracket generating, i.e., 'non-holonomic' in the classical terminology. This is why it gets called a 'rolling distribution' in some literature.

- If I remember correctly, it is not very hard to prove that any element of $GL(J_3(\mathbb{O}))$ which preserves determinant and trace is an automorphism of the Jordan algebra $J_3(\mathbb{O})$. Does the quartic form of $E_7$ also have some "geometric" explanation? Is it a determinant of a 4x4 octonionic matrix, or some hyperdeterminant? – robot Jun 16 at 22:03
- @robot: Yes, of course, this is very easy: If $\tau\in V^\ast$ and $\delta\in\mathsf{S}^3(V^\ast)$ are given as you describe, one shows that there is an $e\in V$ such that $\tau(v) = \delta(e,e,v)$. Then, defining $\gamma\in\mathsf{S}^2(V^\ast)$ by $\gamma(v,w)=\delta(e,v,w)$ gives a nondegenerate quadratic form on $V$, thereby establishing an isomorphism between $V$ and $V^*$. Then the algebra multiplication $\mu\in\mathsf{S}^2(V^\ast)\otimes V$ is just $\delta$ itself under the mapping $\mathsf{S}^3(V^\ast)\to \mathsf{S}^2(V^\ast)\otimes V^\ast \to \mathsf{S}^2(V^\ast)\otimes V$. – Robert Bryant Jun 17 at 13:53
- @robot: Cartan's $E_7$-defining quartic is not in terms of octonions: Let $W$ be $8$-dimensional with volume form $\nu\in\Lambda^8(W^\ast)$. Set $V = \Lambda^2(W)\oplus\Lambda^2(W^\ast)$. For $(x,\xi)\in V$, let $x\cdot\xi\in \text{End}(W)$ be the image under the contraction $\Lambda^2(W)\otimes\Lambda^2(W^\ast)\to W\otimes W^\ast = \text{End}(W)$. Set $q(x,\xi) = \text{Pf}(x) + \text{tr}\bigl((x\cdot\xi)^2\bigr) + \text{Pf}(\xi)$. Then the identity component of the stabilizer of $q\in\mathsf{S}^4(V^\ast)$ is $E_7$. ($\nu$ is used to turn $\text{Pf}(x)$ and $\text{Pf}(\xi)$ into scalars.) – Robert Bryant Jun 17 at 14:19
- @Robert: Can you give references on automorphisms of plane fields? – zroslav Jun 17 at 16:38
- @zroslav: Sure. To my knowledge, the most modern reference with complete proofs is by Keizo Yamaguchi, Differential systems associated with simple graded Lie algebras. Progress in differential geometry, 413–494, Adv. Stud. Pure Math., 22, Math. Soc. Japan, Tokyo, 1993. This proves general results about Lie groups defined as automorphisms of homogeneous plane fields, but he doesn't write out the formulae as explicitly as one might like sometimes. In particular, one doesn't see the simple explicit formulae that Cartan wrote down, but one can work it out without much difficulty in each case. – Robert Bryant Jun 18 at 15:04

There's a nice construction of the $E_8$ Lie algebra due to Borcherds based on methods from vertex operator algebras, but with no understanding of vertex algebras needed. See p. 152 of these notes from a course by Borcherds and others. See also section 7.4 of notes by Johnson-Freyd. The idea is to start with the root system and root lattice, and construct the Lie algebra using Serre's relations. But with the relations there is a sign ambiguity, so one passes to a 2-fold cover of the lattice to resolve the sign issues, and checks that everything works. Once you have $E_8$, you can find $E_7$ sitting inside it. Since the lattice is self-dual (simply-connected), you can just exponentiate to get the Lie group.

- Incidentally, the sign ambiguity can be traced to the fact that the Weyl group is not a subgroup of $G$ but only a subquotient, and "hence" there can't be a functorial construction of groups from their Dynkin diagrams. The double cover of the lattice carries an action of the actual subgroup of $G$ one uses instead of the Weyl group. – Allen Knutson Jun 16 at 14:40
- @Allen Knutson: Great comment! I missed this fact during my studies of Lie theory. Would you please recommend something where I can read about this in detail? – robot Jun 16 at 21:46
- @robot: the notes I linked to describe this. – Agol Jun 17 at 2:00
- @robot: The most basic example is the fact that the reflection $\left(\begin{smallmatrix}0&1\\1&0\end{smallmatrix}\right)$ is not an element of $SL_2(\mathbb{R})$. – S. Carnahan♦ Jun 17 at 9:47
- @Allen: There is a functorial construction of Lie groups from their Dynkin diagrams (via the Serre relations). There isn't a functorial construction of Lie groups from their root systems. – André Henriques Jun 17 at 10:20
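As a small illustration of the lattice data entering Borcherds' construction, the snippet below (my own, not from the notes cited above) enumerates the E8 root system in its even coordinate description and checks the basic arithmetic facts used there:

```python
# Enumerate the 240 roots of E8 and confirm squared length 2 and integrality
# of inner products (the "even, unimodular" flavor of the E8 lattice).
from itertools import combinations, product
from fractions import Fraction

roots = []
# Type 1: +-e_i +- e_j for i < j  (112 roots)
for i, j in combinations(range(8), 2):
    for si, sj in product((1, -1), repeat=2):
        v = [0] * 8
        v[i], v[j] = si, sj
        roots.append(tuple(map(Fraction, v)))
# Type 2: all coordinates +-1/2 with an even number of minus signs (128 roots)
for signs in product((1, -1), repeat=8):
    if signs.count(-1) % 2 == 0:
        roots.append(tuple(Fraction(s, 2) for s in signs))

assert len(roots) == 240
assert all(sum(c * c for c in v) == 2 for v in roots)
assert all(sum(a * b for a, b in zip(u, v)).denominator == 1
           for u in roots[:20] for v in roots)   # spot-check integrality
print("240 roots, all of norm 2, with integer inner products")
```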
There's a nice construction of the $E_8$ Lie algebra due to Borcherds based on methods from vertex operator algebras, but with no understanding of vertex algebras needed. See p. 152 of these notes from a course by Borcherds and others. See also section 7.4 of notes by Johnson-Freyd. The idea is to start with the root system and root lattice, and construct the Lie algebra using Serre's relations. But with the relations there is a sign ambiguity, so one passes to a 2-fold cover of the lattice to resolve the sign issues, and check that everything works. Once you have $E_8$, you can find $E_7$ sitting inside it. Since the lattice is self-dual (simply-connected), you can just exponentiate to get the Lie group. - 3 Incidentally, the sign ambiguity can be traced to the fact that the Weyl group is not a subgroup of $G$ but only a subquotient, and "hence" there can't be a functorial construction of groups from their Dynkin diagrams. The double cover of the lattice carries an action of the actual subgroup of $G$ one uses instead of the Weyl group. – Allen Knutson Jun 16 at 14:40 @Allen Knutson: Great comment! I missed this fact during my studies of Lie theory. Would you please recommend something where I can read about this in detail? – robot Jun 16 at 21:46 @ robot: the notes I linked to describe this. – Agol Jun 17 at 2:00 @robot: The most basic example is the fact that the reflection $\binom{01}{10}$ is not an element of $SL_2(\mathbb{R})$. – S. Carnahan♦ Jun 17 at 9:47 2 @Allen: There is a functorial construction of Lie groups from their Dynkin diagrams (via the Serre relations). There isn't a functorial construction of Lie groups from their root systems. – André Henriques Jun 17 at 10:20 show 1 more comment If you start from basics, then J.Tits' "Local approach to buildings" [1] would certainly win, as you won't even need a definition of a group to describe the natural geometries for the exceptional Lie groups. [1] Tits, J. "A local approach to buildings", The geometric vein: The Coxeter Festschrift, Springer-Verlag, 1981, pp. 519–547 - Personally I like the definition in Barton, Sudbery paper (thank you, Bruce for adding the reference): MR2020553 (2005b:17017) Barton, C. H. ; Sudbery, A. Magic squares and matrix models of Lie algebras. Adv. Math. 180 (2003), no. 2, 596--647. This is also available at: http://arxiv.org/abs/math/0203010 It uses triality algebra based on R, C, H, O composition algebras. Using this I have constructed all compact and non-compact exceptional Lie algebras in GAP. Magic square correspond to square of algebras: R*R, R*C, R*H, R*O C*R, C*C, C*H, C*O H*R, H*C, H*H, H*O O*R, O*C, O*H, O*O where * is the tensor product. You can replace algebra A with split version {A^~} to obtain non compact version. Lie algebra in position A*B is TriA + TriB + A*B + A*B + A*B. What is remaining is just to define the bracket. To obtain f4 with compact spin9 I have changed sign in last two A*B. Regards, Marek - 1 When you say "all compact and non-compact exceptional Lie algebras", do you mean "all real forms"? – Bruce Westbury Jul 6 at 12:58 There are three real forms of $F_4$ and the Barton & Sudbery paper constructs two of them. – Bruce Westbury Jul 6 at 13:04 Bruce, I am only intrested in real Lie algebras. To obtain f4 with compact spin9 I have changed sign in last two A*B. – Marek Mitros Jul 30 at 10:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 189, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9302107095718384, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-algebra/105059-abstract-algebra-groups.html
# Thread:

1. ## Abstract Algebra-Groups

I'm puzzled by this problem. Suppose that a is a group element and a^(6)=e. What are the possibilities for |a|? Intuitively I want to say that the answer is 6 because of the definition of the order of an element. However, another part of me wants to say infinite... Suggestions?

2. Originally Posted by RoboMyster5: Suppose that a is a group element and a^(6)=e. What are the possibilities for |a|?
The order of an element, |a|, is defined as the smallest positive integer n such that $a^n = e$. Immediately you should notice that this rules out infinity as a possibility, as we're given that $a^6=e$, and clearly 6 < infinity.

Since we're given $a^6=e$, it must be that |a| divides 6 (write 6 = q|a| + r with 0 ≤ r < |a|; then $a^r = e$, forcing r = 0). This gives the possibilities:
1) |a| = 1
2) |a| = 2
3) |a| = 3
4) |a| = 6

You can test that our original equation is satisfied: suppose |a| = 2; is it still true that $a^6 = e$? Well, $a^6 = (a^2)^3$, and we know that |a| = 2 and so $a^2 = e$, therefore $a^6=e^3=e$. So it is still true. If you're not convinced, you can repeat this for the other cases.

Hope this helps.
Pomp
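A quick sanity check (my own snippet) in the cyclic group Z/6, written additively, where all four possibilities actually occur:

```python
# In Z/6 every element g satisfies 6*g = 0, and the element orders that occur
# are exactly the divisors of 6.
def order(g, n=6):
    m, x = 1, g % n
    while x != 0:
        x = (x + g) % n
        m += 1
    return m

orders = sorted({order(g) for g in range(6)})
assert orders == [1, 2, 3, 6]
print("orders occurring in Z/6:", orders)
```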
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.95074063539505, "perplexity_flag": "middle"}
http://gilkalai.wordpress.com/2010/09/29/polymath-3-polynomial-hirsch-conjecture/?like=1&source=post_flair&_wpnonce=eb9f8a7f00
Gil Kalai's blog

## Polymath 3: Polynomial Hirsch Conjecture

Posted on September 29, 2010

I would like to start here a research thread of the long-promised Polymath3 on the polynomial Hirsch conjecture. I propose to try to solve the following purely combinatorial problem.

Consider $t$ disjoint families of subsets of $\{1,2,\dots,n\}$: ${\cal F}_1, {\cal F}_2, \ldots, {\cal F}_t$. Suppose that

(*) For every $i<j<k$, and every $S \in {\cal F}_i$ and $T \in {\cal F}_k$, there is $R \in {\cal F}_j$ which contains $S \cap T$.

The basic question is: How large can t be???

(When we say that the families are disjoint we mean that there is no set that belongs to two families. The sets in a single family need not be disjoint.)

In a recent post I showed the very simple argument for an upper bound $n^{\log n+1}$. The major question is if there is a polynomial upper bound. I will repeat the argument below the dividing line and explain the connections between a few versions. A polynomial upper bound for $f(n)$ will imply a polynomial (in $n$) upper bound for the diameter of graphs of polytopes with $n$ facets. So the task we face is either to prove such a polynomial upper bound or give an example where $t$ is superpolynomial.

The abstract setting is taken from the paper Diameter of Polyhedra: The Limits of Abstraction by Friedrich Eisenbrand, Nicolai Hähnle, Sasha Razborov, and Thomas Rothvoss. They gave an example showing that $f(n)$ can be quadratic. We had many posts related to the Hirsch conjecture.

Remark: The comments for this post will serve both as the research thread and for discussions. I suggested concentrating on a rather focused problem, but other directions/suggestions are welcome as well.

Let's call the maximum t, f(n).

Remark: If you restrict your attention to the sets in these families containing an element m, and delete m from all of them, you get another example of such families of sets, possibly with a smaller value of t. (Those families which do not include any set containing m will vanish.)

Theorem: $f(n) \le n^{\log n+1}$.

Proof: Consider the largest $s$ so that the union of all sets in ${\cal F}_1\cup{\cal F}_2\cup\cdots\cup{\cal F}_s$ has size at most $n/2$. Clearly, $s \le f(n/2)$. Consider the largest $r$ so that the union of all sets in ${\cal F}_{t-r+1}\cup\cdots\cup{\cal F}_t$ has size at most $n/2$. Clearly, $r \le f(n/2)$.

Now, by the definition of s and r, there is an element m shared by a set in the first s+1 families and a set in the last r+1 families. Therefore (by (*)), when we restrict our attention to the sets containing m, the families in between all survive. We get that $t-r-s \le f(n-1)$, hence $f(n) \le f(n-1) + 2f(n/2)$, which gives the stated bound. Q.E.D.

Remarks:

1) The abstract setting is taken from the paper by Eisenbrand, Hähnle, Razborov, and Rothvoss (EHRR). We can consider families of d-subsets of {1,2,…, n}, and denote the maximum cardinality t by $f(d,n)$. The argument above gives the relation $f(d,n) \le 2f(d,n/2) + f(d-1,n-1)$, which implies $f(d,n) \le n^{\log d+1}$.

2) $f(d,n)$ (and thus also $f(n)$) are upper bounds for the diameter of graphs of d-polytopes with n facets. Let me explain this and also the relation with another abstract formulation. Start with a $d$-polytope with $n$ facets. To every vertex v of the polytope associate the set $S_v$ of facets containing $v$. Starting with a vertex $w$ we can consider ${\cal F}_i$ as the family of sets which correspond to vertices at distance $i+1$ from $w$. So the number of such families (for an appropriate $w$) is as large as the diameter of the graph of the polytope. I will explain in a minute why condition (*) is satisfied.

3) For the diameter of graphs of polytopes we can restrict our attention to simple polytopes, namely to the case where all sets $S_v$ have size $d$.

4) Why do the families coming from graphs of simple polytopes satisfy (*)? Because suppose you have a vertex $v$ at distance $i$ from $w$, and a vertex $u$ at distance $k>i$.
Then consider the shortest path from $v$ to $u$ in the smallest face $F$ containing both $v$ and $u$. The sets $S_z$, for every vertex $z$ in $F$ (and hence on this path), satisfy $S_v\cap S_u \subset S_z$. The distances from $w$ of adjacent vertices in the shortest path from $v$ to $u$ differ by at most 1. So for every $j$ with $i<j<k$, some vertex on the path must be at distance $j$ from $w$.

5) EHRR also considered the following setting: consider a graph whose vertices are labeled by $d$-subsets of $\{1,2,\dots,n\}$. Assume that for every vertex $v$ labelled by $S(v)$ and every vertex $u$ labelled by $S(u)$ there is a path so that all vertices are labelled by sets containing $S(v)\cap S(u)$. Note that having such a labelling is the only property of graphs of simple $d$-polytopes that we have used in remark 4.

### 117 Responses to Polymath 3: Polynomial Hirsch Conjecture

2. Nicolai Hähnle says:
Dear Gil, I've only recently thought about this problem again, so let me just throw some thoughts out there. I have been considering a variant of this problem where instead of sets one allows multisets of fixed cardinality d, or equivalently monomials of degree d over n variables. In this setting, there is actually a very simple construction that gives a lower bound of d(n-1) + 1: the sequence of multisets (for the case d=4, but it easily generalizes) 1111, 1112, 1122, 1222, 2222, 2223, etc. Note that here we have a family of multisets where each family in fact only contains a single multiset. There are alternative constructions that achieve the same "length" without singletons; e.g. you can also partition all d-element multisets into families and achieve d(n-1) + 1.

My current guess would be that this construction is best possible, i.e. I would conjecture d(n-1) + 1 to be an upper bound. This upper bound holds for all _partitions_ of the d-multisets into families, i.e. it holds in the case where every multiset appears in exactly one of the families, via a simple inductive argument: take one multiset from the first family and one from the last, then take a (d-1)-sub-multiset of the first and one element of the last. The union is a d-multiset that must appear in one of the families; proceed by induction on the "dimension" d. So to disprove my guess for the upper bound would require cleverly _not using_ certain multisets somehow.

The upper bound holds for n <= 3 and all d, for d <= 2 and all n, and – provided that I made no mistake in my computer code – it also holds for:
- n=4 and d <= 13
- n=5 and d <= 7
- n=6 and d <= 5
- n=7 and d <= 4
- n=8 and d=3

Yes, the order of n and d is correct. I used SAT solvers to look for counter-examples to my hypothesis, and I stopped when I reached instances that ran out of memory after somewhere between one and two weeks of computation. The case that bugs me the most personally is that I could not prove anything for d=3, where the best upper bound I know of is essentially 4n (via the Barnette/Larman argument for linear diameter in fixed dimension). Though it would also be interesting to think about what can be said when fixing n=4.

I have not given the problem any thought yet in the form that you stated it; I will give it a try, maybe it leads to some ideas that I haven't thought of in the "dimension-constrained" version that I looked at.
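Nicolai's chain is simple enough to generate programmatically. A small sketch of my own, not code from the comment: it emits the d(n-1)+1 multisets of the sequence 1111, 1112, 1122, ..., and one can check that condition (*) holds for the corresponding singleton families, since consecutive multisets differ in one element.

```python
# Generate the multiset chain from comment 2: d*(n-1) + 1 multisets of size d
# over {1,...,n}, raising entries one position at a time.
def multiset_chain(d, n):
    current = [1] * d
    chain = [tuple(current)]
    for value in range(2, n + 1):
        for pos in range(d - 1, -1, -1):   # raise entries to `value` one by one
            current[pos] = value
            chain.append(tuple(current))
    return chain

chain = multiset_chain(4, 3)
print(chain)   # (1,1,1,1), (1,1,1,2), (1,1,2,2), ..., (3,3,3,3)
assert len(chain) == 4 * (3 - 1) + 1
```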
Gil Kalai says: Dear Nicolai, This is very interesting! Please do elaborate on the various general constructions giving d(n-1)+1. The argument that it is tight for families which contain all elements is also interesting. Can’t you get even better constructions for multisets based on the ideas from your paper?

• Nicolai Hähnle says: Here’s a construction giving d(n-1)+1 that partitions the set of d-multisets. Take the ground set of n elements to be {0,1,2,…,n-1} and define for each multiset S the value s(S) to be the sum of its elements. Then the preimages of the numbers 0 through d(n-1) partition the multisets into families with the desired property. This is the only other general construction I have. There are other examples for small cases that I found, and it seems like in general there should be many such examples, though so far I can only back this up with fuzzy feelings. As for the constructions of the paper, interestingly it turns out that the two variants (with sets and with multisets) are in a sense asymptotically equivalent. Basically, what we do in the paper can be interpreted in the following way. You start with the simple multiset construction that I outlined in my earlier post using {1,2,…,n} as your basic set. Then you get a set construction on the set {1,2,…,n}×{1,2,…,m} by replacing each of the multisets in the original by one of the blocks that we construct in the paper using disjoint coverings. It turns out that this can be generalized to starting with arbitrary multisets. What you get is a result somewhat like this: if f(d,n) is the upper bound in the sets variant and f’(d,n) is the upper bound on the multisets variant, then of course f(d,n) <= f’(d,n) just by definition, and by the construction f’(d,nm) <= DC(m) f(d,n), where the DC(m) part is essentially the number of disjoint coverings you can find, as this determines the “length” of the blocks in the construction.

4. Terence Tao says: I’ve started a wiki page for this project at http://michaelnielsen.org/polymath1/index.php?title=The_polynomial_Hirsch_conjecture but it needs plenty of work, ideally by people who are more familiar with the problem than I.

5. Pingback: Polymath3 (polynomial Hirsch conjecture) now officially open « The polymath blog

6. Terence Tao says: One place to get started is to try to work out some upper and lower bounds on f(n) (the largest t for which such a configuration can occur) for small n, to build up some intuition. I take it all the families F_i are assumed to be non-empty? Otherwise there is no bound on t because we could take all the F_i to be empty. Assuming non-emptiness (and thus the trivial bound f(n) <= 2^n), one trivially has f(0)=1 (take F_1 to consist of just the empty set), f(1)=2 (take F_1 = {emptyset}, F_2 = { {1} }, say), and f(2) = 4 (take F_1 = {emptyset}, F_2 = { { 1 } }, F_3 = { { 1,2} }, F_4 = { { 2 } }), if I understand the notation correctly. So I guess the first thing to figure out is what f(3) is…

7. Terence Tao says: I can show f(3) can’t be 8. In that case, each of the families would consist of a single set, and one of the families, say F_i, would consist of the whole set {1,2,3}. Then the sets in F_1, …, F_{i-1} must be increasing, as must the sets in F_8, …, F_{i+1}. Only one of these sequences can grab the empty set and have length three; the other can have length at most 2. This gives a total length of 3+1+2 = 6, a contradiction. On the other hand, one can attain f(3) >= 6 with the families {emptyset}, {{2}}, {{1,2}}, {{1}}, {{1,3}}, {{3}}.
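Small examples like this are easy to machine-check. Here is a minimal Python sketch (editorial; the encoding of families as lists of frozensets is illustrative) that verifies both condition (*) and disjointness for the length-6 sequence above:

```python
from itertools import combinations

def satisfies_star(families):
    """Condition (*): for all i < j < k, S in F_i and T in F_k,
    some R in F_j contains S ∩ T."""
    for i, j, k in combinations(range(len(families)), 3):
        for S in families[i]:
            for T in families[k]:
                if not any(S & T <= R for R in families[j]):
                    return False
    return True

def disjoint(families):
    """No set may belong to two different families."""
    seen = set()
    for F in families:
        for S in F:
            if S in seen:
                return False
            seen.add(S)
    return True

# the length-6 sequence for n = 3 from the comment above
example = [[frozenset()], [frozenset({2})], [frozenset({1, 2})],
           [frozenset({1})], [frozenset({1, 3})], [frozenset({3})]]
assert satisfies_star(example) and disjoint(example)
```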
Not sure whether f(3)=7 is true yet though.

8. Terence Tao says: More generally, one might like to play with the restricted function f’(n), defined as with f(n) except that each of the F_i are forced to be singleton families (i.e. they consist of just one set F_i = A_i, with the $A_1,\ldots,A_t$ distinct). The condition (*) then becomes that $A_i \cap A_k \subset A_j$ whenever $i < j < k$. It should be possible to compute f'(n) quite precisely. Unfortunately this does not upper bound f(n), since $f'(n) \leq f(n)$, but it may offer some intuition.

9. Terence Tao says: [Gil, if you can fix the latex in my previous comment that would be great. GK: done] I can now rule out f(3)=7 and deduce that f(3)=6. The argument from 8:19pm shows that if one of the families contains {1,2,3}, then there must be an ascending chain to the left of that family and a descending chain to the right, giving at most 6 families, a contradiction. So we can’t have {1,2,3} and so must distribute the remaining seven subsets of {1,2,3} among seven families, so that each family is again a singleton. The sets {1}, {2}, {3} must appear somewhere; without loss of generality we may assume that {1} appears to the left of {2}, which appears to the left of {3}. Then none of the families to the left of {2} can contain a set that contains 3, and none of the families to the right can contain a set that contains 1. In particular there is nowhere for {1,3} to go and so one cannot have seven families.

10. Pingback: Polymath3 « Euclidean Ramsey Theory

11. Terence Tao says: I may be making a stupid error here, but it seems that the proof of $f(n) \leq f(n-1) + 2 f( n/2 )$ in the previous post http://gilkalai.wordpress.com/2010/06/19/the-polynomial-hirsch-conjecture-the-crux-of-the-matter/ can be easily modified to give $f(n) \leq 3 f(2n/3)$ which would then give the polynomial growth bound $f(n) = O( n^{ \log 3 / \log (3/2) } )$. Indeed, if we have t families F_1, …, F_t, we let s be the largest number such that $F_1 \cup \ldots \cup F_s$ is supported in a set of size at most 2n/3, and similarly let r be the largest number such that $F_{t-r+1} \cup \ldots \cup F_t$ is supported in a set of size at most 2n/3. Then s and r are at most $f(2n/3)$, and there is a set of size at least n/3 that is common to at least one member of each of the intermediate families $F_{s+1},\ldots,F_{t-r}$. Restricting to those members and then deleting the common set, it seems to me that we have $t-r-s \leq f(2n/3)$, which gives the claimed bound, unless I’ve made a mistake somewhere…

12. noamnisan says: It seems that the f’(n) (suggested at 8:22pm comment) are easy to figure out: Look at the first and second sets in the list; some element x is in the first but not in the second. This x can never appear again in the list of sets, so we get the recursion f’(n) <= f'(n-1)+1, and thus f'(n) <= n, which can be achieved by the list of singletons. Something similar should apply also if we allow multisets (as in the first comment), where each time the maximum allowed multiplicity of some element decreases by 1, never to increase again.

13. Terence Tao says: Noam, it could be that the first set is completely contained in the second, but of course this situation cannot continue indefinitely, so one certainly gets a quadratic bound f’(n) = O(n^2) at least out of this.
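f'(n) is small enough to compute exhaustively for tiny n. A minimal brute-force sketch (editorial; it assumes the empty set is allowed, as in the examples above), whose output matches the f'(n) = 2n pattern worked out in the comments that follow:

```python
from itertools import combinations

def fprime(n):
    """Brute-force the longest sequence of distinct sets A_1,...,A_t over
    {1,...,n} with A_i ∩ A_k ⊆ A_j whenever i < j < k (empty set allowed)."""
    universe = list(range(1, n + 1))
    all_sets = [frozenset(c) for r in range(n + 1)
                for c in combinations(universe, r)]
    best = 0

    def extend(seq):
        nonlocal best
        best = max(best, len(seq))
        for A in all_sets:
            if A in seq:
                continue
            # new triples end at A: need seq[i] ∩ A ⊆ seq[j] for all i < j
            if all(seq[i] & A <= seq[j]
                   for i in range(len(seq)) for j in range(i + 1, len(seq))):
                extend(seq + [A])

    extend([])
    return best

print([fprime(n) for n in range(1, 4)])  # [2, 4, 6], matching f'(n) = 2n
```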
Gil, I was reading the Eisenbrand-Hähnle-Razborov-Rothvoss paper and they seem to have a slightly different combinatorial setup than in your problem, in which one has a graph whose vertices are d-element subsets of [n] with the property that any two vertices u, v are joined by a path consisting only of vertices that contain $u \cap v$. Could you explain a bit more how your combinatorial setup is related to this (if it is), and what the connection to the polynomial Hirsch conjecture is?

14. noamnisan says: Terry, regarding the 11:32 attempt, I think that the bug is that even though the support of the prefix has an n/3 intersection with the support of the suffix, this does not imply that this intersection is common to every set in the middle, but rather only to the support of the middle.

15. noamnisan says: Let’s improve the f’(n) bound to f’(n) <= 2n (not tight, it seems). Define f’(n,k) to be the max length you can get if the first set in the sequence has size k. So, I would claim that f’(n,k) <= 2n-k. Look at the size k’ of the second set in the sequence: If k’ > k then we have f’(n,k) <= 1 + f’(n,k’). If k’ = k then we have f’(n,k) <= 1 + f’(n-1,k) (since some element was removed and will never appear again). If k’ < k then we have f’(n,k) <= 1 + f’(n-(k-k’), k’) (since at least k-k’ elements were removed forever).

16. noamnisan says: The last comment got garbled… I was claiming that f’(n,k) <= 2n-k.

17. Terence Tao says: Ah, I see where I went wrong now, thanks! (Gil: can you set the thread depth in comments from 1 to 2? This makes it easier to reply to a specific comment.) Noam, I don’t see how one can derive f’(n) <= 2n, though I do find the bound plausible. I can get f’(n) <= n + f’(n-1) but this only gives a quadratic bound.

• Gil Kalai says: I did not believe I could set it, but I succeeded!!!

18. Klas Markström says: Is it not trivial that $f'(n)\geq 2n-1$? Take $F_i=\{1,\dots,i\}$ for $i\leq n$ and $F_i=\{(i-n+2),\dots,n\}$ for $i\geq n$. In general, if each $F_i$ is a single set then the families must be convex in the “i”-direction, in the sense that if an element is present in two sets then it must be present in all sets between them. Well, it’s late at my end of the world so I’ll look in tomorrow again.

19. Klas Markström says: That should have said that the first sets are of the form {1,…,i} and the later sets of the form {i-n+1,…,n}.

20. Terence Tao says: Ah, it looks like f’(n) = 2n for all n. To get the lower bound, look at {}, {1}, {1,2}, {1,2,3}, …, {1,2,..,n}, {2,..,n}, {3,…,n}, …, {n}. To get the upper bound, we follow Noam’s idea and move from each set to the next. When we do so, we add in some elements, take away some others, or both. But once we take away an element, we can never add it back in again. So each element gets edited at most twice; once to put it in, then again to take it out. (We can assume without loss of generality that we start with the empty set, since there’s no point putting the empty set anywhere else, and it never causes any harm.) This gives the bound of 2n. I think the same argument gives f’(n,k) = 2n-k (or maybe 2n-k+1).

• Klas Markström says: There is another nice way to construct a maximal sequence: we start with {}, {1}, {1,2}, {2}, …, {i,i+1}, {i+1}, {i+1,i+2}, {i+2}, … This also gives length 2n, and uses only small sets. A nice way to visualize this family is that we are following a Hamiltonian path in a graph and alternately state the edges and vertices we visit.

21. Terence Tao says: I guess I just repeated what Klas said.

22. Pingback: Polymath3 now active « What’s new

23. Terence Tao says: It should be possible to compute f(4).
We have a general lower bound $f(n) \geq 2n$ which gives f(4) >= 8, and the recursion (if optimised) gives $f(4) \leq 11$. Actually I conjecture f(4)=8, after failing several times to create a 9-family sequence.

24. Terence Tao says: Oops, wordpress ate a chunk of my previous post. Here is another attempt (please delete the older post). It should be possible to compute f(4). We have a general lower bound $f(n) \geq 2n$ which gives f(4) >= 8, and the recursion (if optimised) gives $f(4) \leq 11$. Actually I conjecture f(4)=8, after failing several times to create a 9-family sequence. What I can say is that given a 9-family sequence, one cannot have the set 1234={1,2,3,4} (as this creates an ascending chain to the left and a descending chain to the right, which leads to at most 8 families). I also know that there do not exist F_i, F_j with $|i-j| \geq 3$ that contain A, B respectively with $|A \cap B| \geq 2$ (as there are not enough sets containing $A \cap B$ to fill the intervening families, now that 1234 is out of play). I also know that without loss of generality one can take $F_1 = \{\emptyset\}$ (or one can simply remove the empty set family altogether and drop f(n) by one). This already eliminates a lot of possibilities, but I wasn’t able to finish the job.

• Yury Volvovskiy says: I think it’s easy to show that f’(n) < 2n+1. Each element i is contained in the sets $F_{b_i}, F_{b_i+1},\ldots,F_{e_i}$. So we have n intervals corresponding to n elements. Since all sets are different, each set has to be either a beginning or an end of an interval, so we can’t have more than 2n.

25. Klas Markström says: On my flight down to Stockholm this morning I thought about this problem again and I think I can push the lower bound up a bit. Define $G_i =\{1,...,i\}$ and for $j\leq i-1$ we define $F_{i,j}=\{G_i,G_j\}$. Now order the $F_{i,j}$ lexicographically according to the indices. Unless I’ve missed something this gives a quadratic lower bound on f(n). Right away I don’t see why this can’t be done with more indices on the “$F$” families. E.g. for $k \leq j-1 \leq i-1$ take $F_{i,j,k}$ and order them lexicographically. But if we do this with a number of indices which grows with n we seem to get a superpolynomial lower bound, which makes me a bit worried. Now it’s time for me to pick up my luggage and travel on.

• Klas Markström says: As stated earlier this example does not work because it violates the disjointness condition in the definition of the problem.

26. Nicolai Hähnle says: Here’s an alternative perspective on the problem. Take an n-dimensional cube, its vertices corresponding to sets. Now color a subset of its vertices using the integers such that for certain faces F one has the constraint: (*) The colors in F have to be a contiguous subset of the integers. What is the maximum number of colors that such a coloring can have? If the set of faces on which (*) has to hold is the set of all faces containing a designated special vertex (corresponding to {1,2,…,n}), then this is just a reformulation of the original problem. On the other extreme, if (*) has to hold for all faces, then the maximum number of colors is n+1: Restrict to the smallest face containing both the min and max color vertex
(u and v respectively). If this face contains no other colored vertex, one is done. Otherwise, take a third vertex w and recurse on the minimum faces containing u and w, and w and v, respectively. Can one say anything about the case where the set of faces on which (*) has to hold is somewhere between those two extremes?

• Ryan O'Donnell says: I like this alternate perspective, but I don’t quite see why it’s a reformulation of the original problem when you consider faces that contain a designated vertex. It seems to me that this colouring version of the problem corresponds to the following constraint in the original problem: for each $i < j < k$ and for each $a$, if $a \in S \in F_i$ and $a \in T \in F_k$ for some $S$ and $T$, then there exists $U \in F_j$ with $a \in U$. And this doesn’t seem quite the same as the original constraint; e.g., $F_1 = \{\{1,2,3\}\}$, $F_2 = \{\{2\}, \{3\}\}$, $F_3 = \{\{2,3,4\}\}$ seems to satisfy the colouring constraint but not the original constraint. Perhaps I’ve not understood things properly though…

• Gabor Tardos says: Nicolai meant faces of any dimension containing the designated vertex. You took only faces of co-dimension 1. Hence the discrepancy.

27. Yann Strozecki says: In the following I try to generalize the idea of Terence Tao that you cannot have a big set (the full set in his case) in a family. I hope the proof is right and that I did not mess up with the constants. Let $F_1,\dots,F_l$ be a sequence of disjoint families of sets over $[n]$ which satisfy condition (*). Say that $\{1,\dots,n-k\} \in F_l$; we prove that $l \leq (n-k+1) f(k)$. Let $S_i \in F_i$ and write $A_i$ for its restriction to $[n-k]$. Because of condition (*), we have a sequence of sets $S_1,\dots,S_l$ such that the sequence $A_1,\dots,A_l$ is increasing (not necessarily strictly) (you build it as in the case of one set per family). Moreover, we can choose each $A_i$ to be maximal for the restriction of the sets of $F_i$ to $[n-k]$. Let $A_j$ be the first set of size $s$ and let $A_{w+1}$ be the first set of size strictly greater than $s$. We have $A=A_j=A_{j+1} = \dots = A_{w}$. Consider now the restriction of $F_j,\dots,F_{w}$ to elements containing $A$: we remove $A$ from these elements and we remove the others entirely. We have a sequence of disjoint families $F'_{j},\dots,F'_{w}$ over $[n] \setminus A$ which satisfy (*). Since $A$ has been chosen to be maximal, the families $F'_{j},\dots,F'_{w}$ contain only sets over $\{n-k+1,\dots,n\}$. Therefore $w-j+1 \leq f(k)$. Since there are at most $n-k+1$ possible sizes of the sets $A_i$, we have $l \leq (n-k+1) f(k)$. Therefore if there is a set of size $n-k$ in one of the families $F_i$ in a sequence $F_1,\dots,F_t$, then $t \leq (2(n-k)+1) f(k)$. This idea is an attempt to give a different upper bound on $f(n)$ and maybe to obtain $f(4)=8$. For $n=4$ and $k=1$, that is, $\{1,2,3\}$ appears in a family, we have $t \leq (2(4-1)+1)(f(1)-1)=7$: it rules out this case. For $n = 4$ and $k=2$, that is, $\{1,2\}$ appears in a family, we have $t \leq (2(4-2)+1)(f(2)-1) = 15$ and it does not give anything interesting. We could try to improve this result by using the constraints between sequences of the form $A_j,\dots,A_w$.

28. Yury Volvovskiy says: I’m trying to concentrate on the case where the support of each family is the entire set [n].
I think it suffices to establish the polynomial estimate for this case since, in the generic case, the support can only change 2n-1 times. For the case of 4 I seem to have found a chain of length 6: {{1},{2},{3},{4}}, {{23},{13},{24}}, {{123},{234}}, {{1234}}, {{134},{124}}, {{14},{12},{34}}, and I’m pretty convinced there’s no chain of length 7.

29. Kristal Cantwell says: I think I can show f(4) is less than 11. Assume we have 11 or more families; then there are 8 types of sets in terms of which of the first three elements are in the set. We must have a repetition of the same type in two different families. Then every family must contain a set that contains the elements of the repetition. Now if the repetition is not null there can be at most 8 sets that contain the repetition, but we have 9 families besides A and B, and so there is a contradiction. Now we can repeat this argument for each set of four elements. So we have at most 5 families containing the null set and each single element. And adding one element not in a set in a family and having the resulting augmented set outside the family is forbidden. So outside of the 5 families that contain the singleton elements and the null set there are no two element sets, no single element sets and no null set. But that leaves 5 sets for 6 families, which gives a contradiction. So f(4) cannot be 11.

30. gowers says: Apologies if I ask questions that are answered in earlier comments or earlier discussion posts on this problem. It occurred to me that if we are trying to investigate sequences of set systems $(F_i)$ such that for every $i<j<k$ a certain property holds, then it might be interesting to try to understand what triples with that property can be like. That is, suppose you have three families $\mathcal{A},\mathcal{B},\mathcal{C}$ of subsets of $[n]$ such that no set belongs to more than one family, and suppose that for every $A\in\mathcal{A}$ and every $C\in\mathcal{C}$ there exists $B\in\mathcal{B}$ such that $A\cap C\subset B.$ What can these families be like? At this stage I have so little intuition about the problem that I’d be happy with a few non-trivial examples, whether or not they are relevant to the problem itself. To set the bar for non-triviality, here’s a trivial example: insist that all sets in $\mathcal{A}$ are contained in $[1,s]$ and not contained in $[r,s]$ (where $r\leq s$), that all sets in $\mathcal{C}$ are contained in $[r,n]$ but not in $[r,s],$ and that $\mathcal{B}$ contains the set $[r,s].$ Now any example that looks anything like that is fairly hopeless for the problem, because if you have a parameter that “glides to the right” as the family “glides to the right” then it can take at most $n$ values, so there can be at most $n$ families. Let me ask a slightly more precise question. Are there good examples of triples of families where the middle family has several sets, and needs to have several sets? Actually, I’ve thought of an example, but it’s somehow trivial in a different way. Let $\mathcal{A}$ be a fairly large collection of random sets and let $\mathcal{C}$ be another fairly large collection of random sets, chosen to be disjoint. (That is, choose a large collection of random sets and then randomly partition it into two families.)
Now let $\mathcal{B}$ consist of all sets $A\cap C$ such that $A\in\mathcal{A}$ and $C\in\mathcal{C}.$ Then trivially it has the property we want, and since with high probability the sets in $\mathcal{B}$ have size around $n/4$ and the sets in $\mathcal{A}$ and $\mathcal{C}$ have size around $n/2,$ the three families are disjoint. This raises another question. Is there a useful sense in which this second example is trivial? (By “useful sense” I mean a sense that shows that a random construction like this couldn’t possibly be used to create a counterexample to the combinatorial version of the polynomial Hirsch conjecture.)

31. gowers says: Another weirdness is that my last comment has appeared before some comments that were made several hours earlier.

32. gowers says: Let me think briefly about random counterexamples. Basically, I have an idea for such an example and I want to check that it doesn’t work. The idea is this. If you take a random collection of sets of size $\alpha n,$ then as long as it is big enough its lower shadow at the layer $\alpha^2n$ will be full. (By that I mean that every set of size $\alpha^2n$ will be contained in one of the sets in the collection.) Also, as long as it is small enough, the intersection of any two of the sets will have size about $\alpha^2n.$ I can feel this not working already, but let me press on. If we could get both properties simultaneously, then we could just take a whole bunch of random set systems consisting of sets of size $\alpha n.$ Any two sets in any two of the collections would have small intersection, which would therefore be contained in at least one set from each collection. This is of course a much much stronger counterexample than is needed, since it dispenses with the condition $i<j<k.$ So obviously it isn’t going to work. [Quick question: does anyone have a proof that you can't have too many disjoint families of sets such that any three families in the collection have the property we are talking about? Presumably this is not hard.] But in any case it’s pretty obvious that if you’ve got enough sets to cover all sets of size $\alpha^2n$ then you’re going to have to have some intersections that are a bit bigger than that. Nevertheless, let me do a quick calculation in case it suggests anything. If I want to choose a random collection of sets of size $r$ in such a way that all sets of size $s$ are covered, then, crudely speaking, I need the probability of choosing a given set of size $r$ to be the reciprocal of the number of $r$-sets containing any given $s$-set. That is, I need to take a probability of $\binom{n-s}{r-s}^{-1}.$ That’s actually the probability that makes the expected number of sets in the family containing any given $s$-set equal to 1, which isn’t exactly right but gives the right sort of idea. So if $r=\alpha n$ and $s=\alpha^2n$ then we get that the number of sets is $\binom n{\alpha n}/\binom{(1-\alpha^2)n}{(\alpha-\alpha^2)n}.$ Hmm, the other probability I wanted to work out was the probability that the intersection of two sets of size $\alpha n$ has size substantially different from $\alpha^2n.$ In that way, I wanted to work out how many sets you could pick with no two having too large an intersection. If that were bigger than the size above then it would be quite interesting, but of course it won’t be.

• Terence Tao says: “Quick question: does anyone have a proof that you can’t have too many disjoint families of sets such that any three families in the collection have the property we are talking about?
Presumably this is not hard.”

The elementary argument kills this off pretty quickly and gives a bound of $t \leq 2n$ in this case. Indeed, the case n=1 follows from direct inspection, and for larger n, once one has t families for some $t > n$, two of them have a common element, say n; then the other t-2 families must have sets that contain n. Now restrict to those sets that contain n, then delete n, and we get from the induction hypothesis that $t-2 \leq 2(n-1)$, and the claim follows.

33. gowers says: A rather general question is this. A basic problem with trying to find a counterexample is that the linear ordering on the families makes it natural to try to associate each family with … something. But what? With a ground set of size $n,$ using the ground set is absolutely out. So we need to create some other structure. Klas tried this above, with the set $\{(i,j):j<i\}.$ I vaguely wonder about something geometric, but I start getting the problem that if one has a higher-dimensional structure (in order to get more points) then one still has to find a nice one-dimensional path through it. Maybe something vaguely fractal in flavour would be a good idea. (Please don’t ask me to say more precisely what I mean by this …)

34. Gil Kalai says: I am sorry about the wordpress strange behavior. For improved lower bounds: Considering random examples for our families is appealing. One general “sanity test” (suggested by Jeff Kahn) that is rather harmful to some random suggestions is: “Check what the proof does for the example.” Often when you look at such an example, the union of the sets in the first very few families will already be everything, and the same holds for the last very few families; this looks to be the case also when you restrict yourself to sets containing certain elements. This “sanity check” does not kill every random example but it is useful. For improved upper bounds: In the present proof, in order to reach a set R from the first r families and a set T from the last r families so that R and T share k elements, we can only guarantee this for $r = f(n/2) + f((n-1)/2) + f((n-2)/2) + \dots + f((n-k)/2)$. Somehow it looks like when k is large we can do better, and that we do not have to waste so much effort further down in the recursion. In particular, we reach the same set in the first r families and the last r families for r around roughly $nf(n/2)$; it would be nice to improve this.

35. Kristal Cantwell says: I think I can show f(4) is less than 10. We have that it must be less than 11 from before. Assume we have 10 or more families; then there are 8 types of sets in terms of which of the first three elements are in the set. We must have a repetition of the same type in sets in two different families. Then every family must contain a set that contains the elements of the repetition. Now if the repetition is not null there can be at most 8 sets that contain the repetition, but we have 8 families besides A and B, and so there is a contradiction. But we can improve this since two instances of the repetition are already in A and B, so at most 6 unused sets remain. Now we can repeat this argument for each set of four elements. So we have at most 5 families containing the null set and each single element. And adding one element not in a set in a family and having the resulting augmented set outside the family is forbidden. So outside of the 5 families that contain the singleton elements and the null set there are no two element sets, no single element sets and no null set.
But that leaves 5 sets for 5 families. This means that each family must contain one of the sets with more than two elements. In particular, one must contain the set with four elements and one a set with three elements. Then since their intersection will have three elements, every family must have a set with three elements, but there are not enough sets with three elements to go around, which gives a contradiction. So f(4) cannot be more than 9.

36. noamnisan says: Gil, could you remind us what is known about the case d=2? From your previous blog post I understand that f(2,n)=O(n log^2 n), while the only construction I can see gives length n-1. Are these the best that is known?

• noamnisan says: I think that the following argument establishes an O(n log n) upper bound for f(2,n): Define the “prefix-support” of an index i to be the support of all sets in all F_j for j < i, and the “suffix-support” of i to be the support of all sets in all F_j for j > i. Now let us ask whether there exists some i in the middle third (t/3 < i < 2t/3) such that the intersection of the prefix-support and the suffix-support of i is less than k=n/log(n). If not, then every F_i in the middle third must have at least n/(2k) pairs in it, so then t/3 is bounded by the total number of pairs (n choose 2) divided by n/(2k), which gives an O(n log n) bound on t (for k=n/log n). Otherwise, fix such i, and let m be the size of the prefix-support, so the size of the suffix-support is at most n-m+k, and we get the recursion $f(2,n) \le f(2,m) + f(2,n-m+k) + 1$, which (I think) solves to O(n log n) for k=n/log n when taking into account that m=theta(n).

• noamnisan says: WordPress ate part of the definitions in the beginning: the prefix-support of i is the support of $\cup_{j<i} F_j$ and the suffix-support is the support of $\cup_{j>i} F_j$.

• noamnisan says: WordPress ate the LaTeX again, so let’s try verbally: the prefix-support is the combined support of all set systems that come before i, while the suffix-support is the support of all sets that come after i.

• noamnisan says: OK, this was not only garbled by wordpress but also by myself and is quite confused and with typos. I think it still works, and will try to write a more coherent version later…

37. gowers says: As another attempt to gain intuition, I want to have a quick try at a just-do-it construction of a collection of families satisfying the given condition. Let’s call the families I’m trying (with no hope of success) to construct $F_1,\dots,F_m.$ The pair of families with most impact on the other families is $(F_1,F_m),$ so let me start by choosing those so as to make it as easy as possible for every family in between to cover all the intersections of sets in $F_1$ and sets in $F_m.$ Before I do that, let me introduce some terminology (local to this comment unless others like it). I’ll write $F_i\sqcap F_j$ for the “pointwise intersection” of $F_i$ and $F_j,$ by which I mean $\{A\cap B:A\in F_i,B\in F_j\}.$ And if $F$ and $G$ are set systems I’ll say that $F$ covers $G$ if for every $A\in G$ there is some $B\in F$ such that $A\subset B.$ Then the condition we want is that $F_j$ covers $F_i\sqcap F_k$ whenever $i<j<k.$ If we want it to be very easy for the families $F_i$ with $2\leq i\leq m-1$ to cover $F_1\sqcap F_m$ then the obvious thing to do is make $F_1$ and $F_m$ as small as possible and to make the intersections of the sets they contain as small as possible as well.
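This terminology transcribes directly into code; a minimal sketch (editorial; sets as Python frozensets, names illustrative):

```python
def pointwise_intersection(F, G):
    """F ⊓ G above: all intersections A ∩ B with A in F, B in G."""
    return {A & B for A in F for B in G}

def covers(F, G):
    """F covers G: every set in G is contained in some set of F."""
    return all(any(A <= B for B in F) for A in G)

# condition (*) is then: covers(F[j], pointwise_intersection(F[i], F[k]))
# for all i < j < k
```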
Come to think of it (and although this must have been mentioned several times, it is only just registering with me) it’s clear that WLOG there is just one set in $F_1$ and one set in $F_m,$ because making those families smaller does not make any of the conditions that have to be satisfied harder. A separate minimality argument also seems to say that WLOG not only are $F_1$ and $F_m$ singletons, but they are “hereditarily” singletons in that their unique elements are singletons. Why? Well, if $F_1=\{A\}$ then removing an element from $A$ does not make anything harder (because $F_1$ is not required to cover anything) and makes all covering conditions involving $F_1$ easier (because sets in the intermediate $F_j$ cover a proper subset of $A$ if they cover $A$). In fact, we could go further and say that $F_1=\{\emptyset\}$ and that $F_m=\{\{r\}\}$ for some $r,$ but I don’t really like that because we can play the empty-set trick only once and it destroys the symmetry. So I’m going to ban the empty set for the purposes of this argument. So now $F_1=\{\{r\}\}$ and $F_m=\{\{s\}\}$ for some $r,s.$ The next question is whether $r$ should or should not equal $s.$ This is no longer a WLOG I think, because if $r\ne s$ then it makes it easier for intermediate families to cover $F_1\sqcap F_m$ (they do automatically) but when we put in $F_i$ it means that $F_i\sqcap F_1$ and $F_i\sqcap F_m$ are (potentially) different sets, so there is more to keep track of. But a further $F_j$ will either be earlier than $F_i$ and not care about $F_m$ or later and not care about $F_1,$ so my instinct is that it is better to make $r\ne s.$ (And since this is a greedyish just-do-it, there is no real harm in imposing some minor conditions like this as we go along.) Since this comment is getting quite long, I’ll continue in a new one rather than risk losing the whole lot.

38. gowers says: This comment continues from this one. There is now a major decision to be made: which family should we choose next? Should it be one of the extreme families — without loss of generality $F_2$ — or should we try a kind of repeated bisection and go for a family right in the middle? Since going for a family right in the middle doesn’t actually split the collection properly in two (since the two halves will have plenty of constraints that affect both at the same time) I think going for an extreme family is more natural. So let’s see whether we can say anything WLOG-ish about $F_2.$ I now see that I said something false above. It is not true that the unique set in $F_1$ is WLOG a singleton, because there is one respect in which that makes life harder: since the $F_i$ are disjoint we cannot use that singleton again. So let us stick with the decision to choose singletons but bear in mind that we did in fact lose generality (but in a minor way, I can’t help feeling). I also see that I said something very stupid: I was wondering whether it was better to take $r=s$ or $r\ne s,$ but of course taking $r=s$ was forbidden by the disjointness condition. The reason I noticed the first mistake was that that observation seemed to iterate itself. That is, if we think greedily, then we’ll want to make $F_2$ be of the form $\{\{t\}\}$ and so on, and we’ll quickly run out of singletons. So the moral so far is that if we are greedy about making it as easy as possible for $F_j$ to cover $F_i\sqcap F_k$ whenever $i<j<k,$ then we make the disjointness condition very powerful.
Since that is precisely the sort of intuition I was hoping to get from this exercise, I’ll stop this comment here. But the next plan is to try once again to use a greedy algorithm, this time with a new condition that will make the disjointness condition less powerful. Details in the next comment, which I will start writing immediately.

39. noamnisan says: It seems that my previous comment (http://gilkalai.wordpress.com/2010/09/29/polymath-3-polynomial-hirsch-conjecture/#comment-3448) went somewhat back in history.

40. Yann Strozecki says: Sorry, my previous post was awful since wordpress does not understand latex directly. I try to add latex markups; I hope it is the right thing to do. The computations on examples were false because I used something I did not establish. Now let’s try to improve a bit what I have said. The idea is that if a big set {1,…,n-k} appears in a family, we can decompose the sequence of families $F_1,\dots,F_t$ into levels: we have a sequence of sets $S_i \in F_i$ such that the intersection of $S_i$ with {1,…,n-k} is constant on a level. Moreover each level is of size at most f(k). We can improve on that a bit to try to compute the first values of f. Let g(n,m) be the maximum size of the sequences of disjoint families satisfying (*) such that only sets of size at most m appear in the families. We have g(n,n) = f(n) and g(n,.) is increasing. Assume now there is a set of size n-k but no larger one in a sequence of t families. Then we can bound t by $1 + 2\sum_{0 < i \leq n-k} g(k,i)$. Consider now the case where we have a set of size n-1 but not of size n. We can see that the first and last levels have only two sets to share, therefore we can say wlog that the first level is of size 2 and the last of size 0. By a bit of case study I think I can prove that there are at most two levels with two families. Therefore we have that the size of such a sequence is bounded by 1 + 2*2 (the two levels of size two) + 2(n-1)-3 (the last level is removed as well as the two of size 2). So if there is a set of size n-1 (but not one of size n), the size of the sequence is at most 2n. Moreover, one can find such a sequence of size 2n: $\emptyset, n, 1n, 1, 12, 123, \dots, 123\dots(n-1), 23\dots(n-1), \dots, n-1$. Well, now that really rules out the case of a set of size 3, but none of size 4, in a sequence built over {1,2,3,4}. Therefore to prove f(4)=8 one has only to look at a sequence of families containing only pairs. That is, computing g(4,2), and it must not be that hard.

41. gowers says: Before starting this comment I took a look back and saw that I had missed Gil’s discussion of $f(d,n).$ Well, my basic thought now is that since a purely greedy algorithm encourages one to look at $f(1,n),$ which is a bit silly, it might be a good idea to try to apply a greedy algorithm to the question of lower bounds for $f(d,n).$ I’ll continue with the notation I suggested in this comment. As before, it makes sense for $F_1$ and $F_m$ to be singletons (where by “makes sense” I mean that WLOG this is what happens). Now of course by “singleton” I mean “set of size $d$“. Actually, since this is just a start, let’s take $d=2$ and try to get a superlinear lower bound (which does in fact exist according to an earlier post of Gil’s, according to Noam Nisan above). So now $F_1=\{\{r,s\}\}$ and $F_m=\{\{t,u\}\}.$ The first question is whether we should take $\{r,s\}$ and $\{t,u\}$ to be disjoint or to intersect in a singleton.
It seems pretty clearly better to make them disjoint, so as to minimize the difficulties later. Now what? Again, let us go for $F_2$ next. Here’s an argument that I find quite convincing. Since every set system covers $F_1\sqcap F_m,$ WLOG $F_2$ is a singleton $A_2.$ (Let’s also write $F_1=\{A_1\}$ and $F_m=\{A_m\}.$) Now we care hugely about $F_2\sqcap F_m$ because there are lots of intermediate $F_i,$ and not at all about $F_1\sqcap F_2.$ Oh wait, that’s false isn’t it, because for each $i$ we need $F_2$ to cover $F_1\sqcap F_i.$ So it’s not even obvious that we want $F_2$ to be a singleton. Indeed, if $A_1=\{1,2\}$ and $F_2=\{A_2\}=\{\{2,3\}\},$ then all sets in all $F_i$ must either be disjoint from $\{1,2\}$ or must intersect it in $\{2\}.$ That tells us that the element 1 is banned, and banning elements is a very bad idea because you can do it at most $n$ times. On the other hand, if we keep insisting that we mustn’t ban elements, that is going to be a problem as well, so there seem to be two conflicting pressures here. For now, however, I’m going to go with not banning elements. So that tells us that, in order not to restrict the later $F_i$ too much, it would be a good idea if $F_2$ contains at least one set that contains $1$ and at least one set that contains $2.$ But in order to do that in a “minimal” way, let us take $F_2=\{13,23\},$ where I am now abbreviating $\{r,s\}$ by $rs.$ I am of course also thinking of the $F_i$ as graphs, so let me make that thought explicit: we want a sequence of graphs $F_i$ such that no two of the graphs share an edge, and such that if $i<j<k$ and $F_i$ and $F_k$ contain edges that meet at a vertex, then $F_j$ must also contain an edge that meets at that vertex. Ah — rereading Noam Nisan’s question I now see that the superlinear bound he was referring to was an upper bound. So it could be that obtaining a lower bound even of $n+100$ would be interesting. So now we have chosen the following three graphs: $F_1=\{12\}, F_2=\{13,23\}, F_m=\{rs\}.$ Let’s assume that $r=n-1$ and $s=n,$ but write it $\{rs\}$ because it is harder to write $\{\{n-1,n\}\}.$ What constraint does that place on the remaining graphs $F_i$? Since no set in $F_1$ or $F_2$ intersects any set in $F_m,$ it is not hard for $F_i$ to cover $F_1\sqcap F_m$ and $F_2\sqcap F_m.$ So we just have to worry about $F_2$ covering $F_1\sqcap F_i.$ And we have made sure that that happens automatically, since $F_2$ covers all possible intersections with a set in $F_1.$ (This is another sort of greed that is clearly too greedy.) So the only constraint comes from the disjointness condition: every edge in $F_i$ must contain a vertex outside the set $\{1,2,3\},$ since we have used up the edges $12,23,13.$ Now let’s think about $F_3.$ (It’s no longer clear to me that we wanted to decide $F_m$ at the beginning — let’s keep an open mind about that.) The obvious analogue of what we did when choosing $F_1$ and $F_2$ is to set $F_3=\{14,24,34\}.$ But this is starting to commit ourselves to a pattern that we don’t want to continue, since it stops at $n.$ (But let me check whether it works. If $F_i$ is the set of all pairs with maximal element $i,$ then if $i<j<k$ and an edge in $F_i$ meets an edge in $F_k,$ then those two edges must either be of the form $ri,ik$ or of the form $ri,rk$ for some $r<i.$ So they intersect in some $s\leq i,$ which means that their intersection is covered by the edge $sj\in F_j.$) If we decide at some point to break the pattern, then what happens?
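The parenthetical check can also be confirmed mechanically; a minimal sketch (editorial; same frozenset encoding as in the earlier sketches) verifying the "all pairs with maximal element i" sequence for small n:

```python
from itertools import combinations

def star_ok(families):
    """Condition (*) for a sequence of families of frozensets."""
    return all(any(S & T <= R for R in families[j])
               for i, j, k in combinations(range(len(families)), 3)
               for S in families[i] for T in families[k])

def max_element_families(n):
    """F_i = all pairs with maximal element i, for i = 2, ..., n."""
    return [[frozenset({r, i}) for r in range(1, i)] for i in range(2, n + 1)]

for n in range(2, 9):
    assert star_ok(max_element_families(n))
# note: this yields only n - 1 families — the pattern that "stops at n"
```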
Again, this comment is getting long so I’ll start a new one.

42. gowers says: This is a continuation of my previous comment. Let us experiment a bit and try $F_1=\{12\},F_2=\{13,23\},F_3=\{14,24\}.$ That is, in order to break a pattern that we need to break sooner or later, let us miss out $34$ from $F_3$ and explore the implications for future $F_i.$ For convenience I am now not going to assume that we have chosen $F_m:$ my greedy (but I hope not overgreedy) algorithm just chooses the families in the obvious order. What condition does this impose on future $F_i$? Any intersection with $\{12\}$ will be $\{1\}$ or $\{2\},$ so it will be covered by $F_2$ and $F_3.$ So that’s fine. But the fact that we have missed out $34$ from $F_3$ means that we can’t afford any edge that contains the vertex 3, since that will intersect an edge in $F_2$ in the vertex 3, and will then not be covered by $F_3.$ That means that the best our lower bound for $f(2,n)$ can be is $3+f(2,n-1),$ and it probably can’t even be that. So this is bad news if we are going for a superlinear lower bound. Let’s instead try $F_1=\{12\}, F_2=\{13,23\}, F_3=\{24,34\},$ which is a genuinely different example. Can any edge in $F_i$ contain 1? If it does, then … OK, it obviously can’t, so once again we’re in trouble. If we don’t ban vertices, then what are our options? Let’s try to understand the general circumstance under which a vertex gets banned. Suppose that a vertex $r$ is used in some $F_i$ and not in $F_j$, where $j>i.$ Then that vertex cannot be used in $F_k$ if $k>j,$ for the simple reason that then $\{r\}\in F_i\sqcap F_k$ and $\{r\}$ is not covered by $F_j.$ So let $U_i$ be the set of all vertices used by $F_i$ (or equivalently the union of the sets in $F_i$). We have just seen that if $i<j<k$ and $r\in U_i\cap U_k,$ then $r\in U_j.$ So a preliminary question we should ask is whether we can find a nice long sequence of sets $U_i$ satisfying this condition. The answer is potentially yes, since there is no reason to suppose that the sets $U_i$ are distinct. But let us first see what happens if they are distinct. I think there should be a linear upper bound on their number. Let $U_1,\dots,U_m$ be a sequence of distinct sets with this property. Let us assume that the ground set is ordered in such a way that each new element that is added is as small as possible. Hmm, I don’t think that helps. But how about this. For each element $r$ of the ground set, the condition tells us that the set of $i$ such that $r\in U_i$ is an interval $I_r.$ So we ought to be able to rephrase things using those intervals. The question becomes this: let $I_1,\dots,I_n$ be a collection of subintervals of $[m].$ For each $i\in[m]$ let $U_i=\{r:i\in I_r\}.$ How big can $[m]$ be if all the sets $U_i$ are distinct? That’s much better, since it tells us that between each $i$ and $i+1$ some interval $I_r$ must either just have started or just have finished. So we get a bound of $m\leq 2n.$ (Perhaps it’s actually $2n-1,$ but I can’t face thinking about that.) This suggests to me a construction, but once again this comment is getting a bit long so I’ll start a new one.

• gowers says: I’ve just been reading properly some of the earlier comments and noticed that the argument I gave in the third-to-last paragraph is essentially the argument that Noam and Terry came up with to bound the function $f'(n).$

43. gowers says: I want to try out a construction suggested by the thoughts at the end of this comment, though it may come to nothing. The basic idea is this.
Let’s define our sets $U_i$ as follows. We begin with a collection of increasing intervals $[1,i]$ and then when we reach $[1,n]$ we continue with $[2,n],[3,n],\dots,\{n\}.$ These sets have the property that if $i<j<k$ and $r\in U_i\cap U_k$ then $r\in U_j.$ Now I want to define for each $i$ a graph $F_i$ with vertex set $U_i$ (except that strictly speaking $F_i$ has vertex set $[n]$ and the vertices outside $U_i$ are isolated). I want these $F_i$ to have the following two properties. First, $F_i$ is a vertex cover of $U_i:$ that is, all the vertices in $U_i$ are used. Secondly, the $F_i$ are edge-disjoint. Suppose we have these two conditions. Then if $i<j<k,$ the family $F_i\sqcap F_k$ consists of vertices that belong to $U_i\cap U_k$ and hence to $U_j,$ and they are then covered by $F_j.$ The most economical vertex covers will be perfect matchings, though that causes a slight problem if $|U_i|$ is odd. But maybe we can cope with the extra vertices somehow. I think that so far these conditions may be too restrictive. Indeed, for a trivial reason we can’t cover $U_1=\{1\},$ so we should have started with $U_1=\{1,2\}.$ But if we do that, then $F_1$ is forced to be $\{12\},$ which means that $F_2$ is forced to contain $\{13\},$ and all sorts of other decisions are pretty forced as well. So the extra idea that might conceivably help (though it is unlikely) is to start with $U_1=\{1,2,\dots,\epsilon n\}$ for some small $\epsilon.$ That might give us enough flexibility to make the choices we need to make and obtain a lower bound of $2(1-\epsilon)n.$ However, I don’t rule out some simple counting argument showing that we use up the edges in some $U_i$ too quickly.

44. noamnisan says: Looking at n=4, here’s a nontrivial (better than n-1) sequence for f(2,n): {12}, {23,14}, {13,24}, {34}. I think that this generalizes for general n: at the beginning put a recursive sequence on the first n/2 elements; at the end put a recursive sequence on the last n/2 elements; now in the middle we’ll put n/2 set systems where each of these has exactly n/2 pairs, where each pair has one element from the first n/2 elements and another from the last n/2 elements. (There are indeed n/2 such disjoint matchings in the bipartite graph with n/2 vertices on each side.) The point is that the support of all set systems in the middle is complete, so they trivially cover any intersection of two pairs. If this works then we get the recursion $f(2,n) \ge 2f(2,n/2) + n/2$, which solves to $\Omega(n \log n)$.

• Terence Tao says: Noam, I don’t think this construction works. The problem is that the recursive sequences at either end obey additional constraints coming from the stuff in the middle. For instance, because the support of all the middle stuff is complete, the supports on the first recursive sequence have to be monotone increasing, and the supports on the second recursive sequence have to be monotone decreasing. (One can already see this if one tries the n=8 version of the construction.) I discovered this by reading [EHRR] and discovering that they had the bound $f(d,n) \leq 2^{d-1} n$, or in this case $f(2,n) \leq 2n$. I’ll put their argument on the wiki.

• Klas Markström says: Is it known if 2n is the right bound here?
The best construction I have been able to do is this: Start by dividing 1…n into n/2 disjoint pairs {1,2},{3,4},…,{i,i+1},… Next insert the following families between {a,b} and {c,d}: {{a,c},{b,d}}, {{a,d},{b,c}}. This gives a sequence of length 1+3(n/2-1), and I have not been able to improve it for (very) small n.

• Klas Markström says: {a,b} and {c,d} should be consecutive pairs in the first sequence.

• noamnisan says: Ahh, I see. Indeed broken. Thanks.

45. gowers says: That looks convincing (this is a reply to Noam Nisan’s comment which should appear just before this one but may not if WordPress continues to act strangely). It interests me to see how it fits with what I was writing about, so let me briefly comment on that. First of all, let’s think about the sequence of $U_i$s. I’ll write what the sequence is in the case $n=8.$ It is 12,1234,34,12345678,56,5678,78. If we condense the pairs to singletons it looks like this: 1, 12, 2, 1234, 3, 34, 4. We can produce this sequence by drawing a binary tree in a natural way and then visiting its vertices from left to right. We label the leaves from 1 to $n,$ and we label vertices that appear higher up by the set of all the leaves below them. The gain from $n-1$ to $n\log n$ comes from the fact that these sets appear with a certain multiplicity: the multiplicity of a set $U_i$ in the sequence is proportional to the size of that set. By the way, is the following argument correct? Let $F_1,\dots,F_m$ be a general system of families satisfying the condition we want. For each $i$ let $U_i$ be the union of all the sets in $F_i.$ Then if $i<j<k$ and $r\in U_i\cap U_k,$ it follows that $r\in U_j.$ (Otherwise just pick a set in $F_i$ and a set in $F_k$ that both contain $r$ and their intersection won’t be contained in any set in $F_j.$) Therefore, by the observation I made in this comment (a fact that I’m sure others were well aware of long before my comment) there are at most $2n$ distinct sets $U_i.$ It follows that if we have a superpolynomial lower bound, then we must have a superpolynomial lower bound with the same $U_i$ for every family. So an equivalent formulation of the problem would be to find a polynomial upper bound (or superpolynomial lower bound) for the number of $F_i$ with the additional constraint that the union of every $F_i$ is the whole of $[n].$ If that is correct, then I think it helps us to think about the problem, because it raises the following question: if the unions of all the set systems are the same, then what is going to give us a linear ordering? In Noam’s example, if you look at the popular $U_i$ there is no ordering — any triple of families that share a union has the desired relationship. A closely related question is one I’ve asked already: what is the upper bound for the number of families if you ask for $F_j$ to cover $F_i\sqcap F_k$ for every $i,j,k$?

46. noamnisan says: The case where the support of every family is [n] seems indeed enough, and concentrating on it was also suggested by Yury ( http://gilkalai.wordpress.com/2010/09/29/polymath-3-polynomial-hirsch-conjecture/#comment-3438 ). This restriction doesn’t seem so convenient for proving upper bounds, since the property disappears under “restrictions”. For example, running the EHRR argument would reduce n by 1 without shortening the sequence at all but losing the property.

• gowers says: Ah, thanks for pointing that out — I’m coming late to this discussion and have not been following the earlier discussion posts.
For now I myself am trying to think about lower bounds (either to find one or to understand better why the conjecture might be true) so I quite like the support restriction because it somehow stops one from trying to make the “wrong” use of the ground set. It also suggests the following question that directly generalizes what you were doing in your example. Suppose we have three disjoint sets X,Y,Z of size n. How many families $F_i$ can we find with the following properties? (i) For every $i$ and every $A\in F_i$ the intersections $A\cap X, A\cap Y$ and $A\cap Z$ all have size 1. (ii) Each $F_i$ is a partition of $X\cup Y\cup Z$ into sets of size 3. (iii) For every $i,j,k$ and every $A\in F_i$ and $B\in F_j$ there exists $C\in F_k$ such that $A\cap B\subset C.$ (iv) No set belongs to more than one of the $F_i.$ If we do it for X and Y and sets of size 2 then the answer is easily seen to be n (and condition (iii) holds automatically), but for sets of size 3 it doesn’t seem so obvious, since each element of X can be in $n^2$ triples, but if we exploit that then we have to worry about condition (iii). I haven’t thought about this question at all so it may have a simple answer. If it does, then I would follow it up by modifying condition (iii) so that it reads “For every $i<j<k$ and every $A\in F_i$ and $C\in F_k$ there exists $B\in F_j$ such that $A\cap C\subset B.$“

• gowers says: I now see that Yury’s comment was not from an earlier discussion thread — I just missed it because I was trying to think more about general constructions than constructions for small $n.$

47. Steve Klee says: Hi everyone, Here’s another thought on handling the d=2 case (and perhaps the more general case): Suppose we add the following condition on our collection of set families $F_i$: (**) Each $(d-1)$-element subset of $[n]$ is active on at most two of the families $F_i$. Here, I mean that a set $S$ is active on $F_i$ if there is a $d$-set $T \in F_i$ for which $T \supset S$. This condition must be satisfied for polytopes — working in the dual picture, any codimension 1 face of a simplicial polytope is contained in exactly two facets. So how does this help us in the case that $d=2$? Suppose $F_1,\ldots,F_t$ are disjoint families of 2-sets on ground set $[n]$ satisfying Gil’s property (*) and the above property (**). Let’s count the number of pairs $(j,k) \in [n] \times [t]$ for which $j$ is active on $F_k$. Each $j \in [n]$ is active on at most two of the families $F_i$, and hence the number of such pairs is at most $2n$. On the other hand, since each family $F_k$ is nonempty, there are at least two vertices $j \in [n]$ that are active on each family $F_k$. Thus the number of such ordered pairs is at least $2t$. Thus $t \leq n$, giving an upper bound of $f(2,n) \leq n-1$ when we assume the additional condition (**). This technique doesn’t seem to generalize to higher dimensions without some sort of analysis of how the sizes of the set families $F_i$ grow as $i$ ranges from $1$ to $t$.

48. Kristal Cantwell says: I think I can show f(4) is less than 9. We have that it must be less than 10 from before. Assume we have 9 or more families; then there are 8 types of sets in terms of which of the first three elements are in the set. We must have a repetition of the same type in sets in two different families. Then every family must contain a set that contains the elements of the repetition.
Now if the repetition is not null there can be at most 8 sets that contain the repetition, but we have 7 families besides A and B, and so there is a contradiction. But we can improve this since two instances of the repetition are already in A and B, so at most 6 unused sets remain. Now we can repeat this argument for each set of four elements. So we have at most 5 families containing the null set and each single element. And adding one element not in a set in a family and having the resulting augmented set outside the family is forbidden. So outside of the 5 families that contain the singleton elements and the null set there are no two element sets, no single element sets and no null set. But that leaves 5 sets for 4 families. This means that each family must contain one of the sets with more than two elements. We divide the proof into two cases. In the first case one family contains the set with four elements. Then another family must consist of a single set with three elements. Then since their intersection will have three elements, every family besides these two must have a set with three elements, but there are not enough sets with three elements to go around, which gives a contradiction. The second case is when the four element set is not in one of these four families. Then these four families must each contain one of the three element sets. But then the families containing 123 and 124 will contain sets whose intersection is 12, but the family containing 134 will not contain a set which contains the elements 1 and 2, so in this case we have a contradiction. So in both cases we have a contradiction, and f(4) cannot be 9 or more. However, since f(4) must be 8 or 9 from previous comments, in fact it must be 8.

• Terence Tao says: Kristal, I am having trouble following the argument (I think part of the problem is that not enough of the objects in the argument are given names, and the two objects that are given names – A and B – are not defined). Could you put it on the wiki in more detail perhaps?

• Kristal Cantwell says: I have rewritten it and put it on the wiki.

• Terence Tao says: Thanks Kristal! But I’m still having trouble with the first part of the argument. It seems you are classifying subsets of [4]={1,2,3,4} into eight classes, depending on how these sets intersect {1,2,3}. You then use the pigeonhole principle to find two families A, B that contain sets (let’s call them X and Y) which intersect {1,2,3} in the same way, e.g. {1,2} and {1,2,4}. What I don’t see then is why the other seven families also have to contain a set that intersects {1,2,3} in this way. The convexity condition tells us that every family between A and B will contain a set that contains the intersection of X and Y (in this case, {1,2}), but this doesn’t seem to be the same as what you are saying.

• Kristal Cantwell says: I went back and looked at my attempt at a proof and the difficulty you mention cannot be overcome. So I went and deleted the material I posted from the wiki. I am looking at a different idea to try and find out the value of f(4).

49. noamnisan says: Let me try again to prove an upper bound: f(2,n)=O(n log n). Fix a sequence $F_1, \ldots, F_t$. Let us denote by $U_i$ the support of $F_i$, by $U_{\prec i} = \cup_{j \le i-1} U_j$ the support of the i’th prefix, and by $U_{\succ i} = \cup_{j \ge i+1} U_j$ the support of the i’th suffix. The condition on the sequence of F’s implies that $U_{\prec i} \cap U_{\succ i} \subseteq U_i$ for all i.
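These definitions transcribe directly; a minimal sketch (editorial; families as lists of frozensets, as in the earlier sketches) for testing the support condition on a candidate sequence:

```python
def support(sets):
    """Union of all the given sets."""
    out = set()
    for S in sets:
        out |= S
    return frozenset(out)

def support_condition_holds(families):
    """Check U_{<i} ∩ U_{>i} ⊆ U_i for every i, where U_{<i} (resp. U_{>i})
    is the support of all families strictly before (resp. after) i."""
    t = len(families)
    for i in range(t):
        pre = support([S for F in families[:i] for S in F])
        suf = support([S for F in families[i + 1:] for S in F])
        if not pre & suf <= support(families[i]):
            return False
    return True
```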
Intuitively, a small $U_i$ will allow us to get a good upper bound by induction since the prefix and the suffix are on almost disjoint supports. On the other hand, a large $U_i$ implies a large $F_i$ (since the F's contain pairs), and if many $U_i$'s are large then we exhaust the $\binom{n}{2}$ pairs quickly. More formally, let us prove by induction $f(2,n) \le 100 n \log n$. We now claim that for some $45n \log n \le i \le 55n\log n$ we have that $|U_i| \le n/(5\log n)$. Otherwise these $10n\log n$ $F_i$'s will each hold more than $n/(10\log n)$ pairs, exceeding the total number of possible pairs. Let us denote $k=|U_i|$, $m=|U_{\prec i}|$, and so we have $|U_{\succ i}| \le n+k-m$, with $k \le n/(5\log n)$. Now we observe that m and n+k-m are both about half of n, formally $m \le 0.6 n$ and $n+k-m \le 0.6n$. This is implied by the induction hypothesis since $f(2,m) \ge 45 n \log n$ and $f(2,n+k-m) \ge 45n \log n$. We now get the recursion $f(2,n) \le 1 + f(2,m) + f(2,n+k-m)$, of which the RHS can be bounded using the induction hypothesis by $100 (n+k) (\log (0.6 n))$, which is less than the required $100 n \log n$. [GK: Noam, I fixed the latex by copy and paste, I don't understand why it did not parse.]

• Terence Tao says: Looks good to me; I've put it on the wiki.

50. noamnisan says: Using the wordpress comment mechanism is pretty annoying, as I can't edit my remarks, nor do I even get to see a preview of my comment, not to mention the bugs (I think that the greater-than and less-than symbols confuse it due to it looking like an html tag). Maybe a better platform for polymath would be something like stack overflow? http://cstheory.stackexchange.com/questions/1814/a-combinatorial-version-for-the-polynomial-hirsch-conjecture [GK: Dear Noam, I agree that the graphics, the ability to preview and edit, and easier latex would make a platform like stackoverflow more comfortable. I do not see how we can change things for this project]

• gowers says: Just to comment that it's true that the symbols < and > are to be treated with extreme caution. I don't use them: instead I type "ampersand lt/gt semicolon" and then it works. Of course, that's quite annoying too …

• onlooker says: The trick is to set up your own wordpress blog, and post/edit your comment there (you can even set them as private or draft) and then copy/paste here when you're done editing.

• Terence Tao says: What one can do is put any lengthy arguments on the wiki at http://michaelnielsen.org/polymath1/index.php?title=The_polynomial_Hirsch_conjecture#f.28n.29_for_small_n and just put a link to it and maybe a brief summary in the blog.

51. Gil Kalai says: First, sorry for the wordpress problems. It looks right now like we are trying several avenues to improve the upper bounds or the lower bounds. When it comes to upper bounds, even if we come up with new arguments which give weaker results than the best known ones, this can still serve us later. (I suppose this is true for lower bounds as well.) I find it hard to digest everything everybody wrote, but I will try to write a post soon with a summary of what is known on upper bounds and recurrence relations for f(d,n). (Essentially everything is in the EHRR paper and is quite simple.)

52. gowers says: From the elementary proof: "If you restrict your attention to sets in these families containing an element m and delete m from all of them, you get another example of such families of sets, possibly with smaller value of t.
(Those families which do not include any set containing m will vanish.)” I’m sure I’m being stupid, but I’ve just realized I don’t understand this argument. The reason I don’t understand it is that I don’t see why removing m from all the sets can’t identify families that don’t vanish, or destroy the disjointness condition. For example, suppose we have just three families {1,2,3}, {12,23,13} and {123}. If we remove 3 from all sets then we get {1,2,emptyset}, {1,2,12} and {12}, which violates the disjointness condition. And if we have the three families {1,2}, {12}, {13,23}, then removing 3 from all sets identifies the third family with the first (which is just a stronger form of their not being disjoint. I can tell that what I’m writing is nonsense but some mental block is stopping me from seeing why it is nonsense. • gowers says: Sorry — the second example wasn’t a good one because not all the sets in the first two families contained the element 3. Here’s another attempt. The families could be {13,2}, {123}, {1,23} and now removing 3 from all sets identifies the first and third families. 53. noamnisan says: I think that the idea is to also completely delete sets that do NOT contain m. Thus {13,2}, {123}, {1,23} would become (after restricting on 3) {1}, {12}, {2} • gowers says: Thanks — that makes good sense. 54. gowers says: I’m trying to think what an extremal collection of families might look like if the proof of the upper bound was actually sharp. It would be telling us that to form an extremal family for $n=2m+1$ we would need to form an extremal family inside $[m]$ and another one inside $[m+2,n],$ and between the two one would take an extremal family in $[n]\setminus\{m+1\}$ and add $m+1$ to each set in that family. Can we say anything further about the supports of the various families? (Apologies once again if, as seems likely, I am repeating arguments that have already been made.) Well, if the middle family is extremal, then there must once again be a partition of the ground set into two such that the first few families occupy one part and the last few the other. So can we say anything about how this partition relates to the first partition? I think we can. Let’s call the two subsets of $[n]\setminus\{m+1\}$ $U$ and $V.$ If $U$ contains any element bigger than $m+1,$ then $V$ must also contain that element, which is a contradiction. So now, if I’m not much mistaken, we know that the first $f(m)$ sets live inside $[m]$ and the last $f(m)$ sets live inside $[m+2,n].$ We also know that something similar holds for the family in between, except that all sets there include the middle element $m+1.$ I feel as though we may be heading towards a contradiction here. Suppose that the first $f(m)$ sets form an extremal family inside $[m]$ and the next $f(m)$ (or possibly $f(m-1)$ — there are irritating parity considerations here) also form an extremal family inside $[m]$ except that $m+1$ is tacked on to all the sets. The extremality of these two bunches of families ought now to force us to find two partitions of $[m]$ into roughly equal subsets. Let’s call these two pairs of sets $(U,V)$ and $(W,Z).$ So the first $f(m/2)$ of the first lot of families live in $U,$ the last $f(m/2)$ live in $V,$ the first $f(m/2)$ of the second lot of families in $W,$ and the last $f(m/2)$ in $Z.$ We now know that any element of $U\cap W$ must be in $V$ as must any element of $U\cap Z,$ a contradiction. In fact, we can put that more concisely. 
Suppose the first $f(m/2)$ sets have union $[m/2]$ and the last $f(m/2)$ of the first $f(m)$ sets have union roughly $[m/2,m].$ Then later sets seem to be forced to avoid $[m/2]$ altogether, which makes the maximum size of the first intermediate collection of families much smaller. This is starting to remind me of a proposed argument of Terry’s that didn’t work, so let me look back at that. Hmm, I’m not sure it’s quite the same. Now I’m definitely not claiming to have a proof of anything, since as soon as we allow the example not to be extremal it becomes much harder to say anything about it. But I find these considerations suggestive: it seems that one ought to be able to improve considerably on $n^{\log_2n+1}.$ If one were going to try to turn the above argument into a proof, I think one would have to show that if you have a large sequence of families then there must be a bipartition of the ground set such that the first few families are concentrated in the first set of the bipartition and the last few in the last set. Equivalently, the idea would be to show that the sets in the first few families were disjoint from the sets in the last few. It might be that one couldn’t prove this exactly, but could show that it was almost true after a small restriction, or something along those slightly vaguer lines. • gowers says: Just to add an obvious thought to that last paragraph: if we have a large but not maximal family, then it could be that the supports of all the families are equal to the whole of the ground set, as we have already seen. So some kind of restriction would be essential to get the disjointness (if it is possible at all). 55. gowers says: As I have already mentioned, it seems to me that for the purposes of an upper bound it would be extremely useful if one could prove that for any sequence of families $F_1,\dots,F_m$ over a certain length, the first few $F_i$ would have to be disjoint from the last few. However, that is not the right statement to try to prove because at the cost of dividing by $2n$ we can insist that the support of every single family is the whole ground set. But as Noam points out, this property is not preserved by restrictions, so it would be interesting to know whether it is possible to formulate a reasonable conjecture that would still be useful. As a first step in that direction, I want to consider a rather extreme situation. What can we say about a family if the supports of all its restrictions are full? Let us use the term $m$-restriction of a family $F$ for the collection of all sets $A\setminus\{m\}$ such that $A\in F$ and $m\in A.$ If the $m$-restriction of $F$ has support equal to $[n]\setminus\{m\},$ then for every $r\in[n], r\ne m$ there exists a set $A\in F$ such that $\{r,m\}\subset A.$ So this condition is saying that every set of size 2 is covered by a set in $F.$ So whereas the original support condition says that every 1-set is covered, now we are talking about 2-sets. This leads immediately to a question that I find quite interesting. It is quite easy to describe how the sequence of supports of the $F_i$ can behave, and this leads to the argument that there can be at most $2n-1$ different supports. But what about the “2-supports”? That is, if we define $V_i$ to be the set of all pairs covered by $F_i,$ then what can we say about the sequence $V_1,\dots,V_m$? In particular, how long can it be? I would like to guess that it can have length at most $Cn^2,$ but that really is a complete guess based on not having thought about the problem. 
If it turns out to be correct, then the “we might as well assume that all the 2-supports are the same” argument is that much weaker, which suggests that there is more chance of saying something interesting by considering a restriction. • gowers says: I’ve just realized that there is a fairly simple answer to part of the question, which is that the sets $V_1,\dots,V_m$ must themselves form a sequence of families of the given kind. Why? Well, if $i<j<k$ and there exist $A\in V_i,$ $C\in V_k$ such that $A\cap C$ is not contained in any set in $V_j,$ then pick sets $K\in F_i$ and $M\in F_k$ such that $A\subset K$ and $C\subset M.$ Then $K\cap M$ contains $A\cap C$ and is therefore not contained in any $L\in F_j$ (or else that $L$ would contain a pair $B$ that contains $A\cap C$). So it seems that taking the $d$-lower shadow of a system of families with the desired property gives you another system, though without the disjointness condition. Thus, another question of some minor interest might be this. If you want a system of (distinct but not necessarily disjoint) families $G_1,\dots,G_m$ of sets of size 2 such that for any $i<j<k$ and any $A\in G_i,$ $C\in G_k$ there exists $B\in G_j$ such that $A\cap C\subset B,$ then how big can $m$ be? The answer to the corresponding question for sets of size 1 is $2n-1.$ • gowers says: The answer to that last question is easy (and possibly not very interesting). There is an obvious upper bound of $2\binom n2,$ since the collection of $G_i$ that contain any given 2-set must be an interval. But we can also achieve this upper bound by simply adding the 2-sets one at a time and removing them again, just as one does for 1-sets. Anyhow, this confirms my earlier guess that the length would be quadratic. • gowers says: A further small remark: by taking into account both the 1-shadows and the 2-shadows we can deduce a bit more about how the 2-sets are added. First you need the set 12, then the two sets 13, 23 (not necessarily in that order), then the sets 14,24,34, and so on until you’ve added all sets. Then you have to remove all the 2-sets containing 1, then all the 2-sets containing 2 (but not 1) and so on. 56. Terence Tao says: Regarding f(2,n); the upper bound is actually 2n-1 rather than 2n, because there is at least one element in the first support block $S_1$ (see wiki for notation) that is not duplicated in any other block (indeed every element of $U_1$ is of this type). Conversely, I can attain this bound if I allow the “cheat” of using two-element multi-sets {a,a} in addition to ordinary two-element sets {a,b}. The argument on the wiki extends perfectly well to multi-sets. If one then sets $F_i = \{ \{ a,b\}: a+b = i+1 \}$ for i=1,…,2n-1, one obtains a family of sets obeying (*). In principle it should be easy to tinker with this example to remove the multisets {a,a} (which are only a small minority of the sets here) and get true sets, by paying a small factor in t (maybe a log factor), but this seems to be surprisingly tricky. Now, things get more interesting at the d=3 level. The elementary argument on the wiki gives $f(3,n) \leq 4n-1$, and this bound works for multisets also. However, the best example I can come up with is $F_i = \{ \{ a,b,c\}: a+b+c = i+2 \}$ for i=1,…,3n-2, so there is a gap between 3n-O(1) and 4n-O(1). This may seem trifling, but it may get to the heart of the issue because we really need to know whether the dependence on d is exponential (as the elementary argument suggests) or polynomial, and this is the first test case for this. 
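[Editorial note: Tao's multiset construction in the previous comment is easy to check by machine for small n. Here is a minimal Python sketch of my own (not part of the original thread; the function names are invented for illustration). It builds $F_i = \{\{a,b\} : a+b = i+1\}$ over multisets on $[n]$ and brute-forces the convexity condition (*); disjointness is automatic since each multiset has a unique sum.]

```python
from collections import Counter
from itertools import combinations, combinations_with_replacement

def families(n):
    # F_i = { {a,b} : a+b = i+1 }, for i = 1, ..., 2n-1
    F = {i: [] for i in range(1, 2 * n)}
    for a, b in combinations_with_replacement(range(1, n + 1), 2):
        F[a + b - 1].append((a, b))          # multiset {a,b} lives in F_{a+b-1}
    return F

def contains(C, S):
    # multiset containment: every element of S occurs in C with multiplicity
    return not (Counter(S) - Counter(C))

def check(n):
    F = families(n)
    idx = sorted(F)
    assert all(F[i] for i in idx)            # every family is nonempty
    for i, j, k in combinations(idx, 3):     # condition (*): i < j < k
        for A in F[i]:
            for B in F[k]:
                I = list((Counter(A) & Counter(B)).elements())
                assert any(contains(C, I) for C in F[j]), (i, j, k, A, B)
    print(f"n={n}: {len(idx)} = 2n-1 families satisfy (*)")

check(5)   # -> n=5: 9 = 2n-1 families satisfy (*)
```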
• Terence Tao says: p.s. If one can fix the multiset issue, then the construction above should give a lower bound of about dn for f(d,n), which matches the quadratic lower bound in [AHRR].

• Ryan O'Donnell says: I think comment #1 by Nicolai says there is a lower bound of $dn - d+1$ if you allow multisets… Also, I don't quite understand the argument on the wiki. I follow till the point that $S_i$ and $S_j$ are disjoint if $|i - j| \geq 3$, but I don't see that this gives the conclusion that $\sum |S_i| \leq 2n$. If the hypothesis I just mentioned is the only one used for the conclusion, then why couldn't one have $S_1 = S_2 = S_3 = [n]$? I suppose I should look at [EHRR]'s Theorem 4 (which also gets $2^{d-1}n - 1$); from a quick glance it looks slightly different from what's on the wiki, having a disjointness condition with $|i-j| \geq 2$…

• Terence Tao says: Oops, the 3 was supposed to be a 2 in that argument, sorry! I edited the wiki argument accordingly.

• Ryan O'Donnell says: Actually, with this change, I still don't quite see the argument which is on the wiki; i.e., I don't see why the convexity condition implies that $S_i$ and $S_j$ need to be disjoint if $i$ and $j$ differ by 2.

• Ryan O'Donnell says: Hmm, a reply I wrote didn't seem to show up. Maybe it's in moderation? The short version of it was that Nicolai's comment 1 says there is a $dn - d+1$ lower bound if you allow multisets.

• Terence Tao says: Ah, yes, I seem to have once again repeated a previous comment. Well, at least we're all on the same page now…

• Klas Markström says: It looks natural to include a version of f when the problem is restricted to sets of size at most d, rather than exactly d. In that case this construction, by collapsing the multisets to sets, and the one I gave earlier http://gilkalai.wordpress.com/2010/09/29/polymath-3-polynomial-hirsch-conjecture/#comment-3466 both show that the d=2 case can reach $2n-1$. I'm fairly sure that the example I gave here http://gilkalai.wordpress.com/2010/09/29/polymath-3-polynomial-hirsch-conjecture/#comment-3485 is optimal for n up to 6. I tried quite a few things by hand and did a partial computer check without coming up with anything better. So if it really is possible to remove the multisets, the reduction in size has to be quite large for small n. But I wouldn't be surprised if there is a difference between having sets of size exactly d and sets of size at most d.

• gowers says: I like the idea of concentrating on $d=3$ (either in the sets case or the multisets case). As we have seen, one learns quite a lot by looking at the supports of the various families. I think one way of trying to get a handle on the $d=3$ case is to look at what I have been thinking of as the 2-support: that is, the set of all 2-sets (or multisets) that are subsets of sets in the family. To make that clear, if $F_i$ is a family of sets of size 3, let $U_i$ be the union of all the sets in $F_i$ (the 1-support) and let $V_i$ be the set of all (multi)sets $A$ of size 2 such that there is a set $B\in F_i$ with $A\subset B.$ We know a great deal about how the sequence $U_1,U_2,\dots,U_m$ can look. But what can we say about the sequence $V_1,\dots,V_m$? There are two constraints that I can see. One is that they satisfy condition (*), and the other is that their supports also satisfy condition (*).
I feel as though I'd like to characterize all sequences $V_1,\dots,V_m$ that have these two properties, and then try to think which ones lift to sequences of disjoint families $F_1,\dots,F_m$ that also have property (*).

57. noamnisan says: A while ago, Gowers suggested to look at a harsher condition requiring that for all $A \in F_i$, $B \in F_k$ there exists $C \in F_j$ such that $A \cap B \subseteq C$, not only for i < j < k but for all i,j,k. Here's an even more extreme version of this approach where we just look at the union of the F's: Let's say that a family F of sets has index t if for every $A,B \in F$ ($A \ne B$) there exist at least t different $C \in F$ such that $A \cap B \subseteq C$. How large an index can a family of subsets of [n] have? If we have a family with index t then we get a t/n lower bound for our original problem, by splitting the family at random into $t/n$ families $F_i$ (since for a specific intersection and a specific i the probability that there is no covering C in $F_i$ is exp(-n), and then we can take the union bound over all intersections and all i's). This version may not be so far from the original version, if we look at the "average index" rather than the index (how many sets in expectation cover $A \cap B$ for A and B chosen at random from F): Take a sequence of $F_i$'s of length t for our original problem; wlog you may assume that all $F_i$'s have approximately the same size; and then take their union. For an average A and B from the union, there are $\Theta(t)$ many levels between them in the original sequence, so we get an average index of at least $\Theta(t)$.

• gowers says: Here's a quick calculation. Suppose we could find a Steiner system with parameters $(r,s,n).$ That is, F is a collection of $s$-sets such that each $r$-set is a subset of exactly one set in F. Then $t$ would be the number of $s$-sets containing each $(r-1)$-set. Unfortunately that works out as $\binom n{r-1}/\binom n{r-2},$ which is unhelpful. So it looks as though F can't be too homogeneous. Another question that occurs to me is that if you are asking for something even more drastic than I asked for, then what does Terry's argument from this comment say?

• gowers says: Sorry, that should have been $\binom nr/\binom n{r-1}.$

• noamnisan says: Right. So let me rephrase (slightly generalize) the impossibility of having a family with a super-linear index: let r be the largest size of an intersection of two sets from F. There can be at most n-r sets that contain this intersection, since otherwise two of them will share an additional item, giving a larger intersection and hence a contradiction. I wonder if this logic can be generalized to the "average index", thereby proving an upper bound on the original problem.

• gowers says: If we take all sets, then almost all intersections have size about $n/4,$ so the average index is exponentially large. (Or else I have misunderstood the question.)

• noamnisan says: Yes, this indeed doesn't make sense.

58. Gabor Tardos says: Small inconsistency in the wiki: it says the $r$-shadows form a convex sequence. This should be qualified as to apply if each set has size $\ge r$. Otherwise $\{\{1,2\}\},\{\{2\}\},\{\{2,3\}\}$ is convex, but its 2-shadow is not.

59. Gabor Tardos says: This is an attempt to "fix the multiset issue" in Terence Tao's construction where $F_i$ consists of the multisets $\{a_1,\ldots,a_d\}$ with $\sum a_j=i+d-1$. I start with the $d=2$ case. I introduce a new repetition symbol x and use the set $\{j,x\}$ instead of the multiset $\{j,j\}$.
Now this is not working as $x$ will be in the support of every other $F_i$. So I break it up and use a different repetition symbol in different intervals for $j$. One can simply use the same symbol $x$ for $k\le j \le 2k-1\le n$ and "fill in" with pairs $\{j,x\}$ for $j\le k-1$ in between. This means cca $2\log n$ extra symbols (can be brought down to cca $\log n$) and the estimate $f(2,n)\ge2n-O(\log n)$. Example: $\{1x\},\{12\},\{13,2y\},\{14,23,1y\},\{15,24,3y\},\{16,25,34\},\{26,35,4z\},\{36,45,6z\},\{46,5z\},\{56\},\{6t\}$.

• Klas Markström says: For the example there will be a tiny improvement in the bound if you simply delete the final (6,6) pair instead of introducing a new repetition symbol. But that is just an additive constant.

60. Gabor Tardos says: In the general case $d\ge3$ one starts with introducing $d-1$ repetition symbols, where $x_i$ is to be interpreted as "elements number $i$ and number $i+1$ are equal in the increasing ordering of Tao's multiset". As before, this is not good as is. I interpret the convexity condition as "the families containing a set containing $S$ must form an interval for every set $S$". This is meaningful for $|S|\le d-1$, so for $d=2$ we had only to worry about the supports; now we have to worry about 2-shadows and higher too. But I still believe that with splitting the new symbols into a logarithmic number of intervals this is doable. If so, it would give $f(d,n)\ge dn-O(d\log d)$.

• Paco Santos says: Gabor, I guess you mean $f(d,n)\ge dn-O(d\log n)$. Your bound, as stated, specializes to $f(2,n) \ge 2n -O(1)$ …

61. Gil Kalai says: I should have said it before, but there is a new thread of comments: http://gilkalai.wordpress.com/2010/10/03/polymath-3-the-polynomial-hirsch-conjecture-2/

62. Pingback: Polymath3 : Polynomial Hirsch Conjecture 3 « Combinatorics and more

63. Warren D. Smith says: One idea I had a long time ago about the Hirsch conjecture and also about linear programming (which I never got anywhere with) is this. It's also an interesting problem in its own right. We want to walk from vertex A to vertex B with few steps. Set up a linear objective function F that is maximized at B. Now walk using the simplex method from A to some neighboring vertex, CHOOSING AMONG NEIGHBORS THAT INCREASE F, UNIFORMLY AT RANDOM. (The "random simplex algorithm.") The question is, what is the expected number of steps before you reach B? And can it be bounded by polynomial(#dimensions, #facet-hyperplanes)? Note, this randomness seems to defeat all (?) the constructions by Klee, Minty, & successors of linear programming problems with immensely long runtimes.

64. Gil Kalai says: Dear Warren, indeed this is a good idea, and perhaps it can be traced to the early days of linear programming. It was an open problem whether there are examples on which this method, called RANDOM EDGE, is not polynomial. This was resolved very recently, and just a few days ago I had a post about it: http://gilkalai.wordpress.com/2010/11/09/subexponential-lower-bound-for-randomized-pivot-rules/

65. Warren D. Smith says: Aha… great. I was about to write all sorts of elementary facts about the random simplex algorithm for your enjoyment… but since you now tell me about the Friedmann-Hansen-Zwick paper, I'll refrain. This paper looks quite astonishing (I mean, they are bringing in what would seem, at first sight, to be totally unrelated garbage, then using it as tools). It also seems highly devastating/painful to the simplex algorithm world and to any hopes of proving polynomial Hirsch.
http://www.daimi.au.dk/~tdh/papers/random_edge.pdf

• Paco Santos says: Commenting on Warren's "It also seems highly devastating/painful to the simplex algorithm world and to any hopes of proving polynomial Hirsch": I agree with the first part but not with the second part. Here is an optimistic (for the purposes of this polymath project) reading of the Friedmann-Hansen-Zwick paper: Since the lower bound(s) they prove for RANDOM EDGE ($2^{\Omega(n^{1/4})}$) and RANDOM FACET ($2^{\Omega(\sqrt{n}/\log^c n)}$) are actually higher than the upper bounds we know for polytope diameters ($2^{O(\log^2 n)}$), their result should have no effect on our beliefs about polynomiality of diameters. Rather, it only confirms the fact that getting diameter bounds and getting pivot-rule bounds are fundamentally different problems.

• Gil Kalai says: In the recent examples, the polytopes themselves are, I believe, combinatorially isomorphic to cubes. Earlier, for the abstract setting, such lower bounds were achieved by Matousek (RANDOM FACET) and Matousek and Szabo (RANDOM EDGE). Also there the polytope was combinatorially the cube.

66. Warren D. Smith says: It would be nice to have a high-level description of what is going on in the Friedmann-Hansen-Zwick paper. It contains a forest of details, but it's pretty hard to feel confident they know what they are doing, if you don't digest all the details.

67. Pingback: A combinatorial version for the polynomial Hirsch conjecture | Q&A System
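[Editorial note appended to this thread: Gabor Tardos's explicit $d=2$ example from comment 59 can be verified mechanically. Below is a minimal Python sketch of my own (the families are my transcription of his list; the check itself is not part of the original discussion). It confirms both the disjointness condition (iv) and the convexity condition (*).]

```python
from itertools import combinations

F = [{"1x"}, {"12"}, {"13", "2y"}, {"14", "23", "1y"}, {"15", "24", "3y"},
     {"16", "25", "34"}, {"26", "35", "4z"}, {"36", "45", "6z"},
     {"46", "5z"}, {"56"}, {"6t"}]
F = [[set(s) for s in fam] for fam in F]      # each 2-set as a Python set

# (iv): all sets are distinct across families
all_sets = [frozenset(s) for fam in F for s in fam]
assert len(all_sets) == len(set(all_sets))

# (*): for i < j < k, every A in F_i and B in F_k have A ∩ B inside some C in F_j
for i, j, k in combinations(range(len(F)), 3):
    for A in F[i]:
        for B in F[k]:
            assert any(A & B <= C for C in F[j]), (i, j, k, A, B)

print(f"{len(F)} families on {len(set().union(*all_sets))} symbols pass (*)")
# -> 11 families on 10 symbols pass (*)
```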
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 597, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9469373226165771, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?t=420820
## Why time is a separate physical quantity

Hi! I have a question about time. I don't understand why everyone thinks of time as some coordinate like x, y, z. I don't get it. From my point of view, time as some physical quantity doesn't exist, because there is only moving matter, and we can talk about time by observing changes in that matter. So we can talk about time only when the hand of a clock moves from one position to another. And time dilation is just a different type of matter movement. And so time travel is impossible, because one must return all matter to its previous position. I really don't understand why we talk about time as some existing entity. I suppose I have some major flaw in my understanding. So can anyone explain these issues to me, or point to some books or articles? Thank you

Quote by ehpc And time dilation is just a different type of matter movement. And so time travel is impossible, because one must return all matter to its previous position.

What does this mean?

Quote by Pengwuino What does this mean?

What exactly don't you understand?

What do you mean by "time dilation is just another matter movement"?

I don't know if I can say it right. One says that time slows down when an object's velocity approaches the speed of light. And I don't understand why we are talking about 'time'. Why not say that particles start to move more slowly, and because of that we perceive time differently? But my main point is: why are we talking about time like any other real quantity, like mass or position? What is the basis for that assumption? Why does one think that time actually exists? I can only see that particles/atoms/whatever are changing their positions, and because of that changing we can talk about time. But time doesn't really exist. I'm not trying to prove my point, I just want to understand where I'm going wrong. I hope you understand what I am talking about.

The particles don't begin to move slowly though. In classical mechanics, we define some abstract concept called "time" as some parameter that we can have as an invariant quantity. This is invariant because, going between different reference frames, we always have this time ticking away at the same rate, whereas other quantities change. Kinetic energy typically changes going from one inertial frame to the next. Velocity changes as well. Time doesn't. However, this doesn't jibe well with observations of high-speed systems. Enter relativity, and it helps explain everything we know to be true today. If you're really looking for a concrete example, look at high energy physics. Particles have lifetimes of $$10^{-6}\ \mathrm{s}$$, a pure "time" quantity. However, when accelerated to near the speed of light, they take far, far longer to decay, which allows us to actually do particle collisions and high energy physics. Time being an invariant quantity would not allow this.

Quote by ehpc Hi! I don't understand why everyone thinks of time as some coordinate like x, y, z. I don't get it. From my point of view, time as some physical quantity doesn't exist, because there is only moving matter, and we can talk about time by observing changes in that matter.
So we can talk about time only when the hand of a clock moves from one position to another. I suppose I have some major flaw in my understanding. So can anyone explain these issues to me, or point to some books or articles? Thank you

Time has existence even if there is no significant movement. You see, time is not something that you see on the clock. Einstein imagined time as a dimension itself, so yes, there is a flaw in your understanding. Time is the most essential "ingredient" for any process to proceed in the universe: if there is no time there is no movement; time allows movement. Time is not dependent on the movement of an object for its description, it is the other way around. What you see on the clock is actually the mathematical evaluation of time. Clocks don't create time, they are measuring tools of time; if there were no clocks, time would still have an existence.

Time keeps everything in the universe from happening all at once. It is inexorably tied to the finite speed of light and clearly has a place as a coordinate system in astrophysics. A universe without a time dimension is irrelevant.

FizixFreak, yes, that's exactly what I'm talking about. I know this point of view. But I don't understand it at all. What is the basis for such assumptions? Why does one think that time exists without movement? Is there some experimental data that proves that time is actually another dimension? Or maybe a theoretical explanation? Chronos, why is there a need for such a thing as time to keep everything from happening at once? Isn't it sufficient that there is matter (atoms and stuff) that moves at different speeds, thus creating different states one after another? Because speed is finite (not infinite), there is no possible way for all these states to happen at once. They just follow one after another. Where am I wrong? And please consider answering my questions to FizixFreak.

What is 'speed' without time?

Quote by Chronos What is 'speed' without time?

OK, you made me think. We define speed through distance and time. That's odd. But what is time, anyway? I thought of it as changes in matter. So there is no 'time' at all. But FizixFreak says that time exists even if matter is not moving. So it's a kind of 4th dimension. But is there any scientific proof of time existing as a separate entity? Has anyone made such an effort? Has anyone explained in detail what time is and why it is what it is? Or do we just rely on some intuitive understanding of 'time'? Speed without time: how about, speed is just a movement of matter perceived by other matter (aka an observer). So we can talk about speed just because we participate in that movement ourselves. If nobody is watching, is there any speed at all? Yes, it is messy, but I hope you'll get my point. And I'm hoping there is someone out there who is as confused as I am, so he can contribute to my questions...

Quote by DaleSpam And what does the matter change with respect to? Do you understand the mathematical concept of a derivative, which encapsulates the idea of changing?

Quote by ehpc OK, you made me think. We define speed through distance and time. That's odd. But what is time, anyway? I thought of it as changes in matter. So there is no 'time' at all. But FizixFreak says that time exists even if matter is not moving. So it's a kind of 4th dimension. But is there any scientific proof of time existing as a separate entity?
Has anyone made such an effort? Has anyone explained in detail what time is and why it is what it is? Or do we just rely on some intuitive understanding of 'time'? Speed without time: how about, speed is just a movement of matter perceived by other matter (aka an observer). So we can talk about speed just because we participate in that movement ourselves. If nobody is watching, is there any speed at all? Yes, it is messy, but I hope you'll get my point.

Time is generally not considered separate from space. In fact, they are considered to be part of the same thing, space-time. Maybe this will help. Consider the following example: Particle A is at position (x,y,z). Particle B interacts with particle A. The interaction causes Particle A to move to (x2,y2,z2). Note that I didn't use any time words. I just used the word "causes." By using this word, you understand that one event happens before the other. If there is no time, how can I know what happened first? Without time, A and B are simply particles moving in space. With time, it seems like B causes A to move. Time helps us to understand how events happen in sequence. This helps us to understand which events are the results of other events. We need a certain frame of reference in which to describe events. It turns out that we can describe many events with an arbitrary number of spatial dimensions, but we also need time for these events to make much sense to us. I don't claim to understand time. We're in the same boat here. These are just my thoughts. I hope I explained them clearly enough.

Quote by DaleSpam And what does the matter change with respect to? Do you understand the mathematical concept of a derivative, which encapsulates the idea of changing?

DaleSpam, yes, I understand derivatives and I know that v = ds/dt. But my explanation was descriptive, not formal. To make it clear, can anyone answer these questions: 1. Is there some experimental data that proves that time is actually another dimension? Or maybe a theoretical explanation? 2. Has anyone (I mean a scientist) explained in detail what time is and why it is what it is? Or do we just rely on some intuitive understanding of 'time'? I don't think that these questions are so hard to answer, if there is an answer to them, of course. adaptation, thank you for your explanation. But then I have another question. Does time actually exist? Or is it just how we perceive things? Is there a 'time' without us? Has any scientist answered such questions?
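[Editorial note on Pengwuino's particle-lifetime example earlier in the thread: the effect is easy to quantify. A small numerical sketch of my own follows; the 2.2 microsecond value is the muon's measured mean lifetime, used as a stand-in for the "$10^{-6}$ s" figure above.]

```python
import math

c = 299_792_458.0          # speed of light, m/s
tau = 2.2e-6               # proper (rest-frame) mean lifetime, s

for beta in (0.9, 0.99, 0.999):
    gamma = 1.0 / math.sqrt(1.0 - beta**2)   # Lorentz factor
    lab_lifetime = gamma * tau               # dilated lifetime in the lab frame
    lab_range = beta * c * lab_lifetime      # mean distance before decay
    print(f"v = {beta}c: gamma = {gamma:6.2f}, "
          f"lab lifetime = {lab_lifetime:.2e} s, range = {lab_range/1000:.1f} km")
```

If time were invariant, the particle could travel at most about $c\tau \approx 0.66$ km before decaying; with dilation at $0.999c$ ($\gamma \approx 22.4$) it travels roughly 15 km, which is what makes such particles usable in accelerator experiments.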
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9630059003829956, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/77639/left-and-right-cosets?answertab=votes
# Left and Right Cosets

Let $H$ be a subgroup of a group $G$ and let $a, b \in G$. I need to give a counterexample or proof of the following statement: If $aH = bH$, then $Ha = Hb$

Proof: For every $h \in H, ah = bh$ $\begin{align} ah &= bh \newline ahh^{-1} &= bhh^{-1} \newline a &= b \newline ha &= hb \end{align}$

Could someone critique my proof? Thanks in advance.

Edit Look at the answer below.

- 2 $aH = bH$ means for any $h_1 \in H$, there exists $h_2 \in H$ such that $ah_1 = bh_2$ and for any $h_3 \in H$, there exists $h_4 \in H$ such that $bh_3 = ah_4$. It doesn't mean $ah = bh$ for all $h \in H$. The equality holds only as sets. – user17762 Oct 31 '11 at 21:52 2 You can't assume that "For every $h\in H,ah=bh$," but rather that $ah$ is equal to some $bh'\in bH$. You can tell there's been a problem when you wind up showing that $a=b$. – AMPerrine Oct 31 '11 at 21:53 @AMPerrine: So if $ah = bh'$, I need to show $ha = h'b$ for $h, h' \in H$? – Student Oct 31 '11 at 21:56 1 @Jon: Find a counterexample. – m. k. Oct 31 '11 at 22:26 1 Note that for a counterexample you'll need a non-Abelian group. What's the simplest one that you know? – Brian M. Scott Oct 31 '11 at 22:47

## 1 Answer

Counterexample: We want to use a non-Abelian group such as $S_{3}$

$\mu_{1} = \pmatrix{1&2&3\\1 &3 &2}$, $\mu_{2} = \pmatrix{1&2&3\\3 &2 &1}$, $\mu_{3} = \pmatrix{1&2&3\\2 &1 &3}$ $\rho_{0} = \pmatrix{1&2&3\\1 &2 &3}$, $\rho_{1} = \pmatrix{1&2&3\\2 &3 &1}$, $\rho_{2} = \pmatrix{1&2&3\\3 &1 &2}$

Let $H = \{\rho_{0}, \mu_{2}\}$, $a = \mu_{1}$ and $b = \rho_{1}$

$aH = \mu_{1}\{\rho_{0}, \mu_{2}\} = \{\mu_{1}, \rho_{1}\}$ $bH = \rho_{1} \{\rho_{0}, \mu_{2}\} = \{\rho_{1}, \mu_{1}\}$

But $Ha = \{\rho_{0}, \mu_{2}\}\mu_{1} = \{\mu_{1}, \rho_{2}\}$ $Hb = \{\rho_{0}, \mu_{2}\}\rho_{1} = \{\rho_{1}, \mu_{3}\}$

So $Ha \neq Hb$

- 1 You are assuming, contrary to fact, that everyone uses the same notation for the elements of $S_3$. Please include in your answer what you mean by $\rho_0$, $\mu_2$, etc., as without that information it is impossible to evaluate your answer. – Gerry Myerson Nov 1 '11 at 0:03 @Gerry: Thanks for pointing that out. – Student Nov 1 '11 at 0:13 Small note, a subgroup $H$ has the property "If $aH = bH$, then $Ha = Hb$" if and only if it is a normal subgroup. – m. k. Nov 1 '11 at 9:01
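[Editorial note: the accepted counterexample is easy to verify by machine. Below is a minimal Python sketch of my own (not part of the original answer), with permutations encoded as 0-indexed tuples where p[i] is the image of i, and mul(p, q) meaning "apply q, then p", matching the two-row notation above.]

```python
def mul(p, q):
    # composition (p ∘ q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(q)))

rho0, rho1, rho2 = (0, 1, 2), (1, 2, 0), (2, 0, 1)   # rotations
mu1, mu2, mu3 = (0, 2, 1), (2, 1, 0), (1, 0, 2)      # transpositions

H = {rho0, mu2}
a, b = mu1, rho1

aH = {mul(a, h) for h in H}; bH = {mul(b, h) for h in H}
Ha = {mul(h, a) for h in H}; Hb = {mul(h, b) for h in H}

print(aH == bH)   # True:  aH = bH = {mu1, rho1}
print(Ha == Hb)   # False: Ha = {mu1, rho2}, Hb = {rho1, mu3}
```

The two printed lines confirm that $aH = bH$ while $Ha \neq Hb$, exactly as computed by hand in the answer.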
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 44, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8980750441551208, "perplexity_flag": "head"}
http://mathoverflow.net/questions/52353/wanted-chain-of-nowhere-dense-subsets-of-the-real-line-whose-union-is-nonmeagre
## Wanted: chain of nowhere dense subsets of the real line whose union is nonmeagre, or even contains intervals

Let $X$ be a topological space. When I call a set nowhere dense, meagre or similar without qualification, I mean that it has this property as a subset of $X$. Call a subset of $X$ weagre (for weakly meagre) if it is the union of a chain (wrt containment) of nowhere dense sets. Using that finite unions of nowhere dense sets are nowhere dense, it is easy to see that meagre implies weagre. Call $X$ an Astaire space (for a stronger Baire space) if every weagre subset of $X$ has empty interior. Obviously Astaire implies Baire. Two natural, but rather silly (not just because of the terminology) questions are: Does weagre imply meagre? If not, does Baire imply Astaire? Unsurprisingly, the 2nd (and hence also the 1st) question has a negative answer. Let $X$ be uncountable. In fact, for convenience, take $X$ to be the well-ordered set of all countable ordinals. Topologize $X$ by putting open all sets which are either empty or have countable complement. Then $X$ is a Baire space - in fact the notions countable; closed and not $X$; nowhere dense; and meagre all coincide for subsets of $X$. However, $X$ is the union of the chain of all its countable initial segments, so $X$ is not an Astaire space. The above example is somewhat unsatisfactory since the space is far from Hausdorff, but the ease with which it arose made me wonder whether my question had a positive answer even when $X = \mathbb{R}$. Adapting my example, it is at least possible to express an uncountable subset of $\mathbb{R}$ as the union of a chain of countable subsets of $\mathbb{R}$, but this is quite unhelpful because, in this context, there is no guarantee that countable implies nowhere dense, or that uncountable implies nonempty interior (or even nonmeagre for that matter). So that I don't spend too much more time today thinking about things I know nothing about and/or dreaming up silly names for concepts that probably already have much more respectable names - I pose to you the following question: Is the real line an Astaire space? If not, are there at least weagre subsets of $\mathbb{R}$ which are not meagre? Or, in plain English for those of you who only skimmed this nonsense: Does there exist a chain of nowhere dense subsets of $\mathbb{R}$ whose union has nonempty interior? If not, is there such a chain whose union is not meagre? Thank you, Michael.

- 1 Andres, I do not see how this resolves my question. Although your set $T_\alpha$ is the union of a chain of sets with empty interior, this does not imply it is the union of a chain of nowhere dense sets. The rationals could very well precede $T_\alpha$ in your ordering but are certainly not nowhere dense. – Michael Jan 17 2011 at 23:20 2 Assuming CH, the real line is a union of a chain of finite sets, since this is true for the first uncountable ordinal. So the answer is yes in that case. – George Lowther Jan 18 2011 at 0:08 George, I think you mean to say that under CH, the real line is the union of a chain of countable sets. (Every union of a chain of finite sets can be seen to be countable.) – Joel David Hamkins Jan 18 2011 at 0:15 @Joel: yes, sorry, that was a dumb mistake. Not sure about my second comment now – George Lowther Jan 18 2011 at 0:18

## 2 Answers

Theorem. There is no chain of nowhere dense subsets of $\mathbb{R}$ whose union contains an interval.
Proof. Suppose there was such a chain $\{\ B_i \mid i\in I\ \}$, where $\langle I,\lt\rangle$ is a linear order and $i\lt j$ implies $B_i\subset B_j$. First, I claim that this chain cannot have countable cofinality, since then we could find a countable cofinal subset of $I$, and the union of the $B_j$ from this cofinal subset would also contain an interval, violating the Baire category theorem. So every countable subset of $I$ is bounded. In this case, consider the set $Q$ of rational numbers $q$ in the interval contained in the union $\bigcup_i B_i$. Each of them appears in some $B_{i_q}$, and the set of all $i_q$ for $q\in Q$ is a countable set and hence bounded in $I$. Thus, there is some $j\in I$ beyond all $i_q$. So $B_j$ contains all those $q$ and thus is not nowhere dense. QED

Edit. As George pointed out in the comments, essentially the same argument establishes the full property:

Theorem. There is no chain of nowhere dense subsets of $\mathbb{R}$ whose union is non-meager.

Proof. Suppose that $B_i$ for $i\in I$ is a chain of nowhere dense sets whose union $\bigcup_i B_i$ is non-meager. Again, we see that every countable subset of $I$ must be bounded, for otherwise the union would be a countable union of nowhere dense sets and hence meager. Since $\bigcup_i B_i$ is non-meager, it must be dense on an interval, and so it must have a countable subset $Q$ that is also dense on an interval. By the argument above, since $I$ has uncountable cofinality, this set $Q$ must be in some $B_j$ for large enough $j\in I$, contradicting that $B_j$ is nowhere dense. QED

- Oh, that's nice! – Andres Caicedo Jan 18 2011 at 0:18 Yes! This is great, thank you. – Michael Jan 18 2011 at 0:23 Nice. Doesn't this give a full answer to the question? That is, the union of a chain of nowhere dense sets is always meagre? I.e., suppose it isn't, so its union is not nowhere dense, and replace $\mathbb{Q}$ in your argument by any countable dense subset of the union. – George Lowther Jan 18 2011 at 0:27 Yes, George, you are right! I will edit. – Joel David Hamkins Jan 18 2011 at 0:52 @Joel: Your nice argument is very general. What can you say if the sets are meager (rather than nowhere dense)? – Andres Caicedo Jan 18 2011 at 1:55

(Joel's answer appeared as I was typing this.) I think the answer is no. Suppose to the contrary there exists a nonmeager set $A \subset \mathbb{R}$ which is the union of some chain $\{K_i\}_{i \in I}$ of nowhere dense sets. $A$ is separable, so we may enumerate a countable dense set $\{x_n\} \subset A$. Then we can find an increasing sequence $\{K_{i_n}\}$ with $x_n \in K_{i_n}$. Set $K = \bigcup_n K_{i_n}$. Since $K$ is meager, $K \ne A$, so there exists $x \in A \backslash K$. Now there must be some $K_j$ with $x \in K_j$. Now for each $n$ we certainly don't have $K_j \subset K_{i_n}$, so we must have $K_{i_n} \subset K_j$ since the $K_i$ are a chain. Thus $K \subset K_j$, but then $K_j$ contains all the $x_n$ and so is not nowhere dense.

Added: This indeed shows that a second countable Baire space cannot be the union of a chain of nowhere dense subsets. Here is a stab at a counterexample in the non-second countable case. Consider the non-separable complete metric space $\ell^\infty = \ell^\infty(\mathbb{N})$. I claim its Hamel dimension $\dim \ell^\infty$ is $\mathfrak{c}$.
First, we have the natural inclusion $\ell^1 \subset \ell^\infty$; $\ell^1$ is a separable Banach space, so it is known that $\ell^1$ has Hamel dimension $\mathfrak{c}$, and thus $\dim \ell^\infty \ge \mathfrak{c}$. On the other hand, $\ell^\infty$ is the continuous dual of $\ell^1$, and thus is naturally included into the algebraic dual of $\ell^1$, which must also have Hamel dimension $\mathfrak{c}$; thus $\dim \ell^\infty \le \mathfrak{c}$. By Schroeder-Bernstein, $\dim \ell^\infty = \mathfrak{c}$. Now suppose we believe the continuum hypothesis $\mathfrak{c} = \aleph_1$. Pick a Hamel basis $B$ for $\ell^\infty$; since it is in bijection with the least uncountable ordinal, we can well-order it in such a way that for any $x \in B$, $B_x = \{y \in B : y < x\}$ is countable. Note $B$ has no greatest element, so $\bigcup_{x \in B} B_x = B$. Let $E_x = \mathrm{span}\, B_x$; clearly $\{E_x\}$ is a chain, and $\bigcup_{x \in B} E_x = \ell^\infty$. But each $E_x$ has countable Hamel dimension and therefore is separable, so it must be nowhere dense in $\ell^\infty$.

- Nice answer too, along the same lines as Joel's, and works for any separable space with the Baire property. Interesting. – George Lowther Jan 18 2011 at 0:44 ...or should that just be "any second countable space with the Baire property"? – George Lowther Jan 18 2011 at 0:47 Yeah, second countable looks like a better condition than separable, since a subset of a separable space need not in general be separable. – Nate Eldredge Jan 18 2011 at 0:59 I'll reiterate while convincing myself. If $A$ is nonmeagre then, by definition of meagre rather than the Baire Category theorem, any countable subchain is bounded. Certainly $A$ is not nowhere dense so its closure contains an interval. $\mathbb{R}$ is 2nd countable, so $A$ is 2nd countable and hence separable (I'm not sure that separability always passes to subspaces directly). Let $C \subset A$ be countable and dense in $A$. $C$ is in the union of a countable subchain, so one of our supposedly nowhere dense $K_i$ contains $C$ - contradiction since $C$ is not nowhere dense... Looks good! – Michael Jan 18 2011 at 1:00 So, I suppose one concludes that, in a 2nd countable space, a set is meagre if and only if it is the union of some (possibly uncountable) chain of nowhere dense sets? – Michael Jan 18 2011 at 1:06
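[Editorial note: to record the conclusion the last comment is circling around, here is a short write-up of my own; it merely combines the two answers above and is not part of the original page.]

```latex
\begin{corollary}
Let $X$ be a second countable topological space and $A \subseteq X$.
Then $A$ is meagre if and only if $A$ is the union of a (possibly
uncountable) chain of nowhere dense sets.
\end{corollary}
\begin{proof}[Proof sketch]
($\Rightarrow$) Write $A = \bigcup_n N_n$ with each $N_n$ nowhere dense,
and set $K_n = N_1 \cup \dots \cup N_n$; finite unions of nowhere dense
sets are nowhere dense, and the $K_n$ form a chain with union $A$.
($\Leftarrow$) If $A = \bigcup_i K_i$ were nonmeagre, run the argument
above: pick a countable dense subset $\{x_n\}$ of $A$ (second
countability passes to $A$), a subchain $K_{i_n} \ni x_n$, some
$x \in A \setminus \bigcup_n K_{i_n}$, and a $K_j \ni x$; then
$K_j \supseteq \{x_n\}$, whose closure contains $\overline{A}$, which
has nonempty interior because $A$ is not nowhere dense, contradicting
that $K_j$ is nowhere dense.
\end{proof}
```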
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 107, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9364843964576721, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-algebra/67161-help-me-invariant-subspace-please-print.html
# help me on invariant subspace please

• January 7th 2009, 05:12 AM Kat-M help me on invariant subspace please Let T denote a linear operator on a vector space V. Suppose that every subspace of V is invariant under T. Prove that T is a scalar multiple of the identity map.

• January 7th 2009, 06:54 AM Opalg Quote: Originally Posted by Kat-M Let T denote a linear operator on a vector space V. Suppose that every subspace of V is invariant under T. Prove that T is a scalar multiple of the identity map. Step 1: Let x be a nonzero vector in V. Then the one-dimensional subspace spanned by x is invariant under T. So $Tx = c_xx$, for some scalar $c_x$ that (possibly) depends on x. Step 2: Let x and y be linearly independent vectors in V. Then $Tx = c_xx,\ Ty = c_yy, \ T(x+y) = c_{x+y}(x+y)$. Deduce from this that $c_x = c_y$. Thus the constant c is in fact the same for every vector, and hence T is c times the identity.

• January 7th 2009, 07:57 AM Kat-M thanks Thank you very much. You really helped me.
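[Editorial note: for readers who want Step 2 spelled out, the deduction Opalg leaves as an exercise is a one-line computation; this filling-in is my own, not part of the original thread.]

```latex
T(x+y) = Tx + Ty
\;\Longrightarrow\; c_{x+y}(x+y) = c_x x + c_y y
\;\Longrightarrow\; (c_{x+y}-c_x)\,x + (c_{x+y}-c_y)\,y = 0,
```

and since $x$ and $y$ are linearly independent, both coefficients vanish, giving $c_x = c_{x+y} = c_y$. (If $y = \lambda x$ is dependent on $x$, then $Ty = \lambda Tx = c_x y$ automatically, so the constant is indeed the same for every nonzero vector.)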
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9282931089401245, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/43511/force-of-a-particles-on-a-potential-barrier
# Force of a particle on a Potential Barrier [closed]

A particle confined by a potential wall exerts some pressure on it. More specifically, suppose that the particle moves in this potential: $$V(x) ~=~\left\{ \begin{array}{lcc}\text{finite function}&\text{if}& x > b, \\ V_0 &\text{if} &x < b,\end{array}\right.$$ where $V_0 \to \infty$. In the limit of an infinitely high wall, $\psi(b) = 0$, and the force $F$ depends on $\psi'(b)$. The exact expression can be found in the following way: Derive the expression for $F$ without making any ad hoc assumptions. Let $V_0$ be very large but finite. In this case, you can find the form of the wavefunction $\psi(x)$ in a small neighborhood of $b$ knowing only $\psi'(b)$. Now suppose that the wall shifts by an infinitely small distance $\delta b$. Consider the corresponding change of the potential, $\delta V(x)$, and calculate the variation of energy. I have tried solving this several other ways, such as $F = -\frac{d E_n}{dL}$ in an infinite square well of length $L$, but I can't get the above to work out.

- 1 – Qmechanic♦ Nov 5 '12 at 23:24 1 Are you in the same class, maybe, with the above poster? – Dylan Sabulsky Nov 6 '12 at 3:45 2 – kendr Nov 6 '12 at 7:50 2 @kendr, thanks for pointing that out. Michael, this is not a site for getting people to do your homework problems or especially exam problems for you. – David Zaslavsky♦ Nov 7 '12 at 17:28

## closed as too localized by David Zaslavsky♦ Nov 7 '12 at 17:25

This question is unlikely to help any future visitors; it is only relevant to a small geographic area, a specific moment in time, or an extraordinarily narrow situation that is not generally applicable to the worldwide audience of the internet. For help making this question more broadly applicable, see the FAQ.
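[Editorial note: although the question is closed, the consistency check the poster attempted can be recorded. The sketch below uses the standard infinite-square-well formulas, which are my addition and not part of the problem statement: $E_n = n^2\pi^2\hbar^2/(2mL^2)$ and $\psi_n(x) = \sqrt{2/L}\,\sin(n\pi x/L)$, so that $|\psi_n'(L)|^2 = (2/L)(n\pi/L)^2$.]

```latex
F \;=\; -\frac{dE_n}{dL}
\;=\; \frac{n^2\pi^2\hbar^2}{mL^3}
\;=\; \frac{\hbar^2}{2m}\cdot\frac{2n^2\pi^2}{L^3}
\;=\; \frac{\hbar^2}{2m}\,\bigl|\psi_n'(L)\bigr|^2 .
```

This is consistent with the claim that the force depends only on the derivative of the wavefunction at the wall, and suggests the expected general answer $F = \frac{\hbar^2}{2m}\,|\psi'(b)|^2$ in the $V_0 \to \infty$ limit.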
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9422656893730164, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-algebra/192461-only-non-abelian-simple-group-order-60-a_5-a.html
Thread: 1. The only non-abelian simple group of order 60 is A_5 Hi, assuming that a simple non-abelian group $G$ of order 60 can be embedded in $S_6$, I want to prove that $G$ cannot contain an odd permutation and is therefore a subgroup of $A_6$ (and then from that I can deduce that as it is order 60 it is $A_5$). How would I go about showing this? 2. Re: The only non-abelian simple group of order 60 is A_5 Originally Posted by alsn Hi, assuming that a simple non-abelian group $G$ of order 60 can be embedded in $S_6$, I want to prove that $G$ cannot contain an odd permutation and is therefore a subgroup of $A_6$ (and then from that I can deduce that as it is order 60 it is $A_5$). How would I go about showing this? Consider the sign map $\text{sgn}:G\to \mathbb{Z}_2$ where we're thinking of $G\hookrightarrow S_6$. Since $G$ is simple this is either zero or an embedding, and since $60\nmid 2$ I'm pretty sure it's clear that the map is zero. But, this tells us precisely that every element of $G$ is even. 3. Re: The only non-abelian simple group of order 60 is A_5 Originally Posted by Drexel28 Consider the sign map $\text{sgn}:G\to \mathbb{Z}_2$ where we're thinking of $G\hookrightarrow S_6$. Since $G$ is simple this is either zero or an embedding, and since $60\nmid 2$ I'm pretty sure it's clear that the map is zero. But, this tells us precisely that every element of $G$ is even. ahhh okay. so since all the elements of G map to zero and none map to 1, does this mean that all combinations of elements of G are even? think i get it. 4. Re: The only non-abelian simple group of order 60 is A_5 Are you intending to make a distinction between "elements of G" and "combinations of elements of G"? The nature of a group is that combinations of elements are just elements themselves. 5. Re: The only non-abelian simple group of order 60 is A_5 Originally Posted by alsn ahhh okay. so since all the elements of G map to zero and none map to 1, does this mean that all combinations of elements of G are even? think i get it. Like Tinyboss points out, this doesn't make sense per se. What I proved was that every element of $G$'s embedding in $S_6$ has even sign and so the embedding lives inside $A_6$.
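[Editorial note: to spell out the step compressed into "$60\nmid 2$" in the first reply, here is my own expansion of the kernel argument.]

```latex
\ker(\mathrm{sgn}) \trianglelefteq G, \quad G \text{ simple}
\;\Longrightarrow\; \ker(\mathrm{sgn}) = \{e\} \text{ or } \ker(\mathrm{sgn}) = G.
```

If the kernel were trivial, $\mathrm{sgn}$ would embed $G$ into $\mathbb{Z}_2$, forcing $|G| = 60 \le 2$, which is absurd. Hence the kernel is all of $G$, i.e. $\mathrm{sgn}(g) = 0$ for every $g$, which is precisely the statement that every element of $G$'s image in $S_6$ is an even permutation, so the image lies in $A_6$.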
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 23, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9504854083061218, "perplexity_flag": "head"}
http://www.digplanet.com/wiki/Integer_factorization
Integer factorization

"Prime decomposition" redirects here. For the prime decomposition theorem for 3-manifolds, see Prime decomposition (3-manifold).

Unsolved problem in computer science: Can integer factorization be done in polynomial time?

In number theory, integer factorization or prime factorization is the decomposition of a composite number into smaller non-trivial divisors, which when multiplied together equal the original integer. When the numbers are very large, no efficient, non-quantum integer factorization algorithm is known; an effort concluded in 2009 by several researchers factored a 232-digit number (RSA-768), utilizing hundreds of machines over a span of 2 years.[1] The presumed difficulty of this problem is at the heart of widely used algorithms in cryptography such as RSA. Many areas of mathematics and computer science have been brought to bear on the problem, including elliptic curves, algebraic number theory, and quantum computing.

Not all numbers of a given length are equally hard to factor. The hardest instances of these problems (for currently known techniques) are semiprimes, the product of two prime numbers. When they are both large, for instance more than 2000 bits long, randomly chosen, and about the same size (but not too close, e.g. to avoid efficient factorization by Fermat's factorization method), even the fastest prime factorization algorithms on the fastest computers can take enough time to make the search impractical; that is, as the number of digits of the primes being factored increases, the number of operations required to perform the factorization on any computer increases drastically. Many cryptographic protocols are based on the difficulty of factoring large composite integers or a related problem, the RSA problem. An algorithm that efficiently factors an arbitrary integer would render RSA-based public-key cryptography insecure.

Prime decomposition

This image demonstrates the prime decomposition of 864. A shorthand way of writing the resulting prime factors is $2^5 \times 3^3$.

By the fundamental theorem of arithmetic, every positive integer has a unique prime factorization. (A special case for 1 is not needed using an appropriate notion of the empty product.) However, the fundamental theorem of arithmetic gives no insight into how to obtain an integer's prime factorization; it only guarantees its existence. Given a general algorithm for integer factorization, one can factor any integer down to its constituent prime factors by repeated application of this algorithm. However, this is not the case with a special-purpose factorization algorithm, since it may not apply to the smaller factors that occur during decomposition, or may execute very slowly on these values. For example, if $N$ is the number $(2^{521} - 1) \times (2^{607} - 1)$, then trial division will quickly factor $10N$ as $2 \times 5 \times N$, but will not quickly factor $N$ into its factors.
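A minimal sketch of the "repeated application" idea in the last paragraph, with trial division standing in for the general algorithm (illustrative code, not part of the original article):

```python
def prime_factors(n):
    """Factor n by repeatedly splitting off its smallest divisor.
    Trial division is slow in general; the point here is the loop
    structure of repeatedly applying one factoring step."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # the remaining cofactor is prime
    return factors

assert prime_factors(864) == [2] * 5 + [3] * 3  # 2^5 x 3^3, as in the figure
```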
Current state of the art

See also: integer factorization records

The most difficult integers to factor in practice using existing algorithms are those that are products of two large primes of similar size, and for this reason these are the integers used in cryptographic applications. The largest such semiprime yet factored was RSA-768, a 768-bit number with 232 decimal digits, on December 12, 2009.[1] This factorization was a collaboration of several research institutions, spanning two years and taking the equivalent of almost 2000 years of computing on a single-core 2.2 GHz AMD Opteron. Like all recent factorization records, this factorization was completed with a highly optimized implementation of the general number field sieve run on hundreds of machines.

Difficulty and complexity

If a large, $b$-bit number is the product of two primes that are roughly the same size, then no algorithm has been published that can factor it in polynomial time, i.e., that can factor it in time $O(b^k)$ for some constant $k$. There are published algorithms that are faster than $O((1+\varepsilon)^b)$ for all positive $\varepsilon$, i.e., sub-exponential. The best published asymptotic running time is for the general number field sieve (GNFS) algorithm, which, for a $b$-bit number $n$, is:

$$O\left(\exp\left(\left(\tfrac{64}{9}\, b\right)^{1/3} (\log b)^{2/3}\right)\right).$$

For an ordinary computer, GNFS is the best published algorithm for large $n$ (more than about 100 digits). For a quantum computer, however, Peter Shor discovered an algorithm in 1994 that solves it in polynomial time. This will have significant implications for cryptography if a large quantum computer is ever built. Shor's algorithm takes only $O(b^3)$ time and $O(b)$ space on $b$-bit number inputs. In 2001, the first seven-qubit quantum computer became the first to run Shor's algorithm. It factored the number 15.[2]

When discussing what complexity classes the integer factorization problem falls into, it's necessary to distinguish two slightly different versions of the problem:

• The function problem version: given an integer $N$, find an integer $d$ with $1 < d < N$ that divides $N$ (or conclude that $N$ is prime). This problem is trivially in FNP and it's not known whether it lies in FP or not. This is the version solved by most practical implementations.

• The decision problem version: given an integer $N$ and an integer $M$ with $1 \leq M \leq N$, does $N$ have a factor $d$ with $1 < d < M$? This version is useful because most well-studied complexity classes are defined as classes of decision problems, not function problems. It is a natural decision version of the problem, analogous to those frequently used for optimization problems, because it can be combined with binary search to solve the function problem version in a logarithmic number of queries.

It is not known exactly which complexity classes contain the decision version of the integer factorization problem. It is known to be in both NP and co-NP. This is because both YES and NO answers can be verified in polynomial time given the prime factors (we can verify their primality using the AKS primality test, and that their product is $N$ by multiplication). The fundamental theorem of arithmetic guarantees that there is only one possible string that will be accepted (providing the factors are required to be listed in order), which shows that the problem is in both UP and co-UP.[3] It is known to be in BQP because of Shor's algorithm.
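The binary-search reduction mentioned in the decision-problem bullet above, sketched in code (the oracle here is brute force purely for illustration; the reduction itself makes $O(\log N)$ oracle queries and would work with any decision oracle):

```python
def has_factor_below(N, M):
    """Decision version: does N have a divisor d with 1 < d < M?
    (Brute-force stand-in for an arbitrary decision oracle.)"""
    return any(N % d == 0 for d in range(2, M))

def find_factor(N):
    """Function version: binary-search the smallest nontrivial
    divisor of N using O(log N) calls to the decision oracle."""
    if N < 4 or not has_factor_below(N, N):
        return None  # N is prime (or too small to have a factor)
    lo, hi = 2, N - 1  # the smallest nontrivial divisor lies in [lo, hi]
    while lo < hi:
        mid = (lo + hi) // 2
        if has_factor_below(N, mid + 1):  # is the smallest divisor <= mid?
            hi = mid
        else:
            lo = mid + 1
    return lo

assert find_factor(91) == 7 and find_factor(97) is None
```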
It is suspected to be outside of all three of the complexity classes P, NP-complete, and co-NP-complete. It is therefore a candidate for the NP-intermediate complexity class. If it could be proved that it is in either NP-complete or co-NP-complete, that would imply NP = co-NP. That would be a very surprising result, and therefore integer factorization is widely suspected to be outside both of those classes. Many people have tried to find classical polynomial-time algorithms for it and failed, and therefore it is widely suspected to be outside P.

In contrast, the decision problem "is $N$ a composite number?" (or equivalently: "is $N$ a prime number?") appears to be much easier than the problem of actually finding the factors of $N$. Specifically, the former can be solved in polynomial time (in the number $n$ of digits of $N$) with the AKS primality test. In addition, there are a number of probabilistic algorithms that can test primality very quickly in practice if one is willing to accept a vanishingly small possibility of error. The ease of primality testing is a crucial part of the RSA algorithm, as it is necessary to find large prime numbers to start with.

Factoring algorithms

Special-purpose

A special-purpose factoring algorithm's running time depends on the properties of the number to be factored or on one of its unknown factors: size, special form, etc. Exactly what the running time depends on varies between algorithms. An important subclass of special-purpose factoring algorithms is the Category 1 or First Category algorithms, whose running time depends on the size of the smallest prime factor. Given an integer of unknown form, these methods are usually applied before general-purpose methods to remove small factors.[4] For example, trial division is a Category 1 algorithm.

General-purpose

A general-purpose factoring algorithm, also known as a Category 2, Second Category, or Kraitchik family algorithm (after Maurice Kraitchik),[4] has a running time that depends solely on the size of the integer to be factored. This is the type of algorithm used to factor RSA numbers. Most general-purpose factoring algorithms are based on the congruence of squares method.

Heuristic running time

In number theory, there are many integer factoring algorithms that heuristically have expected running time

$$L_n\left[1/2,1+o(1)\right]=e^{(1+o(1))(\log n)^{\frac{1}{2}}(\log \log n)^{\frac{1}{2}}}$$

in little-o and L-notation. Some examples of such algorithms are the elliptic curve method and the quadratic sieve. Another such algorithm is the class group relations method proposed by Schnorr,[5] Seysen,[6] and Lenstra,[7] which is proven under the Generalized Riemann Hypothesis (GRH).

Rigorous running time

The Schnorr-Seysen-Lenstra probabilistic algorithm has been rigorously proven by Lenstra and Pomerance[8] to have expected running time $L_n\left[1/2,1+o(1)\right]$ by replacing the GRH assumption with the use of multipliers. The algorithm uses the class group of positive binary quadratic forms of discriminant Δ, denoted by GΔ. GΔ is the set of triples of integers (a, b, c) in which those integers are relatively prime.

Schnorr-Seysen-Lenstra Algorithm

Given is an integer n to be factored, where n is an odd positive integer greater than a certain constant. In this factoring algorithm the discriminant Δ is chosen as a multiple of n, Δ = −dn, where d is some positive multiplier. The algorithm expects that for one d there exist enough smooth forms in GΔ.
Lenstra and Pomerance show that the choice of d can be restricted to a small set to guarantee the smoothness result. Denote by PΔ the set of all primes q with Kronecker symbol $\left(\tfrac{\Delta}{q}\right)=1$. By constructing a set of generators of GΔ and prime forms fq of GΔ with q in PΔ, a sequence of relations between the set of generators and the fq is produced. The size of q can be bounded by $c_0(\log|\Delta|)^2$ for some constant $c_0$. The relations used are products of powers that equal the neutral element of GΔ. These relations are used to construct a so-called ambiguous form of GΔ, which is an element of GΔ of order dividing 2. By calculating the corresponding factorization of Δ and by taking a gcd, this ambiguous form provides the complete prime factorization of n. This algorithm has these main steps:

Let n be the number to be factored.

1. Let Δ be a negative integer with Δ = −dn, where d is a multiplier and Δ is the negative discriminant of some quadratic form.
2. Take the first t primes $p_1=2,p_2=3,p_3=5, \dots ,p_t$, for some $t\in{\mathbb N}$.
3. Let $f_q$ be a random prime form of GΔ with $\left(\tfrac{\Delta}{q}\right)=1$.
4. Find a generating set X of GΔ.
5. Collect a sequence of relations between the set X and {fq : q ∈ PΔ} satisfying: $\left(\prod_{x \in X} x^{r(x)}\right)\cdot\left(\prod_{q \in P_\Delta} f^{t(q)}_{q}\right) = 1$
6. Construct an ambiguous form (a, b, c), an element f ∈ GΔ of order dividing 2, to obtain a coprime factorization of the largest odd divisor of Δ, in which Δ = −4ac or a(a − 4c) or (b − 2a)(b + 2a).
7. If the ambiguous form provides a factorization of n, then stop; otherwise find another ambiguous form until the factorization of n is found. In order to prevent useless ambiguous forms from being generated, build up the 2-Sylow group S2(Δ) of G(Δ).

To obtain an algorithm for factoring any positive integer, it is necessary to add a few steps to this algorithm, such as trial division and the Jacobi sum test.

Expected running time

The algorithm as stated is a probabilistic algorithm as it makes random choices. Its expected running time is at most $L_n\left[1/2,1+o(1)\right]$.[8]

Notes

1. ^ a b Kleinjung et al. (2010-02-18). Factorization of a 768-bit RSA modulus. International Association for Cryptologic Research. Retrieved 2010-08-09.
2. Lieven M. K. Vandersypen et al. (2007-12-27). NMR quantum computing: Realizing Shor's algorithm. Nature. Retrieved 2010-08-09.
3.
4. ^ a b David Bressoud and Stan Wagon (2000). A Course in Computational Number Theory. Key College Publishing/Springer. pp. 168–69. ISBN 978-1-930190-10-8.
5. Schnorr, Claus P. (1982). "Refined analysis and improvements on some factoring algorithms". Journal of Algorithms 3 (2): 101–127. doi:10.1016/0196-6774(82)90012-8.
6. Seysen, Martin (1987). "A probabilistic factorization algorithm with quadratic forms of negative discriminant". Mathematics of Computation 48 (178): 757–780. doi:10.1090/S0025-5718-1987-0878705-X.
7. Lenstra, Arjen K. (1988). "Fast and rigorous factorization under the generalized Riemann hypothesis". Indagationes Mathematicae 50: 443–454.
8. ^ a b H. W. Lenstra and Carl Pomerance (July 1992). "A Rigorous Time Bound for Factoring Integers" (PDF). Journal of the American Mathematical Society 5 (3): 483–516. doi:10.1090/S0894-0347-1992-1137100-0.

References

• Richard Crandall and Carl Pomerance (2001). Prime Numbers: A Computational Perspective. Springer. ISBN 0-387-94777-9.
Chapter 5: Exponential Factoring Algorithms, pp. 191–226. Chapter 6: Subexponential Factoring Algorithms, pp. 227–284. Section 7.4: Elliptic curve method, pp. 301–313.

• Donald Knuth. The Art of Computer Programming, Volume 2: Seminumerical Algorithms, Third Edition. Addison-Wesley, 1997. ISBN 0-201-89684-2. Section 4.5.4: Factoring into Primes, pp. 379–417.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 13, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8878688216209412, "perplexity_flag": "middle"}
http://motls.blogspot.com.es/2012/08/atlas-released-10-papers-today.html?m=1
# The Reference Frame Our stringy Universe from a conservative viewpoint ## Monday, August 13, 2012 ### ATLAS released 10 papers today Update: Wow, by the end of the day, there would be 15 new papers posted but the extra five are on lead-lead collisions and will be ignored below. The ATLAS collaboration at the LHC released ten papers today: ATLAS Conf Notes Six of them are based on the $$7\TeV$$ data from 2011; four of them already use the $$8\TeV$$ data collected in 2012. But before you buy the cherished libertarian book with "ATLAS" in the name (one that happens to have stunning 2,651 reviews at amazon.com), you are surely interested in the results of these studies. Have they found something surprising? No significant deviation from the Standard Model was seen anywhere in the papers and I decided that the smaller deviations were small enough and don't deserve a detailed discussion at this moment. On the contrary, I have the feeling that some of the new papers show that some previous excesses, e.g. those with very many jets and MET, have probably been flukes. I haven't checked that it's exactly the same data that Nanopoulos and others were building their excitement upon but I do think that their excitement will diminish. The papers are full of excellent work which brings us results that confirm the integrity and legality of Nature – and the power of the human brain, especially the human brain that worked hard sometimes by the 1960s and early 1970s ;-) but many unusually competent contemporary people (and their state-of-the-art computers with sometimes new software) had to work hard to calculate the right predictions of the Standard Model for many detailed situations. The agreement between the theory and the experiment is truly impressive. It is misleading to describe the searches by saying that "those 3,000 folks have found nothing". It's more accurate to say that they're mostly searching for platinum, titanium, and perhaps also an elixir of life but during their search, they have found and collected tons of gold, silver, and other metals at places that were exactly predicted by the geologists – I mean particle theorists, of course. ;-) Given the bias and desire of most people to see hints that the brains in the 1960s and 1970s were inadequate, it's likely that the people feel disappointed. I wonder how many people read all the papers in detail. What percentage of the authors has read the full papers? What about the rest of ATLAS and the competitors at CMS? How many people outside the LHC Collaborations does so? Wouldn't it be a good idea to make the formatting of these papers more concise and unified so that they're viewed as aspects of one "megapaper" that could be presented via a universal user interface on the web? You may see that the LHC is still mostly releasing the data accumulated in 2011. It will take quite some time to evaluate the 2012 data. And there may still be many papers based on the 2011 data that are being completed – and waiting to be published. Of course, there may even be papers with exciting results based on the 2011 data. Every time the LHC quadruples the total number of collisions, the statistical significance of the signals of new physics has a chance to double. The signals (in the units of one standard deviation) may more than double or less than double (because the changes contain a random component) but the doubling of signals is the neutral expectation coming from the quadrupling of the data. 
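To make the quadrupling rule explicit, assume a simple counting experiment in which the expected signal $S$ and background $B$ both scale linearly with the integrated luminosity $L$, say $S = sL$ and $B = bL$. Then

$$\text{significance} \;\sim\; \frac{S}{\sqrt{B}} \;=\; \frac{sL}{\sqrt{bL}} \;=\; \frac{s}{\sqrt{b}}\,\sqrt{L}\,, \qquad L \to 4L \;\Longrightarrow\; \frac{S}{\sqrt{B}} \to 2\,\frac{S}{\sqrt{B}}\,,$$

which is exactly the "quadruple the data, double the significance" rule of thumb, up to the random component mentioned above. (This gloss is an editorial aside, not part of the original post.)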
So if you want a 5-sigma discovery based on the 2012 data, it's rather likely that the dataset based on the 2011-only data in the same channel should already show at least a 2-sigma deviation from the Standard Model. Or if I use the inverse language: if there's no 2-sigma deviation seen in the 2011 data, it is rather likely that there won't be any 5-sigma discovery in the 2012 data. But as I have mentioned, neither of these two statements has been established yet. The counting will change after the 2013 break. In 2014, the LHC will start collisions at the center-of-mass energy $$13\TeV$$. Even one inverse femtobarn of those higher-energy data will have the potential to see many hypothetical new effects that will have been invisible in dozens of inverse femtobarns of the $$7\TeV$$ or $$8\TeV$$ data. In other words, the extrapolation "no signal seen through 2012 and therefore no signal will be seen in 2014" will be pretty much indefensible because the new data will be independent of the old ones. Of course, the LHC may very well refuse to see any hint of physics beyond the Standard Model in the $$13\TeV$$ data, too, but there exists no truly convincing, scientific, or logical argument that would imply that it has to be so. #### 10 comments: 1. Pepa Salut Lubos, As I glanced at one paper - 104 - A search with the ATLAS detector for SUSY in final states containing at least 4 hard jets, one isolated lepton (electron or muon) and missing transverse energy, 5.8 inverse femtobarns of data collected at a centre-of-mass energy of 8 TeV - "no excess above SM background expectation" is observed; in the MSUGRA/CMSSM model, results in this signal region alone exclude squark and gluino masses up to 1.24 TeV, etc. Now I'll go back to my vacation... :-) 2. Hi Pepo, nice, just be careful on the vacations. ;-) CMSSM/MSUGRA is still a very thin slice through the parameter space in which all the superpartners with the same spin are essentially assumed to be equally heavy - so the strong constraints on these models that have emerged pretty much say that "it is not possible that all superpartners are extremely light". However, it doesn't have to be the case that all the superpartners are equally heavy at all, and there may still be lots of superpartners that are as light as 200 GeV or even less. 4. Luke Lea "It is misleading to describe the searches by saying that "those 3,000 folks have found nothing". It's more accurate to say that they're mostly searching for platinum, titanium, and perhaps also an elixir of life but during their search, they have found and collected tons of gold, silver, and other metals at places that were exactly predicted by the geologists – I mean particle theorists, of course. ;-)" You could have been a writer! Very nice. 5. CIP This comment has been removed by a blog administrator. 6. Casper But the redoubtable Ms Rand was half-Russian and therefore possibly mad. Readers who crave even more dystopian fantasy should look no further than 'Clusterfuck Nation', a blog that combines modern doomer philosophy with the finest of American humour. 7. anna v My experience with large groups and research is that if they have found something new and possibly exciting they will not tell us until they have combed it through umpteen times for mistakes and biases and will then come out with the most conservative option on the paper. It is unfortunate that the superluminal neutrino happening came so close in time with the LHC analysis.
People become doubly conservative and cautious when they see another one stumbling. Back in the late seventies I was in a neutrino BEBC group and on a sabbatical at CERN for a year. We had the misfortune of a fellow stumbling on a mu-pi 4 sigma resonance, which got everyone so excited they became incautious and publicized it: history shows that the other experiments found no such thing and the story fizzled. (The look-elsewhere effect was just being learned the hard way: too many cuts.) As a result the collaboration became ultra cautious and only allowed platitudes to go to publication for some time. 8. Do you actually have any actual arguments against the philosophy of the book? If you don't get the theory, check which people happen to be libertarians.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 8, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9632869958877563, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/242527/how-to-prove-periodicity-is-a-class-property
# How to prove periodicity is a class property? It is said that in a Markov chain, if state $i$ has period $d$ and states $i$ and $j$ communicate, then state $j$ also has period $d$. I wonder how to prove it. - ## 1 Answer One way to define the period of state $i$ is as the largest integer $p$ such that you can't go from $i$ to $i$ in any number of steps that isn't a multiple of $p$. Then you can show that you can go from $i$ to $i$ in $kp$ steps for all sufficiently large integers $k$. Suppose state $i$ has period $p$ and communicates with state $j$. You can go from $i$ to $j$ in a certain number of steps, say $m$, and you can go from $j$ to $i$ in a certain number of steps, say $n$. But that means you can go from $i$ to $j$ and back to $i$ in $m+n$ steps, so $m+n$ must be a multiple of $p$, say $m+n = rp$. And then if you can go from $i$ to $i$ in $kp$ steps, you can go from $j$ to $j$ in $(k+r)p$ steps by going first from $j$ to $i$ in $n$ steps, then from $i$ to $i$ in $kp$ steps, then back to $j$ in $m$ steps. So the period of $j$ is at most $p$. Interchange $i$ and $j$ to see that the period of $j$ is at least $p$. - I know there is another definition of the period using the GCD of all return times. I don't think these two definitions are equivalent. – hxhxhx88 Nov 22 '12 at 12:57 They are equivalent (assuming, of course, that it is possible to return from $i$ to $i$). Suppose $p$ is the period in my definition and $q$ is the period in your definition. Since any sufficiently large prime times $p$ is a return time, $q$ divides $p$. On the other hand, $q = \gcd(t_1, \ldots, t_m)$ is a linear combination over the integers of $t_1, \ldots, t_m$ for some return times $t_1, \ldots, t_m$, from which you can show that every sufficiently large multiple of $q$ is a return time, and therefore $p$ divides $q$. – Robert Israel Nov 22 '12 at 18:24 Why is 'any sufficiently large prime times $p$' a return time? Why can the multiplier be taken to be prime? – hxhxhx88 Nov 25 '12 at 5:17 Hmm, that's a bit less obvious than I must have thought at the time. There is some return time, which must be a multiple of $p$, say $t = k_0 p$. Let $p_1, \ldots, p_m$ be the primes dividing $k_0$. For each $p_j$, there is a return time $k_j p$ such that $k_j$ is not divisible by $p_j$. So $\gcd(k_0, k_1, \ldots, k_m) = 1$, which implies that every sufficiently large positive integer $k$ is the sum of nonnegative integer multiples of $k_0, \ldots, k_m$, and thus $kp$ is a return time. – Robert Israel Nov 25 '12 at 7:07
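A small numerical illustration of the gcd definition debated in the comments (an editorial sketch; the scan is finite, so `n_max` has to be taken large enough for the chain at hand):

```python
from math import gcd
import numpy as np

def period(P, i, n_max=200):
    """Period of state i: gcd of all n <= n_max with (P^n)[i, i] > 0."""
    d, Q = 0, np.eye(len(P))
    for n in range(1, n_max + 1):
        Q = Q @ P
        if Q[i, i] > 1e-12:
            d = gcd(d, n)
    return d

# A 3-cycle: states 0 and 1 communicate and both report period 3,
# as the class property predicts.
P = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., 0., 0.]])
print(period(P, 0), period(P, 1))  # -> 3 3
```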
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 76, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9661269783973694, "perplexity_flag": "head"}
http://mathoverflow.net/questions/55080/probability-question-regarding-brownian-motion/55081
## probability question regarding brownian motion

Given a BM $dX_t=\mu\, dt+\sigma\, dB_t$, started at $X_0=0$. What is the probability that $X_t$ does not hit 0 in the time interval $[a,T]$, where $0\le a\le T$? Here the hit level can be changed from 0 to any constant $b\gt 0$, or even to a space-time line $x=kt+b$. This is related to a kind of "global" distribution of $X_t$. I do not find the discussion in the references I have here, for example, Karatzas & Shreve. Would appreciate your suggestions and recommendations. - ## 1 Answer A straightforward approach is to simply integrate the density of $X$ at time $a$ (which will be normally distributed with mean $\mu a$ and variance $\sigma^2 a$) against the probability of hitting 0 conditional on the value at time $a$ (which is also known in closed form). This will give you a messy integral (an exponential multiplied by a cumulative normal) but it should be reducible to a (messy sum of) bivariate cumulative normal(s). The value we want to compute is $$\int_0^\infty \mathbb{P}[X_\xi>0 \text{ for } a\leq\xi\leq T\ |\ X_a=z]\, e^{-(z-\mu a)^2/2\sigma^2 a}\frac{dz}{\sigma\sqrt{a}\sqrt{2\pi}}$$ where I'm integrating the density at time $a$ over positive values against the non-hitting probability. The next step is to observe that the probability $\mathbb{P}[X_\xi>0 \text{ for } a\leq\xi\leq T\ |\ X_a=z]$ is equal to the probability $\mathbb{P}[X_\xi>-z \text{ for } 0\leq\xi\leq T-a]$, but this probability is equal to a difference of (basically) cumulative normals (it's just a hitting time computation for a (scaled) Brownian motion with drift). Then plug that formula into the above integral. Writing $\alpha=\mu$ for the drift, the standard calculation gives $$\mathbb{P}[X_\xi>-z \text{ for } 0\leq\xi\leq T-a] = \Phi\left[\frac{z+\alpha (T-a)}{\sigma\sqrt{T-a}}\right] - e^{-2\alpha z/\sigma^2}\Phi\left[\frac{-z+\alpha (T-a)}{\sigma\sqrt{T-a}}\right]$$ where $\Phi[z]=\int_{-\infty}^z e^{-\xi^2/2}\frac{d\xi}{\sqrt{2\pi}}$ is the standard cumulative normal distribution function. (This follows from applying Girsanov to a reflection argument, a well-known result.) - @Apollo: could you please write down the integral? I am not sure I understand what you meant here. Does this way guarantee that, for all times in the interval $[a, T]$, 0 is not hit? – Qiang Li Feb 10 2011 at 22:08 Added a little more detail. – Apollo Feb 10 2011 at 22:18 @Apollo, looks great! Can I ask you: how to understand the integral? – Qiang Li Feb 10 2011 at 22:31 I mean: how to understand that it is the required probability. – Qiang Li Feb 10 2011 at 22:31 We're integrating along the distribution of $X$ at time $a$ and multiplying by the conditional probability (given our location at time $a$) that we make it further to time $T$ without hitting zero. If we're already below zero then this probability is $0$ (so the lower bound of the integral starts at $0$); if we're above zero then we just need to keep the minimum of the remaining path above zero. – Apollo Feb 10 2011 at 22:34
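A numerical cross-check of the integral above (an editorial sketch, with arbitrary illustrative parameter values; both quantities computed below are the "stays positive on $[a,T]$" probability that the answer works with, and the discretized minimum slightly overestimates survival, so expect the Monte Carlo value a touch above the quadrature value):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

mu, sigma, a, T = 0.5, 1.0, 0.2, 1.0  # illustrative values only

def survive_given_z(z, t):
    """P[drifted BM started at 0 stays above -z on [0, t]], z > 0."""
    s = sigma * np.sqrt(t)
    return (norm.cdf((z + mu * t) / s)
            - np.exp(-2 * mu * z / sigma**2) * norm.cdf((-z + mu * t) / s))

# Integrate the N(mu*a, sigma^2 a) density of X_a against the
# conditional survival probability over [a, T].
p, _ = quad(lambda z: norm.pdf(z, mu * a, sigma * np.sqrt(a))
            * survive_given_z(z, T - a), 0, np.inf)

# Monte Carlo on an Euler grid, keeping paths that stay positive
# at every grid time in [a, T].
rng = np.random.default_rng(1)
n_paths, n_steps = 100_000, 1_000
dt = T / n_steps
x = np.zeros(n_paths)
ok = np.ones(n_paths, dtype=bool)
for k in range(1, n_steps + 1):
    x += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    if k * dt >= a:
        ok &= x > 0
print(p, ok.mean())  # the two should agree to within ~1e-2
```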
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9040796756744385, "perplexity_flag": "head"}
http://crypto.stackexchange.com/questions/1740/stretching-a-random-seed-to-maximize-entropy?answertab=votes
# Stretching a random seed to maximize entropy I'm using a random number generator that requires me to pass it a big (several kilobytes) pool of random data for initialization. I've gathered entropy from various system metrics (free memory, system clock, MAC address, etc.) and generated a 64-bit seed value which I need to "stretch" over the big memory pool. What's the best way to spread this small amount of entropy over the big pool? I've considered something like encrypting a null (0x00) buffer with a stream cipher using the seed as the key. Is that a safe approach? - ## 4 Answers What you ask for is an RNG to produce some output which another RNG will use as a seed. This looks quite overly complex... The point of the seed is to be unknown to the attacker: the seed data should be such that "trying out" possible seed values should not match the actual seed except with negligible probability. With a 64-bit seed, even if the seed is "perfect" (chosen totally randomly and uniformly among the $2^{64}$ possible seed values), an attacker trying possible seed values still has a $2^{-64}$ probability of finding the correct seed at each try, and that's a bit high for comfort. We usually prefer attack probabilities of $2^{-128}$ or less. Regardless of how you generate your 64-bit seed, how you then expand that 64-bit seed into whatever ISAAC requires, and whether ISAAC is good or not, your security will never be higher than that provided by a 64-bit seed. How ISAAC is supposed to be seeded (with what, under which properties) is not clear; the ISAAC author himself says: I provided no official seeding routine because I didn't feel competent to give one. Seeding a random number generator is essentially the same problem as encrypting the seed with a block cipher. ISAAC should be initialized with the encryption of the seed by some secure cipher. I've provided a seeding routine in my implementations, which nobody has broken so far, but I have less faith in that initialization routine than I have in ISAAC. Come to think of it, this is a bit of a scary comment. Are you sure you want to trust the security of a system, a part of which the author himself deems untrustworthy? And more generally, ISAAC was designed at a time when the competition was RC4, a generator with known biases, and not that fast. Science has improved since. See the eSTREAM project: this is the result of a kind of open competition, where cryptographers proposed new stream cipher designs, and tried to break the proposed designs. The resulting "portfolio" consists of the designs which resisted cryptanalysis and offer good performance. The good thing about these stream ciphers is that they work with keys of reasonable size, with no underspecified part like the seeding in ISAAC. For instance, consider Sosemanuk: it accepts a key of 1 to 256 bits and a 128-bit IV, and produces pseudo-random bytes at a reasonably high speed (it should be competitive with ISAAC, possibly even a tad faster). This would lead to the following design: • Accumulate your source entropy in a buffer. It does not really matter how you encode each metric, as long as you do not lose information. • Hash the whole buffer with a secure hash function, preferably SHA-256. This results in a 256-bit value. • Use the 256-bit value as key for Sosemanuk (the IV can be 0). Produce random bytes. Enjoy. • (Alternatively, use the Sosemanuk output as seed for ISAAC, if you really need, for administrative reasons, to use ISAAC.
But the under-specification of the seeding process could trigger weaknesses, so I would not recommend it at all.) Note that entropy gathering is a subtle thing. MAC address and system clock, for instance, are really bad entropy sources because they can be observed by attackers: the system clock is close to the current time, which (by definition) is public data, and the machine will write its MAC address on every ethernet frame it emits. Entropy is good only insofar as it is unknown to the attacker. The good thing about SHA-256 is that it does not matter if some of your entropy is bad, as long as there is also some good entropy somewhere in your buffer. Still, you are warmly encouraged to use as entropy sources the services specifically offered by the operating system to that effect (it is called `CryptGenRandom()` on Windows, `/dev/urandom` on Unix-like systems and MacOS X): since the OS directly manages the hardware, it is in an ideal position to gather entropy from hardware sources. - There is no good way to stretch your 64-bit seed value without some secret material. Anything deterministic you do is bound to be vulnerable to enumeration of all 64-bit seed values. The least wrong option is to use a purposely slow derivation function designed for passwords, e.g. Scrypt. With some $Secret$ material assumed hidden from an adversary, you have more options. The basic idea is to mix $Seed$ with $Secret$ into an expanded $Seed'$ using a random-like function. The simple $Seed'=SHA_{256}(Secret||Seed)$ will do; other key derivation functions can be used, including Scrypt. Issues are that you must protect the confidentiality of $Secret$; further, if it remains constant, identical 64-bit $Seed$ values will generate the same $Seed'$. The next steps are to store and vary $Secret$ from one execution to another; pretty soon we are reinventing a full-blown implementation of a cryptographically secure random source, such as Yarrow. - Correct me if I am wrong in the following claims and deduction. 1. You are trying to generate a large amount of random data for a random number generator (which should really be called a pseudo-random number generator, because if you knew a construction of a true random number generator from a small seed, you would already be breaking known theoretical results based on very plausible complexity-theoretic assumptions!) 2. You are trying to do this with the help of a small amount of entropy. Now let's look at both these steps. One possible way to do the second step is to assume that you have access to a small number of truly random bits (say $d$). The entropy that you have can be seen as the entropy of a distribution, say $\mathcal{X}$, with min-entropy the size of your seed, which for brevity I call $k$. The task that you are trying to achieve is then actually extracting randomness from the arbitrary distribution $\mathcal{X}$. There are known lower bound results which say that you will always lose some entropy if you are concerned with getting randomness which passes every statistical test. Precisely, the work by Jaikumar Radhakrishnan and Ta-Shma (Bounds for Dispersers, Extractors, and Depth-Two Superconcentrators) says that you will lose about $2 \log (1/\varepsilon)$ entropy if you want the output to be within statistical distance $\varepsilon$ of uniform. This said, you can see that the second step cannot be done without losing some entropy, even if you have access to $d$ truly random bits. I hope this clears up and settles the matter for you.
Let me know if you have doubts about any of the points or if I have missed something that you wanted to ask explicitly. - It is not safe in light of the recent cavity-compression attacks against key stretching (see Secure Applications of Low-Entropy Keys, J. Kelsey, B. Schneier, C. Hall, and D. Wagner (1997)) - 1 What's a "cavity-compression attack"? – fgrieu Jan 25 '12 at 18:41
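Returning to the hash-then-stream design in the first answer, a minimal sketch. Sosemanuk has no standard Python implementation, so ChaCha20 (available in the `cryptography` package) stands in for it here; the structure is identical, but the cipher substitution is editorial, not part of the answer. The entropy sources below are placeholders, and as that answer stresses, a real system should lean on the OS facilities (`os.urandom()` wraps them in Python):

```python
import hashlib
import os
import time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms

# 1. Accumulate raw entropy in a buffer. The encoding just has to be
#    lossless; which sources are actually trustworthy is discussed above.
pool = b"".join([
    time.time_ns().to_bytes(8, "big"),  # weak source: roughly public
    os.getpid().to_bytes(4, "big"),     # weak source: guessable
    os.urandom(32),                     # strong source: the OS CSPRNG
])

# 2. Hash the whole buffer down to a 256-bit key.
key = hashlib.sha256(pool).digest()

# 3. Key the stream cipher (a zero nonce is fine for a one-shot key) and
#    encrypt zeros so that the output is the raw keystream.
cipher = Cipher(algorithms.ChaCha20(key, bytes(16)), mode=None)
seed_material = cipher.encryptor().update(bytes(4096))  # a few KB for the PRNG
```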
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9484241604804993, "perplexity_flag": "middle"}
http://etna.mcs.kent.edu/volumes/1993-2000/vol2/abstract.php?vol=2&pages=1-21
## An implicitly restarted Lanczos method for large symmetric eigenvalue problems D. Calvetti, L. Reichel, and D. C. Sorensen ### Abstract The Lanczos process is a well-known technique for computing a few, say $k$, eigenvalues and associated eigenvectors of a large symmetric $n\times n$ matrix. However, loss of orthogonality of the computed Krylov subspace basis can reduce the accuracy of the computed approximate eigenvalues. In the implicitly restarted Lanczos method studied in the present paper, this problem is addressed by fixing the number of steps in the Lanczos process at a prescribed value, $k+p$, where $p$ typically is not much larger, and may be smaller, than $k$. Orthogonality of the $k+p$ basis vectors of the Krylov subspace is secured by reorthogonalizing these vectors when necessary. The implicitly restarted Lanczos method exploits the fact that the residual vector obtained by the Lanczos process is a function of the initial Lanczos vector. The method updates the initial Lanczos vector through an iterative scheme. The purpose of the iterative scheme is to determine an initial vector such that the associated residual vector is tiny. If the residual vector vanishes, then an invariant subspace has been found. This paper studies several iterative schemes, among them schemes based on Leja points. The resulting algorithms are capable of computing a few of the largest or smallest eigenvalues and associated eigenvectors. This is accomplished using only $(k+p)n + O((k+p)^2)$ storage locations in addition to the storage required for the matrix, where the second term is independent of $n$. Full Text (PDF) [270 KB] ### Key words Lanczos method, eigenvalue, polynomial acceleration. AMS subject classification: 65F15. ### ETNA articles which cite this article Vol. 7 (1998), pp. 124-140 J. Baglama, D. Calvetti, and L. Reichel: Fast Leja points Vol. 7 (1998), pp. 163-181 Andreas Stathopoulos and Yousef Saad: Restarting techniques for the (Jacobi-)Davidson symmetric eigenvalue methods
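For readers who want to experiment, here is a minimal numpy sketch of the plain symmetric Lanczos process with full reorthogonalization, the building block that the paper's method restarts. This sketch is editorial: it implements neither the implicit restarting nor the Leja-point shifts of the paper.

```python
import numpy as np

def lanczos(A, v0, k):
    """k steps of symmetric Lanczos with full reorthogonalization.
    Returns V (n x k, orthonormal columns) and tridiagonal T (k x k);
    the eigenvalues of T (Ritz values) approximate extreme eigenvalues of A."""
    n = v0.size
    V = np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(k)
    v = v0 / np.linalg.norm(v0)
    for j in range(k):
        V[:, j] = v
        w = A @ v
        alpha[j] = v @ w
        w -= alpha[j] * v
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)  # reorthogonalize against the basis
        beta[j] = np.linalg.norm(w)
        if beta[j] == 0:
            break  # residual vanished: an invariant subspace has been found
        v = w / beta[j]
    T = np.diag(alpha) + np.diag(beta[:k - 1], 1) + np.diag(beta[:k - 1], -1)
    return V, T

# Largest Ritz value vs. true largest eigenvalue of a random SPD matrix.
rng = np.random.default_rng(0)
M = rng.standard_normal((200, 200))
A = M @ M.T
V, T = lanczos(A, rng.standard_normal(200), 30)
print(np.linalg.eigvalsh(T)[-1], np.linalg.eigvalsh(A)[-1])
```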
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8779541254043579, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/69440/blowing-up-along-an-ideal-in-the-product-of-projective-varieties
# Blowing up along an ideal in the product of projective varieties I am looking for information on blowing up along an ideal in a product of varieties. After extensive searching through several textbooks I cannot find an explicit method for doing so. Specifically, I am trying to blow up along the diagonal in the product of two projective varieties. To clarify, I am attempting to perform explicit computations using Macaulay2. I have a projective variety $X$ that lives in $\mathbb P^{12}$, and I am sending the product $X\times X$ to $\mathbb P^{168}$ using the Segre embedding and attempting to blow up the diagonal of $X\times X$ there. However, I am running into physical memory problems due to the massive number of polynomials needed to describe the ideal in $\mathbb P^{168}$. So what I'm really looking for is a method for blowing up along an ideal in a product of varieties that doesn't rely on the Segre embedding. - 1 What exactly do you want to do with the result? Can you work locally, for example, in order to do what you want? – Mariano Suárez-Alvarez♦ Oct 3 '11 at 22:05 @Mariano I'm trying to compute the exceptional divisor in order to compute the Segre class of the diagonal in the product of varieties, following Corollary 4.2.2 in Fulton's Intersection Theory. – Michael Capps Oct 4 '11 at 13:45
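For orientation, the standard smooth-case picture behind that last comment (textbook facts, e.g. from Fulton; they do not address the memory problem, and the singular case is exactly where they fail): when $X$ is smooth, the diagonal $\Delta \subset X \times X$ is regularly embedded with normal bundle $T_X$, so

$$E \;=\; \mathbb{P}(T_X), \qquad s(\Delta,\, X\times X) \;=\; c(T_X)^{-1}\cap [X],$$

where $E$ is the exceptional divisor of the blow-up of $X \times X$ along $\Delta$.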
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9463181495666504, "perplexity_flag": "head"}
http://mathforum.org/mathimages/index.php?title=The_Golden_Ratio&diff=34261&oldid=31891
# The Golden Ratio

### From Math Images

The golden number, often denoted by the lowercase Greek letter "phi", is
$${\varphi}=\frac{1 + \sqrt{5}}{2} = 1.61803399\ldots$$
The term golden ratio refers to any ratio which has the value phi. The image to the right illustrates dividing and subdividing a rectangle into the golden ratio; the result is fractal-like. This page explores real-world applications of the golden ratio, common misconceptions about the golden ratio, and multiple derivations of the golden number.

The golden number, approximately 1.618, is called golden because many geometric figures involving this ratio are often said to possess special beauty. Be that true or not, the ratio has many beautiful and surprising mathematical properties. The Greeks were aware of the golden ratio, but did not consider it particularly significant with respect to aesthetics. It was not called the "divine" proportion until the 15th century, and was not called the "golden" ratio until the 18th century.

Since then, it has been claimed that the golden ratio is the most aesthetically pleasing ratio, and that this ratio has appeared in architecture and art throughout history. Among the most common such claims are that the Parthenon and Leonardo da Vinci's Mona Lisa use the golden ratio. Even more esoteric claims propose that the golden ratio can be found in the human facial structure, the behavior of the stock market, and the Great Pyramids. However, such claims have been criticized in scholarly journals as wishful thinking or sloppy mathematical analysis, and there is no solid evidence supporting the claim that the golden rectangle is the most aesthetically pleasing rectangle.

### Misconceptions about the Golden Ratio

Many rumors and misconceptions surround the golden ratio. There have been many claims that the golden ratio appears in art and architecture; in reality, many of these claims involve warped images and large margins of error. One claim is that the Great Pyramids exhibit the golden ratio in their construction. In his paper Misconceptions about the Golden Ratio, George Markowsky disputes this claim, arguing that the commonly assumed dimensions are not anywhere close to being correct. Another belief is that a series of golden rectangles appears in the Mona Lisa; however, the placing of the golden rectangles seems arbitrary. Markowsky also disputes the belief that the human body exhibits the golden ratio. To read more, see http://www.math.nus.edu.sg/aslaksen/teaching/maa/markowsky.pdf.

#### What do you think?

George Markowsky argues that, like the Mona Lisa, the Parthenon does not exhibit a series of golden rectangles (discussed below). Do you think the Parthenon was designed with the golden ratio in mind, or is the claimed overlay simply a stretch of the imagination?

## A Geometric Representation

The golden number can be defined using a line segment divided into two sections of lengths $a$ and $b$. If $a$ and $b$ are appropriately chosen, the ratio of $a$ to $b$ is the same as the ratio of $a + b$ to $a$, and both ratios are equal to $\varphi$. For a segment split into a blue part and a shorter red part in the golden proportion,
$$\frac{{\color{Red}\mathrm{red}}+\color{Blue}\mathrm{blue}}{{\color{Blue}\mathrm{blue}} }= \frac{{\color{Blue}\mathrm{blue}} }{{\color{Red}\mathrm{red}} }= \varphi .$$

### The Golden Rectangle

A golden rectangle is any rectangle where the ratio between the sides is equal to phi. When the side lengths are proportioned in the golden ratio, the rectangle is said to possess the golden proportions. A golden rectangle has sides of length $\varphi \times r$ and $1 \times r$, where $r$ can be any constant. Remarkably, when a square with side length equal to the shorter side of the rectangle is cut off from one side of the golden rectangle, the remaining rectangle also exhibits the golden proportions, and the pattern continues under repeated subdivision.

The golden number $\varphi$ is also used to construct the golden triangle, an isosceles triangle that has legs of length $\varphi \times r$ and base length $1 \times r$, where $r$ can be any constant. Similarly, the golden gnomon has base $\varphi \times r$ and legs of length $1 \times r$. These triangles can be used to form regular pentagons and pentagrams (a pentagram is a five-pointed star made with 5 straight strokes). A pentagram generated by the golden triangle and the golden gnomon has many side lengths proportioned in the golden ratio.

# An Algebraic Derivation of Phi

How can we derive the value of $\varphi$ from its characteristics as a ratio? We may solve for the ratio algebraically by observing that, by definition, it satisfies
$$\frac{b}{a} = \frac{a+b}{b} = \varphi ,$$
which leads to the quadratic equation
$${\varphi}^2-{\varphi}-1=0 .$$

## A Continued Fraction Representation

The golden ratio can also be written as what is called a continued fraction: a fraction of infinite length whose denominator is a quantity plus a fraction, which latter fraction has a similar denominator, and so on. This is done by using recursion (substituting an equation into itself). Dividing ${\varphi}^2-{\varphi}-1=0$ through by $\varphi$ gives $\varphi = 1 + 1/\varphi$; substituting this expression for $\varphi$ into itself, over and over, yields in the limit the continued fraction
$$\varphi = 1 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{\cdots}}} .$$
If we evaluate truncations of the continued fraction by evaluating only part of it, replacing $\varphi$ with 1, we produce the ratios between consecutive terms in the Fibonacci sequence.
| | | | | | | | <math>\varphi \approx 1 + \cfrac{1}{1} = 2</math> | | <math>\varphi \approx 1 + \cfrac{1}{1} = 2</math> | | Line 206: | | Line 212: | | | | | | | | | These are just the same truncated terms as listed above. Let's also denote the terms of the Fibonacci sequence as | | These are just the same truncated terms as listed above. Let's also denote the terms of the Fibonacci sequence as | | - | :<math> s_n=s_{n-1}+s_{n-2} </math> where <math>s_1=1</math>,<math>s_2=1</math> | + | :<math> s_n=s_{n-1}+s_{n-2} </math> where <math>s_1=1</math>,<math>s_2=1</math>,<math>s_3=2</math>,<math>s_4=3</math> etc. | | | <br> | | <br> | | | | | | | Line 221: | | Line 227: | | | | <br><br> | | <br><br> | | | | | | | - | By our definition of ''x<sub>n</sub>'', we have | + | By our assumptions about ''x<sub>1</sub>'',''x<sub>2</sub>''...''x<sub>n</sub>'', we have | | | | | | | | :<math> x_{k+1}=1+\frac{1}{x_k} </math>. | | :<math> x_{k+1}=1+\frac{1}{x_k} </math>. | | Line 237: | | Line 243: | | | | Noting the definition of <math>s_n=s_{n-1}+s_{n-2}</math>, we see that we have | | Noting the definition of <math>s_n=s_{n-1}+s_{n-2}</math>, we see that we have | | | | | | | - | <math> x_{k+1}=\frac{f_{k+2}}{f_{k+1}} </math> | + | <math> x_{k+1}=\frac{s_{k+2}}{s_{k+1}} </math> | | | | | | | | So by the principle of mathematical induction, we have shown that the terms in our continued fraction are represented by ratios of consecutive Fibonacci numbers. | | So by the principle of mathematical induction, we have shown that the terms in our continued fraction are represented by ratios of consecutive Fibonacci numbers. | | | | | | | | The exact continued fraction is | | The exact continued fraction is | | - | :<math> x_{\infty} = \lim_{n\rightarrow \infty}\frac{f_{n+1}}{f_n} =\varphi </math>. | + | :<math> x_{\infty} = \lim_{n\rightarrow \infty}\frac{s_{n+1}}{s_n} =\varphi </math>. | | | | | | | | }}|NumChars=75}} | | }}|NumChars=75}} | | Line 250: | | Line 256: | | | | ==Proof of the Golden Ratio's Irrationality== | | ==Proof of the Golden Ratio's Irrationality== | | | | | | | - | {{SwitchPreview|ShowMessage=Click to expand|HideMessage=Click to hide|PreviewText= |FullText= | + | {{SwitchPreview|ShowMessage=Click to expand|hideMessage=Click to hide|PreviewText= |FullText= | | | Remarkably, the Golden Ratio is <balloon title="A number is irrational if it cannot be expressed as the fraction between two integers.">irrational</balloon>, despite the fact that we just proved that is approximated by a ratio of Fibonacci numbers. | | Remarkably, the Golden Ratio is <balloon title="A number is irrational if it cannot be expressed as the fraction between two integers.">irrational</balloon>, despite the fact that we just proved that is approximated by a ratio of Fibonacci numbers. | | | We will use the method of <balloon title="A method of proving a statement true by assuming that it's false and showing this assumption would logically lead to a statement that is already known to be untrue.">contradiction</balloon> to prove that the golden ratio is irrational. | | We will use the method of <balloon title="A method of proving a statement true by assuming that it's false and showing this assumption would logically lead to a statement that is already known to be untrue.">contradiction</balloon> to prove that the golden ratio is irrational. | | | | | | | - | Suppose <math>\varphi </math> is rational. Then it can be written as fraction in lowest terms <math> \varphi = b/a</math>, where a and b are integers. 
## Revision as of 13:42, 18 July 2012

**The Golden Ratio**
Fields: Algebra and Geometry
Image Created By: azavez1
Website: The Math Forum

The golden number, often denoted by the lowercase Greek letter "phi", is $\varphi=\frac{1 + \sqrt{5}}{2} = 1.61803399...$. The term **golden ratio** refers to any ratio which has the value phi. The image to the right illustrates dividing and subdividing a rectangle into the golden ratio. The result is fractal-like. This page explores real-world applications of the golden ratio, common misconceptions about it, and multiple derivations of the golden number.

# Basic Description

*Image: Does the Mona Lisa exhibit the golden ratio?*

The golden number, approximately 1.618, is called golden because many geometric figures involving this ratio are often said to possess special beauty. Whether or not that is true, the ratio has many beautiful and surprising mathematical properties. The Greeks were aware of the golden ratio, but did not consider it particularly significant with respect to aesthetics. It was not called the "divine" proportion until the 15th century, and was not called the "golden" ratio until the 18th century. [1]

*Image: Markowsky has determined the pyramid dimensions shown above to be incorrect.*

Since then, it has been claimed that the golden ratio is the most aesthetically pleasing ratio, and that this ratio has appeared in architecture and art throughout history. Among the most common such claims are that the Parthenon and Leonardo da Vinci's Mona Lisa use the golden ratio. Even more esoteric claims propose that the golden ratio can be found in the human facial structure, the behavior of the stock market, and the Great Pyramids.
However, such claims have been criticized in scholarly journals as wishful thinking or sloppy mathematical analysis. Additionally, there is no solid evidence that supports the claim that the golden rectangle is the most aesthetically pleasing rectangle. [2]

### Misconceptions about the Golden Ratio

Many rumors and misconceptions surround the golden ratio. There have been many claims that the golden ratio appears in art and architecture. In reality, many of these claims involve warped images and large margins of error. One claim is that the Great Pyramids exhibit the golden ratio in their construction. This belief is illustrated below.

In his paper, *Misconceptions about the Golden Ratio*, George Markowsky disputes this claim, arguing that the dimensions assumed in the picture are not anywhere close to being correct. Another belief is that a series of golden rectangles appears in the *Mona Lisa*. However, the placing of the golden rectangles seems arbitrary. Markowsky also disputes the belief that the human body exhibits the golden ratio. To read more, see [Misconceptions about the Golden Ratio](http://www.math.nus.edu.sg/aslaksen/teaching/maa/markowsky.pdf).

#### What do you think?

George Markowsky argues that, like the *Mona Lisa*, the Parthenon does not exhibit a series of golden rectangles (discussed below). Do you think the Parthenon was designed with the golden ratio in mind, or is the image below simply a stretch of the imagination? [3]

## A Geometric Representation

### The Golden Ratio in a Line Segment

The golden number can be defined using a line segment divided into two sections of lengths $a$ and $b$. If $a$ and $b$ are appropriately chosen, the ratio of $a$ to $b$ is the same as the ratio of $a + b$ to $a$, and both ratios are equal to $\varphi$. The line segment above (left) exhibits the golden proportion. The line segments above (right) are also examples of the golden ratio. In each case,

$$\frac{{\color{Red}\mathrm{red}}+{\color{Blue}\mathrm{blue}}}{{\color{Blue}\mathrm{blue}}}= \frac{{\color{Blue}\mathrm{blue}}}{{\color{Red}\mathrm{red}}}= \varphi .$$

### The Golden Rectangle

A **golden rectangle** is any rectangle in which the ratio between the sides is equal to phi. When the side lengths are proportioned in the golden ratio, the rectangle is said to possess the **golden proportions**. A golden rectangle has sides of length $\varphi \times r$ and $1 \times r$, where $r$ can be any constant. Remarkably, when a square with side length equal to the shorter side of the rectangle is cut off from one side of the golden rectangle, the remaining rectangle also exhibits the golden proportions. This continuing pattern is visible in the golden rectangle below; a quick numerical check of the square-cutting property follows.
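Here is a minimal Python sketch of that square-cutting property (our illustration, not part of the original page):

```python
# Start with a golden rectangle with sides phi x 1 and repeatedly cut off
# the largest possible square; the leftover rectangle should again have
# side ratio phi (up to floating-point error).
phi = (1 + 5 ** 0.5) / 2

long_side, short_side = phi, 1.0
for step in range(5):
    print(step, long_side / short_side)      # always ~1.6180339887
    # cut off a short_side x short_side square:
    long_side, short_side = short_side, long_side - short_side
```

The ratio printed at every step stays at $\varphi$, which is exactly the identity $\varphi - 1 = 1/\varphi$ derived algebraically later on this page.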
### Triangles

The golden number, $\varphi$, is used to construct the **golden triangle**, an isosceles triangle that has legs of length $\varphi \times r$ and base length $1 \times r$, where $r$ can be any constant. It is pictured above and to the left. Similarly, the **golden gnomon** has base $\varphi \times r$ and legs of length $1 \times r$. It is shown above and to the right. These triangles can be used to form regular pentagons (pictured above) and pentagrams (a pentagram is a five-pointed star made with 5 straight strokes).

The pentagram below, generated by the golden triangle and the golden gnomon, has many side lengths proportioned in the golden ratio:

$$\frac{{\color{SkyBlue}\mathrm{blue}}}{{\color{Red}\mathrm{red}}} = \frac{{\color{Red}\mathrm{red}}}{{\color{Green}\mathrm{green}}} = \frac{{\color{Green}\mathrm{green}}}{{\color{Magenta}\mathrm{pink}}} = \varphi .$$

These triangles can be used to form fractals and are one of the only ways to tile a plane with pentagonal symmetry. Pentagonal symmetry is best explained through example. Below, we have two fractal examples of pentagonal symmetry. Images that exhibit pentagonal symmetry have five symmetry axes. This means that we can draw five lines from the image's center, and all resulting divisions are identical.

# A More Mathematical Explanation

Note: understanding of this explanation requires: Algebra, Geometry

# An Algebraic Derivation of Phi

How can we derive the value of $\varphi$ from its characteristics as a ratio? We may algebraically solve for the ratio $\varphi$ by observing that, by definition, the ratio satisfies

$$\frac{a}{b} = \frac{a+b}{a} = \varphi .$$

Let $r$ denote the ratio:

$$r=\frac{a}{b}=\frac{a+b}{a}.$$

So

$$r=\frac{a+b}{a}=1+\frac{b}{a},$$

which can be rewritten as

$$1+\cfrac{1}{a/b}=1+\frac{1}{r};$$

thus,

$$r=1+\frac{1}{r}.$$

Multiplying both sides by $r$, we get $r^2=r+1$, which can be written as

$$r^2 - r - 1 = 0.$$

Applying the quadratic formula (the formula $\frac{-b \pm \sqrt {b^2-4ac}}{2a}$, which produces the solutions of equations of the form $ax^2+bx+c=0$), we get

$$r = \frac{1 \pm \sqrt{5}}{2}.$$

The ratio must be positive, because we cannot have negative line segments or side lengths. Because the ratio has to be a positive value,

$$r=\frac{1 + \sqrt{5}}{2} = 1.61803399... =\varphi .$$

The golden ratio can also be written as what is called a **continued fraction**: a fraction of infinite depth whose denominator is a quantity plus a fraction, which in turn has a denominator of the same form, and so on. This is done by using recursion (substituting an equation into itself).

We have already solved for $\varphi$ using the following equation:

$${\varphi}^2-{\varphi}-1=0.$$

We can add one to both sides of the equation to get ${\varphi}^2-{\varphi}=1$. Factoring this gives $\varphi(\varphi-1)=1$. Dividing by $\varphi$ gives us

$$\varphi -1= \cfrac{1}{\varphi }.$$

Adding 1 to both sides gives

$$\varphi =1+ \cfrac{1}{\varphi }.$$

Substitute the entire right side of the equation for $\varphi$ in the bottom of the fraction:

$$\varphi = 1 + \cfrac{1}{1 + \cfrac{1}{\varphi } }$$

Substituting in again,

$$\varphi = 1 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{\varphi}}}$$

$$\varphi = 1 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{\cdots}}}$$

Continuing this substitution forever produces the last, infinite form: a continued fraction.

If we evaluate truncations of the continued fraction, evaluating only part of it with the innermost $\varphi$ replaced by 1, we produce the ratios between consecutive terms in the Fibonacci sequence:

$$\varphi \approx 1 + \cfrac{1}{1} = 2$$

$$\varphi \approx 1 + \cfrac{1}{1+\cfrac{1}{1}} = 3/2$$

$$\varphi \approx 1 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{1} } } = 5/3$$

$$\varphi \approx 1 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{1+\cfrac{1}{1}}}} = 1 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{2}}} =1 + \cfrac{1}{1 + \cfrac{2}{3}} = 8/5$$

Thus we discover that the golden ratio is approximated in the Fibonacci sequence; a short numerical check follows, and then the sequence itself with a table of its consecutive ratios.
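A minimal Python sketch of that check (ours, not part of the original page): it evaluates the truncations of the continued fraction exactly and compares them with $\varphi$.

```python
from fractions import Fraction

def cf_truncation(depth):
    """Evaluate 1 + 1/(1 + 1/(...)) to the given depth,
    with the innermost phi replaced by 1."""
    x = Fraction(1)
    for _ in range(depth):
        x = 1 + 1 / x
    return x

phi = (1 + 5 ** 0.5) / 2
for depth in range(1, 9):
    t = cf_truncation(depth)
    print(depth, t, float(t), abs(float(t) - phi))
```

The printed fractions 2, 3/2, 5/3, 8/5, ... are exactly the ratios of consecutive Fibonacci numbers listed in the table below, and the error column shrinks toward 0.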
$$1,1,2,3,5,8,13,21,34,55,89,144,\ldots$$

| Ratio | Value |
|-------|-------|
| $1/1$ | $1$ |
| $2/1$ | $2$ |
| $3/2$ | $1.5$ |
| $5/3$ | $1.66666667...$ |
| $8/5$ | $1.6$ |
| $13/8$ | $1.625$ |
| $21/13$ | $1.61538462...$ |
| $34/21$ | $1.61904762...$ |
| $55/34$ | $1.61764706...$ |
| $89/55$ | $1.61818182...$ |

$$\varphi = 1.61803399...$$

As you go farther along in the Fibonacci sequence, the ratio between consecutive terms approaches the golden ratio. Many real-world applications of the golden ratio are related to the Fibonacci sequence. For more real-world applications of the golden ratio, click here!

In fact, we can prove that the ratio between terms in the Fibonacci sequence approaches the golden ratio by using mathematical induction.

Since we have already shown that

$$\varphi = 1 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{\cdots}}},$$

we only need to show that each of the truncated terms of the continued fraction is a ratio of Fibonacci numbers, as shown above.

First, let $x_1=1$, $x_2=1+\frac{1}{1}=1+\frac{1}{x_1}$, $x_3= 1+\frac{1}{1+\frac{1}{1}}=1+\frac{1}{x_2}$, and so on, so that $x_n=1+\frac{1}{x_{n-1}}$. These are just the same truncated terms as listed above. Let's also denote the terms of the Fibonacci sequence as

$$s_n=s_{n-1}+s_{n-2}, \quad\text{where } s_1=1,\ s_2=1,\ s_3=2,\ s_4=3,\ \text{etc.}$$

We want to show that $x_n=\frac{s_{n+1}}{s_n}$ for all $n$.

First, we establish our base case. We see that $x_1=1=\frac{1}{1}=\frac{s_2}{s_1}$, and so the relationship holds for the base case.

Now we assume that $x_k=\frac{s_{k+1}}{s_{k}}$ for some $1 \leq k < n$ (this step is the inductive hypothesis). We will show that this implies that $x_{k+1}=\frac{s_{(k+1)+1}}{s_{k+1}}=\frac{s_{k+2}}{s_{k+1}}$.

By the recursive definition of the terms $x_1, x_2, \ldots, x_n$, we have

$$x_{k+1}=1+\frac{1}{x_k}.$$

By our inductive hypothesis, this is equivalent to

$$x_{k+1}=1+\frac{1}{\frac{s_{k+1}}{s_{k}}}.$$

Now we only need to complete some simple algebra to see

$$x_{k+1}=1+\frac{s_k}{s_{k+1}}$$

$$x_{k+1}=\frac{s_{k+1}+s_k}{s_{k+1}}.$$

Noting the definition $s_n=s_{n-1}+s_{n-2}$, we see that we have

$$x_{k+1}=\frac{s_{k+2}}{s_{k+1}}.$$

So by the principle of mathematical induction, we have shown that the terms in our continued fraction are represented by ratios of consecutive Fibonacci numbers. The exact continued fraction is

$$x_{\infty} = \lim_{n\rightarrow \infty}\frac{s_{n+1}}{s_n} =\varphi .$$

## Proof of the Golden Ratio's Irrationality

Remarkably, the golden ratio is irrational (a number is irrational if it cannot be expressed as a fraction of two integers), despite the fact that we just proved that it is approximated by ratios of Fibonacci numbers. We will use the method of contradiction, assuming the statement is false and showing that this assumption leads to something known to be untrue, to prove that the golden ratio is irrational.

Suppose $\varphi$ is rational. Then it can be written as a fraction in lowest terms $\varphi = b/a$, where $a$ and $b$ are integers.

Our goal is to find a different fraction that is equal to $\varphi$ and is in lower terms. This will be our contradiction, which will show that $\varphi$ is irrational.

First note that the definition $\varphi = \frac{b}{a}=\frac{a+b}{b}$ implies that $b > a$, since clearly $a+b>b$ and the two fractions must be equal.

Now, since we know $\frac{b}{a}=\frac{a+b}{b}$, we see that $b^2=a(a+b)$ by cross multiplication. Expanding this gives us

$$b^2=a^2+ab.$$

Rearranging this gives us $b^2-ab=a^2$, which is the same as

$$b(b-a)=a^2.$$

Dividing both sides of the equation by $a(b-a)$ gives us

$$\frac{b}{a}=\frac{a}{b-a}.$$

Since $\varphi=\frac{b}{a}$, this means $\varphi=\frac{a}{b-a}$.

Since we have assumed that $a$ and $b$ are integers, we know that $b-a$ must also be an integer, and it is positive because $b>a$. Furthermore, since $a<b$, the new fraction $\frac{a}{b-a}$ has a smaller numerator than $\frac{b}{a}$, so it is in lower terms.

Since we have found a fraction of integers that is equal to $\varphi$ but is in lower terms than $\frac{b}{a}$, we have a contradiction: $\frac{b}{a}$ cannot be a fraction of integers in lowest terms. Therefore $\varphi$ cannot be expressed as a fraction of integers and is irrational.
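The descent step $b/a \mapsto a/(b-a)$ can be watched in action on the Fibonacci approximants of $\varphi$; a small Python sketch (ours, not from the original page) follows. Applied to a ratio of consecutive Fibonacci numbers, the map walks back down the sequence; an exact rational representation of $\varphi$ would have to survive the map forever while its numerator strictly shrinks, which is impossible.

```python
from fractions import Fraction

# Apply the proof's descent b/a -> a/(b - a) to a rational candidate for phi.
# Each step strictly lowers the numerator, so no candidate survives forever;
# starting from a Fibonacci ratio, the descent retraces the sequence.
x = Fraction(89, 55)
for _ in range(12):
    print(x)
    if x == 1:        # the candidate has bottomed out, so it was not phi
        break
    x = Fraction(x.denominator, x.numerator - x.denominator)
```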
# For More Information

- Markowsky, G. "Misconceptions about the Golden Ratio." *College Mathematics Journal*, Vol. 23, No. 1 (1992), pp. 2-19.

# Teaching Materials

There are currently no teaching materials for this page.

# References

1. "Golden ratio", http://en.wikipedia.org/wiki/Golden_ratio. Retrieved on 20 June 2012.
2. "Misconceptions about the Golden Ratio", http://www.math.nus.edu.sg/aslaksen/teaching/maa/markowsky.pdf. Retrieved on 24 June 2012.
3. "Parthenon", http://lotsasplainin.blogspot.com/2008/01/wednesday-math-vol-8-phi-golden-ratio.html. Retrieved on 16 May 2012.

# Future Directions for this Page

- animation?
- http://www.metaphorical.net/note/on/golden_ratio
- http://www.mathopenref.com/rectanglegolden.html
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 117, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8852786421775818, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/121105/which-functor-does-the-projective-space-represent
# Which functor does the projective space represent?

I hope this question isn't too silly. It is certainly fundamental, so the answer is likely contained at least implicitly in most sources out there, but I haven't seen it done this way (that is, in this particular functorial manner) in a way that is overt enough for me to catch on. I am familiar with the classical $Proj$-construction for a graded ring, so that's not quite what I'm looking for.

Let $k$ be a ring. Let's call a covariant functor of sets on some category of $k$-algebras an algebraic functor (over $k$). The affine $I$-space over $k$ is the algebraic functor $\mathbb{A}^I:(k\text{-alg})\to(\mathrm{set})$ which takes a $k$-algebra $R$ to the set $R^I$ of $I$-tuples of elements of $R$. This functor is (co)representable by the ring $k[T_i]$, $i\in I$, so $\mathbb{A}^I$ is (represented by) an affine scheme.

I want the projective space over $k$ in terms of an algebraic functor over $k$. I'm thinking something like $R\mapsto (R^{I+1}\smallsetminus\{0\})/\mathbb{G}_m(R)$ (where $\mathbb{G}_m(R)$ is the multiplicative group of $R$), or as a functor sending $R$ to some set of modules of rank $1$. One should then be able to show that it has a cover by four copies of the affine $I$-space over $k$. Alternatively, it would likely make sense to consider it a functor on the category of graded $k$-algebras.

## 2 Answers

Classically, if $K$ is a field, then $\mathbb P^n(K)$ is the set of lines $L\subset K^{n+1}$ of the vector space $K^{n+1}$. If $R$ is a ring, the correct generalization of a line in $R^{n+1}$ is a projective submodule $L\subset R^{n+1}$ of rank one which is also a direct summand: $R^{n+1}= L\oplus E$, where $E$ is projective of rank $n$. We call such submodules supplemented line bundles, and $\mathbb P^n(R)$ is the set of these.

Beware that it is not automatic that a projective submodule of rank one of $R^{n+1}$ is a direct summand, even if it is free: for example the submodule $2\mathbb Z\oplus 0\subset \mathbb Z^2$ is free of rank one but not a direct summand and is thus not an element of $\mathbb P^1(\mathbb Z)$. However, a free $R$-module $R(r_0,r_1,\cdots ,r_n)\subset R^{n+1}$ is a supplemented line bundle iff the $r_i$'s generate $R$, i.e. $\Sigma Rr_i=R$.

Because of Grothendieck's vast generalization mentioned by Martin, one also considers the dual definition of $\mathbb P^n(R)$ as equivalence classes of projective $R$-modules of rank one $Q$ equipped with a surjective morphism $R^{n+1}\to Q$.

- Thank you! Another good answer. Hard to know which one to pick, but I think I'll go for this one because it is closer to the generality I asked for. I also found this treated in a set of notes by Strickland on formal schemes. – Eivind Dahl Mar 17 '12 at 22:11
- Ah, the condition that the $r_i$ generate $R$ got much clearer when considering that this means that they do not all vanish at any point. This plugs nicely into thinking about projective schemes as looking for strictly non-trivial solutions to homogeneous equations. Neat :-) – Eivind Dahl Apr 14 '12 at 21:19
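To make the criterion $\Sigma Rr_i=R$ concrete, here is a small worked instance over $\mathbb Z$ (our illustration under the definitions above, not part of the original thread). Take $n=1$ and the free submodule $L=\mathbb Z\,(2,3)\subset\mathbb Z^2$. Since $\gcd(2,3)=1$, we can write $1 = (-1)\cdot 2 + 1\cdot 3$, so $2\mathbb Z + 3\mathbb Z = \mathbb Z$ and the criterion is met; explicitly,

$$\mathbb Z^2 = \mathbb Z\,(2,3)\ \oplus\ \mathbb Z\,(1,1),$$

because the matrix with rows $(2,3)$ and $(1,1)$ has determinant $2\cdot 1-3\cdot 1=-1$, a unit. So $\mathbb Z(2,3)$ is a point of $\mathbb P^1(\mathbb Z)$, whereas $\mathbb Z(2,4)$ is not: $2\mathbb Z+4\mathbb Z=2\mathbb Z\neq\mathbb Z$, matching the example $2\mathbb Z\oplus 0$ above.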
If $S$ is a scheme and $\mathcal{E}$ is a locally free module on $S$, then the projective space bundle $\mathbb{P}(\mathcal{E}) \to S$ represents the following functor:

$$\mathrm{Sch}/S \to \mathrm{Set}, \quad (f:X \to S) \mapsto \{\text{invertible quotients of } f^* \mathcal{E}\}$$

You can find this in every introduction to algebraic geometry, for example EGA I (1970), § 9. Actually this is the definition of $\mathbb{P}(\mathcal{E})$, and then it can be shown with a general principle (every Zariski sheaf which is locally representable is representable) that this functor is representable by a scheme.

In the special case $\mathcal{E} = \mathcal{O}_S^{d+1}$, one writes $\mathbb{P}^d_S$ for this $S$-scheme, and it represents the functor

$$\mathrm{Sch}/S \to \mathrm{Set}, \quad (f:X \to S) \mapsto \{\text{invertible quotients of } \mathcal{O}_X^{d+1}\}.$$

Of course you can even specialize to $S=\mathrm{Spec}(k)$ for some ring $k$ and restrict to $k$-algebras. But in my opinion it is hard to really understand the projective space when it is only defined as a functor on $k$-algebras. By the way, I first understood Grassmannians when I learned the general "global" definition in loc. cit. -- these chart definitions in topology are only confusing ...

- Thank you! I agree charts might not be all too great for understanding projective spaces, but the first thing I really understood and liked was the projective line, patched together by pieces of $k[t]$. It makes it clear to me why a meromorphic function on $X$ is the same thing as a morphism $X\to\mathbb{P}^1$. This complements nicely the notion of a regular function as a morphism $X\to\mathbb{A}^1$. If I was able to see from the definition of $\mathbb{P}^I$ that this was the case I would be very happy. At least you've told me what I need to stare at until it makes sense :-) – Eivind Dahl Mar 16 '12 at 23:02
- 1 Charts can be confusing, but they are useful for doing local calculations. It's important to understand both approaches (as well as the homogeneous space point of view). – Michael Joyce Mar 17 '12 at 0:11
- Michael: I agree, but often computations with charts are made without any global data. This is always confusing and clumsy. @Eivind: You can derive this from the functorial definition. – Martin Brandenburg Mar 18 '12 at 1:09
- This got way clearer with time. Thanks again :-) – Eivind Dahl Apr 14 '12 at 21:19
- @Eivind: I am very glad to hear that. – Martin Brandenburg Mar 26 at 0:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 68, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9427683353424072, "perplexity_flag": "head"}
http://mathoverflow.net/questions/30220/abstract-thought-vs-calculation/32106
## Abstract Thought vs Calculation

Jeremy Avigad and Erich Reck, in their remarkable historical paper "Clarifying the nature of the infinite: the development of metamathematics and proof theory", claim that one of the factors in the rise of abstract mathematics in the late 19th century (as opposed to concrete mathematics or hard analysis) was the fact that, by using more abstract notions, we can avoid a lot of calculation and still obtain the same results. Let me quote them:

"The gradual rise of the opposing viewpoint, with its emphasis on conceptual reasoning and abstract characterization, is elegantly chronicled by Stein [110], as part and parcel of what he refers to as the 'second birth' of mathematics. The following quote, from Dedekind, makes the difference of opinion very clear:

A theory based upon calculation would, as it seems to me, not offer the highest degree of perfection; it is preferable, as in the modern theory of functions, to seek to draw the demonstrations no longer from calculations, but directly from the characteristic fundamental concepts, and to construct the theory in such a way that it will, on the contrary, be in a position to predict the results of the calculation (for example, the decomposable forms of a degree).

In other words, from the Cantor-Dedekind point of view, abstract conceptual investigation is to be preferred over calculation."

So, my question is: do you know some concrete examples, from concrete fields, of avoiding masses of calculation by the use of abstract notions? (The term "calculation" here means any type of routine technicality.) I can't remember where I read it, but one can find some examples in category theory and topoi (not sure). Thanks in advance

- 2 There's the puzzle of the bird darting back and forth between two oncoming trains, and asking how far the bird traveled up to the moment of impact. Unless clearer examples are given, I submit that alternative and simpler calculations may be examples of what you ask (rate*time vs summing an infinite series). Gerhard "Ask Me About System Design" Paseman, 2010.07.01 – Gerhard Paseman Jul 1 2010 at 18:59
- 8 Galois wrote, before Dedekind: "Since the beginning of the [19th] century, computational procedures have become so complicated that any progress by those means has become impossible, without the elegance which modern mathematicians have brought to bear on their research, and by means of which the spirit comprehends quickly and in one step a great many computations. [...] Classify [operations] according to their complexities rather than their appearances! This, I believe, is the mission of future mathematicians. This is the road on which I am embarking in this work." – KConrad Jul 1 2010 at 19:46
- 2 Community-wiki? – Harry Gindi Jul 1 2010 at 19:49
- 22 While I entirely agree that there are some excellent examples where abstract thinking is vastly superior to local, computational thinking, I also get the feeling that this has been "pushed too far". More precisely, abstraction has so taken over mainstream mathematics that "computational thinking" is becoming a rarer skill (and resurfaced in Computer Science instead?). I applaud the deeper understanding gained by abstract thinking, but deplore the lack of 'intuition' gained by a definite facility with computation. I wish the 'balance point' between these was less skewed. – Jacques Carette Jul 1 2010 at 21:43
- 5 @Jacques: Well, I didn't ask for a comparison of these two approaches. I did ask about some of the nice sides of the abstract approach. – Sergei Tropanets Jul 1 2010 at 23:57
## 15 Answers

Hilbert's finiteness theorem, which arguably "killed classical invariant theory" and resulted in the creation of abstract algebra.

Hilbert's first work on invariant functions led him to the demonstration in 1888 of his famous finiteness theorem. Twenty years earlier, Paul Gordan had demonstrated the theorem of the finiteness of generators for binary forms using a complex computational approach. The attempts to generalize his method to functions with more than two variables failed because of the enormous difficulty of the calculations involved. Hilbert realized that it was necessary to take a completely different path. As a result, he demonstrated Hilbert's basis theorem: showing the existence of a finite set of generators for the invariants of quantics in any number of variables, but in an abstract form. That is, while demonstrating the existence of such a set, it was not a constructive proof — it did not display "an object" — but rather, it was an existence proof and relied on use of the Law of Excluded Middle in an infinite extension.

- 2 This is a canonical example. – Victor Protsak Jul 1 2010 at 20:43
- 5 It is strange that so many know the Hilbert vs Gordan story but so few know what is in the 1868 paper by Gordan. It is not a computational approach but a very deep analysis of the graph theoretical structure of the invariants of binary forms. – Abdelmalek Abdesselam Jul 9 2010 at 3:40
- 1 @Abdelmalek Abdesselam: Great comment. Thank you! – Andrey Rekalo Jul 9 2010 at 13:57

My favorite theorem, the Atiyah-Singer index theorem, seems to have the desired property. The theorem states that the Fredholm index of the Dirac operator on a compact spin manifold $M$ is equal to the $\hat{A}$ genus. There are two essentially different types of proofs: a global, conceptual argument based on little or no calculation, and a detailed local proof involving working with explicit solutions to PDEs. There are many variations and elaborations on the two approaches; here is a basic overview.

Global Proof: One considers the notion of an index map $K(T^*M) \to \mathbb{Z}$ from the K-theory of the tangent bundle of $M$ to the integers, uniquely characterized by a few key axioms. One then constructs two maps which satisfy the axioms (and hence are equal): an analytic index map built using functional analysis, and a topological index map built out of an embedding of $M$ into $\mathbb{R}^n$ and the Thom isomorphism in K-theory. The symbol of an elliptic operator $D$ gives rise to an element of $K(T^*M)$; its analytic index is simply the Fredholm index of $D$, while in the case where $D$ is the Dirac operator the topological index can be identified with the $\hat{A}$ genus (upon taking Chern characters).

Local Proof: One first proves that the Fredholm index of $D$ is given by $Tr_s(e^{-t D^2})$, the supertrace of the solution operator for the heat equation for $D$. A standard iterative method for solving the heat equation yields an asymptotic expansion for the smoothing kernel $k_t$ of the heat operator, so since the Fredholm index is independent of $t$ one is led to try to calculate the constant term in the asymptotic expansion of $tr_s(k_t)$.
The strategy (as simplified by Getzler) is to develop a symbol calculus for $D$ which rescales away everything but the constant term. One shows that the appropriate symbol satisfies a certain explicit differential equation (the "quantum mechanical harmonic oscillator") which one can explicitly solve (Mehler's formula). The $\hat{A}$ class manifests itself, as if by magic.

I think the Atiyah-Singer index theorem is a particularly great example of what you are referring to because it is very difficult to see why the explicit calculations accomplish the same thing as the global, conceptual arguments. At least, nobody has explained it to me to my satisfaction. For example, Bott periodicity plays an essential role in the construction of both the analytic and topological index maps, but if it makes an appearance in the local proof then it is heavily disguised.

- Thank you very much for the example! – Sergei Tropanets Jul 1 2010 at 23:34

A toy example, using the Yoneda lemma:

Claim: There are two canonical bialgebra structures (the "additive" and "multiplicative" structures) on $k[x]$, and one of them (the additive one) in fact makes it a Hopf algebra.

Proof 1: (Calculation.) Write down the formulas; check the axioms! This isn't an especially long calculation, but it's a bit tedious; while seeing the formulas is nice, checking the axioms isn't (to my taste) especially enlightening.

Proof 2: (Abstract.) "Bialgebra" = "comonoid in ($k$-Alg, $\otimes$)". We know $k[x]$ is the free $k$-algebra on one generator, so there's a natural isomorphism $\mathrm{Hom}(k[x],A) \cong A$, for any $k$-algebra $A$. So $\mathrm{Hom}(k[x],A)$ is naturally an algebra — so it has two natural monoid structures, $+$ and $\cdot$, and under $+$ it's moreover a group. By Yoneda, these must correspond to two comonoid structures on $k[x]$, and the one corresponding to $+$ must be Hopf!

Now, what I really like about this proof is that it still connects closely to the computations. By the way that the Yoneda lemma works, you can read off what the two coalgebra structures actually are; but now you don't have to check the axioms, since you already know they hold! Also, you now know there'll be a "co-distributive law" connecting the two, which you might never have thought of just from the first approach… And also, this gives a way of looking for bialgebra structures on other algebras: look at what they classify/represent!

This shows up, I think, a lot of the power of abstract approaches. They put formulas and calculations into a bigger picture; they can help you do interesting calculations, while letting you skip tedious ones; and they can suggest calculations you might not have thought of doing otherwise. But (as you can probably guess from that) I love calculation too: I wouldn't want either without the other. If abstract nonsense is the garden, concrete computations are the flowers.

- 2 To my taste, this is actually the best example yet. – Jacques Carette Jul 2 2010 at 12:25
- 4 If you do the "computational" proof nicely enough, it is in fact exactly the same as the other one: it would be silly, rather than "computational", to prove, say, coassociativity by actually checking that the required identity holds for all elements in $k[x]$. – Mariano Suárez-Alvarez Jul 15 2010 at 16:20
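For concreteness, here is what "reading off the two coalgebra structures" yields; this is a standard computation spelled out here under the identification $\mathrm{Hom}(k[x],A)\cong A$, not quoted from the original answer. The addition and multiplication on $A$ correspond under Yoneda to the comultiplications

$$\Delta_{+}(x) = x\otimes 1 + 1\otimes x, \qquad \varepsilon_{+}(x)=0, \qquad S(x)=-x,$$

$$\Delta_{\cdot}(x) = x\otimes x, \qquad \varepsilon_{\cdot}(x)=1,$$

with the antipode $S$ existing only in the additive case, matching the claim that only the additive structure is Hopf.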
Arguments by mathematical induction seem to provide an entire class of examples of the phenomenon, where computation is replaced by a higher level of reasoning. With induction, one uses a comparatively abstract understanding of how a property propagates from smaller instances to larger instances, in order to arrive at a fuller understanding of the property in particular cases, without need for explicit calculation. Thus, one can see that a particular finite graph or group or whatever kind of structure has a property, not by calculating it in that instance, but by an abstract inductive argument, on size or degree or rank or whatever. A complex graph-theoretic calculation is avoided by understanding what happens in general when a point is deleted.

And there are, of course, extremely concrete elementary instances. We all know, for example, how to use induction to prove that $1+2+\cdots+n=n(n+1)/2$. Thus, the comparatively abstract inductive argument predicts definite values for concrete sums $1+2+\cdots+105$.

Similarly, we often understand the iterates of a function $f^n(x)$ without calculating them, or the powers of a matrix $A^n$, or the successive derivatives of a function, all without calculation, by understanding the inductive relationship in effect at each step. Surely mathematics is covered with dozens or hundreds of similar examples, of every degree of complexity and every level of abstraction.

The first proof I ever saw of the orthogonality relations for characters of finite groups was computational: it did a lot of matrix computations and manipulations of sums, which I didn't like at all. There is a much more conceptual proof which begins by observing that Schur's lemma is equivalent to the claim that

$$\dim \text{Hom}(A, B) = \delta_{AB}$$

for irreducible representations $A, B$, where $\text{Hom}$ denotes the set of $G$-module homomorphisms. One then observes that $\textbf{Hom}(A, B) = A^{*} \otimes B$ is itself a $G$-module and $\text{Hom}$ is precisely the submodule consisting of the copies of the trivial representation. Finally, the projection from $\textbf{Hom}$ to $\text{Hom}$ can be written

$$v \mapsto \frac{1}{|G|} \sum_{g \in G} gv$$

and the trace of a projection is the dimension of its image.

I particularly like this proof because the statement of the orthogonality relations is concrete and not abstract, but this proof shows exactly where the abstract content (Schur's lemma, Maschke's theorem) is made concrete (the trace computation). It also highlights the value of viewing the category of $G$-modules as an algebraic object in and of itself: a symmetric monoidal category with duals.

In addition, this interpretation of Schur's lemma suggests that $\text{Hom}(A, B)$ behaves like a categorification of the inner product in a Hilbert space, where the contravariant/covariant distinction between $A, B$ corresponds to the conjugate-linear/linear distinction between the first and second entries of an inner product. This leads to 2-Hilbert spaces and is a basic motivation for the term "adjoint" in category theory, as explained for example by John Baez here. It is also related to quantum mechanics, where one thinks of the inner product as describing the amplitude of a transition between two states and of $\text{Hom}(A, B)$ as describing the way those transitions occur. John Baez explains related ideas here.
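As a sanity check on the relation $\dim\mathrm{Hom}(A,B)=\delta_{AB}$, here is a small Python sketch (ours, not from the original answer) that evaluates the character inner product $\frac{1}{|G|}\sum_{g}\chi_A(g)\overline{\chi_B(g)}$ for the three irreducible characters of $S_3$, entered by hand:

```python
# Conjugacy classes of S3: identity (size 1), transpositions (3), 3-cycles (2).
class_sizes = [1, 3, 2]
order = sum(class_sizes)                    # |S3| = 6

# Character values on those classes: trivial, sign, 2-dimensional standard rep.
chars = {
    "trivial":  [1,  1,  1],
    "sign":     [1, -1,  1],
    "standard": [2,  0, -1],
}

def inner(chi1, chi2):
    """<chi1, chi2> = (1/|G|) * sum over classes of size * chi1 * chi2.
    (The characters of S3 are real, so no complex conjugation is needed.)"""
    return sum(n * a * b for n, a, b in zip(class_sizes, chi1, chi2)) / order

for name1, c1 in chars.items():
    for name2, c2 in chars.items():
        print(name1, name2, inner(c1, c2))  # 1.0 on the diagonal, 0.0 off it
```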
One striking example that comes to mind is Nathan Jacobson's proof that rings satisfying the identity $X^m = X$ are commutative. This is model-theoretic and proceeds by a certain type of factorization which reduces the problem to the (subdirectly) irreducible factors of the variety. These turn out to be certain finite fields, which are commutative, as desired. By (Birkhoff) completeness there must also exist a purely equational proof (in the language of rings), but even for small $m$ this is notoriously difficult; e.g. $m = 3$ is often posed as a difficult exercise. It's only recently that such a general non-model-theoretic equational proof was discovered by John Lawrence (as Stan Burris informed me). I don't know if it has been published yet, but see their earlier work [1].

So here, by "higher-order" conceptual structural reasoning, one is able to escape the confines of first-order equational logic and give a more conceptual proof than the brute-force equational proofs: arguments so devoid of intuition that they can be discovered by an automatic theorem prover.

[1] S. Burris and J. Lawrence, Term rewrite rules for finite fields. International J. Algebra and Computation 1 (1991), 353-369. http://www.math.uwaterloo.ca/~snburris/htdocs/MYWORKS/PAPERS/fields3.pdf

- mathoverflow.net/questions/29590/… – J. H. S. Jul 2 2010 at 6:41
- Jacobson's Theorem says that the subset $S$ of $\mathbb Z[X]$ formed by the polynomials $X^n-X$, $n > 1$, has the following property. If, for every $a$ in any given ring $A$, there is an $f$ in $S$ such that $f(a)=0$, then $A$ is commutative. Is $S\cup-S$ maximal for this property? – Pierre-Yves Gaillard Jul 2 2010 at 16:31

Some of the prettiest examples of Dedekind's structuralism arise from revisiting proofs in elementary number theory from a highbrow viewpoint, e.g. by reformulating them after noticing hidden structure (ideals, modules, etc). A striking example of such is the generalization and unification of elementary irrationality proofs of $n$th roots by way of Dedekind's notion of conductor ideal. This gem seems to be little-known (even to some number theorists, e.g. Estermann and Niven). Since I've already explained this at length elsewhere, I'll simply link [1] to it.

At first glance the various "elementary" proofs seem to be magically pulled out of a hat, since the crucial structure of the conductor ideal is obfuscated by the descent "calculations" of various lemmas (that have all been inlined vs. abstracted out). However, once one abstracts out the hidden innate structure, the proof becomes a striking one-liner: simply remark that in a PID a conductor ideal is principal, so cancelable; thus PIDs are integrally closed. Here, the complexity of the calculations verifying the descent (induction) etc. is abstracted out and tidily encapsulated once and for all in the lemma that Euclidean domains are PIDs.

Following Dedekind's ground-breaking insight, we recognize in many number-theoretical contexts the innate structure of an ideal, and we exploit that structure whenever possible. For much further detail and discussion see all of my posts in the thread [1] (click on the thread's title/subject at the top of the frame to see a threaded view in the Google Groups usenet web interface).

When I teach such topics I emphasize that one should always look for "hidden ideals" and other obfuscated innate structure. Alas, too many students cannot resist the urge to dive in and "calculate" before pursuing conceptual investigations. It was such methodological principles that led Dedekind to discover almost all of the fundamental algebraic structures. Nowadays we often take for granted such structural abstractions and methodology.
But it was certainly a nontrivial task to discover these in the rarefied mathematical atmosphere of Dedekind's day (and it remains so even nowadays for students when first learning such topics). Emmy Noether wasn't joking when she said "it's all already in Dedekind". It deserves emphasis that this remark also remains true for methodological principles.

[1] sci.math, 20 May 2009, Irrationality of sqrt(n^2 - 1)

- Thanks very much for the answer! – Sergei Tropanets Jul 2 2010 at 15:39
- 4 I don't think the proof of irrationality by descent is non-conceptual, since it is illustrating descent itself as a technique which is worthwhile in other settings, not all of which have conductors lying around. – KConrad Jul 15 2010 at 23:09
- 2 But here I'm concerned with the teaching of number theory, not induction. As such, deliberately obfuscating key innate structure such as an ideal is poor pedagogy. Do you think it would be pedagogically wise to present only the Lindemann-Zermelo proof of unique factorization in a number theory course? – Bill Dubuque Jul 16 2010 at 2:25

A beautiful classical example from Functional Analysis is the Hausdorff moment problem: characterize the sequences $m:=(m_0,m_1,\dots)$ of real numbers that are moments of some positive, finite Borel measure on the unit interval $I:=[0,1]$:

$$m_k=\int_I x^k\,d\mu(x).$$

A necessary condition immediately comes from $\int_I x^{j}(1-x)^{k} d\mu(x)\geq0$, and is expressed by saying that $m$ has to be a "completely monotone" sequence, that is,

$$(I-S)^k m\ge0,$$

where $S$ is the shift operator acting on sequences (in other words, the $k$-th discrete difference of $m$ has the sign of $(-1)^k$: $m$ is positive, decreasing, convex, ...). The nontrivial fact is that this is also a sufficient condition, thus characterizing the sequences of moments. Moreover, the measure is then unique. I'll quote two proofs, both very nice. The first is close to the original one by Hausdorff; the second is a consequence of Choquet's theorem.

Proof I, with computation (skipped). Bernstein polynomials give a sequence of linear positive operators strongly convergent to the identity,

$$B_n:C^0(I)\to C^0(I).$$

Therefore the transpose operators

$$B_n^*:C^0(I)^*\to C^0(I)^*$$

give a sequence of operators weakly* convergent to the identity. If you write down what $B_n^*(\mu)$ is for a Radon measure $\mu\in C^0(I)^*$, you'll observe that it is a linear combination of Dirac measures located at the points $\{k/n\}_{0\leq k\leq n}$, and with coefficients only depending on the moments of $\mu$. This gives a uniqueness result and a heuristic argument: if $m$ is a sequence of moments for some measure $\mu$, then $\mu$ can be reconstructed from its moments as a weak* limit of discrete measures $\mu_n:=B_n^*(\mu)$.

This observation leads to a constructive solution of the problem. Indeed, given a completely monotone sequence $m$, consider the corresponding sequence of measures $\mu_n$ suggested by the expression of $B_n^*(\mu)$ in terms of the $m_k$. Due to the assumption of complete monotonicity they turn out to be positive measures, and with some more computations one shows that they converge weakly* to a measure $\mu$ with moment sequence $m$; a concrete instance of these coefficients is computed below.
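Here is a minimal Python sketch of that construction (ours, not part of the original answer). Reading off the coefficients of $B_n^*(\mu)$ gives the weight $\binom{n}{k}\,((I-S)^{n-k}m)_k$ at the point $k/n$; we apply this to the moments $m_k=1/(k+1)$ of Lebesgue measure on $[0,1]$:

```python
from math import comb

def hausdorff_weights(m, n):
    """Weight of the discrete measure B_n^*(mu) at the point k/n:
    C(n,k) * ((I - S)^(n-k) m)_k, computed from the moment sequence m."""
    return [comb(n, k) * sum((-1) ** j * comb(n - k, j) * m[k + j]
                             for j in range(n - k + 1))
            for k in range(n + 1)]

n = 6
m = [1.0 / (k + 1) for k in range(n + 1)]   # moments of Lebesgue measure
w = hausdorff_weights(m, n)
print(w)         # all positive: complete monotonicity in action
print(sum(w))    # total mass m_0 = 1; here each weight is exactly 1/(n+1)
```

That the weights come out uniform (each $1/(n+1)$) reflects the identity $\binom{n}{k}\int_0^1 x^k(1-x)^{n-k}\,dx = \frac{1}{n+1}$ for every $k$.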
Proof II, no or little computation. Completely monotone sequences with $m_0=1$ form a closed convex (thus weakly* compact and metrizable) subset $M$ of $l^\infty$. A one-line, smart computation shows that the extremal points of $M$ are exactly the exponential sequences $m^{(t)}:=(1,t,t^2,\dots)$, for $0\leq t \leq1$ (these turn out to be the moments of Dirac measures at points of $I$, of course). By Choquet's theorem, for any given $m\in M$ there exists a probability measure on $\mathrm{ex}(M)$, which we identify with $I$, such that $m=\int_I m^{(t)} d\mu(t)$. But this exactly means $m_k=\int_I t^{k} d\mu(t)$ for all $k\in\mathbb{N}$.

A wonderful example is the proof of the Poincare Lemma I sketch here, as compared to the proof in e.g. Spivak's Calculus on Manifolds. The latter is extremely computational and, IIRC, not illuminating; it proves the de Rham cohomology of a star-shaped domain vanishes. The former proof shows (the stronger result) that the de Rham complex $\Lambda_{DR}(M)$ is null-homotopic for $M$ contractible; while this does involve some computation, it is very simple and conceptual. The proof is about half a page long in total, and could probably be shortened. It was shown to me by Professor Dennis Gaitsgory; I haven't seen it elsewhere, though I'm sure it is in the literature. You can skip to the end of the paper (page 26) to see the proof; much of it is aimed at an undergraduate audience that has not yet seen any homological algebra, or even more conceptual linear algebra.

Essentially, the proof works by

1) Noting that the de Rham complex construction is functorial, via pullback of differential forms;

2) Noting that a homotopy of maps $M\to N$ induces a homotopy of maps of chain complexes; and

3) Noting that for $M$ contractible, $\operatorname{id}_M$ is homotopic to a constant map, and thus the pullback via $\operatorname{id}_M$ is both zero and the identity on cohomology.

- @Daniel: I looked briefly at your article and it seems very nice. I did notice that there are some missing left parentheses, especially on page 3. One of the merits of online journals is that it is very easy to correct typos like this -- you might want to contact the editors in this regard. – Pete L. Clark Jul 2 2010 at 7:35
- @Pete: Thanks! There are actually also one or two substantive errors as well, as I wrote this in my first year doing "real" mathematics -- unfortunately, the journal seems to be defunct. I do wish I could rewrite one or two portions to reflect my current understanding of the subject. – Daniel Litt Jul 2 2010 at 12:15
- @Daniel: someone in or around Harvard must be maintaining the site. In my experience (I was a grad student there, a little before your time) the faculty and staff there are quite helpful, especially if you show a little proactivity. It might be worth sending an email to, e.g., Dennis Gaitsgory. – Pete L. Clark Jul 2 2010 at 16:10
- 1 @Pete: That's a good call; and I should rewrite the paper and post it somewhere else even if I can't get it replaced there. The argument for step (2) is quite beautiful and deserves better motivation than I give it. And Professor Gaitsgory is great -- I took all but one course he offered during my four years there. It's also worth reading this brief and hilarious note (thehcmr.org/issue1_1/gaitsgory.pdf) that he wrote for the same journal. – Daniel Litt Jul 2 2010 at 17:42
- @Daniel: the link to your note appears to be defunct now. Perhaps you could repost this article elsewhere? – Charles Rezk Dec 10 2010 at 19:39
– Charles Rezk Dec 10 2010 at 19:39 An example of a slightly different kind -- not eliminating all calculation, but showing that "all calculations are easy" -- is Dehn's algorithm in combinatorial group theory. Dehn showed, using the combinatorics of hyperbolic tessellations, that the word problem for surface groups is solvable using only obvious word reductions. In this case, calculation is avoided not so much by abstraction, but by thinking geometrically rather than algebraically. - What I find amazing about this is that the "first calculation, then conception" thing happened in reverse. Namely, Dehn gave a beautiful geometric argument, and then combinatorial group theorists spent the next 50 years replacing the beautiful geometry with messy algebra. It took Gromov to fix things... – Andy Putman Jul 2 2010 at 2:11 1 Andy: Although there is some truth to that, and some people may even believe it, surely it is just another legend made up in order to simplify the messy history: for example, what do you make of the work of Coxeter and Tits? – Victor Protsak Jul 2 2010 at 2:33 2 Observe the modifier "combinatorial" before group theory. Certainly I am not claiming that there was no interaction between group theory and geometry in this period! I am referring more modestly to the school of group theory exemplified by the classic books of Magnus-Karrass-Solitar and Lyndon-Schupp. Much of this was directly inspired by Dehn -- for instance, small cancellation theory was in part an attempt to find algebraic analogues for Dehn's geometric arguments. Indeed, Magnus himself was a student of Dehn. Coxeter and Tits belonged to a rather different tradition. – Andy Putman Jul 2 2010 at 3:00 Well, I meant it rather literally, that they used the same method (going back to Poincare) to solve an analogous problem in Coxeter groups. If you want to say that specifically MKS (1966) and LS (1977) were computational and influential, it's hard to disagree on both counts, but then your comment loses much of its force, especially since Gromov also wasn't part of the same tradition... – Victor Protsak Jul 2 2010 at 19:57 1 It's true that some of the small cancellation conditions allow flats. However, Dehn's solution doesn't! In fact, it is a theorem that a group is Gromov hyperbolic if and only if it has a presentation such that the "Dehn algorithm" (namely, if you see more than half a relation, then replace it with the other half) solves the word problem. – Andy Putman Jul 7 2010 at 19:38 In computability theory, it is often necessary to prove some particular function is a "computable function". Until the 1960s, this was most commonly done by actually demonstrating a formal algorithm for the function in a kind of pseudocode, or giving a set of recursion equations. Needless to say, this style of presentation was heavily symbolic and conveyed little intuition about why the function was defined the way it was. The more modern style of presentation relies instead on having a good sense of the closure properties of computable functions, and identifying a large class of basic computable functions (the "primitive recursive functions"). So one can simply explain how to obtain the function at hand from primitive recursive functions using operations that preserve computability. This style of proof allows for much more detailed exposition of the intuition behind the definition of a computable function.
Everyone in the field understands how, in principle, to take this kind of proof and obtain a formal algorithm, if it is ever necessary. - Is it ever necessary? – KConrad Jul 1 2010 at 22:10 1 An even more succinct way to prove that a function is computable is to give an informal description of its computation and then appeal to Church's thesis. Maybe this is frowned upon today, but it was the style of some classic papers of the 1940s and 1950s (Post and Friedberg). – John Stillwell Jul 2 2010 at 0:33 2 @John Stillwell: Yes, that is what I am referring to. But "informal" though it may be, it's just as rigorous as any other mathematical proof. The only reason it was ever considered "informal" is by comparison with previous work in which authors actually wrote down formal definitions of every function they defined. – Carl Mummert Jul 2 2010 at 0:56 1 People have differing standards here. I once wrote a paper in which I needed to prove that a certain language was regular (en route to a result about group theory). I gave an informal description of the program accepting it which made it clear that it only needed a universally bounded amount of memory to run no matter what the input size. However, the referee insisted that I write out a detailed description of the state graph, etc. Actually writing out the details was a nightmare... – Andy Putman Jul 2 2010 at 2:16 1 John, showing that a function is computable via appeal to the Church-Turing thesis is absolutely essential to current research in computability theory. No-one would dream of giving fully formalised constructions in their papers; it would take thousands of hours, run to ridiculous numbers of pages and be utterly unreadable. It's definitely not "frowned upon today" by people working in the area. – Phil Ellison Jul 3 2010 at 21:49 When I was a student, I once watched a professor (a famous and brilliant mathematician) spend a whole class period proving that the functor $M\otimes-$ is right exact. (This was in the context of modules over a commutative ring.) He was working from the generators-and-relations definition of the tensor product. With what I'd consider the "right" definition of $M\otimes-$, as the left adjoint of a Hom functor, the proof becomes trivial: Left adjoints preserve colimits, in particular coequalizers. Since the functors in question are additive, $M\otimes-$ also preserves 0 and therefore preserves cokernels. And that's what right-exactness means. - 2 This isn't quite fair: to give that proof, you have to define a fair amount of categorical terminology and prove several categorical lemmas. I don't think it really is any easier... – Andy Putman Jul 16 2010 at 3:40 1 I don't see anyone objecting to the Yoneda lemma on the same grounds. – Victor Protsak Jul 17 2010 at 1:28 1 Doesn't this also depend on how exactness was defined? In the sense that in class it might have been defined in terms of certain things being surjections (rather than epimorphisms or cokernels)? I bring this up because people in Banach spaces/algebras tend to talk about exact sequences when they/we mean "exact after applying the forgetful functor to vector spaces", and this is not the same as the categorical ker=coker formulation... – Yemon Choi Jul 18 2010 at 7:03 I'm pretty sure that, by this point in the course, we knew that surjections of modules are the same thing, up to isomorphism, as quotients (i.e., cokernels).
But, considering that this happened more than 40 years ago, I can't absolutely guarantee that we were taught basic things like that before we ever saw a tensor product. – Andreas Blass Jul 18 2010 at 21:45 I mentioned this somewhere else too. Many general statements in algebraic geometry can be proved via direct tedious verification or by abstract thought. In fact, the notions of abstract algebraic variety and scheme were created precisely for this purpose. I will illustrate this with an example: showing that an elliptic curve is a group. Method 1: Define an elliptic curve over a field as a curve in Weierstrass form with nonzero discriminant. On top of this, define the addition and inverse laws using the chord-and-tangent process, obtaining algebraic expressions (see the computational sketch at the end of this page). To show that the elliptic curve is a group, you have to show the addition is associative, and then do a very tedious verification of the identities. Method 2: Another way is to use elliptic functions to prove the identity in the complex case. Since the algebraic group law holds over the complex numbers, it is satisfied by an infinite number of algebraically independent solutions, and therefore the group law must hold identically, over any field whatsoever. Of course this needs to be made precise with the Lefschetz principle. Method 3 (my favorite): Later algebraic geometry developed and it was possible to prove statements without relying on the Lefschetz principle. For instance, the group law on an elliptic curve is a consequence of the Riemann-Roch theorem, which was proved in its full power by Weil, Hirzebruch and Grothendieck. But this might be seen as a sledgehammer by some; in any case it is a remarkable sledgehammer. - 1 More generally, the algebraic theory of abelian varieties (using line bundles and Riemann-Roch, as laid down in Mumford's book) is a conceptual reworking of the theory of theta series over the complex numbers, which had a more computational taste. – Simon Pepin Lehalleur Aug 6 2010 at 6:19 All I see here are calculations. It only changed the nature of the object which you calculate with and its relation to the final goal. For this reason I still cannot make clear sense of the question. - I think that Gauss's theorem on constructible polygons fits this category. For more than 2000 years the actual constructions only led to 4 classes: $2^n$, $2^n\cdot 3$, $2^n\cdot 5$, $2^n\cdot 15$. Gauss's abstract approach solved the problem. The interesting case $n=17$ becomes easy to understand, and easy to construct once one understands the abstract approach, but hard to attack otherwise. $n=257$ and especially $n=65537$, and the ones derived from these, are the perfect examples of easy abstract proof vs. extremely complicated calculations. - Nick, thanks for the nice answer! – Sergei Tropanets Dec 11 2010 at 15:43
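To make Method 1 above concrete, here is a minimal Python sketch of the chord-and-tangent formulas (my own illustration, not part of the discussion above; the short Weierstrass curve $y^2=x^3-2x$ and the sample points are hypothetical choices, and the formulas assume characteristic $\neq 2,3$ with affine points only):

```python
from fractions import Fraction

# Chord-and-tangent addition on y^2 = x^3 + a*x + b; None is the point at infinity.
def ec_add(P, Q, a):
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and y1 == -y2:
        return None                          # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) / (2 * y1)   # slope of the tangent at P
    else:
        lam = (y2 - y1) / (x2 - x1)          # slope of the chord through P and Q
    x3 = lam * lam - x1 - x2
    y3 = lam * (x1 - x3) - y1
    return (x3, y3)

# Spot-check associativity on y^2 = x^3 - 2x with exact rational arithmetic:
a = Fraction(-2)
P = (Fraction(-1), Fraction(1))
Q = (Fraction(0), Fraction(0))
R = (Fraction(2), Fraction(2))
print(ec_add(ec_add(P, Q, a), R, a) == ec_add(P, ec_add(Q, R, a), a))  # True
```

Running the same computation symbolically, with the coordinates as indeterminates, is exactly the "very tedious verification" that Method 1 requires and that Methods 2 and 3 avoid.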
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 110, "mathjax_display_tex": 6, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9412059187889099, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/tagged/asymptotics?page=1&sort=votes&pagesize=15
# Tagged Questions Questions involving asymptotic analysis, including growth of functions, Big-O notation, Big-Omega and Big-Theta notations. 1answer 6k views ### How many fours are needed to represent numbers up to $N$? The goal of the four fours puzzle is to represent each natural number using four copies of the digit $4$ and common mathematical symbols. For example, \$165=(\sqrt{4} + \sqrt{\sqrt{{\sqrt{4^{4!}}}}}) ... 3answers 368 views ### Sequence of numbers with prime factorization $pq^2$ I've been considering the sequence of natural numbers with prime factorization $pq^2$, $p\neq q$; it begins 12, 18, 20, 28, 44, 45, ... and is A054753 in OEIS. I have two questions: What is the ... 1answer 545 views ### How many primes does Euclid's proof account for? This is a passing curiosity, and I haven't found any duplicates, so I thought I'd share my thoughts. In the most basic (or at least the most famous) proof of the infinitude of prime numbers, due to ... 2answers 652 views ### How to show that $\sum\limits_{k=1}^{n-1}\frac{k!k^{n-k}}{n!}$ is asymptotically $\sqrt{\frac{\pi n}{2}}$? According to "Concrete Mathematics" on page 434, elementary asymptotic methods show that $\displaystyle \sum_{k=1}^{n-1}\frac{k! \; k^{n-k}}{n!}$ is asymptotically $\sqrt{\frac{\pi n}{2}}$. Does ... 1answer 672 views ### How does $\sum_{p<x} p^{-s}$ grow asymptotically for $\text{Re}(s) < 1$? Note the $p < x$ in the sum stands for all primes less than $x$. I know that for $s=1$, $$\sum_{p<x} \frac{1}{p} \sim \ln \ln x ,$$ and for $\mathrm{Re}(s) > 1$, the partial sums ... 9answers 2k views ### What is the purpose of Stirling's approximation to a factorial? Stirling approximation to a factorial is $$n! \sim \sqrt{2 \pi n} \left(\frac{n}{e}\right)^n.$$ I wonder what benefit can be got from it? From computational perspective (I admit I don't ... 2answers 825 views ### What are the rules for equals signs with big-O and little-o? This question is about asymptotic notation in general. For simplicity I will use examples about big-O notation for function growth as $n\to\infty$ (seen in algorithmic complexity), but the issues that ... 2answers 621 views ### Proof $\sum\limits_{k=1}^n \binom{n}{k}(-1)^k \log k = \log \log n + \gamma +\frac{\gamma}{\log n} +O\left(\frac1{\log^2 n}\right)$ More precisely, $$\sum_{k=1}^n \binom{n}{k}(-1)^k \log k = \log \log n + \gamma +\frac{\gamma}{\log n} -\frac{\pi^2 + 6 \gamma^2}{12 \log^2 n} +O\left(\frac1{\log ^3 n}\right).$$ This is Theorem 4 ... 3answers 390 views ### Asymptotic expression of an oscillatory integral Consider the integral $$f(\alpha,\beta)= \int_0^{2\pi}\,dx \sqrt{1- \cos(\alpha x ) \cos(\beta x)}$$ as a function of the two parameters $\alpha,\beta$. I am interested in the asymptotic behavior ... 5answers 618 views ### Asymptotics of $1^n + 2^{n-1} + 3^{n-2} +\cdots + (n-1)^2 + n^1$ Suppose $n\in\mathbb{Z}$ and $n > 0$. Let $$H_n = 1^n + 2^{n-1} + 3^{n-2} +\cdots + (n-1)^2 + n^1.$$ I would like to find a Big O bound for $H_n$. A Big $\Theta$ result would be even better. 3answers 3k views ### Prove that this function is bounded This is an exercise from Problems from the Book by Andreescu and Dospinescu. When it was posted on AoPS a year ago I spent several hours trying to solve it, but to no avail, so I am hoping someone ... 6answers 936 views ### Is there a formula for $\sum_{n=1}^{k} \frac1{n^3}$? 
I am searching for the value of $$\sum_{n=k+1}^{\infty} \frac1{n^3} \stackrel{?}{=} \sum_{n = 1}^{\infty} \frac1{n^3} - \sum_{n=1}^{k} \frac1{n^3} = \zeta(3) - \sum_{n=1}^{k} \frac1{n^3}$$ For which ... 2answers 694 views ### please solve a 2013 th derivative question? $f(x) = 6x^7\sin^2(x^{1000}) e^{x^2}$ Find $f^{(2013)}(0)$ A math forum friend suggest me to use big O symbol, however have no idea what that is, so how does that helping? 2answers 350 views ### Asymptotic behaviour of sums of consecutive powers Let $S_k(n)$, for $k = 0, 1, 2, \ldots$, be defined as follows $$S_k(n) = \sum_{i=1}^n \ i^k$$ For fixed (small) $k$, you can determine a nice formula in terms of $n$ for this, which you can then ... 2answers 265 views ### Asymptotic analysis of the integral $\int_0^1 \exp\{n (t+\log t) + \sqrt{n} wt\}\,dt$ The integral I'm trying to study is $$F(n) = \int_0^1 \exp\left\{n(t+\log t)+\sqrt{n}wt\right\}\,dt, \tag{1}$$ where $w$ is a fixed complex number with $\Re(w) < 0$ and $\Im(w) > 0$. As ... 2answers 469 views ### Asymptotics of sum of binomials How can you compute the asymptotics of $$S=n + m - \sum_{k=1}^{n} k^{k-1} \binom{n}{k} \frac{(n-k)^{n+m-k}}{n^{n+m-1}}\;?$$ We have that $n \geq m$ and $n,m \geq 1$. A simple application of ... 3answers 674 views ### How do you prove that $n^n$ is $O(n!^2)$? It seems obvious that: $$n^n \in O(n!^2)$$ But I can't seem to find a good way to prove it. 6answers 2k views ### Stirling's formula: proof? Suppose we want to show that $$n! \sim \sqrt{2 \pi} n^{n+(1/2)}e^{-n}$$ Instead we could show that $$\lim_{n \to \infty} \frac{n!}{n^{n+(1/2)}e^{-n}} = C$$ where $C$ is a constant. Maybe \$C = ... 2answers 327 views ### On the Limit of Stirling's Approximation I have recently proven the following curious identity: For real $x \geqslant 1$, \begin{align} \lfloor x \rfloor! = x^{\lfloor x \rfloor} e^{1-x} e^{\int_{1}^{x} \text{frac}(t)/t \ dt} \end{align} ... 2answers 305 views ### Can a function “grow too fast” to be real analytic? Does there exist a continuous function $\: f : \mathbf{R} \to \mathbf{R} \:$ such that for all real analytic functions $\: g : \mathbf{R} \to \mathbf{R} \:$, for all real numbers $x$, there exists ... 3answers 367 views ### A recurrence that wiggles? Consider the following sequence $a_n$: $a_1 = 0$ $a_n = 1 + \frac{1}{2^n-2} \sum_{i=1}^{n-1} \binom{n}{i} a_i$ The first few terms are $0,1,\frac{3}{2},\frac{13}{7},\frac{15}{7}$. The sequence ... 2answers 309 views ### What's the lower bound of the sum $S(n) = \sum_{k=1}^n \prod_{j=1}^k(1-\frac j n)$? If we have $$S(n) = \sum_{k=1}^n \prod_{j=1}^k(1-\frac j n)$$ What the lower bound of $S(n)$ when $n\to\infty$? PS: If I didn't make any mistake when I calculate $S(n)$, then it should be ... 6answers 563 views ### A question on the Stirling approximation, and $\log(n!)$ In the analysis of an algorithm this statement has come up:$$\sum_{k = 1}^n\log(k) \in \Theta(n\log(n))$$ and I am having trouble justifying it. I wrote \sum_{k = 1}^n\log(k) = \log(n!), \ \ ... 3answers 467 views ### Euler's Constant: The asymptotic behavior of $\left(\sum\limits_{j=1}^{N} \frac{1}{j}\right) - \log(N)$ I want to show that there exists a constant $C\in\mathbb{R}$ such that $$\sum_{j=1}^N \frac1{j} = \log(N)+C+O(1/N).$$ I know how to prove that the Euler-Mascheroni constant exists (which I ... 
4answers 341 views ### Large $n$ asymptotic of $\int_0^\infty \left( 1 + x/n\right)^{n-1} \exp(-x) \, \mathrm{d} x$ While thinking of 71432, I encountered the following integral: $$\mathcal{I}_n = \int_0^\infty \left( 1 + \frac{x}{n}\right)^{n-1} \mathrm{e}^{-x} \, \mathrm{d} x$$ Eric's answer to the linked ... 3answers 249 views ### Asymptotic formula for $\sum_{n \le x} \frac{\varphi(n)}{n^2}$ Here is yet another problem I can't seem to do by myself... I am supposed to prove that \sum_{n \le x} \frac{\varphi(n)}{n^2}=\frac{\log x}{\zeta(2)}+\frac{\gamma}{\zeta(2)}-A+O \left(\frac{\log ... 3answers 165 views ### Sufficient bound to conclude limit has certain value. $\lim {\left( {\int_0^1 {{{dx} \over {1 + {x^n}}}} } \right)^n}=\frac 1 2$ I am trying to show that $$\lim {\left( {\int\limits_0^1 {{{dx} \over {1 + {x^n}}}} } \right)^n}=\frac 1 2$$ Now, this can be done as follows. Using $x\mapsto x^{-1}$ we get that \int\limits_0^1 ... 1answer 200 views ### Estimating the integral $\int_0^1 (1-t^2)^{-1/2} e^{-nt} \,dt$ for large $n$. I would like to find the asymptotic behavior of the integral $$\int_0^1 (1-t^2)^{-1/2} e^{-nt} \,dt$$ for large $n$. It seems reasonably obvious that the integral goes to zero. At least it is ... 3answers 215 views ### Order of the smallest group containing all groups of order $n$ as subgroups. Let $n\in \Bbb N$ be fixed and $m\in \Bbb N$ be the least number such that there exists a group of order $m$ in which all groups of order $n$ can be (isomorphically) embedded. Can we deduce $n!=m$? 2answers 360 views ### A (non-artificial) example of a ring without maximal ideals As a brief overview of the below, I am asking for: An example of a ring with no maximal ideals that is not a zero ring. A proof (or counterexample) that $R:=C_0(\mathbb{R})/C_c(\mathbb{R})$ is a ... 1answer 146 views ### If $\lambda_n \sim \mu_n$, is it true that $\sum \exp(-\lambda_n x) \sim \sum \exp(-\mu_n x)$ as $x \to 0$? If $\lambda_n,\mu_n \in \mathbb{R}$, $\lambda_n \sim \mu_n$ as $n \to +\infty$, and $\mu_n \to +\infty$ as $n \to +\infty$, is it true that \sum_{n=1}^{\infty} \exp(-\lambda_n x) \sim ... 3answers 249 views ### Closed form for $\sum_{k=0}^{n} k\binom{n}{k}\log\binom{n}{k}$ Is it possible to write this in closed form: $$\sum_{k=0}^{n} k\binom{n}{k}\log\binom{n}{k}$$ Can you get something like $$n2^{n-1}\log(2^{n-1})$$ 5answers 467 views ### Bounding the integral $\int_{2}^{x} \frac{\mathrm dt}{\log^{n}{t}}$ If $x \geq 2$, then how do we prove that $$\int_{2}^{x} \frac{\mathrm dt}{\log^{n}{t}} = O\Bigl(\frac{x}{\log^{n}{x}}\Bigr)?$$ 8answers 232 views ### Limit of $\frac{\log(n!)}{n\log(n)}$ as $n\to\infty$. I can't seem to find a good way to solve this. I tried using L'Hopitals, but the derivative of $\log(n!)$ is really ugly. I know that the answer is 1, but I do not know why the answer is one. Any ... 2answers 633 views ### Good upper bound for $\sum\limits_{i=1}^{k}{n \choose i}$? I want an upper bound on $$\sum_{i=1}^k \binom{n}{i}.$$ $O(n^k)$ seems to be an overkill -- could you suggest a tighter bound ? 2answers 299 views ### Positive integers $k = p_{1}^{r_{1}} \cdots p_{n}^{r_{n}} > 1$ satisfying $\sum_{i = 1}^{n} p_{i}^{-r_{i}} < 1$ A divisor $d$ of $k = p_{1}^{r_{1}} \cdots p_{n}^{r_{n}}$ is unitary if and only if $d = p_{1}^{\varepsilon_{1}} \cdots p_{n}^{\varepsilon_{n}}$, where each exponent $\varepsilon_{i}$ is either $0$ or ... 
2answers 364 views ### Asymptotics of LCM Let $\operatorname{LCM}(x_1,x_2,\ldots,x_n)$ be the least common multiple of the integers $x_i$. How can one find the asymptotics of $\operatorname{LCM}(f(1),f(2),\dots,f(n))$ as $n$ approaches ... 1answer 267 views ### Asymptotic estimate for Riemann-Lebesgue Lemma Let $f$ be a real-valued, $L^1$ integrable function on the interval $[a,b]$. Then the Riemann-Lebesgue Lemma tells us that: \int_a^bf(x)\sin(2\pi nx)dx\rightarrow0 \text{ as } ... 2answers 185 views ### Laplace's method I'm still having a little trouble applying Laplace's method to find the leading asymptotic behavior of an integral. Could someone help me understand this? How about with an example, like: ... 2answers 175 views ### Asymptotics of the sum of squares of binomial coefficients We are trying to estimate the cardinality $K(n,p)$ of so-called Kuratowski monoid with $p$ positive and $n$ negative linearly ordered idempotent generators. In particular, we are interesting in the ... 1answer 139 views ### Calculate Asymptotics of Integral? Let $f$ be a continuous function on $[0,1]$. How do I calculate the asymptotics, as $n\rightarrow\infty$, of \$\displaystyle \int_{[0,1]^n}f\left(\frac{x_1+...+x_n}{n}\right)\text d x_1...\text d ... 5answers 346 views ### Prove that $1 + \dfrac{1}{2} + \dfrac{1}{3} + \cdots + \dfrac{1}{n} = \mathcal{O}(\log(n))$. Prove that $1 + \dfrac{1}{2} + \dfrac{1}{3} + \cdots + \dfrac{1}{n} = \mathcal{O}(\log(n))$, with induction. I get the intuition behind this question. Clearly, the given function isn’t even growing ... 4answers 271 views ### Singular asymptotics of Gaussian integrals with periodic perturbations At the bottom of page 5 of this paper by Giedrius Alkauskas it is claimed that, for a $1$-periodic continuous function $f$, \int_{-\infty}^{\infty} f(x) e^{-Ax^2}\,dx = \sqrt{\frac{\pi}{A}} ... 3answers 290 views ### Approximation of elements in arithmetic progressions by logarithms of integers For fixed $a,b,c \in \mathbb{R}$ with $ac \neq 0$, it seems to me that one can find an increasing sequence of integers $\{\alpha_n\}$ such that the quantity $c \log \alpha_n$ becomes arbitrarily close ... 0answers 221 views ### Asymptotic related to the infinite product of sine The amount is somewhat complicated ($x$ is a constant): $$S_n=\sum_{k=1}^n\ln\left(1-\frac{\sin^2\big(x/(2n+1)\big)}{\sin^2\big(k\pi/(2n+1)\big)}\right)\tag{*}$$ I want to enrich my handy powerful ... 1answer 226 views ### What is $Θ(f(n)) - Θ(f(n))$? $$\Theta(f(n)) - \Theta(f(n)) =\; ?$$ I find this exercise from my algorithm analysis book very confusing because it's subtracting 2 function sets. Any hints/answers are welcome. Thanks! 2answers 334 views ### How does Lambert's W behave near ∞? How does $W$ behave near $+\infty$ compared to $\log$? In particular, I'm interested in the asymptotic expansion of $$\frac{W(x)}{\ln(x)}$$ near $\infty$ (but along the positive real line, if that ... 1answer 754 views ### Derivation of asymptotic solution of $\tan(x) = x$. An equation that seems to come up everywhere is the transcendental $\tan(x) = x$. Normally when it comes up you content yourself with a numerical solution usually using Newton's method. However, ... 1answer 138 views ### Mean Value of a Multiplicative Function close to $n$ in Terms of the Zeta Function. Let $f(n)$ be a multiplicative function defined by $f(p^a)=p^{a-1}(p+1)$, where $p$ is a prime number. How could I obtain a formula for $$\sum_{n\leq x} f(n)$$ with error term $O(x\log{x})$ and ... 
1answer 226 views ### How do I prove $\sum_{n \leq x} \frac{\mu (n)}{n} \log^2{\frac{x}{n}}=2\log{x}+O(1)$? Can I use Abel summation? I am wondering if it is possible to solve this problem using Abel summation: $$\sum_{n \leq x} \frac{\mu (n)}{n} \log^2{\frac{x}{n}}=2\log{x}+O(1)$$ Or maybe I am on the wrong track?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 133, "mathjax_display_tex": 27, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9201831221580505, "perplexity_flag": "head"}
http://mathoverflow.net/questions/81553/orthogonal-polynomials-functions-on-the-interval-0-1-but-with-same-weight-as-ge/81555
## Orthogonal polynomials/functions on the interval [0,1] but with same weight as Gegenbauer polynomials

I am looking for an orthogonal basis of functions over the interval $[0,1]$ with weight function $(1-x^2)^{\alpha-1/2}$. Gegenbauer polynomials are frustratingly close to what I need, but they are defined over the interval $[-1,1]$, and a change of variables ends up changing the weight function. None of the orthogonal polynomial families I have looked at (Chebyshev, Gegenbauer, Legendre, Laguerre, Jacobi, Hermite) have this property. Does anyone know of a family that does? Suggestions for references that may point the way would also be very helpful. Thanks! - The orthogonal polynomials for this weight function probably won't be anything familiar: already for $\alpha=0$ and degree $1$ the polynomial is a multiple of $\pi x - 2$ ... – Noam D. Elkies Nov 21 2011 at 22:12 Familiarity is perhaps not so important. What I really want in the end is to be able to derive triple-integral identities such as the ones for spherical harmonics (so that I can represent the product of two such basis elements as a sum over this basis). – Marcus P S Nov 21 2011 at 22:22 That should be "triple-product integral identities". – Marcus P S Nov 22 2011 at 1:10 ## 4 Answers There is this paper and this paper which treat the special case of "half-range Chebyshev polynomials" (both kinds, corresponding to the weights $\dfrac1{\sqrt{1-x^2}}$ and $\sqrt{1-x^2}$ over $[0,1]$) to deal with Fourier expansions of nonperiodic functions. I have a feeling that half-range Gegenbauer polynomials have been treated before, and I'll try to see what I can dig up. In the meantime, one can use the Stieltjes procedure to build up the recursion relations for these half-range Gegenbauers. Letting $$\langle f(x),g(x) \rangle^{(\alpha)}=\int_0^1 (1-t^2)^{\alpha-1/2} f(t)g(t)\,\mathrm dt$$ be the associated inner product, the Stieltjes procedure for generating monic orthogonal polynomials $\phi_k(x)$ uses the formulae $$\begin{align*}b_k&=\frac{\langle x\phi_k(x),\phi_k(x)\rangle^{(\alpha)}}{\langle\phi_k(x),\phi_k(x)\rangle^{(\alpha)}}\\ c_k&=\frac{\langle\phi_k(x),\phi_k(x)\rangle^{(\alpha)}}{\langle\phi_{k-1}(x),\phi_{k-1}(x)\rangle^{(\alpha)}}\end{align*}$$ to give the coefficients $b_k,c_k$ for the recursion relation $$\phi_{k+1}(x)=(x-b_k)\phi_k(x)-c_k\phi_{k-1}(x).$$ Here, the result $$\int_0^1 (1-t^2)^{\alpha-1/2}t^k \,\mathrm dt=\frac{\Gamma\left(\frac{1+k}{2}\right)\Gamma\left(\alpha+\frac12\right)}{2\Gamma\left(\alpha+\frac{k}{2}+1\right)}$$ is useful. I might as well throw this in. There is an algorithm due to Chebyshev (1859) for determining recursion coefficients from the moments. I've already talked about the algorithm here, so I shall not repeat myself.
Instead, I'll reproduce the Mathematica routine I gave in that answer: ````chebAlgo[mom_?VectorQ, prec_: MachinePrecision] := Module[{n = Quotient[Length[mom], 2], si = mom, ak, bk, np, sp, s, v}, np = Precision[mom]; If[np === Infinity, np = prec]; ak[1] = mom[[2]]/First[mom]; bk[1] = First[mom]; sp = PadRight[{First[mom]}, 2 n - 1]; Do[ sp[[k - 1]] = si[[k - 1]]; Do[ v = sp[[j]]; sp[[j]] = s = si[[j]]; si[[j]] = si[[j + 1]] - ak[k - 1] s - bk[k - 1] v; , {j, k, 2 n - k + 1}]; ak[k] = si[[k + 1]]/si[[k]] - sp[[k]]/sp[[k - 1]]; bk[k] = si[[k]]/sp[[k - 1]]; , {k, 2, n}]; N[{Table[ak[k], {k, n}], Table[bk[k], {k, n}]}, np] ] ```` Here for instance is how to use `chebAlgo[]` to generate recursion coefficients for the monic half-range Chebyshev polynomials of the first kind: ````With[{a = 0}, chebAlgo[Table[Gamma[(k + 1)/2] Gamma[a + 1/2]/Gamma[a + k/2 + 1], {k, 0, 10}]/2, Infinity]] // FullSimplify ```` - Actually, quite a lot is known about such polynomials, at least in the asymptotic regime $n \rightarrow \infty$, where $n$ is the index of the $n$th orthogonal polynomial. In the paper there are asymptotic formulae for not only the polynomials themselves, but also the coefficients of the recurrence relation. One could, in theory, use such formulae to compute recurrence coefficients (for large $n$), combined with a standard algorithm (such as the one posted by J. M.) for coefficients corresponding to small $n$. -
However, since the weight function in question is even, the Hilbert space over the interval $[0,1]$ is isomorphic to each of the even and odd subspaces of the Hilbert space over the interval $[-1,1]$, so you can get an orthogonal basis through projection as indicated above. – JFvD Jan 3 at 13:24
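As a complement to the answers above, here is a small numerical sketch of the Stieltjes procedure from the first answer (my own illustration, not from the thread; Python with NumPy/SciPy, and the choice $\alpha=1$ is an arbitrary assumption for the demonstration):

```python
import numpy as np
from scipy.integrate import quad

ALPHA = 1.0  # illustrative choice of alpha (an assumption, not from the question)

def inner(p, q):
    """<p,q> = integral_0^1 (1-t^2)^(ALPHA-1/2) p(t) q(t) dt."""
    return quad(lambda t: (1 - t*t)**(ALPHA - 0.5) * p(t) * q(t), 0.0, 1.0)[0]

def stieltjes(n):
    """Monic orthogonal polynomials phi_0..phi_n and coefficients b_k, c_k
    for the recursion phi_{k+1} = (x - b_k) phi_k - c_k phi_{k-1}."""
    x = np.poly1d([1.0, 0.0])            # the monomial t
    phi = [np.poly1d([1.0])]             # phi_0 = 1
    b = [inner(x * phi[0], phi[0]) / inner(phi[0], phi[0])]
    c = [0.0]                            # c_0 is unused
    phi.append((x - b[0]) * phi[0])
    for k in range(1, n):
        bk = inner(x * phi[k], phi[k]) / inner(phi[k], phi[k])
        ck = inner(phi[k], phi[k]) / inner(phi[k - 1], phi[k - 1])
        phi.append((x - bk) * phi[k] - ck * phi[k - 1])
        b.append(bk)
        c.append(ck)
    return b, c, phi

b, c, phi = stieltjes(4)
print(b, c)
print(inner(phi[1], phi[3]))  # orthogonality check: ~0 up to quadrature error
```

Chebyshev's moment algorithm in the answer above is the numerically preferable route when exact moments are available; the direct quadrature here is only meant to make the recursion formulas concrete.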
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8922670483589172, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/103395/isotopy-in-3-manifolds
## Isotopy in 3-manifolds

Assume $\Sigma_1$ and $\Sigma_2$ are two embedded compact surfaces (say orientable) in an orientable 3-manifold $M$. Assume $\Sigma_1$ and $\Sigma_2$ are homotopic in $M$. Then are they isotopic? - ## 2 Answers No, generally they're not. For example, there's only one homotopy class $S^2 \to \mathbb R^3$ but there are two isotopy classes of embeddings (given via how the embedding orients the compact 3-manifold it bounds). edit: I think if your 3-manifold is irreducible and if your maps $S^2 \to M$ are not null homotopic then the answer is likely yes. But if your 3-manifold is say a connect sum of lens spaces then I suspect it's false but I haven't come up with a nice example yet. As Allen points out in the comments below, a connect-sum of lens spaces won't work, at least not when your surface is a sphere. edit2: As Misha Kapovich points out, for irreducible 3-manifolds and incompressible surfaces homotopy implies isotopy. This is an old theorem of Waldhausen's: "On Irreducible 3-manifolds which are sufficiently large", Ann. of Math. (2) 87 (1968) 56--88. - 1 Hi Ryan, thanks! In your $S^2\rightarrow {\mathbb R}^3$ example, if we ignore the orientation, will they be isotopic? I am saying that for embeddings $\iota_1, \iota_2$, there exists an embedding $\iota_3$ so that $\iota_3(S^2)=\iota_1(S^2)$, and $\iota_3$ is isotopic to $\iota_2$. Is there a chance this kind of thing is true in general? – DaveK Jul 28 at 18:20 9 Instead of 2-spheres, consider 2-tori in 3-space: They are all homotopic, but you have infinitely many isotopy classes corresponding to knot neighborhoods. The right assumption is incompressibility of surfaces and irreducibility of the 3-manifold. Then Waldhausen proved that homotopy implies isotopy. – Misha Jul 28 at 18:41 2 For spheres embedded in 3-manifolds the fact that homotopy implies isotopy is a theorem of Laudenbach in the 1973 Annals. He had to assume the manifolds in question contained no counterexamples to the Poincaré conjecture (i.e. no fake 3-balls) since "homotopic spheres are isotopic" implies the Poincaré conjecture. – Allen Hatcher Jul 28 at 20:05 1 A theorem from 68 is an «old theorem»? :-) – Mariano Suárez-Alvarez Jul 29 at 1:02 1 Operationally, anything that came before me is old, and anything after is young. – Ryan Budney Jul 29 at 1:12 If $\Sigma_1 \hookrightarrow M$ is an embedded $\pi_1$-injective surface, then any homotopic embedded surface will be isotopic to $\Sigma_1$. As Ryan and Allen point out, this is due to Waldhausen for incompressible surfaces of genus $>0$, and to Laudenbach for 2-spheres, together with the Poincare conjecture. If $\Sigma_1$ is not incompressible in $M$, then there exists $\Sigma_2$ which is homotopic to $\Sigma_1$ but not isotopic. The point is that one may compress $\Sigma_1$ to get a surface $\Sigma'\hookrightarrow M$ which has smaller genus, and then reembed the 1-handle (in the same homotopy class) in a knotted fashion to get a non-isotopic surface $\Sigma_2$. Misha observed this in the comments on Ryan's answer for tori in $S^3$, but it holds more generally. There's an intermediate case of $\Sigma_1\hookrightarrow M$ which is incompressible and not $\pi_1$-injective.
By the loop theorem, this can only occur if $\Sigma_1$ is 1-sided in $M$, which implies that the surface is non-orientable, so it does not fall under the purview of your question. I'm not sure if homotopy implies isotopy in this case - I suspect there are 1-sided Heegaard surfaces which are homotopic but not isotopic, but I don't know examples off the top of my head. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 30, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9248538017272949, "perplexity_flag": "middle"}
http://mathhelpforum.com/number-theory/116487-quadratic-residue.html
# Thread: 1. ## Quadratic Residue Prove that if c is odd, then $(\frac{2}{c}) = (-1)^{\frac{(c^{2}-1)}{8}}$. I know I'm supposed to use the theorem that states $(\frac{2}{p}) = (-1)^{\frac{p^{2}-1}{8}}$, where p is an odd prime. But I'm not confident that stating that c is simply an odd prime is enough to satisfy this. 2. Originally Posted by kyldn6 Prove that if c is odd, then $(\frac{2}{c}) = (-1)^{\frac{(c^{2}-1)}{8}}$. I know I'm supposed to use the theorem that states $(\frac{2}{p}) = (-1)^{\frac{p^{2}-1}{8}}$, where p is an odd prime. But I'm not confident that stating that c is simply an odd prime is enough to satisfy this. I think this is false: check $\binom{2}{9}=-1\neq (-1)^{\frac{9^2-1}{8}}$ ... Tonio 3. What is the symbol? The Jacobi symbol? Because the Legendre symbol is defined only for primes. If it's the Jacobi symbol, then it's true and you can prove it using the definition of the Jacobi symbol and the corresponding property of the Legendre symbol. 4. Originally Posted by tonio I think this is false: check $\binom{2}{9}=-1\neq (-1)^{\frac{9^2-1}{8}}$ ... Tonio $\left(\frac{2}{9}\right)=1$ if the symbol is Jacobi's. Note that when the Jacobi symbol $\left(\frac{m}{n}\right)$ is 1, it does not mean that $m$ is a square (mod $n$). 5. Yes it is the Jacobi symbol. 6. Let $c=p_1^{\alpha_1}\hdots p_k^{\alpha_k}$. By definition, $\left(\frac{2}{c}\right)=\left(\frac{2}{p_1}\right)^{\alpha_1}\hdots\left(\frac{2}{p_k}\right)^{\alpha_k}$. Using the corresponding property of the Legendre symbol we have that this is $\left((-1)^{(p_1^2-1)/8}\right)^{\alpha_1}\hdots \left((-1)^{(p_k^2-1)/8}\right)^{\alpha_k} = (-1)^{(\alpha_1(p_1^2-1)+\hdots +\alpha_k(p_k^2-1))/8}$. Now what you want to show is ${(\alpha_1(p_1^2-1)+\hdots +\alpha_k(p_k^2-1))/8} \equiv (c^2-1)/8 \pmod 2$. Does this help a bit? 7. Yes thank you so much. 8. If it's not too much trouble and if someone feels like it, could someone post the last half of the proof? I'm fairly confident I have it here but I'm not 100% confident.
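For completeness, here is a sketch of that remaining step (the standard argument, added for illustration; it fills in the congruence from post 6). For odd integers $u$ and $v$,
$$(uv)^2-1 = (u^2-1)v^2 + (v^2-1),$$
so
$$\frac{(uv)^2-1}{8} - \frac{u^2-1}{8} - \frac{v^2-1}{8} = \frac{(u^2-1)(v^2-1)}{8}.$$
Since $8$ divides both $u^2-1$ and $v^2-1$, the right-hand side is divisible by $8$, in particular even, so
$$\frac{(uv)^2-1}{8} \equiv \frac{u^2-1}{8} + \frac{v^2-1}{8} \pmod 2.$$
Writing $c=p_1^{\alpha_1}\hdots p_k^{\alpha_k}$ as a product of $\alpha_1+\hdots+\alpha_k$ odd primes and applying this repeatedly gives
$$\frac{c^2-1}{8} \equiv \frac{\alpha_1(p_1^2-1)+\hdots+\alpha_k(p_k^2-1)}{8} \pmod 2,$$
which is exactly the congruence needed in post 6.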
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9315239191055298, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/7436/lebesgue-integral-basics?answertab=oldest
# Lebesgue integral basics

I'm having trouble finding a good explanation of the Lebesgue integral. As per the definition, it is the expectation of a random variable. Then how does it model the area under the curve? Let's take for example a function $f(x) = x^2$. How do we find the integral of $f(x)$ over $[0,1]$ using the Lebesgue integral? - 4 What definition gives "the expectation of a random variable"? Which book does this? – Arturo Magidin Oct 21 '10 at 18:19 Can you point me to a source that best explains it with some examples? – user957 Oct 21 '10 at 18:32 The random variable is the height of the curve. – Qiaochu Yuan Oct 21 '10 at 23:08 I suspect he's learning from a mathematical finance book. – Raskolnikov Dec 7 '10 at 19:35 ## 7 Answers As has been noted, the usual definition of the Lebesgue integral has little to do with probability or random variables (though the notions of measure theory and the integral can then be applied to the setting of probability, where under suitable interpretations it will turn out that the (Lebesgue) integral of (a certain) function corresponds to the expectation of (a certain) random variable). But this is not the origin of the Lebesgue integral. Here is an intuitive idea of what the Lebesgue integral is, as compared to the Riemann integral. Recall from Calculus the idea behind the Riemann integral: the integral $\int_a^b f(x)\,dx$ is meant to represent the net signed area between the $x$-axis, the graph of $y=f(x)$, and the lines $x=a$ and $x=b$. The way we attempt to do this is by breaking up the domain, $[a,b]$, into subintervals $[a=x_0,x_1]$, $[x_1,x_2],\ldots,[x_{n-1},x_n=b]$. Then, on each subinterval $[x_i,x_{i+1}]$ we pick a point $x_i^*$, and we estimate the area under the graph of the function with the rectangle of height $f(x_i^*)$ and base $[x_i,x_{i+1}]$. This leads to the Riemann sums $$\sum_{i=0}^{n-1} f(x_i^*)(x_{i+1}-x_i)$$ as estimates of the area under the graph. We then consider finer and finer partitions of $[a,b]$ and take limits to estimate the area. Lebesgue's idea was that instead of partitioning the domain, we will partition the range; if the function takes values between $c$ and $d$, we can divide the range $[c,d]$ into subintervals $[c=y_0,y_1]$, $[y_1,y_2],\ldots,[y_{m-1},y_m=d]$. Then, we let $E_i$ be the set of all points in $[a,b]$ whose value under $f$ lies between $y_i$ and $y_{i+1}$. That is, $$E_i = f^{-1}([y_i,y_{i+1}]) = \{ x\in[a,b]\,|\, y_i \leq f(x) \leq y_{i+1}\}.$$ If we have a way of assigning a "size" to $E_i$, call it its "measure" $\mu(E_i)$, then the portion of the area coming from the points where the graph of $y=f(x)$ lies between the horizontal lines $y=y_i$ and $y=y_{i+1}$ will be some quantity $A$ with $$y_i\mu(E_i) \leq A \leq y_{i+1}\mu(E_i).$$ So Lebesgue suggests approximating the area by picking a number $y_i^*$ between $y_i$ and $y_{i+1}$, and considering the sums $$\sum_{i=0}^{n-1} \mu(E_i)y_i^*.$$ Then consider finer and finer partitions of $[c,d]$; this gives finer and finer approximations of the area by these sums. The Lebesgue integral will be the limit of these sums. (The analogy given by Mike Spivey is very apt for the distinction between partitioning the domain and partitioning the range to find the sum.) But in order for this to make sense, we need to develop a way of measuring fairly intricate subsets of the line, so that we can compute $\mu(E_i)$.
So we first develop a way of doing this; it turns out that if you accept the Axiom of Choice, then it is impossible to come up with a way of measuring that will (i) assign to an interval $[a,b]$ the "measure" $b-a$; (ii) will be invariant under translation, so that if $F=E+c = \{e+c \mid e\in E\}$ then $\mu(F)=\mu(E)$; (iii) will be countably additive: if $E = \cup_{i=1}^{\infty}E_i$ and the $E_i$ are pairwise disjoint, then $\mu(E) = \sum\mu(E_i)$; and (iv) every subset of the line will have a well-defined (possibly infinite) measure. (If you don't accept the Axiom of Choice, then there are models of the reals where we can achieve this.) So one drops the restriction (iv), and constructs a measure for which some sets will be "too weird" to have a measure. We then restrict attention to certain kinds of functions (called the measurable functions), which are the ones for which the sets we get in the process described above are all measurable sets. And then we define the Lebesgue integral for those functions, following the idea described above (but one does not define it exactly that way; instead the usual way is to describe $f$ as a limit of functions for which the integral is easy, and then compute the integral of $f$ as a limit of the integrals that are easy). For your function, $f(x)=x^2$, this is fairly easy: the values all lie between $0$ and $1$, so say that we break up the range into subintervals of length $1/n$, so $y_i = i/n$, $i=0,\ldots,n$. Then $$f^{-1}([y_i,y_{i+1}]) = f^{-1}([i/n, (i+1)/n]) = [\sqrt{i/n},\sqrt{(i+1)/n}],$$ so the $n$th estimate, picking $y_i^* = y_i = i/n$, is just $$\sum_{i=0}^{n-1} (i/n)\left(\sqrt{(i+1)/n} - \sqrt{i/n}\right).$$ Take the limit as $n\to\infty$, and you will get that the limit is $\frac{1}{3}$, as expected. (I will spare you the details; see the end of this answer for a high-power way of getting the answer similar to the way you do it with the Riemann integral). It turns out that not every function is Lebesgue-integrable, just like not every function is Riemann-integrable. But every function that is Riemann-integrable will also be Lebesgue integrable, and the value of its Lebesgue integral will be the same as the value of its Riemann integral. But there are functions that are not Riemann-integrable but are Lebesgue-integrable (for example, the characteristic function of the rationals is Lebesgue-integrable, with integral $0$ over any interval, but is not Riemann-integrable). We also have a "Fundamental Theorem of Calculus" for the Lebesgue Integral: Theorem. If $F$ is a differentiable function, and the derivative $F'$ is bounded on the interval $[a,b]$, then $F'$ is Lebesgue integrable on $[a,b]$ and $$\int_a^x F'\,d\mu = F(x) - F(a).$$ Here, the integral is the Lebesgue integral. In particular, to finally answer the question you ask about your example, since $F(x)=\frac{x^3}{3}$ is a differentiable function whose derivative is bounded over any finite interval, in particular over $[0,1]$, then from this theorem you can deduce that the integral over the interval $[0,1]$ of the derivative $F'(x)=x^2$ is equal to $F(1)-F(0)$; that is, $$\int_0^1 x^2\,d\mu = \int_0^1 \left(\frac{x^3}{3}\right)'\,d\mu = \frac{1}{3} - \frac{0}{3} =\frac{1}{3}.$$ I recommend the book A Garden of Integrals by Frank E.
Burk (Dolciani Mathematical Expositions 31, MAA, 2007, ISBN 9-780883-853375); it discusses and compares the Cauchy integral, the Riemann integral, the Riemann-Stieltjes integral, the Lebesgue integral, the Lebesgue-Stieltjes integral, and the Henstock-Kurzweil integral; it also discusses the Wiener and Feynman integral. I just finished reading it recently. - Well said, Arturo. – Mike Spivey Oct 21 '10 at 19:32 @Mike: Thanks; I like the analogy you give, too. – Arturo Magidin Oct 21 '10 at 19:40 Thanks. I wish I could claim the analogy was original. :) – Mike Spivey Oct 21 '10 at 19:43 +1, very complete post. – Jonas Teuwen Oct 22 '10 at 12:39 excellent description! very helpful thanks! – user957 Oct 24 '10 at 21:18 The definition here doesn't mention probability or expectation or random variable. Intuitively, it just says the measure (in $\mathbb R^2$) is the area of the smallest set of rectangles that will cover the set. Then for an area the Lebesgue integral is just the integral of 1 over the set. - The Lebesgue integral is a generalization of the usual Riemann integral taught in basic calculus. If the Riemann integral of a function over a set exists then it equals the Lebesgue integral. So the Lebesgue integral of $x^2$ over $[0,1]$ is just the old $(1/3) 1^3-(1/3)0^3$. The Lebesgue integral has the benefit of being defined for many more functions than the Riemann integral. Even more importantly, the Lebesgue integral has useful limit properties (such as the dominated and monotone convergence theorems). The expectation of a random variable is a particular application of the Lebesgue integral where the function to be integrated is the random variable (viewed as a function on the sample space) and the integration is with respect to a probability measure. You need to look at one of the many probability and measure books for the details. My own favourites are: • Pollard, A User's Guide to Measure-Theoretic Probability • Dudley, Real Analysis and Probability Terence Tao also has some online lecture notes. - One of my graduate school professors, Erhan Cinlar, used to give the following analogy to explain the intuitive difference between the Lebesgue integral and the Riemann integral. Suppose you have a pile of coins of different denominations, and you want to know how much money you have. The Riemann integral is like picking up the coins, one-by-one, and adding the denomination of each to a running total. The Lebesgue integral is like sorting the coins by denomination first, and then getting the total by multiplying each denomination by how many you have of that denomination and then adding up those numbers. The methods are different, but you obtain the same result by either method. In the same way, when both the Riemann integral and the Lebesgue integral are defined, they give the same value. As others have said, though, there are functions for which the Lebesgue integral is defined but the Riemann integral is not, and so in that sense the Lebesgue integral is more general than the Riemann. - 3 One way to see your analogy in the context of integrals is to notice that Riemann integrals approximate the area by sums of vertical rectangles, while the Lebesgue integral instead uses horizontal rectangles... – Mariano Suárez-Alvarez♦ Oct 21 '10 at 21:22 1 @Mariano: Or, you have a bunch of piles of coins. Riemann adds up each pile separately, then adds up the totals. Lebesgue counts how many pennies are in all the piles, and gets a partial total; then counts how many nickels; then how many dimes; etc.
And then adds up the totals – Arturo Magidin Oct 22 '10 at 4:45 1 This analogy was actually given by Lebesgue himself (according to Dunham, The Calculus Gallery). – Michael Greinecker Jan 24 '12 at 7:39 You may also like to refer to these two books: • A radical approach to Lebesgue Theory of Integration: Bressoud • Real Analysis by G.B. Folland. As Jyotirmoy pointed out, the Lebesgue integral is a generalization of the Riemann integral. There are shortcomings of the Riemann integral, due to which the Lebesgue integral was discovered. A rigorous definition of the Lebesgue integral needs you to know what a simple function is, and you can read more on this at http://en.wikipedia.org/wiki/Lebesgue_integration - The Riemann integral is pretty good and very intuitive; however, the main reason to consider other types of integrals is that "the space of functions that are Riemann integrable", say $R(I)$ where $I\subset\mathbb{R}^n$ is compact, is too small (it is a linear space, in the sense that you can add functions and multiply by constants). If you just look at piecewise continuous functions that vanish outside a bounded region, then you can get along with the Riemann integral. In mathematical analysis we look at various kinds of limits of functions and we would like the limit functions to stay in "the space" (we want the space to be complete). About the best we can do in the Riemann case is to look at uniformly convergent sequences $f_n$ on a compact interval $I\subset\mathbb{R}^n$ - in that case the limit $\lim f_n\in R(I)$ and $\lim\int f_n =\int \lim f_n$. However, uniform convergence is very rare! (Many Fourier series are not continuous even though their partial sums are, etc.) The Lebesgue integral can be constructed in several ways (ending up with the same space though). A first try might be to start with norming $R(I)$, $\|f\|=\int|f|$, and then we would get a distance between $f,g\in R(I)$ by $\|f-g\|$, thus ending up with a metric space which we may complete by adding all possible limits - this will not work however because even though $R(I)$ is small it is too large (there are unbounded functions such that $\|f\|=\infty$). A better start would be to look at $C(I)$ = the space of continuous functions on (the compact set) $I$ (certainly each $f\in R(I)$ is a pointwise limit of $C(I)$ functions); if we norm $C(I)$ in the same manner we get a normed space, and the completion of that space is $L^1(I)$. In $L^1$ you can certainly take limits in norm and moreover, as has already been pointed out in other answers, you have many other better limit theorems such as the Lebesgue dominated convergence theorem or the monotone convergence theorem. Also, bounded functions of $R(I)$ do belong to $L^1(I)$. - You may want to consider the following sources: • Lebesgue's theory of integration: its origins and development - Thomas Hawkins • The calculus gallery: masterpieces from Newton to Lebesgue - William Dunham • The Lebesgue-Stieltjes integral: a practical introduction - Michael Carter, Bruce Van Brunt -
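As a quick numerical companion to the range-partition computation in the long answer above (my own illustration, not from any answer; plain Python), the lower Lebesgue sums for $f(x)=x^2$ on $[0,1]$ indeed approach $\frac13$:

```python
import math

def lower_lebesgue_sum(n):
    """Lower Lebesgue sum for f(x) = x^2 on [0,1]: partition the RANGE into
    slots [i/n, (i+1)/n]; the set where f lands in slot i has measure
    sqrt((i+1)/n) - sqrt(i/n), as computed in the answer above."""
    return sum((i / n) * (math.sqrt((i + 1) / n) - math.sqrt(i / n))
               for i in range(n))

for n in (10, 100, 10_000):
    print(n, lower_lebesgue_sum(n))  # approaches 1/3 as n grows
```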
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 92, "mathjax_display_tex": 8, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.914421558380127, "perplexity_flag": "head"}
http://mathoverflow.net/questions/30898/ways-to-prove-an-inequality/30906
## Ways to prove an inequality

It seems that there are three basic ways to prove an inequality, e.g. $x>0$. 1. Show that $x$ is a sum of squares. 2. Use an entropy argument. (Entropy always increases.) 3. Convexity. Are there other means? Edit: I was looking for something fundamental. For instance Lagrange multipliers reduce to convexity. I have not read Steele's book, but is there a way to prove monotonicity that doesn't reduce to entropy? And what is the meaning of positivity? Also, I would not consider the bootstrapping method, normalization to change additive to multiplicative inequalities, and changing equalities to inequalities as methods to prove inequalities. These methods only change the form of the inequality, replacing the original inequality by an equivalent one (or a class of equivalent ones). Further, the proof of the equivalence follows elementarily from the definition of the real numbers. As for proofs of the fundamental theorem of algebra, the question again is, what do they reduce to? These arguments are high level concepts mainly involving arithmetic, topology or geometry, but what do they reduce to at the level of the inequality? Further edit: Perhaps I was looking too narrowly at first. Thank you to all contributors for opening my eyes to the myriad possibilities of proving and interpreting inequalities in other contexts!! - 7 How would you call bootstrapping arguments where, for example, to prove $A\le B$ you show $A\le B+\epsilon$ for all $\epsilon$? Or (what Tao refers to as the tensor product trick) you show that for all $n$, $A^n\le CB^n$ for some constant $C$ independent of $n$? – Andres Caicedo Jul 7 2010 at 15:27 ## 9 Answers I don't think your question is a mathematical one, for the question of what all inequalities eventually reduce to has a simple answer: axioms. I interpret it as a metamathematical question and still I believe the closest answer is the suggestion above about using everything you know. An inequality is a fairly general mathematical term, which can be attributed to any comparison. One example is complexity hierarchies, where you compare which of two problems has the higher complexity, which can be solved faster, etc. Another one is studying convergence of series, that is, comparing a quantity with infinity; here you find Tauberian theory, etc. Even though you did not specify in your question which kind of inequalities you are primarily interested in, I am assuming that you are talking about comparing two functions of several real/complex variables. I would be surprised if there is a list of exclusive methods that inequalities of this sort follow from. It is my impression that there is a plethora of theorems/principles/tricks available and the proof of an inequality is usually a combination of some of these. I will list a few things that come to my mind when I'm trying to prove an inequality; I hope it helps a bit. First I try to see if the inequality will follow from an equality. That is, to recognize the terms in your expression as part of some identity you are already familiar with. I disagree with you when you say this shouldn't be counted as a method to prove inequalities. Say you want to prove that $A\geq B$, and you can prove $A=B+C^2$; then, sure, the inequality follows from using "squares are nonnegative", but most of the time it is the identity that proves to be the hardest step.
Here's an example: given reals $a_1,a_2,\dots, a_n$, you want to prove that $$\sum_{i,j=1}^n \frac{a_ia_j}{1+|i-j|} \geq 0.$$ After you realize that this sum is just equal to $$\frac{1}{2\pi}\cdot\int_{0}^{2\pi}{\int_{0}^{1}{\frac{1-r^{2}}{1-2r\cos(x)+r^{2}}\cdot |\sum_{k=1}^{n}{a_{k}e^{-ikx}}|^{2}dx dr}}$$ then, yes, everything is obvious, but spotting the equality is clearly the nontrivial step in the proof. In some instances it might be helpful to think about combinatorics, probability, algebra or geometry. Is the quantity $x$ enumerating objects you are familiar with, the probability of an event, the dimension of a vector space, or the area/volume of a region? There are plenty of inequalities that follow this way. Think of Littlewood-Richardson coefficients, for example. Another helpful factor is symmetry. Is your inequality invariant under permuting some of its variables? While I don't remember the paper right now, Polya has an article where he talks about the "principle of nonsufficient reason", which basically boils down to the strategy that if your function is symmetric enough, then so are its extremal points (there is no sufficient reason to expect asymmetry in the maximal/minimal points, is how he puts it). This is similar in vein to using Lagrange multipliers. Note however that sometimes it is the opposite of this that comes in handy. Schur's inequality, for example, is known to be impossible to prove using "symmetric methods"; one must break the symmetry by assuming an arbitrary ordering on the variables. (I think it was sent by Schur to Hardy as an example of a symmetric polynomial inequality that doesn't follow from Muirhead's theorem, see below.) Majorization theory is yet another powerful tool. The best reference that comes to mind is Marshall and Olkin's book "Inequalities: Theory of Majorization and Its Applications". This is related to what you call convexity and some other notions. Note that there is a lot of literature devoted to inequalities involving "almost convex" functions, where a weaker notion than convexity is usually used. Also note the concepts of Schur-convexity, quasiconvexity, pseudoconvexity, etc. One of the simplest applications of majorization theory is Muirhead's inequality, which already generalizes a lot of classical inequalities and inequalities such as the ones that appear in competitions. Sometimes you might want to take advantage of the duality between discrete and continuous. So depending on which tools you have at your disposal you may choose to prove, say, the inequality $$\sum_{n=1}^{\infty}\left(\frac{a_1+\cdots+a_n}{n}\right)^p\le \left(\frac{p}{p-1}\right)^p \sum_{n=1}^{\infty}a_n^p$$ or its continuous/integral version $$\int_{0}^{\infty}\left(\frac{1}{x}\int_0^x f(t)dt\right)^p dx \le \left(\frac{p}{p-1}\right)^p \int_{0}^{\infty} f(x)^p dx$$ I've found this useful on different occasions (in both directions). Other things that come to mind, but that I'm too lazy to describe, are "integration preserves positivity", the uncertainty principle, using the mean value theorem to reduce the number of variables, etc. What also comes in handy, sometimes, is searching to see whether others have considered your inequality before. This might prevent you from spending too much time on an inequality like $$\sum_{d|n}d \le H_n+e^{H_n}\log H_n$$ where $H_n=\sum_{k=1}^n \frac{1}{k}$.
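A quick numerical sanity check of the first example above (an illustration, not a proof, and not part of the original answer): the nonnegativity of $\sum_{i,j} a_ia_j/(1+|i-j|)$ says exactly that the symmetric matrix $K_{ij}=1/(1+|i-j|)$ is positive semidefinite, which one can test empirically for small $n$. The snippet below assumes numpy; the matrix name `K` and the chosen sizes are arbitrary.

```python
# Illustration only: empirically check that K[i][j] = 1/(1+|i-j|)
# has no negative eigenvalues, i.e. sum_{i,j} a_i a_j/(1+|i-j|) >= 0.
import numpy as np

for n in (2, 5, 20, 100):
    idx = np.arange(n)
    K = 1.0 / (1.0 + np.abs(idx[:, None] - idx[None, :]))
    smallest = np.linalg.eigvalsh(K).min()  # eigvalsh: for symmetric matrices
    print(n, smallest)                      # observed to be positive for these n
```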
- Minor comment about your first example: it is "easily" seen to be nonnegative, because it is the sum of all entries of the positive-definite matrix $B = aa^T \circ L$, where $L_{ij} = 1/(1+|i-j|)$ and $\circ$ denotes the Hadamard product. The only semi-trivial part is to prove posdef of $L$, but that can be done in numerous ways, the more advanced of which might use an integral representation. – S. Sra Sep 28 2010 at 7:47 How would you prove the posdef of $L$? – Gjergji Zaimi Sep 28 2010 at 19:20 Hmm, the first idea that comes to my mind is: use the fact that $\varphi(x) = 1/(1+|x|)$ is a positive-definite function (it is in fact infinitely divisible), which itself can be proved using the fact that $|x|$ is conditionally negative-definite (though proving the latter might require a simple integral!). – S. Sra Oct 1 2010 at 13:00 Enumerative combinatorics also provides an important source of inequalities. The most basic is that if you can show that $X$ is the cardinality (or dimension) of some set $A$, then you automatically have $X \geq 0$. This can become non-trivial if one also possesses a very different description of $X$, e.g. as an alternating sum. Similarly, if you can establish a surjection (resp. injection) from a set of cardinality (or dimension) $X$ to a set of cardinality (or dimension) $Y$, then you have proven that $X \geq Y$ (resp. $X \leq Y$). (The dimension version of this argument is the basis for the polynomial method in extremal combinatorics.) The integrality gap is also an important way to improve an inequality by exploiting discreteness. For instance, if you know that $X > Y$, and that $X, Y$ are integers, then this automatically implies the improvement $X \geq Y+1$. More generally, if you know that $X, Y$ are both divisible by $q$, then we have the further improvement $X \geq Y+q$. A good example of this principle is in applying the Chevalley-Warning theorem, which asserts that the number $X$ of roots of a low-degree polynomial over a finite field $F_p$ is divisible by $p$. If one has one trivial solution ($X \geq 1$), then this automatically boosts to $X \geq p$, which implies the existence of at least one non-trivial solution also (and in fact gives at least $p-1$ such solutions). - 8 There's a real equivalent to the enumerative combinatorics method of showing that the quantity counts something. This would be: show that the quantity represents some probability. Then it's automatically non-negative (and in fact, it's between 0 and 1, so you get two inequalities). – Peter Shor Jul 8 2010 at 15:07 Steele in his book Cauchy-Schwarz Master Class identifies three pillars on which all inequalities rest: 1. monotonicity, 2. positivity, 3. convexity, which he says is a second-order effect (Chap. 6). These three principles apply to inequalities whether they be (1) discrete, integral, or differential; (2) additive or multiplicative; (3) in simple or multi-dimensional spaces (matrix inequalities). In Chap. 13 of the book, he shows how majorization and Schur convexity unify the understanding of multifarious inequalities. I am still not done reading the book, but it also mentions a normalization method which can convert an additive inequality to a multiplicative one. - 2 Steele's book should be required reading for all mathematics students regardless of level.
One of the reasons calculus classes are in such a sorry state in this country is the complete inability of students to work with basic inequalities. I know it was definitely my main weakness when beginning to learn real analysis. – Andrew L Jul 7 2010 at 16:48 18 @Andrew L : Can you please tone down the relentless negativity? I'd assume that by "this country" you mean the US (somehow I can't imagine a resident of Canada, Germany, China, etc assuming that everyone was from there). Calculus classes in the US are hardly "in a sorry state". Many people that post here both live in the US and teach calculus on a regular basis. I don't see why you have to insult us. – Andy Putman Jul 7 2010 at 20:29 To prove that $A\leq B$, maximize the value of $A$ subject to the condition that $B$ is constant using, for example, Lagrange multipliers. This does wonders on most classical inequalities. - there is an argument that such a maximization (so-called fastest descent) is actually a heat flow or entropy argument. Just from what I heard. – Colin Tan Jul 7 2010 at 15:07 4 I have no idea what 'heat flow' or 'entropy argument' mean in this context. Lagrange multipliers, on the other hand, are known to every undergraduate... – Mariano Suárez-Alvarez Jul 7 2010 at 15:11 Mariano: Yes, but the question is not about how to effectively teach inequalities to undergraduates, but about the tools we have to prove them. And most classical inequalities can be deduced from sum-of-squares arguments, or some form of convexity results. I actually think this question shows a nice insight. – Andres Caicedo Jul 7 2010 at 15:30 Well, most classical inequalities also follow more or less effortlessly from a little, straightforward, idealess computation using Lagrange multipliers, too. That is my point. – Mariano Suárez-Alvarez Jul 7 2010 at 15:32 I agree with your point (not the "idealess" part). I am reading "convexity arguments" loosely, so that inequalities proved using first (and sometimes second) derivative arguments are included here. In that sense, Lagrange multipliers is part of that framework. – Andres Caicedo Jul 7 2010 at 16:59 I don't think the question has a meaningful answer unless the OP specifies a class of inequalities he has in mind. The problem is that almost any mathematical statement can be restated as an inequality. Take, for instance, the fundamental theorem of algebra. It is equivalent to the inequality "the number of roots of a non-constant polynomial with complex coefficients is greater than zero". Over ten different proofs of this inequality are discussed in this thread. It seems that none of them has anything to do with positivity, convexity or entropy arguments. - 1 You're right; but given the question, I think we can infer that the OP means inequalities of the form $A\leq B$ where $A$ and $B$ are functions of (possibly several) real variables written in some fixed language, e.g. $(\times, +, -, \operatorname{sin}, ...)$. – Daniel Litt Jul 7 2010 at 15:47 Daniel, probably you are right. Still I think it would help if the question were a bit more specific. – Andrey Rekalo Jul 7 2010 at 16:17 This doesn't seem like a real question, but here's an answer anyway. Every mathematician should pick up "Inequalities" by Hardy, Littlewood, and Pólya. The book lays out a systematic approach to proving "elementary" inequalities, and it was a surprise to me just how much commonality and beauty there is in the field. It's an old book, but all the more readable for it.
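To make the Lagrange-multiplier answer above concrete, here is the standard two-variable AM-GM inequality done that way (a sketch; the specific function and constraint are only one convenient choice): maximize $f(x,y)=\sqrt{xy}$ over $x,y>0$ subject to $g(x,y)=x+y=2s$ for a fixed $s>0$. At an interior critical point, $\nabla f=\lambda\nabla g$ gives $$\frac{y}{2\sqrt{xy}}=\lambda=\frac{x}{2\sqrt{xy}},$$ hence $x=y=s$ and $f=s$ there; since $f\to 0$ as $x\to 0$ or $y\to 0$ along the constraint, this critical point is the maximum, and therefore $\sqrt{xy}\le\frac{x+y}{2}$ for all $x,y>0$.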
- I have recently been working on stuff related to the Golod-Shafarevich inequality. So here is a crazy way to prove an inequality. Let $G$ be a finitely generated group and $\left< X|R \right>$ a presentation of $G$ with $|X|$ finite. Let $r_i$ be the number of elements in $R$ with degree $i$ with respect to the Zassenhaus $p$-filtration. Assume $r_i$ is finite for all $i$. Let $H_R(t)=\sum_{i=1}^{\infty}r_it^i$. A group is called Golod-Shafarevich (GS) if there is $0 < t_0 < 1$ such that $1-|X|t_0+H_R(t_0)<0$ (a small arithmetic illustration of this condition appears at the end of this thread). Golod and Shafarevich proved that GS groups are infinite. Zelmanov proved that their pro-$p$ completions contain a non-abelian free pro-$p$ group. So suppose $G$ is a group with such a presentation, and suppose you know that its pro-$p$ completion does not contain a non-abelian free pro-$p$ group, or that for some other reason $G$ is not GS. Then $1-|X|t+H_R(t) \geq 0$ for all $0 < t <1$. Now, I am sure no one ever used the Golod-Shafarevich inequality this way, and I doubt anyone will. But maybe I am wrong. In any case, this does not seem to fit any of the methods that were mentioned before. - 1 +1 the answer is really cute. But shouldn't one of your inequalities have an equal sign appended? The negation of $<$ is $\geq$, not $>$. – Willie Wong Jul 8 2010 at 20:09 Thanks! I fixed it. – Yiftach Barnea Jul 8 2010 at 22:00 Look at the proofs of known inequalities and solutions to related problems: http://en.wikipedia.org/wiki/Category:Inequalities http://mathworld.wolfram.com/topics/Inequalities.html I believe the best approach to studying inequalities is proving as many of them as possible. There is a section at the ArtOfProblemSolving forum that is a good source of them: http://www.artofproblemsolving.com/Forum/viewforum.php?f=32 One may also like to read a classic book on inequalities by Hardy, Littlewood, and Pólya: http://www.amazon.com/gp/product/0521358809 - 1 There's also this somewhat underdeveloped Tricki page: tricki.org/article/… – Mark Meckes Jul 7 2010 at 16:45 There's also Wikipedia's article titled "list of inequalities", which, unlike the category, is organized. en.wikipedia.org/wiki/List_of_inequalities – Michael Hardy Jul 7 2010 at 21:59 Use other known inequalities, e.g. the rearrangement inequality, Cauchy-Schwarz, Jensen, Hölder. - 8 There is a mild generalization of this technique: use everything you know! :) – Mariano Suárez-Alvarez Jul 7 2010 at 15:12 But aren't these inequalities fundamentally a result of either convexity or sum of squares? – Colin Tan Jul 7 2010 at 15:20
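As promised above, a tiny arithmetic illustration of the Golod-Shafarevich condition (a toy computation on the stated polynomial criterion only, not an example from the literature): take $|X|=2$ generators and a single relation of degree $5$, so $H_R(t)=t^5$. At $t_0=0.6$, $$1-2(0.6)+(0.6)^5 = 1-1.2+0.07776 = -0.12224 < 0,$$ so the condition $1-|X|t_0+H_R(t_0)<0$ is satisfied at this $t_0$.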
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 70, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9458480477333069, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/13089/why-do-so-many-textbooks-have-so-much-technical-detail-and-so-little-enlightenmen/13090
## Why do so many textbooks have so much technical detail and so little enlightenment? [closed] I think/hope this is okay for MO. I often find that textbooks provide very little in the way of motivation or context. As a simple example, consider group theory. Every textbook I have seen that talks about groups (including some very basic undergrad level books) presents them as abstract algebraic structures (while providing some examples, of course), then spends a few dozen pages proving theorems, and then maybe in some other section of the book covers some Galois Theory. This really irks me. Personally I find it very difficult to learn a topic with no motivation, partly just because it bores me to death. And of course it is historically backwards; groups arose as people tried to solve problems they were independently interested in. They didn't sit down and prove a pile of theorems about groups and then realize that groups had applications. It's also frustrating because I have to be completely passive; if I don't know what groups are for or why anyone cares about them, all I can do is sit and read as the book throws theorems at me. This is true not just with sweeping big picture issues, but with smaller things too. I remember really struggling to figure out why it was supposed to matter so much which subgroups were closed under conjugation before finally realizing that the real issue was which subgroups can be kernels of homomorphisms, and the other thing is just a handy way to characterize them. So why not define normal subgroups that way, or at least throw in a sentence explaining that that's what we're really after? But no one does. I've heard everyone from freshmen to Fields Medal recipients complain about this, so I know I'm not alone. And yet these kinds of textbooks seem to be the norm. So what I want to know is: Why do authors write books like this? And: How do others handle this situation? Do you just struggle through? Get a different book? Talk to people? (Talking to people isn't really an option for me until Fall...) Some people seem legitimately to be able to absorb mathematics quite well with no context at all. How? - 2 Community wiki? – Akhil Mathew Jan 27 2010 at 1:59 1 I found myself thinking the exact same as you on multiple occasions. I find it most annoying in proofs to be honest: the solution is pulled out of a hat and then checked, with no insight about how the solution was found. It's as if someone was explaining how to come up with elliptic curves of high rank just by giving an example by Elkies and checking that it is of high rank. – Sam Derbyshire Jan 27 2010 at 2:01 2 Why did the comments go away? – Mariano Suárez-Alvarez Jan 27 2010 at 6:16 7 We decided that a discussion of the merits of Bourbaki was not a good use of the comments box (it had drifted far from the topic at hand), so we killed them. – Andy Putman Jan 27 2010 at 6:33 1 Most history textbooks? Where? Why are all questions and answers happening in the USA? – Yemon Choi Sep 23 at 9:10 ## 22 Answers By now the advice I give to students in math courses, whether they are math majors or not, is the following: a) The goal is to learn how to do mathematics, not to "know" it. b) Nobody ever learned much about doing something from either lectures or textbooks. The standard examples I always give are basketball and carpentry. Why is mathematics any different?
c) Lectures and textbooks serve an extremely important purpose: They show you what you need to learn. From them you learn what you need to learn. d) Based on my own experience as both a student and a teacher, I have come to the conclusion that the best way to learn is through "guided struggle". You have to do the work yourself, but you need someone else there to either help you over obstacles you can't get around despite a lot of effort or provide you with some critical knowledge (usually the right perspective but sometimes a clever trick) you are missing. Without the prior effort by the student, the knowledge supplied by a teacher has much less impact. A substitute for a teacher like that is a working group of students who are all struggling through the same material. When I was a graduate student, we had a wonderful working seminar on Sunday mornings with bagels and cream cheese, where I learned a lot about differential geometry and Lie groups with my classmates. ADDED: So how do you learn from a book? I can't speak for others, but I have never been able to read a math book forwards. I always read backwards. I always try to find a conclusion (a cool definition or theorem) that I really want to understand. Then I start working backwards and try to read the minimum possible to understand the desired conclusion. Also, I guess I have attention deficit disorder, because I rarely read straight through an entire proof or definition. I try to read the minimum possible that's enough to give me the idea of what's going on, and then I try to fill in the details myself. I'd rather spend my time writing my own definition or proof and doing my own calculations than reading what someone else wrote. The honest and embarrassing truth is that I fall asleep when I read math papers and books. What often happens is that as I'm trying to read someone else's proof I ask myself, "Why are they doing this in such a complicated way? Why couldn't you just....?" I then stop reading and try to do it the easier way. Occasionally, I actually succeed. More often, I develop a greater appreciation for the obstacles and become better motivated to read more. WHAT'S THE POINT OF ALL THIS? I don't think the solution is changing how math books are written. I actually prefer them to be terse and to the point. I fully agree that students should know more about the background and motivation of what they are learning. It annoys me that math students learn about calculus without understanding its real purpose in life, or that math graduate students learn symplectic geometry without knowing anything about Hamiltonian mechanics. But it's not clear to me that it is the job of a single textbook to provide all this context for a given subject. I do think that your average math book tries to cover too many different things. I think each math book should be relatively short and focus on one narrowly and clearly defined story. I believe if you do that, it will be easier for students to read more different math books. - @Deane I think from this and other posts you've made here, you were a lot better than most of us as students, Deane. I seriously doubt most of us could learn mathematics by reading any sophisticated text backwards. I DO agree with the spirit of what you're saying and (a)-(c) above. Active learning is the best teacher. Myself, I take extremely detailed notes I then convert to bite-size pieces on study cards WITHOUT MEMORIZATION. This last part is critical. I memorize definitions AND NOTHING ELSE.
I try to reproduce everything else. Halmos's dictum: "Don't just read it - FIGHT IT!" No better advice. – Andrew L May 28 2010 at 22:48 5 Andrew, I don't think I have any real disagreement with you. I probably overstated things when I said "backwards". The idea is to start flipping through the book until you find something that actually seems interesting. If it's at the very beginning, then just start reading there. But sometimes I have to jump to somewhere a little farther ahead and then work backwards from there. And, to be honest, I doubt I've ever been able to read the entire contents of a really sophisticated math book. At best I learn fragments that I'm able to understand and I'm interested in. – Deane Yang May 29 2010 at 0:47 Here are some words by Gromov that might be relevant. This common and unfortunate fact of the lack of an adequate presentation of basic ideas and motivations of almost any mathematical theory is, probably, due to the binary nature of mathematical perception: either you have no inkling of an idea or, once you have understood it, this very idea appears so embarrassingly obvious that you feel reluctant to say it aloud; moreover, once your mind switches from the state of darkness to the light, all memory of the dark state is erased and it becomes impossible to conceive the existence of another mind for which the idea appears nonobvious. Source: M. Berger, Encounter with a geometer. II, Notices Amer. Math. Soc. 47 (2000), no. 3, 326--340. - 4 This is so true! – Martin Brandenburg May 30 2010 at 9:59 27 I think it is true for some people. But if you're forced to teach a lot then you have to become sensitive to what the difficulty is -- either by remembering your own experience or by observing carefully the difficulties that others have. When I have this darkness-to-light experience, I almost always know what it was that I was missing before. Sometimes it was a key example, sometimes an interesting concrete problem that demands the abstract idea in question, etc. – gowers Jul 27 2010 at 17:45 I absolutely agree that this is a question worth asking. I have only recently come to realize that all of the abstract stuff I've been learning for the past few years, while interesting in its own right, has concrete applications in physics as well as in other branches of mathematics, none of which was ever mentioned to me in an abstract algebra course. For example, my understanding is that the origin of the term "torsion" to refer to elements of finite order in group theory comes from topology, where torsion in the integral homology of a compact surface tells you whether it's orientable or not (hence whether, when it is constructed by identifying edges of a polygon, the edges must be twisted to fit together or not). Isn't this a wonderful story? Why doesn't it get told until so much later? For what it's worth, I solve this problem by getting a different book. For example, when I wanted to learn a little commutative algebra, I started out by reading Atiyah-Macdonald. But although A-M is a good and thorough reference in its own right, I didn't feel like I was getting enough geometric intuition. So I found first Eisenbud, and then Reid, both of which are great at discussing the geometric side of the story even if they are not necessarily as thorough as A-M.
As for the first question, I have always wanted to blame this trend on Bourbaki, but maybe the origin of this style comes from the group of people around Hilbert, Noether, Artin, etc. Let me quote from the end of Reid, where he discusses this trend: The abstract axiomatic methods in algebra are simple and clean and powerful, and give essentially for nothing results that could previously only be obtained by complicated calculations. The idea that you can throw out all the old stuff that made up the bulk of the university math teaching and replace it with more modern material that had previously been considered much too advanced has an obvious appeal. The new syllabus in algebra (and other subjects) was rapidly established as the new orthodoxy, and algebraists were soon committed to the abstract approach. The problems were slow to emerge. I discuss what I see as two interrelated drawbacks: the divorce of algebra from the rest of the math world, and the unsuitability of the purely abstract approach in teaching a general undergraduate audience. The first of these is purely a matter of opinion - I consider it regrettable and unhealthy that the algebra seminar seems to form a ghetto with its own internal language, attitudes, criterions for success and mechanisms for reproduction, and no visible interest in what the rest of the world is doing. To read the rest of Reid's commentary you'll have to get the book, which I highly recommend doing anyway. - 7 Related to this theme, I like this essay by Arnol'd: pauli.uni-muenster.de/~munsteg/arnold.html – Qiaochu Yuan Jan 27 2010 at 2:30 23 I think that assessment is really unfair. Before Bourbaki, Noether, Hilbert, Artin, etc., and the other early algebraists, the connections between mathematical subjects were so convoluted and difficult to follow that obvious connections weren't really being made. What the early algebraists did was reduce complicated mathematical objects to their fundamental properties and structures. While it's true that some geometric intuition is lost in the process, it is a worthwhile trade. – Harry Gindi Jan 27 2010 at 2:35 9 I am told that the lectures of the elder Artin were extremely polished but given with a minimum of motivation. Later, when talking to him one-on-one, he would happily give all the motivation behind what he had just lectured about. The problem I see with imitating the abstract approach too thoroughly from a pedagogical point of view is that one has to strike a balance between the power of abstraction and its point, and I don't think most modern textbooks strike this balance successfully. – Qiaochu Yuan Jan 27 2010 at 3:00 11 @Qiaochu -- "I am told" = "I have read Rota's _Indiscrete Thoughts_"? :) Excellent book. – Pete L. Clark Jan 27 2010 at 4:17 8 OMFG! I'd always wondered where the word "torsion" came from! Upvoted. :) – Vectornaut Jan 27 2010 at 5:15 This is a consequence of the following fact: One simply cannot communicate what one understands, but can only communicate what one knows. This does not mean that it is impossible to provide motivation and/or context. But, ultimately, the fact kicks in. - 28 I find it especially true when I teach analysis (at any level, from calculus to topics graduate courses). Forget about motivation or context, just try to pass on directly the structure of the proofs as you see and remember them, without the routine technical details that you just add on the fly. You'll find it impossible.
So, instead of a few ideas in their pure form that constitute the argument for you, you are forced to present tedious technicalities that do not really matter, in the hope that the students will be able to find their way through them and recover the underlying ideas themselves. – fedja Jan 27 2010 at 4:04 So true! We call that the Monad tutorial fallacy, as explained in byorgey.wordpress.com/2009/01/12/… – Zsbán Ambrus Sep 23 at 15:09 I also suffer from this problem -- I used to learn best from books, but in grad school, I'm having real trouble finding any book I can learn from in some subjects. There are a few reasons for this sad state of affairs that come to my mind. I'll list them first and expand on them below. 1. Providing real enlightenment well is very, very hard, and requires a very intimate relationship with a subject. 2. Different mathematicians need vastly different motivations for the same subject. 3. Mathematics needs to age before it can be presented well. 4. Good writing is not valued enough in the mathematical community. The first of these is true to such a strong degree that it surprises me. Even for well-established subjects, like undergraduate mathematics, where there are a million mathematicians who know the subject very well, I find that all the really good books are written by the true titans of the field -- like Milnor, Serre, Kolmogorov, etc. They understand the underlying structure and logical order of the subject so well that it can be presented in a way that it essentially motivates itself -- they can explain math the way they discovered it, and it's beautiful. Every next theorem you read is obviously important, and if it isn't then the proof motivates it. The higher-level the subject, the fewer the number of people who are so intimate with it that they can do this. It's interesting how all the best books I know don't have explicit paragraphs providing the motivation - they don't need them. (Of course there are exceptions -- some amazing mathematicians are terrible writers, and there are people with exceptional writing ability, but the point stands). Regarding the second point, different people want completely different things for motivation. The questions that pop into our heads when we read the theorems, the way we like to think, the kinds of ideas we accept as interesting, important, etc., are different for each of us. For this reason, when people try to explicitly describe the motivation behind a subject they almost always fail to satisfy the majority of readers. Here, I'm thinking of books like Hatcher, Guillemin & Pollack, Spivak, etc., where some people find that they have finally found the book that explains all the motivation perfectly, and others are surprised at the many paragraphs of text that dilute the math and make finding the results/proofs they want harder and reading slower. At the same time, the amount of effort each of these authors must have spent on the organization of their book seems absolutely immense. For this reason, unless there are 50 books written on a subject, the chances that you will find a book that seems well-motivated for you are low. The third reason is simple: it takes time for a new subject to stop being ugly, for people to iron out all the kinks, and to figure out some accepted good way to present it. Finally, it seems to me that good writing, especially expository writing, is not particularly valued in the community, and is valued less now than it was before.
Inventing new results seems to be the most respectable thing to do for a mathematician, teaching is second-best, and writing comes third. People like Hatcher & co. seem to be rare, and I don't know of many modern titans of mathematics who write any books at all, especially on a level more elementary than their current research. So what do we do? I think what algori said in his answer is the only way to go. - And some of the most prolific writers are terrible writers (Serge Lang, although I can't say I don't have some affection for some of his books..). – Harry Gindi Jan 27 2010 at 5:31 6 "...the chances that you will find a book that seems well-motivated for you are low." That really is an important point! And if it's a choice between a) a book with lots of motivation which makes no sense to me; and b) a book with no motivation, but which is consequently shorter, I'll pick (b) every time. – Matthew Daws Jan 27 2010 at 12:19 19 The fourth point is especially unfortunate. Shouldn't it be recognized in the mathematical community that a mathematician who produces an exemplary undergraduate textbook could influence the development of mathematics much more than a mathematician who proves a few obscure results, however brilliant? – Qiaochu Yuan Jan 27 2010 at 14:50 5 @Qiaochu I couldn't agree more. The student of John Milnor's who probably ended up making the single greatest impact on modern mathematics through his writings was one that published very little research, yet arguably had as great if not a greater influence than Milnor himself: Michael Spivak. This, however, is the elephant in the room that a research-based profession like ours refuses to acknowledge. – Andrew L May 28 2010 at 22:53 While I agree that good writing is undervalued, etc., I find the words about Michael Spivak pretty exaggerated ("single greatest impact on modern mathematics" -- really??). – Todd Trimble Dec 30 at 13:21 To answer the question in the title of the posting (here I am rephrasing what I learnt from philosophical writings by several great mathematicians; Vladimir Arnold and Andre Weil are two names that come to mind, but there surely are others who said something similar, although I can't give you a reference now): because mathematics is discovered in one way and written in a very different way. A mathematical theory may start with a general picture, vague and beautiful, and intriguing. Then it gradually begins to take shape and turn into definitions, lemmas, theorems and such. It may also start with a trivial example, but when one tries to understand what exactly happens in this example, one comes up with definitions, lemmas, theorems and such. But whichever way it starts, when one writes it down, only definitions and lemmas remain; the general picture is gone, and the example it all started with is banished to page 489 (or something like that). Why does this happen? This is the real question, more difficult than the original one, but for now let me concentrate on the practical aspects: what can be done about it? Here is an answer that I found works for me: try to study a mathematical theory the way it is discovered. Try to find someone who understands the general picture and talk to that person for some time. Try to get them to explain the general picture to you and to go through the first non-trivial example. Then you can spend weeks and even months struggling through the "Elements of XXX", but as you do that you'll find that this conversation you had was incredibly helpful.
Even if you don't understand anything much during this conversation, later at some point you'll realize that it all fits into place and then you'll say "aha!". Unfortunately, books and papers aren't nearly as good. For some reason there are many people who explain things wonderfully in a conversation, but nevertheless feel obliged to produce a dreadfully tedious text when they write one. No names shall be named. Here is another thought: when one is an undergraduate or a beginning graduate student, one usually doesn't yet have a picture of the world and as a result, one is able to learn any theory, no questions asked. Especially when it comes to preparing for an exam. This precious little time should be used to one's advantage. This is an opportunity to learn several languages (or points of view), which can be very helpful whatever one does in the future. - 17 At the risk of being overly repetitive, I will repeat the advice of David Kazhdan as reported to me by one of his students: "You should know everything in this book, but don't read it!" – Deane Yang Jan 27 2010 at 16:09 2 In antiquity, knowledge was passed down by oral tradition. Gradually knowledge became what could be written down, rather than what could be passed on orally. – Colin Tan Apr 25 2010 at 14:28 Bourbaki volumes are certainly not the sort of textbooks one puts into the hands of young students. But an advanced student, familiar with the most important classical disciplines and eager to move on, could provide himself with a sound and lasting foundation by studying Bourbaki. Bourbaki's method of going from general to specific is, of course, a bit dangerous for a beginner whose store of concrete problems is limited, since he could be led to believe generality is a goal in itself. But that is not Bourbaki's intention. For Bourbaki, a general concept is useful if applicable to more special problems and really saves time and effort. -Cartan, "Nicolas Bourbaki and Contemporary Mathematics" Bourbaki probably had some unintended influence on textbook writers, however, during the 20th century. More motivation, examples, applications, diagrams and illustrations, informal scholia to go with formal proofs, etc. than are found in the typical Bourbaki-inspired text would be great. The "from the general to the specific" approach of Bourbaki was adopted for specific, non-pedagogic reasons. - 4 So many people tend to miss this point. There are so many points that become much clearer when one goes from general to specific. Take, for example, the intermediate value theorem. Its usual proof is very specific and unenlightening. However, if one proves it from the context of general topology, one ends up with the much stronger statement that connectedness is a topological invariant. – Harry Gindi Jan 27 2010 at 4:18 4 One could say that the IVT motivates the connectedness property. – Steve Huntsman Jan 27 2010 at 4:56 2 I'd say the notion of connectedness doesn't really need motivation! (The definition might, though.) – Sam Lichtenstein Jan 27 2010 at 5:12 16 Harry, topological invariance of connectedness is not "stronger than" the IVT. I would say that the main point of the IVT is that intervals in ℝ are connected, however unenlightening that may be. – Jonas Meyer Jan 27 2010 at 5:36 4 Actually, getting accustomed to the formal definition of connectedness and manipulations with it is IMHO an important part of learning point-set topology.
If one starts looking at, say, connectedness of invertible groups in certain topological algebras wrt different topologies, then having precision and not just intuition becomes invaluable. – Yemon Choi Jan 27 2010 at 5:37 Good question, but perhaps a little unfairly stated? With a topic like group theory, for example, it is true that, historically speaking, topics such as Galois theory played a crucial motivating role in the development of the theory; however, a posteriori, Galois theory is a more sophisticated topic than (elementary) group theory, and a student can profitably learn about groups as natural mathematical incarnations of symmetry before he/she learns about Galois theory. Therein lies, I think, a core issue: while explanation of the motivation behind a part of mathematics is very enlightening to those who have a rich enough background to appreciate it, it is not so clearly helpful to be given that motivation as one is first learning the subject: to be able to appreciate torsion as a phenomenon in the homology of manifolds, for example, requires considerably more sophistication than I would require of someone to explain (rigorously) what a finite (abelian) group was. To put it another way, if I have thought hard about a piece of mathematics, and over time realized a good way to describe it, then it's not at all clear to me that telling you all the motivations I had, and the failed attempts I made, will ease your path to understanding what I have figured out, and therefore why should I burden you with all that baggage? The same verdict is, I expect, made more brutally by people who clean up the work of those that have come before them. - 2 Good point. But rather than the difference between group theory with and without Galois theory I think the OP is complaining about the difference between a group as a group of automorphisms of an object and a group as a binary operation satisfying certain axioms. (I may have misrepresented his concerns in my own response.) – Qiaochu Yuan Jan 27 2010 at 14:53 9 I think that what people really want/need as motivation is not the actual history of the subject but an idealized history: how it should have been developed. For example, when learning topology, I think it's appropriate to start the rigorous development with open/closed sets (motivated perhaps by $\varepsilon$-$\delta$ proofs), rather than explaining what definitions people used before open sets. – Ilya Grigoriev Jan 27 2010 at 16:27 1 @Qiaochu: of course I agree, I just meant that the more math someone knows, the easier it is for me to give motivation for a topic, but if someone knows relatively little, it's harder to know what contexts are helpful. The danger is perhaps that people react to this by writing only what is logically essential. @Ilya: I agree, but it's surely a subjective question to decide what the "idealized history" should be, no? – Kevin McGerty Jan 27 2010 at 17:15 This is a quote from a beautiful little book by D. Knuth called Surreal Numbers. B: I wonder why this mathematics is so exciting now, when it was so dull in school. Do you remember old Professor Landau's lectures? I used to really hate that class: Theorem, proof, lemma, remark, theorem, proof, what a total drag. A: Yes, I remember having a tough time staying awake. But look, wouldn't our beautiful discoveries be just about the same? B: True. I've got this mad urge to get up before a class and present our results: Theorem, proof, lemma, remark.
I'd make it so slick, nobody would be able to guess how we did it, and everyone would be so impressed. A: Or bored. B: Yes, there's that. I guess the excitement and the beauty comes in the discovery, not the hearing. A: But it is beautiful. And I enjoyed hearing your discoveries almost as much as making my own. So what's the real difference? B: I guess you're right at that. I was able to really appreciate what you did, because I had already been struggling with the same problem myself. ... and so on. - To play devil's advocate for a moment: sometimes, it is worth learning how to do some things in generality and abstraction early on in one's mathematical education. I'm not a group theorist, but sometimes there is merit in learning the abstract stuff and then seeing how it applies -- because then one sees just how much can be done "formally" or "naturally". That's not to say it should always be done that way round, or that the emphasis should be on terseness and "purity"; just that to dogmatically decry abstract formulations is IMHO no better than dogmatically disdaining examples. Then again, I'm someone who liked Banach's contraction mapping principle as an undergraduate, and didn't care much for solving differential equations; so my bias is obvious and undeniable ;) - Couldn't agree more! – Vipul Naik Mar 26 2010 at 13:52 To further Yemon Choi's thread, consider two historically popular algebraic topology textbooks. Currently Hatcher's book is very popular. Before that, Spanier was quite popular. Spanier is in a sense more terse and to-the-point. But it also erases much of the context that you get from Hatcher's book. I was the TA for Hatcher's algebraic topology class a couple of times at Cornell and remember some students having trouble dealing with the richness of the context in the book. Some questions in Hatcher's book present you with a picture and ask you to argue that a certain pictured loop isn't null-homotopic. For a student used to dry set-theoretic rigour, this can be a major and uncomfortable leap. I'm not saying that Spanier is in any way a better book, but by providing a rich layer of context you're giving students a lot more to learn. If they're ready, great. But if they're not, it can be a problem. Everyone deals with those issues in different ways. Sometimes you teach less technical material and give more context (like an undergrad differential geometry of curves and surfaces in R^3 type course) and sometimes you head for the big machine and maybe sacrifice context for later -- let the students "add up" the context when they can. Many undergraduate measure theory courses operate this way. - 9 Switzer is appropriate for very few beginning students. – Tyler Lawson Jan 27 2010 at 5:44 16 Proposing Switzer as a textbook for beginning students (I think Ryan is talking about undergraduates even...) is quite close to trolling. – Mariano Suárez-Alvarez Jan 27 2010 at 6:08 7 You could certainly try, but this isn't the place for that, Harry. Especially since there are plenty of commentators here who have background in the terminology of categories and they don't agree with you. So the point seems pretty moot to me. Keep in mind that, by and large, students' first exposure to category theory is in an algebraic topology class. You can always ramp up how efficient a book is by ramping up the prerequisites, but putting category theory before algebraic topology is ahistorical. I would have found it pretty boring, too.
– Ryan Budney Jan 27 2010 at 17:10 7 The problem with Switzer is not the category theory. Rather, it does things in far too much generality for a first course and is overburdened with things a beginner will find confusing (i.e. bordism theory, the Adams spectral sequence, cohomology operations in generalized homology theories, etc.). These are all topics I love, but I can't imagine wanting to teach them in a first course. The goal in teaching is not to show off how "smart" the teacher is, but rather to TEACH. Math-as-ego-trip has no place in advising students. – Andy Putman Jan 27 2010 at 17:18 5 Earlier you were making a more general assertion about anyone who was "comfortable with categories", not just your own experience. I'm not sure I see that Switzer does what you claim: "explained [homotopy groups] in one fell swoop". Perhaps you mean define, not explain? All the theorems about homotopy groups require proofs and Switzer doesn't have any shortcuts as far as I can see (I just found a Russian translation of Switzer, strangely enough, and am browsing through it). Here I'd argue category language doesn't really contribute much to the discussion. – Ryan Budney Jan 27 2010 at 18:39 I believe that normal subgroups were first defined in the context of Galois theory (in particular, normal field extensions), by Galois. If one wants to abstract the situation slightly and see what kind of setting this is and why it makes normality important, I think the following is a fair representation: If a group $G$ acts transitively on a set $X$, and $H$ is the stabilizer of $x \in X$, then $g H g^{-1}$ is the stabilizer of $g x$. Thus a normal subgroup has the property that if it leaves one $x \in X$ invariant, then it leaves every $x \in X$ invariant. Indeed, one could define a normal subgroup this way: a subgroup $N \subset G$ is normal if and only if for every set $X$ on which $G$ acts transitively, $N$ fixes some $x \in X$ if and only if $N$ fixes every $x \in X$. (Proof: take $X = G/N$.) This is not the same definition as being the kernel of a homomorphism, although of course it is equivalent. What is my point? Mathematical ideas have many facets, often multiple origins, certainly multiple applications. This creates a difficulty when writing, because to focus on one point of view one necessarily casts other points of view into the shadows. Any author of a textbook has to walk a line between presenting motivation, perhaps by focusing on a certain nice viewpoint, and maintaining applicability and appropriate generality. A related issue is that the example that will illuminate everything for one reader will seem obscure or even off-putting to another. When you lament the omission of a favourite piece of motivation from a textbook, bear in mind that the author may have found that this motivation doesn't work for a number of other students, and hence was not something they wanted to include. The solution to this is to find texts that focus in directions that you are interested in. Perhaps the ultimate solution is to move away from texts to reading research papers. If you find papers on topics or problems that you are interested in, you will hopefully have the motivation to read them. In doing so, you will then find yourself going back to earlier papers or textbooks to understand the techniques that the author is using. But now all your study will have a focus and a context, and the whole experience will change.
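A brute-force illustration of the coset-action characterization above (a sketch in plain Python; it covers only the single group $G=S_3$ and its coset actions, so it verifies nothing beyond this toy case, and the subgroup choices below are illustrative): the normal subgroup $A_3$ fixes every coset of $G/A_3$, while the non-normal subgroup $H=\{e,(0\ 1)\}$ fixes the coset $H$ in $G/H$ but moves the other two.

```python
# Toy verification for G = S_3 only (illustration, not a proof).
from itertools import permutations

def compose(p, q):
    # (p . q)(i) = p(q(i)); permutations stored as tuples
    return tuple(p[q[i]] for i in range(len(q)))

def parity(p):
    # number of inversions mod 2 (0 = even permutation)
    return sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p))) % 2

G = list(permutations(range(3)))        # the six elements of S_3
A3 = {p for p in G if parity(p) == 0}   # alternating subgroup: normal in S_3
H = {(0, 1, 2), (1, 0, 2)}              # generated by the transposition (0 1): not normal

def cosets(sub):
    # the distinct left cosets g*sub, each stored as a frozenset
    return {frozenset(compose(g, h) for h in sub) for g in G}

def fixed_cosets(sub):
    # cosets C of G/sub with n*C = C for every n in sub
    return [C for C in cosets(sub)
            if all(frozenset(compose(n, c) for c in C) == C for n in sub)]

print(len(fixed_cosets(A3)), "of", len(cosets(A3)))  # expect: 2 of 2 (all fixed)
print(len(fixed_cosets(H)), "of", len(cosets(H)))    # expect: 1 of 3 (only H itself)
```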
- I fixed an error that was happening with your LaTeX – Harry Gindi Feb 17 2010 at 21:26 Thank you, fpqc. – Emerton Feb 18 2010 at 2:34 4 I always appreciate it when a text makes a definition and immediately shows it to be equivalent to other natural concepts. For example, to define a normal subgroup in the usual way and then immediately to state and prove: "Theorem. The following are equivalent: (a) N is a normal subgroup of G; (b) N is the kernel of a homomorphism; (c) for every G-set X..." – Darsh Ranjan Mar 7 2010 at 0:43 It is interesting that we often also see the opposite complaint... For example: Here is this monster thousand-page calculus textbook. But see this old text by Courant: it covers the same material in 200 pages, just has less fluff. (And, of course, much of what they call "fluff" is what others call "motivation and context".) - 2 That's taking this discussion to a logical extreme, I suppose. But I'd say Courant covers far more than any 1000-page calculus text I know of, and has more interesting examples. But these books are targeting far more divergent audiences. – Ryan Budney Jan 27 2010 at 18:49 3 Although Courant is probably not suitable for today's freshmen, I would like to see people try to write terser textbooks. The new "say the same thing 4 different ways including color fonts and pictures" textbooks are just total overstimulation. They're trying to replace the role of lectures. – Deane Yang May 29 2010 at 0:52 Today, textbooks could be shorter, and then much example material, colour pictures and so on could be in web material – Kjetil B Halvorsen Sep 22 at 20:14 I apologize if this topic has been discussed to death so far. Many of the posts above are absolutely correct in saying that mathematicians all learn math in different ways. Some are fine slogging through swamps of technical details, and some prefer to learn the "bigger-picture" intuition before trying to understand proofs. Many fall somewhere in the middle. I find it extremely helpful to have two sources when learning mathematics: one technical result/proof driven text and another more intuition and example oriented source. The latter doesn't need to be a book; indeed, as the thread author noted, many subjects lack such a book. However, more experienced mathematicians in the field tend to be able to provide a considerable amount of motivation for whatever you are learning. As an example, I learned differential topology from Guillemin & Pollack (motivation) and Lee's Smooth Manifolds book (details). Also, if you want an example of a book that provides a ton of motivation and almost no detail (which, I think, is extremely rare in a math book), you should look at Thurston's Three-Dimensional Geometry and Topology. - 3 Thurston's book is also the best exception I know to my claim that the most amazing mathematicians don't write books any more. It's a very strange book, but I find it quite inspiring. – Ilya Grigoriev Jan 27 2010 at 21:15 2 I agree with all of this, but especially the mention of Thurston's book (it was only a bound set of notes distributed by the Princeton math department when I used it). What beautiful and inspiring stuff. – Deane Yang Jan 28 2010 at 0:57 1 Thurston's book is a deep, awesome, infuriating read and it HAS to be on the must-read list of any mathematics or physics graduate student beyond the first year courses. – Andrew L May 28 2010 at 22:40 Authors of mathematics have to make a lot of tradeoffs.
Ideally, you want a book that is well motivated, has easy proofs, gives you a good intuition for working in an area yourself, covers a lot of material, etc. These are usually conflicting goals. If you want to motivate a problem historically you are pretty much limited to using historical tools. So you prove a lot of theorems in general topology using transfinite induction and the well-ordering theorem instead of applying Zorn's lemma. This makes things obviously harder to read for people used to the modern toolkit. The proofs are likely to be longer and it is harder to cover much material. The intuition behind a result that is the easiest for a beginner may not be the same intuition useful in actually working in an area. For the latter, you think in terms of big, abstract concepts. Also, it is clearly not the case that a proof that is easier for a beginner is also easier for someone more advanced. The proof for the beginner may use elementary techniques but a lot of computation. For someone more advanced, the computation is confusing noise. A proof that relates to an idea already seen in other contexts would be much simpler. There are books that are bad for every audience at every stage of learning, but no book is perfect for everyone at every stage of learning. - Books are expensive, and a book that can be used in many different problems is more useful than one that focuses exclusively on one. That is why nice stories of the adventures of mathematics are harder to sell than dry theoretical expositions. A story of solving a problem or proving a theorem is likely to be more entertaining and easier to follow and to remember, even if the solution involves a lot of difficult mathematics. But each story can hold just a small amount of theory, and once you know the stories, the story book becomes useless. Dry theoretical expositions find their way into our own stories when we consult them in order to find a solution for one of our problems. We are more likely to buy such books, because they are so much more useful to us in reality. Beyond that it is all economics: writers of mathematical texts develop a dry theoretical style, because that is what their readers demand. - 9 I think this is an important point. The story (i.e., motivation) behind a piece of mathematics is very important but usually easy to remember. What is hard to remember are the rigorous details. So we want a book to remind us of the hard details. A book that is too wordy is in the long run just something that is too big and heavy to keep around. Now that we have the web, maybe that's a better place to maintain and organize the stories? – Deane Yang Jan 28 2010 at 13:46 I agree that sometimes authors present a concept simply because it's a standard example in the subject, but then spend a single page on it and just move on to other things. One example that comes to mind is a particular text on undergraduate real analysis which introduced Fourier series in a few pages and then had a single sloppy exercise related to applications to PDEs. I'm not saying the book should have dedicated a chapter to PDEs, but one ugly exercise seems like a travesty and makes you scratch your head about why you're wasting your time on this stuff. I don't expect incredibly motivated concepts in graduate texts on the same subject, simply because by then I should have already been motivated enough to study onwards. However, motivation for what you're doing is one of those dangerous phrases in mathematics.
For the more difficult and abstract stuff out there, it's not always straightforward to communicate the direct usefulness of an idea. Just because I tell you a result is incredibly useful in, say, the sciences, does that make all the difference? When I learned the Radon-Nikodym theorem in real analysis, I could not for the life of me see a genuinely useful application of it, until I came to the formal definition of conditional expectation in probability. In short, the proof of existence and uniqueness of conditional expectation is by the abstract nonsense argument of the Radon-Nikodym theorem. I certainly think it would have been quite nice if somebody had told me in my real analysis class why we were learning the Radon-Nikodym theorem, but at the same time I don't think I would have been ready to learn the substantial amount of probability needed to really understand what the heck the formal definition of conditional expectation is (let alone why it's useful!). In the end, you're going to need to find a textbook which suits your needs. Each person has their own style for absorbing the material they need. Some people love the straightforward definition - theorem - proof approach while others like to see a section on "applications" after every idea presented (I personally fall into the latter category). If you want to learn the nitty-gritty version of complex analysis, you pick up Complex Analysis by Ahlfors. If you want to learn complex analysis from an engineering point of view, you pick up Complex Analysis for Engineers. It's up to you which applications you want to see, so supplement your knowledge accordingly. Plus, much of the time I don't come to appreciate a textbook until I've read it all the way through. If you're curious about "applications" of what you're learning, try going ahead 20-30 pages, and hopefully the author will have started subjects which apply what you have learned.
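For readers who want the precise link alluded to above, here is the standard Radon-Nikodym construction of conditional expectation, stated as a sketch (this is standard textbook material, not a claim about any particular book mentioned in the thread): let $(\Omega,\mathcal{F},P)$ be a probability space, $\mathcal{G}\subseteq\mathcal{F}$ a sub-$\sigma$-algebra, and $X\in L^1(P)$ with $X\geq 0$ (the general case follows by writing $X=X^+-X^-$). The set function $$\nu(A)=\int_A X\,dP,\qquad A\in\mathcal{G},$$ is a finite measure on $\mathcal{G}$, absolutely continuous with respect to $P|_{\mathcal{G}}$, so the Radon-Nikodym theorem produces a $\mathcal{G}$-measurable $Y=d\nu/d(P|_{\mathcal{G}})$ with $\int_A Y\,dP=\int_A X\,dP$ for all $A\in\mathcal{G}$. This is exactly the defining property of $E[X\mid\mathcal{G}]$, and uniqueness up to $P$-null sets comes from the same theorem.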
I've been to plenty of math talks for which I've seen the corresponding journal articles which fit your description perfectly. – Mark Meckes Feb 18 2010 at 15:05 But do the speakers post their talk slides alongside the journal article on their home pages? The logical next step would be to have archival sites that make it the 'usual process' to submit (slides, paper) as a pair. – Jacques Carette Feb 18 2010 at 19:09 Some speakers do that, but not many (I don't). You're probably right that it would be better if we did so more regularly. One (minor) difficulty with doing literally what you describe is that mathematicians tend to give talks that don't correspond to just one paper. – Mark Meckes Feb 20 2010 at 12:19 Also, many mathematicians don't use slides at all. – Darsh Ranjan Mar 7 2010 at 0:46 I think it is just another instance of Sturgeon's law "90% of everything is crud". (Google for details.) - 3 The question is not about the existence of crud but its nature, which I suspect is specific to math textbooks; the crud of science textbooks, for example, probably looks very different. – Qiaochu Yuan Jan 27 2010 at 4:08 1 Still, I do not understand why this answer got so many negvotes. It was a really nice quote. – Anweshi Jan 27 2010 at 22:02 1 +1 because this law is something one easily forgets in such debates. It's important to keep in mind, when talking about good & bad, that some part of everything is just bad because nobody tried to make it good. And then the other part (to fill up to 90%) is what is discussed here. – Konrad Voelkel Feb 7 2010 at 13:40 I hope no one will object to my raising this question from the dead... One point which has been alluded to by Tracer Tong but which is worth emphasizing is that it is sometimes very difficult to justify the usefulness of a fundamental concept without starting a whole new book. Just saying "This gets very important later on" may satisfy the lecturer/writer who knows what he is talking about but will leave the student with an aftertaste of argument by authority. This happens most often with exercises: it is very tempting for the author to take an example or a theorem from a more advanced corner of his subject and strip it of its fancy apparel. I'll list a few examples of mathematical concepts I encountered in this way "before their times" and came out with the first impression that those were silly and unmotivated - and changed my mind when I learned about them in a more thorough manner: • Hyperbolic geometry (!!) • p-adic numbers (!!!) • Dirichlet series • Milnor K-theory I don't know the best option here... It is nice to see glimpses of more exciting subjects, but sometimes it is more a way to satisfy the (quite natural) inclination of the teacher for what lies further down the road. - I agree with the sentiment of the original post, but I have also seen people perfectly happy and willing to plow through pages of technical details. I think their drive is to learn theory X because big names say it's important (nothing wrong with that; it just doesn't work well for me). So ultimately it's a matter of what your goal in mathematics is and what your personality is. Instead of arguing "why", we should try to exchange the missing motivation using the wonderful new tools we are privileged to have in the 21st century (like MO, although I'm not sure if the MO staff would frown upon a flood of questions like "what is the idea behind this definition".) Also, consider checking out this thread I started out of my own frustration with the lack of motivation. 
By reading two of the books suggested in that thread, I can testify that the examples and motivation are out there, you just have to find the right authors. http://mathoverflow.net/questions/7957/books-well-motivated-with-explicit-examples - Motivation is especially important for beginners, for instance in sophomore and junior undergraduate courses. A student who has seen three or four well-motivated build-ups to an abstraction would, I expect, be better prepared for a course that goes straight to it. That said, however, I just finished two weeks of historical motivation for my Theory of Computation course and the students were impatient with it. So some of how best to teach depends on what learners bring to it. - 2 I believe in a minimalistic approach. Try to figure out exactly what you want the students to be capable of doing by the end of the course. Provide the absolute minimum of every aspect (motivation, definitions, proofs, etc.) needed to achieve your goal. Too much of anything drags the course down. And of course try to make the students do as much of the work as possible. – Deane Yang Jan 28 2010 at 1:00 Too many professors use that as an excuse not to teach, Deane. "And, of course, try to make the students do as much of the work as possible." Does that benefit THEM or US, Deane? I wonder… – Andrew L Apr 8 2010 at 5:23 2 Andrew, here I disagree with you. You mentioned "active learning" in another more recent comment. This is for me critical. A professor who presents overly complete beautiful crystal clear lectures is not necessarily doing the students any favors. It's better to give crystal clear incomplete lectures (gaps carefully chosen) and make the students finish the work. – Deane Yang May 29 2010 at 0:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9655004739761353, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/34489/can-all-quantum-superpositions-be-realized-experimentally
# Can all quantum superpositions be realized experimentally? When textbooks in QM give examples of finite-dimensional Hilbert spaces, they use photon polarizations or two-state systems, and sometimes they mention how one can achieve superposition in such cases experimentally. On the other hand, when they talk about simple potentials like a particle in an infinite potential well, and talk about superpositions of the stationary states of this problem, they never mention how such a superposition can be achieved experimentally (despite the fact that we are in the era of nanotechnology and scientists can effectively "engineer" such potentials and many others). A typical example that may appear in textbooks can be as simple as $\Psi(x)=\frac{4}{5}\phi_1(x)+\frac{3}{5}\phi_5(x)$, where {$\phi_n(x)$} are the normalized eigenfunctions of the particle-in-a-box Hamiltonian. Other, more general superpositions could be between infinitely many stationary states via $\Psi=\sum a_n\phi_n(x)$. That makes me wonder: is there a fundamental reason that prevents us from engineering such superpositions in the case of a particle in a box and the like, or is it just that we do not know how to do it yet? Why is it possible with spin and seemingly hard with a particle in a box? Or is it something related to the energy eigenstates in the position representation? It is kind of frustrating to study for long hours/read/solve problems/HW on all kinds of potentials and on superpositions without knowing how/if they can be realized in experiment. If someone knows references in which this issue is discussed it would be greatly appreciated. - +1 on the pythagorean amplitudes :) – Emilio Pisanty Aug 18 '12 at 20:34 There are some papers about ways of constructing any superposition in certain experimental systems. I don't have time to locate them now, but hopefully somebody can do that. – Peter Shor Aug 18 '12 at 22:25 Beware of questions like, "Can ... be realized experimentally?" Do you mean that, theoretically, an experiment could realize it? I would argue that's not the same as, "Can an experiment realize ..."? (More below in my answer.) – emarti Jan 13 at 19:59 ## 4 Answers It all depends on exactly what you want to do - what system you're handling, what state you want to engineer, and what you plan to do with it. (Note that "I just want to make it" is definitely a perfectly legitimate purpose, but then you also have to think about how you're going to detect it and make sure you've got it!) For the specific example you pose, creating a superposition of two particle-in-a-box states, you first have to make the box. This is now doable using quantum dots (semiconductor islands in a different semiconductor, possibly with an electron-donating impurity inside) with the right geometry. You also need to make sure that your well is deep enough to accommodate the states you want without shifting their energies too much. After that, though, it's a piece of cake (relatively), since the $\propto n^2$ dependence of the energy levels makes all the transition frequencies distinct. Then you just have to shine a laser pulse at the right frequency and you effectively eliminate all the other levels to get a two-level system interacting with a laser field - a Rabi problem - and you just need to drive a Rabi cycle long enough to get the superposition you need. However, not all systems are as easily manipulated, and the creation of specific states can be quite challenging. 
For example, for a harmonic oscillator, all the transition frequencies are the same, and you cannot do this kind of trick, so making states with a well-defined number of photons/quanta can be very difficult (but doable!). For example, creating superpositions of different coherent states (i.e. "cat states") in light is currently only possible in certain geometries, as I found out on this question. Number states, coherent states, squeezed states, superpositions, entangled states, and so on, have been realized to some degree or other in light beams, mechanical oscillators, atoms and ions, circuit QED, and so on. Again, it depends on what you want your "weird quantum state" to do. A word of warning, though, on your more general infinite superposition $\Psi=\sum_n a_n \phi_n$. While in principle this is (more-or-less) doable, depending on the state, you also have to bear in mind that one can only ever do a finite number of measurements on the state, and therefore you can only ever confirm with certainty a finite number of the coefficients $a_n$. This is another way of saying that you can only ever do stuff with some finite precision. Thus all you can create is a finite sum like $$\Psi=\sum_n^N a_n \phi_n+\textrm{ some amount of noise.}$$ Other than that, it again depends on what system you have and what state you want, and it's up to your experimental ingenuity to design a procedure that will take you there. - I do not quite understand the part on shining the laser. Regarding the finite sum of stationary states to reach an approximate $\Psi$, I am wondering how well the stationary states, $\phi_n$, are prepared? Are they approximate as well? – Revo Aug 19 '12 at 21:36 The $\phi_n$ are basically in your head, so they can be approximate or exact depending on what you need. Ultimately you need to refer to exactly what you are measuring. – Emilio Pisanty Aug 20 '12 at 0:02 About the laser: when you shine EM radiation of angular frequency $\omega$ on a system, it can only excite transitions at energy $\sim\Delta E=\hbar\omega$. If the system is in, say, the ground state, then states not at this energy difference will not be populated and can therefore be ignored. This is effectively the Bohr principle that atoms absorb and emit radiation at frequencies corresponding to the energy differences of transitions within the different energy levels. – Emilio Pisanty Aug 20 '12 at 0:44 Comment to Shor (apologies for the answer, I cannot yet write comments): Maybe you are referring to quantum controllability theorems. Basically, quantum controllability tells you what requirements are needed for any state of the system to be accessible from any other state by means of an external electromagnetic field in finite time. The problems are of course related to degeneracies in the spectra of many Hamiltonians. The first papers addressing this problem are J. Math. Phys. 24, 2608 (1983) and Phys. Rev. A 51, 960 (1995). There are many works after this, particularly due to its importance in quantum control and its connection with quantum computation. To Emilio Pisanty: By the way, the harmonic oscillator is a well-known uncontrollable system. However, any truncation of the Hamiltonian makes the system controllable again. - Yes, for any finite-dimensional Hilbert space that effectively describes some physical system, it is possible to design a procedure that prepares the physical system in any state $|\psi\rangle \in {\mathcal H}$. 
The precise description of the procedure or gadget is an awkward task because the wording inevitably depends on the physical interpretation of the degrees of freedom and their interactions with the various macroscopic fields we have. However, let me just pick a simple example. $\phi_1$ and $\phi_5$ may be interpreted as the states $|up\rangle$ and $|down\rangle$. Then any linear combination of them, including your combination $0.8 \phi_1+ 0.6 \phi_5$ (incidentally, I also love to use this Pythagorean combination as an example), may be prepared as the state "electron up" with respect to a particular axis that is calculable. I could do the calculation of the axis for you if you want; it's trivial. Of course, the state's phase will be undetermined; the state vector's overall phase is always unphysical (unless you can compare it with another phase of the same system, like in Berry's phase or the Aharonov-Bohm effect etc.). Quantum computation is a systematic "industry" that is able to perform many elementary operations on the Hilbert space, like exchanging $\phi_2$ and $\phi_5$ if you found it natural. Typically, only several operations – like rotations by preferred angles – are allowed operations on a quantum computer. However, it's straightforward to extend the basic operations of a quantum computer so that you may compose them to any unitary matrix you want. That's also enough to prepare any complex linear superposition of any basis vectors. It's also possible to "remap" qubits encoded e.g. in many electrons' spins to the amplitudes for any other states, even though the detailed "diagram" of the apparatus that achieves such a goal will tend to depend on the precise technical implementation. But in principle, such a "remapping" is analogous to copying the classical information (in bits) from a CD to a Flash memory. The same things may be done at the quantum level but the bits are not copied; the original has to be destroyed. - But this does not answer how to experimentally superimpose two or more eigenstates of the particle-in-a-box Hamiltonian. – Revo Aug 18 '12 at 19:53 OK, just shoot the particle into the box using a double-slit-like gadget where the kinetic energy is ready for the first level in the first arm, and accelerated to the 5th level in the second arm. Let the electrons in the two arms interfere before they're trapped in the box. The relative size of the slits will determine the ratio of the normalizations of the two amplitudes; fine adjustments of the arms' length will adjust the relative phase. If I can't shoot the particles inside, e.g. because you want the particle to be inside the box, you would have to tell me what I can do to manipulate it. – Luboš Motl Aug 18 '12 at 19:58 Let me mention that this technical description sounds very different from a quantum computer or others, and it shouldn't be surprising. Your task "prepare a physical system in a particular quantum state" is the quantum counterpart of "prepare a classical system in a particular state". This is an extremely universal task that may mean anything - cook a soup or deliver a rover to Mars. All these things are states of a physical system, even classically. The procedures to achieve them of course depend on the physical system but with enough tools and interactions, it can always be done in principle. – Luboš Motl Aug 18 '12 at 20:01 Sorry, but I do not understand what you mean by "arm", let alone "the kinetic energy is ready", "accelerated", and "the arms length" to control the relative phases. 
Could you please elaborate on that mechanism. Are you saying that we have 2 holes, they act like filters, one will let only particles in the quantum state 1 pass, and the other will allow only particles in the quantum state 5 to pass, and then we let them interfere and collect them in a box? (I guess that would destroy the interference pattern, so it seems my understanding is wrong) – Revo Aug 19 '12 at 21:49 Yes, you can definitely construct holes that act as filters. A hole always is a filter, after all. It corresponds to a projection operator that only keeps the particle if its position belongs to a set, the hole. By accelerating the particle or doing something else with it along the path, the position information may be converted to another quantum number such as the momentum. If the initial state of the particle is the same (independent of the history/slit) and the final state is the same up to the final position along the would-be interference pattern, there is always interference. – Luboš Motl Aug 20 '12 at 5:46 Theoretically, any superposition can be experimentally realized. Experimentally, most can't. The fundamental reason is that a system must be decoupled from its environment so as not to decohere, yet still coupled strongly to an extremely well-calibrated apparatus to generate the superposition. I would guess the 'advanced information' for the 2012 Nobel Prize would be a good starting point, since this issue was so central for both Haroche and Wineland. As a very rough experimental summary, superpositions of two-level systems have been seen in a huge number of systems, from charge or flux states in qubits to electronic states of semiconductor impurities. General superpositions of up to maybe half a dozen states have been realized in a few systems. The systems that leap to mind are nuclear spins in a molecule, internal spins in an atom, the spin/electronic state of a chain of ions, the momentum state of a free-falling atom, or the polarization and spatial mode of one (or several) photons. For a large number of particles, very particular superpositions can be made. For example, spin squeezing has been observed for probably hundreds to thousands of atoms, but a general superposition state of so many atoms is beyond current experiments. That, in a word, is why it's hard to build a quantum computer. To answer the broader question, superpositions are very useful to know about and understand, even if they're very challenging to make. -
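To make the Rabi-cycle recipe from the first answer concrete, here is a minimal numerical sketch of preparing the question's $\frac{4}{5}\phi_1+\frac{3}{5}\phi_5$ state. It assumes resonant driving of the 1-5 transition in the rotating-wave approximation, so only the two addressed levels matter; the Rabi frequency and all names below are illustrative stand-ins, not parameters from any actual experiment.

    import numpy as np

    # Resonant two-level Rabi problem (rotating-wave approximation):
    # starting from phi_1, the amplitudes evolve as
    #   c1(t) = cos(Omega t / 2),   c5(t) = -i sin(Omega t / 2)
    omega_rabi = 2 * np.pi * 1e6               # illustrative Rabi frequency (rad/s)

    # Choose the pulse duration so that sin(Omega t / 2) = 3/5
    t_pulse = 2 * np.arcsin(3 / 5) / omega_rabi

    c1 = np.cos(omega_rabi * t_pulse / 2)
    c5 = -1j * np.sin(omega_rabi * t_pulse / 2)
    print(abs(c1) ** 2, abs(c5) ** 2)          # 0.64, 0.36: the 4/5, 3/5 weights

    # Read as a spin-1/2 state 0.8|up> + 0.6|down>, the same amplitudes point
    # along a Bloch-sphere axis at polar angle theta = 2 arccos(0.8) -- the
    # "calculable axis" mentioned in the second answer.
    print(np.degrees(2 * np.arccos(0.8)))      # ~73.7 degrees

The relative phase of $-i$ picked up by $\phi_5$ is the kind of phase bookkeeping the answers discuss; shifting the phase of the driving field shifts it correspondingly.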
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9400893449783325, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/15052/graphene-space-elevator-possible?answertab=active
# Graphene space elevator possible? I just read this story on MIT working on industrial-scale, km^2 sheet production of graphene. A quick check of Wikipedia on graphene and Wikipedia on space elevator tells me "Measurements have shown that graphene has a breaking strength 200 times greater than steel, with a tensile strength of 130 GPa (19,000,000 psi)" and "The largest holdup to Edwards' proposed design is the technological limit of the tether material. His calculations call for a fiber composed of epoxy-bonded carbon nanotubes with a minimal tensile strength of 130 GPa (19 million psi) (including a safety factor of 2)". Does this mean we may soon actually have the material for a space elevator? - ## 4 Answers A decent terrestrial space elevator could be built with a material with a tensile strength of 50 GPa (including a decent safety factor), so this material may suffice. Note that there is no prospect of having one 100,000 km nanotube - they would actually be much shorter (maybe 10 cm) and held together by the much weaker inter-tube molecular bonds (if the strings are long enough, they will bond together billions of times where they touch; if there are enough such contact points, the inter-tube bond can be as strong as you want). Graphene uses the same carbon-carbon bond as the nanotubes for strength, so it would not surprise me if graphene could be used to create strong tethers. I think that what is really holding the terrestrial space elevator back is the lack of money for elevator-focused R&D on string materials. There is really no other market for these materials, and other uses (such as bullet-proof vests) are not close enough to - The other things holding back research on beanstalks as such are the mind-boggling cost of such a project and the fact that they essentially kill the entire satellite-based space infrastructure out past geostationary orbit, and don't offer any compensation in terms of polar observation. It's a big hit. – dmckee♦ Sep 24 '11 at 23:37 1 Well, for the "no other market" argument: the article says it's 100 times more electrically conductive than copper and can transmit data 10 times faster than fiber optics. Both uses call for long sheets of graphene. In addition, MIT is actually working on it. – JollyJoker Sep 25 '11 at 11:23 1 @lurscher: It might pay for itself, but it wouldn't get built because for the foreseeable future trying to build it would bankrupt the whole planet. The fool thing has to be built from geostationary orbit (oh, you can build a tower on a mountain, but that's chump change), which means getting gigatons there in the first place (we wouldn't lift them all, of course we'd bring them from the asteroids or some such). The short version is: it can't be done until we have pretty easy access to space. Chicken and egg problem. – dmckee♦ Sep 26 '11 at 16:45 3 @dmckee: Huh? Yes, putting 100 tons in geostationary orbit is expensive. But it didn't bankrupt the whole planet the last time we put 100 tons (a little at a time) into geostationary orbit. Since humans have already done it once, the evidence seems to indicate that putting 100 tons into geostationary orbit is feasible. – David Cary Sep 28 '11 at 2:42 2 @JollyJoker Your idea to build a 44 ton ribbon that can lift 1 ton is like proposing to build the Sears Tower with no engineering margin at all. Quality control of the manufacturing will swamp these other strength limits - limits that have been obtained from microscopic quantities or from theoretical calcs using chemical bond strength. 
If we should ever become advanced enough to build 44 tons of material with such a finely controlled micro-structure, we will certainly have no need for a space elevator. Even a bulkier ribbon would require technology that makes the elevator itself obsolete. – AlanSE Sep 30 '11 at 16:48 @lurscher of course I understand it's from GEO; the fact that GEO is the net zero apparent acceleration point is the reason it would be "unfurled" from GEO. If your point behind the stages is that it could be carried up in segments, then yes, no one ever argued otherwise. The only thing your $k^N$ mathematics shows is that it could theoretically be made with any material, regardless of its specific strength. This is true for any compression structure as well. There is still a practical problem if the approach results in needing trillions of tons of material. - Agreed. In this sense, maybe it's overall cheaper to use $10^7$ tons of, I don't know, steel bars, than to use $10^{4}$ tons of carbon nanotube unobtainium (or can we say, notyetavailablaintium?) – lurscher Oct 4 '11 at 20:28 @lurscher lol, I was meaning to write this as a comment, but oh well. What you say about steel over concrete may be true; it all depends on the tradeoffs. The high strength/weight materials are very appealing b/c of the exponential nature of material needs. But it's important to stress that it's not just price, but energy, that is needed for every stage. Manufacture requires energy just like lifting to GEO does. One way or the other, we are limited in the energy society has access to, and one can categorically dismiss an idea by proving it requires too much energy. – AlanSE Oct 5 '11 at 18:18 Most proposed designs of the space elevator are such that the whole structure is under tensile stress from the ground anchor point. In these designs, there are stress limits that constrain the material properties of the ribbon. The calculations (based on the geosynchronous height of Earth) point to that 130 GPa figure. There is potentially another design approach in which there is no stress limit required at any point in the structure. In this case, the tensile stress is entirely from the geosynchronous orbit holding up the structure against its weight (rather than the earth holding it up against centrifugal force). You only need to make sure the whole structure is at equilibrium, so the center of mass stays roughly at GEO. So, you start at GEO, and build each level one at a time. After finishing each level, you adjust your center of mass to stay at equilibrium. Then you proceed to build the next level below the previous one, until you reach ground. In order for the upper levels to be able to hold the weight of the lower ones, the structure will follow an exponential pattern of joints. If the whole elevator structure has $N$ levels, the ground level (Level 0) will have one link. The next level (Level 1) will have $k$ links, which all sustain the weight of the Level 0 link. Level 2 will have $k^2$ links, which sustain the $k$ links of Level 1. The last level will have $k^N$ links. So at GEO, the stress is the whole weight of the structure divided by the cross-section area of all links. The area grows as $k^N$ while the weight of the whole structure grows as $\frac{1-k^{N+1}}{1-k}$. So asymptotically the stress at GEO stays under parameter control. The benefit of this approach is that you can even make the whole structure with normal materials (no stress limit required). 
Of course the tensile strength of the material chosen still affects the number of links and levels required to make the structure sustain its own weight. - This doesn't make sense to me. The stress at the point where it touches the ground is negligible compared to the ripping stress encountered at GEO, which is the maximum stress (evidenced by true weightlessness at that point). If there is little or no stress at the point of contact with Earth it doesn't change the material requirements. You could make it out of steel, but a 1 ton payload will still take a trillion tons of steel, and again, I believe this is before engineering margins are applied! Your pattern may initially work, but we should run out of steel without much of the distance spanned. – AlanSE Sep 30 '11 at 18:58 Why do you think there is ripping stress at GEO? Stress is force (weight) divided by area. The area is proportional to $k^N$. The weight grows as $\frac{1-k^{N+1}}{1-k}$; I hope that clears your doubts – lurscher Sep 30 '11 at 19:03 I agree that it is expensive, and the engineering challenges are significant, but the material of the structure is NOT one of them, at least with this design – lurscher Sep 30 '11 at 19:27 The force at GEO is just the maximum. It's also a little more complicated than just using $k$ times the last stage's material for the next stage, because the apparent acceleration (and the force) isn't linear. What you say works for piling up blocks or stacking paperweights on Earth. For the distance the space elevator spans, it's much more nonlinear, meaning the lengths of the stages change or they don't follow a geometric multiplier. The typically referenced designs for the space elevator are already optimized and have ridiculous material requirements. – AlanSE Sep 30 '11 at 19:53 The real economics will come into play via electricity. Space-based solar transmitting electricity down graphene cables solves our energy crisis basically forever. Once you build the first cable, building more is an order of magnitude cheaper. Once you make that initial investment, the solar farms become trivial, although it will take years if not decades to get them up and running. Inexhaustible, utterly green energy that can be scaled virtually limitlessly – that's the game-changer for the human race. - I like the optimism, but it might be tempered a bit; people made similar statements about nuclear power. There is wear and tear on any system; meteors and space debris will necessitate occasional replacement. The replacements will have to be manufactured from mined materials and transported, so the system is not entirely green. There are also safety risks; the occasional accidental aeroplane collision with a cable and the potential for terrorism cannot be ignored. – AdamRedwine Sep 30 '11 at 16:27 I'll put aside the problems with space-based solar for a moment, the lifting mechanisms, and I'll even put aside the glaring idea-crushing problem with the minimum payload size. Wait, we also have to put aside the fact that we've never built a 4,000 mile HVDC line. Provided that we build this, what will the linear density of the power transmission line from GEO to Earth be? – AlanSE Sep 30 '11 at 16:38 @zassounotsukushi, couldn't the electricity be transmitted inside the tube as a stream of electrons (not in a metal, just in a vacuum, i.e. cathode rays, and therefore with no resistance)? – Jonathan. 
Oct 3 '11 at 6:39 @Jonathan You'd need a vacuum tube for the part in the atmosphere, but since that's a small fraction of the length, it might not weigh the elevator down prohibitively. The bigger question is the charge return. If electrons are being sent down to Earth at high voltage, what positive charge is being sent up? Simple superconducting HVDC lines would seem more promising, since low temperature is (relatively) easy, although radiation damage is still a problem. – AlanSE Oct 3 '11 at 14:01 Could the electrons be sent down the nanotubes (where there are no particles)? – Jonathan. Oct 3 '11 at 14:59
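A side note for readers trying to follow the $k^N$ exchange above: the claim that the stress at GEO stays bounded is just the convergence of a geometric series, which is easy to check numerically. The sketch below uses the answer's own idealization - every link has unit weight and unit cross-section, and the gravity gradient is ignored - so it says nothing about total mass or the nonlinearity AlanSE raises; it only illustrates why the stress ratio settles down.

    # Stress at the top level = total weight below / top-level cross-section,
    # with level j containing k**j identical links (unit weight, unit area).
    def top_stress(k, n):
        weight = sum(k ** j for j in range(n + 1))   # = (k**(n+1) - 1) / (k - 1)
        area = k ** n
        return weight / area

    for n in (5, 10, 20, 40):
        print(n, top_stress(2.0, n))        # approaches k / (k - 1) = 2.0

    # The practical objection: the link count is exponential in n.
    print(sum(2 ** j for j in range(41)))   # ~2.2e12 links for k = 2, n = 40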
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9370290637016296, "perplexity_flag": "middle"}
http://directed-procrastination.blogspot.com/2011/06/versioned-arrays.html
directed procrastination

Wednesday, June 15, 2011

Versioned Arrays

I just learned of a method of creating functional data structures from mutable types called versioning. Basically, if you have mutable data like an array, you can implement functional changes of the data structure by storing the differences between multiple references. This is a good bit better than copying the data structure, particularly in the case of a large array. For an array of size $N$, versioned arrays can hold two arrays with $m$ changes between them using only $O(N+m)$ space. Further, while the access time for an element when the array is $m$ changes away from its version is $O(m)$, many use cases require only accessing arrays a fixed number of changes away, making access time $O(1)$.

A Simple Implementation using Lists

This can be implemented simply enough using cons cells and lists. Implementing in Common Lisp, accessing an element is as follows:

1. A versioned array is a list whose last element is an array
2. The values before the last element are a list of changes that need to occur to mutate the array into the one we are interested in.
3. Access to the array involves applying all changes to the array and inverting the changes as we move the array to the other side of them (I'll try to clarify later)
4. Leaving you with a list of length one, containing an array with the desired values in it, at which point you can access the data.

By moving the array to the version we are currently accessing from, we are betting that more accesses will happen from this, or a similar, version. This will mean the rest of the accesses will be faster than the first.

(defpackage :versioned-arrays
  (:use :cl :bt :iter :modf)
  (:export #:make-versioned-array
           #:varef))

(in-package :versioned-arrays)

(defun make-versioned-array (dimensions
                             &rest args
                             &key (element-type t) initial-element initial-contents
                                  adjustable fill-pointer
                                  displaced-to displaced-index-offset)
  "Make a versioned array."
  (when (or adjustable fill-pointer displaced-to displaced-index-offset)
    (error "The capabilities: adjustable, fill-pointer, displaced-to, displaced-index-offset are not implemented yet"))
  (list (apply #'make-array dimensions args)))

(defun varef (v-arr &rest idx)
  (multiple-value-bind (array changes) (get-array v-arr)
    (invert-changes (cdr v-arr) changes v-arr)
    (setf (car v-arr) array
          (cdr v-arr) nil)
    (apply #'aref (car v-arr) idx)))

(defun get-array (v-arr)
  (if (arrayp (car v-arr))
      (values (car v-arr) nil)
      (multiple-value-bind (array changes) (get-array (cdr v-arr))
        (destructuring-bind (new-val &rest idx) (car v-arr)
          (let ((old-val (apply #'aref array idx)))
            (setf (apply #'aref array idx) new-val)
            (values array
                    (cons (cons old-val idx) changes)))))))

(defun invert-changes (old-v-arr changes last)
  (cond ((null changes) nil)
        (t (setf (cdr old-v-arr) last
                 (car old-v-arr) (car changes))
           (invert-changes (cdr old-v-arr) (cdr changes) old-v-arr))))

First, let's consider a simple linear version history. This would happen if you had an array and only modified the original or the latest versions of the array. Using the standard cons cell diagrams where red is the car and blue is the cdr, our linear versioned array would look something like this (diagram not reproduced here).

Modification goes very similarly:

1. Given that we want to modify the value at indices idx to new-val
2. We first store the current value at idx. This has the side effect of moving our array to the current version.
3. We cons a new cell to store the array in, place the delta where the array currently is, and modify the cdr of that cell to point to the new one.
This can be simply coded using my Modf library:

(define-modf-function varef 1 (new-val v-arr &rest idx)
  (let ((old-value
          ;; This moves the array to our version
          (apply #'varef v-arr idx)))
    (let ((arr (car v-arr)))
      (setf (apply #'aref (car v-arr) idx) new-val)
      (setf (cdr v-arr) (list arr)
            (car v-arr) (cons old-value idx))))
  (cdr v-arr))

When looking at what is above, it is important to note that this is a mutation-filled computation. This does not involve building lists the normal way with cons. Instead we are modifying lists in place. This is necessary as we need any changes we make to be seen by any reference to the versioned array. This is what I meant by inverting the changes. Inverting means reversing the order of the deltas as well as swapping the values in the deltas with the values in the array. The issue here is that we may have any given number of references to this data structure floating about. We need to modify the values that might be referenced so that if we use those references later, they will still work.

So things might be coming into focus: although a versioned array is a list of deltas with its last element an array, many other versioned arrays (or versions of the array) might be sharing some structure with this array. If they truly are versions of the same array, they at the very least share the last element in the list, which contains the array they are versioning. In the following figures (not reproduced here) we see the structure of a more complicated array as elements are accessed and modified (edited). Editing an element in the array is much like an access except that at the end it creates another delta with the specified edit.

Multithreading

The very fact that we are messing around with mutation should clue you in that we are probably going to have thread safety issues. In fact, even reading is a mutating operation for this structure, so it isn't even safe to read from multiple threads at the same time. Due to this, everything inside the internals of this data structure needs to be wrapped in a lock unique to the structure. I chose to switch from a list to a structure so I could hold the lock in a straightforward manner (and so I can have a type to specialize print-object on). There is a single lock on the array. This means that, unlike tree-based implementations of functional arrays, working with one version will halt any work with another version.

Performance

Let's see how this performs on the well-known N-Queens problem. This involves backtracking, which is one of those use cases where we are only interested in array versions a constant number of deltas away from the current one. If we ignore how simple the queens problem is and explicitly build a chess board to check the other attacking squares, we would get something like what follows. Here I have a version based on versioned arrays, and one sketched out for standard arrays. I also implemented this for a functional array based on FSet, which uses some kind of trees.

(defun place-queen (board i j)
  "This complicated bit is to test if a queen can attack another."
  (if (and (iter (for k from 0 below (va-dimension board 0))
                 (never (and (/= i k) (varef board k j))))
           (iter (for k from 0 below (va-dimension board 1))
                 (never (and (/= j k) (varef board i k))))
           (iter (for k1 from i downto 0)
                 (for k2 from j downto 0)
                 (never (and (/= k1 i) (/= k2 j) (varef board k1 k2))))
           (iter (for k1 from i below (va-dimension board 0))
                 (for k2 from j below (va-dimension board 1))
                 (never (and (/= k1 i) (/= k2 j) (varef board k1 k2))))
           (iter (for k1 from i downto 0)
                 (for k2 from j below (va-dimension board 1))
                 (never (and (/= k1 i) (/= k2 j) (varef board k1 k2))))
           (iter (for k1 from i below (va-dimension board 0))
                 (for k2 from j downto 0)
                 (never (and (/= k1 i) (/= k2 j) (varef board k1 k2)))))
      (modf (varef board i j) t)
      nil))

(defun n-queens (board n row)
  "Simple enough backtracking algorithm."
  (if (= row n)
      board
      (iter (for i below n)
            (let ((result (place-queen board i row)))
              (when result
                (thereis (n-queens result n (+ row 1))))))))

(defun place-queen* (board i j)
  "This complicated bit is to test if a queen can attack another."
  (if (and ...similar-tests...)
      t
      nil))

(defun n-queens* (board n row)
  "A bit more convoluted backtracking algorithm."
  (if (= row n)
      board
      (let (winning-board)
        (iter (for i below n)
              (let ((result (place-queen* board i row)))
                (when result
                  (setf (aref board i row) t)
                  (let ((try (n-queens* board n (+ row 1))))
                    (when try
                      (setf winning-board try)
                      (finish))
                    (setf (aref board i row) nil)))
                (finally (return winning-board)))))))

Comparing the two, we naturally see that the functional form is more elegant and understandable. The speed is another matter though…

| $N$ | Array | Versioned Array | FSet Array | VA/Array  | FSet/Array |
|-----|-------|-----------------|------------|-----------|------------|
| 11  | 1     | 4               | 4          | 4         | 4          |
| 12  | 2     | 24              | 41         | 12        | 20.5       |
| 13  | 1     | 12              | 12         | 12        | 12         |
| 14  | 17    | 255             | 277        | 15        | 16.294118  |
| 15  | 14    | 207             | 231        | 14.785714 | 16.5       |
| 16  | 116   | 1726            | 1937       | 14.879310 | 16.698276  |
| 17  | 68    | 1006            | 1167       | 14.794118 | 17.161765  |
| 18  | 595   | 8775            | 10073      | 14.747899 | 16.929412  |
| 19  | 40    | 584             | 694        | 14.6      | 17.35      |

The functional version is easier to understand, of course, though using something like screamer would hide most if not all of the side effect annoyances. Speed-wise, though, the functional approaches seem to be around a factor of 15 slower. The FSet method is consistently slower, but not by enough to say definitively that versioned arrays are going to be faster for this problem. We do see a slow growth in the FSet implementation. We are probably seeing the $O(\log N)$ access time. There might be a completely different story to tell with proper optimization of the versioned arrays, particularly if someone tried to get it closer to the metal, so to speak.

Conclusions

A little side note: since starting the post, I have found out that Haskell uses something similar to this in its Diff Array library, with the notable exception that the version history is forced to be linear. This is done by forcing a copy if a version other than the most recent is changed. This seems like it could lead to mysterious poor performance for small changes in your algorithm and certainly doesn't seem appropriate for backtracking. Fortunately, the person who let me know about this data structure in the first place is currently implementing a library for this for Haskell. One thing to note is when this doesn't work well. It doesn't work well when you have references to versions of an array with very different deltas. For instance, if you saved some initial state of an array, made many changes, and used both arrays by, say, comparing them element by element, things could get pretty slow. 
To curb this, several people have developed schemes to split the versioning tree if too many deltas are in between two particular arrays. Typically this is done probabilistically, randomly splitting the delta tree with probability $\frac 1 N$. With this modification, we have an expected amortized edit time of $O(1)$ (worst case $O(N)$, which is a steep price) and an expected worst-case access time of $O(N)$. The true worst-case access time is $O(m)$ for arbitrarily bad luck on the RNG, but the expected $O(N)$ is a much-needed improvement over knowing it is $O(m)$ where $m$ can be arbitrarily large. Since all of the work is done in walking the deltas and rebasing the array, edits and accesses have the same time complexity.

This all sounds like gloom and doom, but really, there is a large class of problems that don't require access from very different arrays. Further, much of this trouble can be resolved by breaking the abstraction and allowing the user to explicitly force a copy.

I was thinking about it, and there is nothing here limiting you to arrays. I could, in principle, implement this kind of versioning for any mutable structure. Instead of storing indices (the arguments of the setter function), store the function and the arguments, and the setter can be inverted the same way it is in Modf. It doesn't sound like too bad of an idea to have a generic facility to stick any object into and get back an immutable object with reasonable asymptotic performance. Maybe this should be a future extension for Modf.
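As a closing illustration for readers who don't read Lisp, the delta-list idea is small enough to transcribe into Python. This is my own sketch, not a drop-in equivalent of the library above: one-dimensional only, no lock, and recursive rebasing, but the invariant is the same - exactly one version holds the backing storage, and every other version is a delta pointing toward it.

    class VArray:
        """A version is a two-slot cell [payload, next]: either payload is the
        backing list (next is None), or payload is an (index, value) delta and
        next points one step toward the version that holds the list."""

        def __init__(self, data):
            self.cell = [list(data), None]

        def _reroot(self):
            """Walk to the backing list, dragging it here and inverting deltas."""
            payload, nxt = self.cell
            if nxt is None:
                return payload
            arr = nxt._reroot()                        # list is now at our neighbor
            i, v = payload
            arr[i], old = v, arr[i]                    # apply our delta ...
            nxt.cell[0], nxt.cell[1] = (i, old), self  # ... and invert it
            self.cell[0], self.cell[1] = arr, None
            return arr

        def get(self, i):
            return self._reroot()[i]

        def set(self, i, v):
            """Functional update: returns a new version; self stays readable."""
            arr = self._reroot()
            new = VArray.__new__(VArray)
            new.cell = [arr, None]
            self.cell[0], self.cell[1] = (i, arr[i]), new  # self becomes a delta
            arr[i] = v
            return new

    a = VArray([0] * 4)
    b = a.set(2, "queen")
    print(b.get(2), a.get(2))   # queen 0 -- both versions remain usable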
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9330565929412842, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2009/11/05/an-example-of-a-parallelogram/?like=1&source=post_flair&_wpnonce=5b12459335
# The Unapologetic Mathematician ## An Example of a Parallelogram Today I want to run through an example of how we use our new tools to read geometric information out of a parallelogram. I'll work within $\mathbb{R}^3$ with an orthonormal basis $\{e_1, e_2, e_3\}$ and an identified origin $O$ to give us a system of coordinates. That is, given the point $P$, we set up a vector $\overrightarrow{OP}$ pointing from $O$ to $P$ (which we can do in a Euclidean space). Then this vector has components in terms of the basis: $\displaystyle\overrightarrow{OP}=xe_1+ye_2+ze_3$ and we'll write the point $P$ as $(x,y,z)$. So let's pick four points: $(0,0,0)$, $(1,1,0)$, $(2,1,1)$, and $(1,0,1)$. These four points do, indeed, give the vertices of a parallelogram, since both displacements from $(0,0,0)$ to $(1,1,0)$ and from $(1,0,1)$ to $(2,1,1)$ are $e_1+e_2$, and similarly the displacements from $(0,0,0)$ to $(1,0,1)$ and from $(1,1,0)$ to $(2,1,1)$ are both $e_1+e_3$. Alternatively, all four points lie within the plane described by $x=y+z$, and the region in this plane contained between the vertices consists of points $P$ so that $\displaystyle\overrightarrow{OP}=u(e_1+e_2)+v(e_1+e_3)$ for some $u$ and $v$ both in the interval $[0,1]$. So this is a parallelogram contained between $e_1+e_2$ and $e_1+e_3$. Incidentally, note that the fact that all these points lie within a plane means that any displacement vector between two of them is in the kernel of some linear transformation. In this case, it's the linear functional $\langle e_1-e_2-e_3,\underline{\hphantom{X}}\rangle$, and the vector $e_1-e_2-e_3$ is perpendicular to any displacement in this plane, which will come in handy later. Now in a more familiar approach, we might say that the area of this parallelogram is its base times its height. Let's work that out to check our answer against later. For the base, we take the length of one vector, say $e_1+e_2$. We use the inner product to calculate its length as $\sqrt{2}$. For the height we can't just take the length of the other vector. Some basic trigonometry shows that we need the length of the other vector (which is again $\sqrt{2}$) times the sine of the angle between the two vectors. To calculate this angle we again use the inner product to find that its cosine is $\frac{1}{2}$, and so its sine is $\frac{\sqrt{3}}{2}$. Multiplying these all together we find a height of $\sqrt{\frac{3}{2}}$, and thus an area of $\sqrt{3}$. On the other hand, let's use our new tools. We represent the parallelogram as the wedge $(e_1+e_2)\wedge(e_1+e_3)$ — incidentally choosing an orientation of the parallelogram and the entire plane containing it — and calculate its length using the inner product on the exterior algebra: $\displaystyle\begin{aligned}\mathrm{vol}\left((e_1+e_2)\wedge(e_1+e_3)\right)^2&=2!\langle(e_1+e_2)\wedge(e_1+e_3),(e_1+e_2)\wedge(e_1+e_3)\rangle\\&=2!\frac{1}{2!}\det\begin{pmatrix}\langle e_1+e_2,e_1+e_2\rangle&\langle e_1+e_2,e_1+e_3\rangle\\\langle e_1+e_3,e_1+e_2\rangle&\langle e_1+e_3,e_1+e_3\rangle\end{pmatrix}\\&=\det\begin{pmatrix}2&1\\1&2\end{pmatrix}\\&=\left(2\cdot2-1\cdot1\right)=3\end{aligned}$ Alternately, we could calculate it by expanding in terms of basic wedges. 
That is, we can write $\displaystyle\begin{aligned}(e_1+e_2)\wedge(e_1+e_3)&=e_1\wedge e_1+e_1\wedge e_3+e_2\wedge e_1+e_2\wedge e_3\\&=e_2\wedge e_3-e_3\wedge e_1-e_1\wedge e_2\end{aligned}$ This tells us that if we take our parallelogram and project it onto the $y$-$z$ plane (which has an orthonormal basis $\{e_2,e_3\}$) we get an area of ${1}$. Similarly, projecting our parallelogram onto the $x$-$y$ plane (with orthonormal basis $\{e_1,e_2\}$) we get an area of $-1$. That is, the area is ${1}$ and the orientation of the projected parallelogram disagrees with that of the plane. The coefficient of $e_3\wedge e_1$ likewise gives a projected area of $-1$ in the $z$-$x$ plane. Anyhow, now the squared area of the parallelogram is the sum of the squares of these projected areas: $1^2+(-1)^2+(-1)^2=3$. Notice, now, the similarity between this expression $e_2\wedge e_3-e_3\wedge e_1-e_1\wedge e_2$ and the perpendicular vector we found before: $e_1-e_2-e_3$. Each one is the sum of three terms with the same choices of signs. The terms themselves seem to have something to do with each other as well; the wedge $e_2\wedge e_3$ describes an area in the $y$-$z$ plane, while $e_1$ describes a length in the perpendicular $x$-axis. Similarly, $e_1\wedge e_2$ describes an area in the $x$-$y$ plane, while $e_3$ describes a length in the perpendicular $z$-axis. And, magically, the sum of these three perpendicular vectors to these three parallelograms gives the perpendicular vector to their sum! There is, indeed, a linear correspondence between parallelograms and vectors that extends this idea, which we will explore tomorrow. The seemingly-odd choice of $e_3\wedge e_1$ to correspond to $e_2$, though, should be a tip-off that this correspondence is closely bound up with the notion of orientation. Posted by John Armstrong | Analytic Geometry, Geometry ## 6 Comments » 1. Looks very interesting. Especially since for the past few months I have been studying Geometric algebra (aka Clifford algebra) which subsumes most of the geometric concepts you have been dealing with. I am actually a physics student and without this self-study I wouldn't have understood a thing of what you wrote. Pity they don't teach these advanced math concepts. I would like to know whether you're familiar with GA and if so what is your opinion of it? Comment by Rie-mann | November 6, 2009 | Reply 2. Sure, Clifford algebras are useful for various things. I'm not sure what exactly you want my opinion about. Comment by | November 6, 2009 | Reply 3. There is a geometry to the Ideocosm. * Metaphor: a parallelogram in the space of ideas. * "A is to B as C is to D" locates four points in the Ideocosm (Zwicky's name for the space of all possible ideas). Sometimes, in literature, one of these points is implicit. * "A is to B" is a vector, with tail at A and head at B (I note that metaphors occur in Mathematics). The vector has a direction; it points in a particular way. * "C is to D" is a vector. * "A is to B… AS… C is to D" tells us that those two vectors are parallel. * When one says "figure of speech," one may analyze the laws of figure (Geometry), as well as the laws of speech. I've discussed this at greater length in other blaths. For example, is the Ideocosm really a Metric Space? What is the topology of the space of all possible ideas; mathematical ideas; physical system ideas? Is a metaphor between metaphors a parallelepiped? I'm very serious. Comment by | November 6, 2009 | Reply 4. 
Jonathan, these ideas about the ideocosm are not obviously stupid, but at the same time it's hard for me to see how they could be a fruitful line of inquiry. Is there the tiniest germ of a testable hypothesis about the space of ideas (much less any sort of research program)? Is it even a coherent concept, or might it crumble into self-referential paradox ("ideocosm" being itself an idea, an idea about ideas)? The parallelogram idea is sort of suggestive, and we've talked at the Cafe about how spans crystallize part of what we generally mean by an analogy (I would say "analogy", not "metaphor"). There are also spans between spans, which you can read about also on the blog you're reading now. Exact lines of inquiry, partly linguistic and partly philosophic, might be possible here and also interesting. There is certainly a rich n-categorical mathematics of spans. But "ideocosm" itself strikes me as too wild and woolly, pitched at a wrong level as it were, and not at the stage of anything approaching a science. Something for late-night bull sessions perhaps, fueled by intoxicants. Comment by | November 7, 2009 | Reply 5. Todd Trimble is right, of course. Since he has a full-time job and I do not, he is more than welcome to provide the intoxicants when we meet f2f. So… Limits and Convergence, in applying my claim to Mathematics itself. The paper arXiv:0810.5078 Title: Demonstrative and non-demonstrative reasoning by analogy Authors: Emiliano Ippoliti analyzes a set of issues related to analogy and analogical reasoning, namely: 1) The problem of analogy and its duplicity; 2) The role of analogy in demonstrative reasoning; 3) The role of analogy in non-demonstrative reasoning; 4) The limits of analogy; 5) The convergence, particularly in multiple analogical reasoning, of these two apparently distinct aspects and its methodological and philosophical consequences. The paper, using examples from number theory, argues for a heuristic conception of analogy. This paper seems to be addressing interesting points, some of which have a categorical flavor, such as: "Furthermore, analogy exhibits dynamical limits: it can start from fruitfulness and end in nonsense. Quantum mechanics is an example of such dynamical limits, in which an initial analogical success becomes a failure: "in particular analogy between quantum systems and classical particles and waves become a stumbling block preventing a consistent interpretation of the theory." The result is that the double analogy between classic physics and quantum physics has to be abandoned in order to gain a 'real' understanding of quantum mechanics: "if we want to build or learn new theory then we are likely to use analogy as a bridge between the known and the unknown. But as soon as the new theory is on hand it should be subjected to a critical examination with a view to dismounting its heuristics scaffolding and reconstructing the system in a literal way." "Although Bunge's criticisms of analogy is the consequence of a logical empiricist and realist conception of analogy I disagree with (i.e. analogy is an obstacle because it can't provide a literal and objective description of the quantum world), he points out some important limits (both static and dynamical) of analogy, which not only affect both the demonstrative and the non-demonstrative role of analogy, but should also be taken into account every time analogy is used or analysed." So, to make the Ideocosm less fuzzy, what CAN we say about its topology? Comment by | November 8, 2009 | Reply 6. [...] 
last week I said that I'd talk about a linear map that extends the notion of the correspondence between [...] Pingback by | November 9, 2009 | Reply
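A quick numerical cross-check of the computations in the post, for anyone who wants to see the three calculations agree; it uses the ordinary cross product, whose components are exactly the three projected areas and whose value is the perpendicular vector $e_1-e_2-e_3$ - the parallelogram-to-vector correspondence the post promises to develop next.

    import numpy as np

    u = np.array([1.0, 1.0, 0.0])   # e1 + e2
    v = np.array([1.0, 0.0, 1.0])   # e1 + e3

    # Base times height
    cos = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    area_bh = np.linalg.norm(u) * np.linalg.norm(v) * np.sqrt(1 - cos ** 2)

    # Gram determinant
    gram = np.array([[u @ u, u @ v], [v @ u, v @ v]])
    area_gram = np.sqrt(np.linalg.det(gram))

    # Cross product: its components (1, -1, -1) are the projected areas
    area_cross = np.linalg.norm(np.cross(u, v))

    print(area_bh, area_gram, area_cross, np.sqrt(3))   # all ~1.7320508
    print(np.cross(u, v))                               # [ 1. -1. -1.]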
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 68, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9357900619506836, "perplexity_flag": "middle"}
http://outofthenormmaths.wordpress.com/tag/continued-fractions/
# Tag Archives: continued fractions

March 13, 2012 · 7:21 pm

## Percentages for sceptics: part III

I wanted to do some self-criticism of my previous two posts in this series: 1. You can calculate the minimum number of responses from a single percentage by hand (no need for computer programmes or look-up tables). 2. I’ve made a very rough model to estimate how many people the program typically returns when fed six percentages (as I did several times here). In between, I’ve collected some links to demonstrate how great continued fractions can be.

Handy calculations

There are many ways of writing real numbers (fractions and irrationals) apart from in decimal notation. You can represent them in binary, for instance $\pi = 11.001001000011\ldots$, or in other bases. These have their uses: there is a formula to calculate the $n$th binary digit of $\pi$ without calculating all the preceding digits. For our purposes we will use continued fractions. People write $\pi$ as $[3; 7, 15, 1, 292, 1, 1, 1, 2, 1, ...]$: this notation means that 3 is a good first approximation to $\pi$; the well-known $3+\frac{1}{7}=\frac{22}{7}$ is the closest you can be with any fraction $\frac{p}{q}$ with $q \leq 7$. Then $3+\frac{1}{ 7 + \frac{1}{15}}=\frac{333}{106}$ is the best with $q\leq 106$, and the fourth term $3+\frac{1}{7+\frac{1}{15+\frac{1}{1}}}=\frac{355}{113}$ is a very good approximation, as the next number in the square brackets, 292, is very large (I’ll motivate this observation at the end of the section). The golden ratio is sometimes called the ‘most irrational’ number because it has a continued fraction expansion with all ones, so the sequence converges slowly. Continue reading →
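To make the recurrence behind these approximations concrete, here is a short Python sketch of my own (not from the post) that turns the quoted expansion of $\pi$ back into the convergents $\frac{22}{7}$, $\frac{333}{106}$, $\frac{355}{113}$, using the standard recurrence $p_k = a_k p_{k-1} + p_{k-2}$, $q_k = a_k q_{k-1} + q_{k-2}$:

```python
from fractions import Fraction

def convergents(cf):
    """Yield the successive convergents p/q of a continued fraction [a0; a1, a2, ...]."""
    p_prev, p = 1, cf[0]   # p_{-1} = 1, p_0 = a0
    q_prev, q = 0, 1       # q_{-1} = 0, q_0 = 1
    yield Fraction(p, q)
    for a in cf[1:]:
        p, p_prev = a * p + p_prev, p
        q, q_prev = a * q + q_prev, q
        yield Fraction(p, q)

for c in convergents([3, 7, 15, 1, 292]):
    print(c, float(c))  # 3, 22/7, 333/106, 355/113, 103993/33102
```

Each printed fraction is a best rational approximation with denominator up to its own, which is exactly the property the post exploits when working back from a rounded percentage to a minimum sample size.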
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 12, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9428429007530212, "perplexity_flag": "middle"}
http://mathhelpforum.com/math-puzzles/152662-interesting-problem-please-help.html
# Thread:

1. ## Interesting Problem, Please Help

Hello. I came up with an interesting problem that I am now trying to figure out. Imagine a plane in space, and within that plane two parallel lines. From the perspective of any point outside of the plane (i.e. if your eye was there), the parallel lines would appear to converge at some point. Keeping it simple for now, how would one go about finding the angle that the parallel lines appear to converge at from an arbitrary point? For the purpose of this problem, let's say that the point is equidistant from the parallel lines. So, as a function of the distance from the point to the plane, how would the angle change? I am trying to work it out by myself, and it seems I've tried almost everything. If anyone can easily solve it, would you please push me in the right direction as opposed to directly telling me the answer? I want to try and work it out. Also, I'm a student with only a very basic knowledge of calculus. Thanks in advance. Spandan

2. This is indeed a hard problem because it is difficult to define exactly what you want. Once we can put some boundaries on what we want, I think it'll be relatively easy to get the answer. So, here's a method for how you can do this in the real world. Say you are standing in the middle of that plane with your head right between the parallel lines and somewhat above them, holding a pane of glass. Put the glass in such a way that the line from your eye to the point in the horizon where the lines meet is perpendicular to the glass surface (we gotta make some assumptions, right?). Now, while you keep the glass stationary and your head stationary, use a marker and sketch along the lines of what you see onto the glass. Once you are done doing this drawing, you should see a triangle on the piece of glass. Then you can take your protractor and ruler and get to work. So, this is how you measure the angle.

Now, to turn that into math, I think a good place to start would be to consider the eye being a point with coordinates (0,0,h) (h for height). Then the parallel lines are along the x-y plane beneath your feet, running parallel to the x-axis. You can define their separation to be some number, but in reality, all you need is that the separation s = 1, since you can scale the problem. If you are unsure about this, just assume that s is arbitrary. Now, for the line from your eye to infinity. This is tricky, since if you look directly at infinity, you will be looking straight ahead, i.e. parallel to the ground. If you don't understand this part, I can elaborate on it. Now, put the "glass" plane perpendicular to that line of sight, so it'll actually be vertical and parallel to the y-z plane. Put it at a distance d from you, where d is an arbitrary number. Now, to draw the lines on this plane: this is exactly like taking the projection of the lines onto the plane, through the eye. You can look at this for a similar idea. Stereographic projection - Wikipedia, the free encyclopedia

I could help you with the math, but since you want to work on it yourself, I won't spoil it for you. If you solve this, you can try something else, which I think will be more accurate. Instead of a plane, use a spherical glass. Then you'd have to project the lines onto the glass, and measure the angle between them right where they meet. Since this is a curved surface, you may need some calculus to find those. Good luck!

3. Originally Posted by spandan Hello. I came up with an interesting problem that I am now trying to figure out.
Imagine a plane in space, and within that plane two parallel lines. From the perspective of any point outside of the plane (i.e. if your eye was there), the parallel lines would appear to converge at some point. Keeping it simple for now, how would one go about finding the angle that the parallel lines appear to converge at from an arbitrary point? For the purpose of this problem, let's say that the point is equidistant from the parallel lines. So, as a function of the distance from the point to the plane, how would the angle change? I am trying to work it out by myself, and it seems I've tried almost everything. If anyone can easily solve it, would you please push me in the right direction as opposed to directly telling me the answer? I want to try and work it out. Also, I'm a student with only a very basic knowledge of calculus. Thanks in advance. Spandan

Wouldn't it depend on how far you can see? i.e. where the horizon is?

4. chiph588@, for simplicity we should assume that you can see all the way to infinity. Since we are on a plane (and not the surface of the earth, for example), the horizon turns out to be straight ahead at eye level.

5. But we can't see to infinity. If we could, wouldn't there be no vanishing point?

6. A vanishing point is where the parallel lines seem to merge. This happens at a great distance, so we might as well assume infinity. Another way to look at it: suppose you are at point (0,y) and you are looking down at a point (x,0) on the x-axis. The angle between this line of sight and the y-axis is just $\arctan(x/y)$. If you let x get bigger and bigger, you will get x/y getting bigger and bigger. If you let x go to infinity you are looking at the limit of the arctan function as its argument reaches infinity. That limit is $\pi/2$, which means that if you follow the x-axis to "infinity", you will be looking parallel to it.

7. I think the angle will be zero degrees and the two lines won't appear to be straight. Intuitively it makes sense to me, but I'm probably wrong.

8. It's not zero. If you project the image on a plane it's quite simple. If you project on a sphere it's a little more complicated. It's fun!

9. What if you were height $h$ above the plane, right in between the two lines. Look straight down and the two lines appear to be distance $d$ apart. Now look down "the road" at a $45$ degree angle. The two lines will appear to be distance $\displaystyle \frac{d}{\sqrt2}$ apart. This is under the assumption that if you move something twice as close to you then it looks twice as big. Not sure if that's true actually. With these two distances we can then find the angle.

10. First of all, thanks for your help. Yes vlasev, this is very fun! I thought of that "pane of glass" method when I was hitting my head against the problem, but I abandoned it because I couldn't figure out at which angle to place the glass. I tried to take into account the way the eye sees (you know, length is judged by the spacing of two points in our retina) and couldn't decide. Perpendicular makes sense, but technically speaking, if h is greater than zero, the eye couldn't see the intersection if it was looking parallel to the plane. Instead of working with projections (sorry, my understanding of projections is very shaky, especially when I'm projecting something infinitely long), couldn't we possibly fix a height and find the relative apparent locations of two points on one of the parallel lines?
Because it's a line, we could then proceed to find the apparent slope of the line and from there find the angle. Actually, I kind of imagine this problem and others like it as a different form of math, one in which objects have an absolute property such as length, but also a variable property such as perceived length. A pen and a skyscraper have obviously different absolute lengths, but hold the pen closer to your eye and they can have the same relative length. Does this already exist? If you've heard of something like this, would you send me a link about it? chiph588@, the horizon has to be at infinity, although the point at which they appear to converge may not be. If you think about it from a height of zero (i.e. the eyeball is inside the plane), the angle would appear to be 180 degrees. As the eyeball rises, the angle would change.

11. If you are looking parallel to the plane, the horizon will be right in the line of sight. This is how infinity works. Here is a short "derivation" of this fact. Let you be at point $(0,h)$ in the $xy$ plane. Look at point $(x,0)$ on the $x$-axis. The angle between your line of sight and the $y$-axis is just $\arctan(x/h)$. Now, via limits, we get $\displaystyle \lim_{x \to \infty} \arctan\left(\frac{x}{h}\right) = \pi/2$. Hence the angle between your line of sight and the y-axis is precisely 90 degrees and you are looking straight ahead. It may seem paradoxical, but think of it this way: the parallel lines seem to meet at the horizon. By the same logic, your parallel line of sight should meet those lines at infinity.

EDIT: For the angle at infinity, there is a simple derivation without much insight and there is a more complicated derivation which offers greater insight. Do you want me to post both of these later?

12. I understand you'll be looking at a 90 degree angle, but in my earlier post I was talking about the angle the two parallel lines appear to make from your perspective.

13. vlasev, your explanation makes sense... to an extent. The two parallel lines seem to meet at infinity from the perspective of the eye, but the line of sight of the eye doesn't meet the lines. I guess, from a different perspective it would make sense, but thinking about it in terms of limits, the parallel lines approach each other (in this perspective). Maybe I'm thinking about this wrong. Post those infinity derivations for me please. chiph588, I had left my computer on the site for a while before I submitted the last post, and as a result I didn't get to read your post about looking down the road. I think this is most like the solution I thought of, but I couldn't work out the function the perceived distance would change according to. I'm not sure if it's as simple as your 1:1 idea, and that's where I got stuck.

14. Alright, I'll put it as a spoiler down here:

Spoiler: Let the eye be at $(0,0,h)$, and the lines be on the $xy$ plane and parallel to the $x$ axis, each at distance $s$ from the $x$-axis. Let the eye be facing towards the positive $x$-axis. Let the plane $D$ be at distance $d$ from the eye. Let each point on that plane have coordinates $(u,v)$ (they are analogous to $(y,z)$ actually). Here is a more informal derivation. The lines intersect the plane $D$ at $(d,s,0)$ and $(d,-s,0)$. Since the vanishing point is directly straight ahead, the projections of these lines on the plane $D$ must intersect at the point $(d,0,h)$. Since the lines are on a plane and we are projecting them on another plane, their projections must be lines also.
Thus we have that the projections of these lines on plane $D$ are the lines starting at $(s,0)$, $(-s,0)$ and intersecting at $(0,h)$ in $(u,v)$ coordinates. From these you can use simple trig to deduce that the angle between these two projections must be $2\sin^{-1}\left(\frac{s}{\sqrt{h^2+s^2}}\right)$ on the plane. Surprisingly, this angle does not depend on the distance we have chosen for the plane. It seems counter-intuitive.

Here is the more complicated derivation that uses some vectors. The idea is to write down an equation of the line going from the eye at $(0,0,h)$ to a point on one of the lines $(p,s,0)$. Then we need to find the intersection of this line with the plane $D$. This will be at coordinates $(d,u,v)$. To get the equation of a line, we need a position vector $\vec{R_o}$ and a direction vector $\vec{V}$, which we'll multiply by a parameter $t$ to get the vector equation for the line to be $\vec{R_o}+t\vec{V}$. So, choose the position vector $\vec{R_o} = <p,s,0>$ and let the direction vector be from that point to the eye. By the rules of vector addition, this is $\vec{V}=<0,0,h>-\vec{R_o} = <-p,-s,h>$. Thus the vector equation for the line is

$\displaystyle{\vec{R_o}+t\vec{V} = <p(1-t),s(1-t),th>}$

Since we want the projection onto the plane $D$, we need to have the x-coordinates match, which means $p(1-t) = d$. From this we get that $(1-t) = d/p$ and $t = 1-d/p$. Subbing both of those in the equation of the line, we see that the intersection point is $(d,\frac{sd}{p},(1-\frac{d}{p})h)$.

Now, if we set $p = d$, that is, we are looking at the intersection between the line and the plane $D$, we will simply get $(d,s,0)$. However, the interesting part is when we let it go to infinity.

$\displaystyle{\lim_{p\to \infty}\left(d,\frac{sd}{p},\left(1-\frac{d}{p}\right)h\right) = (d,0,h)}$

This is the same result as before! Only this time we have it more rigorously. Furthermore, since the coordinates of the projection of the line on the plane $D$ are $(d,\frac{sd}{p},(1-\frac{d}{p})h)$, then in $(u,v)$ coordinates, this is just $(\frac{sd}{p},(1-\frac{d}{p})h)$. These are just parametric equations of the line in parameter $p$. They form a system of parametric equations

$\displaystyle{u = \frac{sd}{p}}$

$\displaystyle{v = \left(1-\frac{d}{p}\right)h}$

From these we get that $d/p = u/s$ and $v = h - \frac{h}{s}u$, which is the equation of the projection of the line onto the plane $D$. You can see that it is a line! The other line has equation $v = h + \frac{h}{s}u$. With trig you can get that the angle between one line and the vertical axis is $\theta = \sin^{-1}\left(\frac{s}{\sqrt{h^2+s^2}}\right)$ and the angle between the lines is just $2\theta$.

Here is the reason why I think this doesn't depend on the distance $d$ of the plane $D$. Say we are looking at two parts of the line. One at distance $p_1$ and the other at distance $p_2$. The projections on the plane $D$ will BOTH involve the parameter $d$, but since we are looking at the relation between those two distances $p_1$ and $p_2$, we will have a sort of cancellation of the parameter $d$. I think this is a neat result.
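As a quick numerical sanity check of the derivation above (my own sketch, not from the thread), the following Python snippet projects points of the two lines through the eye onto the pane $x = d$ and measures the angle at the vanishing point. The parameter values are made up; the result should match $2\sin^{-1}(s/\sqrt{h^2+s^2})$ and be independent of $d$:

```python
import numpy as np

# Made-up parameters: eye height h, half-separation s of the lines, pane distance d.
h, s, d = 2.0, 1.0, 5.0
eye = np.array([0.0, 0.0, h])

def project(point):
    """Intersect the ray from the eye through `point` with the pane x = d,
    returning the (u, v) coordinates on the pane."""
    t = (d - eye[0]) / (point[0] - eye[0])
    hit = eye + t * (point - eye)
    return hit[1:]

far = 1e9  # stand-in for "infinity" down the road
apex  = project(np.array([far,  s, 0.0]))  # ~ (0, h), the vanishing point
left  = project(np.array([d,    s, 0.0]))  # (s, 0)
right = project(np.array([d,   -s, 0.0]))  # (-s, 0)

v1, v2 = left - apex, right - apex
angle = np.degrees(np.arccos(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))))
predicted = 2 * np.degrees(np.arcsin(s / np.hypot(h, s)))
print(angle, predicted)  # both ~53.13 degrees here, and unchanged if you vary d
```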
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 73, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9652757048606873, "perplexity_flag": "head"}
http://sbseminar.wordpress.com/category/representation-theory/
## Hall algebras are Grothendieck groups (April 18, 2011)

Posted by Ben Webster in hopf algebras, representation theory. 13 comments

I’ve been attending a seminar/class run by Nick Proudfoot preparing for his workshop this summer on canonical bases. In conversations with Nick and graduate students, there’s been some confusion about the relationship between Hall algebras and Grothendieck groups. Obviously, if you read the definitions you’ll see they are not the same, but the idea seems to be floating around that there is something going on with them. At some point, I decided writing a blog post on the subject would be a good idea.

What are Hall algebras? The Hall algebra of a category is the Grothendieck group of constructible sheaves/perverse sheaves on the moduli stack of objects in the category. The Hall algebra is an algebra because the constructible derived category of the moduli stack of objects in an abelian category is monoidal in a canonical way. To my mind, this is what makes Hall algebras worth studying, yet it’s oddly ignored in the literature on them (as far as I know; people should feel free to correct me). For example, it’s never mentioned in Schiffmann’s Lectures on Hall Algebras, the closest thing the subject has to a standard reference. (more…)

## Representation theory course (January 24, 2011)

Posted by Joel Kamnitzer in representation theory, teaching. 9 comments

Well, like David, I am teaching a course this semester and writing up notes. My course is on representation theory. More specifically, I hope to cover the basics of the representation theory of complex reductive groups, including the Borel-Weil theorem. In my class, I have started from the theory of compact groups, for two reasons. First, that is the way I learned the subject from my advisor Allen during a couple of great courses. Second, I am following up on a course last semester taught by Eckhard Meinrenken on compact groups. Feel free to take a look at the notes on the course webpage and give me any feedback. Very soon, I will reach the difficult task of explaining complexification of compact groups. As I complained about in my previous post, I don’t feel that this topic is covered properly in any source, so I am struggling with it a bit. Anyway, the answers to that post did help me out, so we will see what happens.

## Passage from compact Lie groups to complex reductive groups (November 25, 2010)

Posted by Joel Kamnitzer in Algebraic Geometry, representation theory, things I don't understand. 25 comments

Once again, I’m preparing to teach a class and needing some advice concerning an important point. I’m teaching a course on representation theory as a follow-up to an excellent course on compact Lie groups, taught this semester by Eckhard Meinrenken. In my class, I would like to explain the transition from compact Lie groups to complex reductive groups, as a first step towards the Borel-Weil theorem. A priori, compact connected Lie groups and complex reductive groups seem to have little in common and live in different worlds. However, there is a 1-1 correspondence between these objects — for example $U(n)$ and $GL_n(\mathbb{C})$ are related by this correspondence. Surprisingly, it is not that easy to realize this correspondence. Let us imagine that we start with a compact connected Lie group $K$ and want to find the corresponding complex algebraic group $G$. I will call this process complexification. One approach to complexification is to first show that $K$ is in fact the real points of a real reductive algebraic group.
For any particular $K$ this is obvious — for example $S^1 = U(1)$ is described by the equation $x^2 + y^2 = 1$. But one might wonder how to prove this without invoking the classification of compact Lie groups. I believe that one way to do this is to consider the category of smooth finite-dimensional representations of the group and then apply Tannakian reconstruction to produce an algebraic group. This is a pretty argument, but perhaps not the best one to explain in a first course. A slightly more explicit version would be to simply define $G$ to be $Spec (\oplus_{V} V \otimes V^*)$ where $V$ ranges over the irreducible complex representations of $K$ (the Hopf algebra structure here is slightly subtle). In fact, not only is every compact Lie group real algebraic, but every smooth map of compact Lie groups is actually algebraic. So the category of compact Lie groups embeds into the category of real algebraic groups. For a precise statement along these lines, see this very well written MO answer by BCnrd.

A different approach to complexification is pursued in Allen Knutson’s notes and in Sepanski’s book. Here the complexification of $K$ is defined to be any $G$ such that there is an embedding $K \subset G(\mathbb{C})$, such that on Lie algebras $\mathfrak{g} = \mathfrak{k} \otimes_{\mathbb{R}} \mathbb{C}$. (Actually, this is Knutson’s definition; in Sepanski’s definition we first embed $K$ into $U(n)$.) This definition is more hands-on, but it is not very obvious why such a $G$ is unique, without some structural theorems describing the different groups $G$ with Lie algebra $\mathfrak{g}$. At the moment, I don’t have any definite opinion on which approach is more mathematically/pedagogically sound. I just wanted to point out something which I have accepted all my mathematical life, but which is still somewhat mysterious to me. Can anyone suggest any more a priori reasons for complexification?

## A (partial) explanation of the fundamental lemma and Ngo’s proof (September 24, 2009)

Posted by Joel Kamnitzer in Algebraic Geometry, geometric Langlands, Number theory, representation theory, things I don't understand. 4 comments

I would like to take Ben up on his challenge (especially since he seems to have solved the problem that I’ve been working on for the past four years) and try to explain something about the Fundamental Lemma and Ngo’s proof. In doing so, I am aided by two expository talks I’ve been to on the subject — by Laumon last year and by Arthur this week.

Before I begin, I should say that I am not an expert in this subject, so please don’t take what I write here too seriously and feel free to correct me in the comments. Fortunately for me, even though the Fundamental Lemma is a statement about p-adic harmonic analysis, its proof involves objects that are much more familiar to me (and to Ben). As we shall see, it involves understanding the summands occurring in a particular application of the decomposition theorem in perverse sheaves and then applying trace of Frobenius (stay tuned until the end for that!).

First of all I should begin with the notion of “endoscopy”. Let $G, G'$ be two reductive groups and let $\hat{G}, \hat{G}'$ be their Langlands duals. Then $G'$ is called an endoscopic group for $G$ if $\hat{G}'$ is the fixed point subgroup of an automorphism of $\hat{G}$. A good example of this is to take $G = GL_{2n}$, $G' = SO_{2n+1}$.
At first glance these groups have nothing to do with each other, but you can see they are endoscopic since their dual groups are $GL_{2n}$ and $Sp_{2n}$ and we have $Sp_{2n} \hookrightarrow GL_{2n}$. As part of a more general conjecture called Langlands functoriality, we would like to relate the automorphic representations of $G$ to the automorphic representations of all possible endoscopic groups $G'$. Ngo’s proof of the Fundamental Lemma completes the proof of this relationship. (more…)

## A hunka hunka burnin’ knot homology (September 24, 2009)

Posted by Ben Webster in category O, Category Theory, combinatorics, homological algebra, link homology, low-dimensional topology, quantum groups, representation theory. 19 comments

One of the conundra of mathematics in the age of the internet is when to start talking about your results. Do you wait until a convenient chance to talk at a conference? Wait until the paper is ready to be submitted to the arXiv (not to mention the question of when things are ready for the arXiv)? Until your paper is accepted? Or just until you’re confident you’ve disposed of any major errors in your proofs?

This line is particularly hard to walk when you think the result in question is very exciting. On one hand, obviously you are excited yourself, and want to tell people your exciting results (not to mention any worries you might have about being scooped); on the other, the embarrassment of making a mistake is roughly proportional to the attention that a result will grab. At the moment, as you may have guessed, this is not just theoretical musing on my part. Rather, I’ve been working on-and-off for the last year, but most intensely over the last couple of months, on a paper which I think will be rather exciting (of course, I could be wrong). (more…)

## SF&PA: Subfactors = finite dimensional simple algebras (March 23, 2009)

Posted by Noah Snyder in Category Theory, representation theory, subfactors. 2 comments

Since my next post on Scott’s talk concerns the construction of a new subfactor, I wanted to give another attempt at explaining what a subfactor is. In particular, a subfactor is just a finite-dimensional simple algebra over C! Now, I know what you’re thinking: doesn’t Artin-Wedderburn say that finite dimensional simple algebras over C are just matrix algebras? Yes, but those are just the finite dimensional algebras in the category of vector spaces! What if you had some other C-linear tensor category and a finite dimensional simple algebra object in that category? Let me start with an example (very closely related to Scott Carnahan’s pirate post). (more…)

## Generalized moonshine I: Genus zero functions (January 8, 2009)

Posted by Scott Carnahan in group theory, mathematical physics, Number theory, Paper Advertisement, representation theory. 21 comments

This is a plug for my first arXiv preprint, 0812.3440. It didn’t really exist as an independent entity until about a month ago, when I got a little frustrated writing a larger paper and decided to package some results separately. It is the first in a series of n (where n is about five right now), attacking the generalized moonshine conjecture. Perhaps the most significant result is that nontrivial replicable functions of finite order with algebraic integer coefficients are genus zero modular functions. This answers a question that has been floating around the moonshine community for about 30 years.
Moonshine originated in the 1970s, when some mathematicians noticed apparent numerical coincidences between the theory of modular functions and the theory of finite simple groups. Most notable was McKay’s observation that 196884 = 196883 + 1, where the number on the left is the first nontrivial Fourier coefficient of the modular function j, which classifies complex elliptic curves, and the numbers on the right are the dimensions of the smallest irreducible representations of the largest sporadic finite simple group, called the monster. Modular functions and finite group theory were two areas of mathematics that were not previously thought to be deeply related, so this came as a bit of a surprise. Conway and Norton encoded the above equation together with other calculations by Thompson and themselves in the Monstrous Moonshine Conjecture, which was proved by Borcherds around 1992.

I was curious about the use of the word “moonshine” here, so I looked it up in the Oxford English Dictionary. There are essentially four definitions:

1. Light from the moon, presumably reflected from the sun (1425)
2. Appearance without substance, foolish talk (1468 – originally “moonshine in the water”)
3. A base of rosewater and sugar, or a sweet pudding (1558 cookbook!)
4. Smuggled or illegally distilled alcoholic liquor (1782)

The fourth and most recent definition seems to be the most commonly used among people I know. The second definition is what gets applied to the monster, and as far as I can tell, its use is confined to English people over 60. It seems to be most popularly known among scientists through a quote by Rutherford concerning the viability of atomic power. I’ll give a brief explanation of monstrous moonshine, generalized moonshine, and my paper below the fold. There is a question at the bottom, so if you get tired, you should skip to that. (more…)

## Woit on Geometric Representation Theory (December 22, 2008)

Posted by David Speyer in D-modules, mathematical physics, representation theory. 1 comment so far

Just wanted to point out to everyone that Peter Woit, of the blog Not Even Wrong, is doing a great job blogging on the relations between representation theory of Lie groups, functions on Lie groups and differential operators. And he promises there will be physics before the end!

## Request: Quivers and Roots (November 2, 2008)

Posted by David Speyer in introductions, representation theory, Requests, Uncategorized. 12 comments

Consider two finite dimensional vector spaces $A$ and $B$ and a linear map $\phi$ between them. Then we can decompose $A$ as $K \oplus R$ where $K$ is the kernel of $\phi$ and $R$ is any subspace transverse to $K$. Similarly, we can write $B$ as $I \oplus C$ where $I$ is the image of $\phi$. So we can write $\phi$ as the direct sum of $K \to 0$, the identity map from $R \to I$ and $0 \to C$. At the cost of making some very arbitrary choices, we may simplify even more and say that we can express $\phi$ as the sum of three types of maps: $0 \to k$, the identity map $k \to k$ and $k \to 0$ (where $k$ is our ground field).

Now, suppose that we have two maps, $\phi$ and $\psi$ from $A$ to $B$. We’ll start with the case that $A$ and $B$ have the same dimension. If $\phi$ is bijective, then we can choose bases for $A$ and $B$ so that $\phi$ is the identity. Once we have done that, we still have some freedom to change bases further. Assuming that $k$ is algebraically closed, we can use this freedom to put $\psi$ into Jordan normal form.
In other words, we can choose bases such that $(\phi,\psi)$ are direct sums of maps like $\left( \left( \begin{smallmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{smallmatrix} \right), \left( \begin{smallmatrix} \alpha & 1 & 0 \\ 0 & \alpha & 1 \\ 0 & 0 & \alpha \end{smallmatrix} \right) \right)$. (Here several different values $\alpha$ may occur in the various summands, and of course, the matrices can be sizes other than $3 \times 3$.) If we don’t assume that $\phi$ is bijective (and if we want to allow $A$ and $B$ to have different dimensions) we get a few more cases. But the basic picture is not much worse: in addition to the summands above, we also need to consider the maps $\left( \left( \begin{smallmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{smallmatrix} \right), \left( \begin{smallmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \end{smallmatrix} \right) \right)$ (for various sizes $n \times (n+1)$, not just $2 \times 3$) and the transposes of these. These three possibilities, and their direct sums, describe all pairs $(\phi, \psi)$ up to isomorphism.

Now, consider the case of three maps. As the dimensions of $A$ and $B$ grow, so does the number of parameters necessary to describe the possible cases. Moreover, almost all cases cannot be decomposed as direct sums. More precisely, as long as $\mathrm{dim\ } A/\mathrm{dim\ }B$ is between $(3+\sqrt{5})/2$ and $(3-\sqrt{5})/2$, the maps which can be expressed as direct sums of simpler maps have measure zero in $\mathrm{Hom}(A,B)$. (Where did that number $(3+\sqrt{5})/2$ come from? Stay tuned!) In the opinion of experts, there will probably never be any good classification of all triples of maps.

The subject of quivers was invented to systemize this sort of analysis. It’s become a very large subject, so I can’t hope to summarize it in one blog post. But I think it is fair to say that anyone who wants to think about quivers needs to start by learning the connection to root systems. So that’s what I’ll discuss here. (more…)

## Group rings arrr commutative (September 18, 2008)

Posted by Scott Carnahan in Category Theory, hopf algebras, quantum algebra, quantum groups, representation theory. 10 comments

If you are familiar with group rings, you might think that the title of this post is false. If G is a nonabelian group, multiplying the basis elements g and h in $\mathbb{Z}G$ can yield $gh \neq hg$, so we have a problem. In general, if you have a problem that you can’t solve, you should cheat and change it to a solvable one (according to my advisor, this strategy is due to Alexander the Great). Today, we will change the definition of commutative to make things work. (more…)
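Going back to the quiver post above, the pair-of-maps classification can be made concrete in a few lines: when $\phi$ is invertible, normalizing $\phi$ to the identity reduces the pair $(\phi, \psi)$ to the single matrix $\phi^{-1}\psi$ up to conjugation, and its Jordan form is exactly the direct-sum decomposition described there. A small sympy sketch of my own, with made-up matrices:

```python
import sympy as sp

# Two maps A -> B with phi invertible; the matrices are invented for illustration.
phi = sp.Matrix([[2, 1], [0, 1]])
psi = sp.Matrix([[2, 3], [0, 1]])

# Change bases so that phi becomes the identity; the leftover invariant of the
# pair (phi, psi) is the conjugacy class of phi^(-1) * psi, i.e. its Jordan form.
P, J = (phi.inv() * psi).jordan_form()
sp.pprint(J)  # here a single 2x2 Jordan block with eigenvalue 1
```

The Jordan blocks of `J` correspond to the indecomposable summands of the pair, matching the matrix pairs displayed in the post.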
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 88, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9412132501602173, "perplexity_flag": "head"}
http://mathoverflow.net/questions/48576?sort=oldest
## When can the group von Neumann algebra of a one-relator group be isomorphic to a free group factor?

Let $G=\langle a,b | R \rangle$ be a one-relator group. When can the left group von Neumann algebra $LG$ be isomorphic to a free group factor? Jesse and Andreas have "trapped the lion" pretty well with their comments below. A somewhat more modest related question: if $L_{a}$ and $L_{b}$ are the unitary elements in $LG$ corresponding to $a$ and $b$, respectively, can the free entropy of ($L_{a}+L_{a}^{*},L_{b}+L_{b}^{*}$) be finite? For the definition of free entropy, see Voiculescu's survey: http://arxiv.org/PS_cache/math/pdf/0103/0103168v1.pdf. (It is possible for a set of generators of a type $II_{1}$-factor that is not a free group factor to have finite free entropy. Nate Brown establishes this in http://arxiv.org/abs/math/0403294.)

Following the $\ell^2$-Betti numerology, this should never happen if $G$ is torsionfree (unless $G$ is abelian). Linnell and Dicks showed that the first $\ell^2$-Betti number of a torsionfree 2-generator 1-relator group vanishes. If it were isomorphic to an interpolated free group factor $L\mathbb F_t$, then one would expect that $t=1$ (being equal to the first $\ell^2$-Betti number plus $1$). – Andreas Thom Dec 7 2010 at 18:10

To go along with what Andreas wrote, if you allow torsion then I believe it was Dykema and Radulescu who showed that the groups $\langle a, b \ | \ b^k \rangle$ for $2 \leq k < \infty$ always give interpolated free group factors $L\mathbb F_t$, with $1 < t < 2$. – Jesse Peterson Dec 7 2010 at 21:35

Jesse has answered this question as asked. Thanks! (I should have looked more carefully for such results before asking the question!!!) – Jon Bannon Dec 7 2010 at 22:31

I've changed the question a bit to allow for more thoughts. – Jon Bannon Dec 7 2010 at 22:37

Also, do you mean free entropy when you say free entropy dimension above? The free entropy dimension is always finite when the factor embeds into $R^\omega$. – Jesse Peterson Dec 7 2010 at 23:15

## 1 Answer

I post this for future reference... I've just come across this nice result: http://www.math.jussieu.fr/~pfima/Documents/Baumslag-Solitar-Groups.pdf

It turns out that the group factors associated to certain non-residually finite Baumslag-Solitar groups are prime, have no Cartan, and are not solid... whereas free group factors are solid. The result here also proves (appealing to Ozawa's solid von Neumann algebra paper) that the group von Neumann algebra of such a Baumslag-Solitar group cannot be isomorphic to the group von Neumann algebra of an ICC hyperbolic group.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8546901941299438, "perplexity_flag": "middle"}
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Superconductor
# Superconductivity

Superconductivity is a phenomenon occurring in certain materials at low temperatures, characterised by the complete absence of electrical resistance and the damping of the interior magnetic field (the Meissner effect). In conventional superconductors, superconductivity is caused by a force of attraction between certain conduction electrons arising from the exchange of phonons, which causes the conduction electrons to exhibit a superfluid phase composed of correlated pairs of electrons. There also exists a class of materials, known as unconventional superconductors, that exhibit superconductivity but whose physical properties contradict the theory of conventional superconductors. In particular, the so-called high-temperature superconductors superconduct at temperatures much higher than should be possible according to the conventional theory (though still far below room temperature). There is currently no complete theory of high-temperature superconductivity.

Superconductivity occurs in a wide variety of materials, including simple elements like tin and aluminium, various metallic alloys, some heavily-doped semiconductors, and certain ceramic compounds containing planes of copper and oxygen atoms. The latter class of compounds, known as the cuprates, are high-temperature superconductors. Superconductivity does not occur in noble metals like gold and silver, nor in ferromagnetic metals.

## Elementary properties of superconductors

Most of the physical properties of superconductors vary from material to material, such as the heat capacity and the critical temperature at which superconductivity is destroyed. On the other hand, there is a class of properties that are independent of the underlying material. For instance, all superconductors have exactly zero resistivity to low applied currents when there is no magnetic field present. The existence of these "universal" properties implies that superconductivity is a thermodynamic phase, and thus possesses certain distinguishing properties which are largely independent of microscopic details.

### Zero electrical resistance

Suppose we were to attempt to measure the electrical resistance of a piece of superconductor. The simplest method is to place the sample in an electrical circuit, in series with a voltage (potential difference) source V (such as a battery), and measure the resulting current. If we carefully account for the resistance R of the remaining circuit elements (such as the leads connecting the sample to the rest of the circuit, and the source's internal resistance), we would find that the current is simply V/R. According to Ohm's law, this means that the resistance of the superconducting sample is zero. Superconductors are also able to maintain a current with no applied voltage whatsoever, a property exploited in superconducting electromagnets such as those found in MRI machines. Experiments have demonstrated that currents in superconducting coils can persist for years without any measurable degradation.
In a normal conductor, an electrical current may be visualized as a fluid of electrons moving across a heavy ionic lattice. The electrons are constantly colliding with the ions in the lattice, and during each collision some of the energy carried by the current is absorbed by the lattice and converted into heat (which is essentially the vibrational kinetic energy of the lattice ions). As a result, the energy carried by the current is constantly being dissipated. This is the phenomenon of electrical resistance.

The situation is different in a superconductor. In a conventional superconductor, the electronic fluid cannot be resolved into individual electrons, instead consisting of bound pairs of electrons known as Cooper pairs. This pairing is caused by an attractive force between electrons from the exchange of phonons. Due to quantum mechanics, the energy spectrum of this Cooper pair fluid possesses an energy gap, meaning there is a minimum amount of energy ΔE that must be supplied in order to excite the fluid. Therefore, if ΔE is larger than the thermal energy of the lattice (given by kT, where k is Boltzmann's constant and T is the temperature), the fluid will not be scattered by the lattice. The Cooper pair fluid is thus a superfluid, meaning it can flow without energy dissipation. (Note: actually, in a class of superconductors known as Type II superconductors, a small amount of resistivity appears when a strong magnetic field and electrical current are applied. This is due to the motion of vortices in the electronic superfluid, which dissipates some of the energy carried by the current. If the current is sufficiently small, the vortices are stationary, and the resistivity vanishes.)

### Superconducting phase transition

In superconducting materials, the characteristics of superconductivity appear when the temperature T is lowered below a critical temperature Tc. The value of this critical temperature varies from material to material. Conventional superconductors usually have critical temperatures ranging from less than 1 K to around 20 K. Solid mercury, for example, has a critical temperature of 4.2 K. As of 2001, the highest critical temperature found for a conventional superconductor is 39 K for magnesium diboride (MgB2), although this material displays enough exotic properties that there is doubt about classifying it as a "conventional" superconductor. Cuprate superconductors can have much higher critical temperatures: YBa2Cu3O7, one of the first cuprate superconductors to be discovered, has a critical temperature of 92 K, and mercury-based cuprates have been found with critical temperatures in excess of 130 K. The explanation for these high critical temperatures remains unknown. (Electron pairing due to phonon exchanges explains superconductivity in conventional superconductors, but it does not explain superconductivity in the newer superconductors that have a very high Tc.)

The onset of superconductivity is accompanied by abrupt changes in various physical properties, which is the hallmark of a phase transition. For example, the electronic heat capacity is proportional to the temperature in the normal (non-superconducting) regime. At the superconducting transition, it suffers a discontinuous jump and thereafter ceases to be linear. At low temperatures, it varies instead as $e^{-\alpha/T}$ for some constant α. (This exponential behavior is one of the pieces of evidence for the existence of the energy gap.)
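To give the gap condition ΔE > kT a sense of scale, here is a rough back-of-the-envelope sketch (mine, not the article's). It uses the BCS estimate Δ(0) ≈ 1.76 kTc, a standard result of the conventional theory that the article does not state; the critical temperatures are the ones quoted above:

```python
k_B = 1.380649e-23     # Boltzmann constant, J/K
eV  = 1.602176634e-19  # joules per electron-volt

# BCS estimate of the zero-temperature energy gap: Delta ~ 1.76 * k_B * Tc.
for name, Tc in [("Hg", 4.2), ("NbN", 16.0), ("MgB2", 39.0)]:
    gap_meV = 1.76 * k_B * Tc / eV * 1e3
    print(f"{name}: Tc = {Tc:5.1f} K  ->  estimated gap ~ {gap_meV:.2f} meV")
```

For mercury this gives a gap of well under a millielectron-volt, which is why conventional superconductivity only survives at cryogenic temperatures.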
[Figure: Behavior of heat capacity (C) and resistivity (ρ) at the superconducting phase transition.]

The order of the superconducting phase transition is still a matter of debate. It had long been thought that the transition is second-order, meaning there is no latent heat. However, recent calculations have suggested that it may actually be weakly first-order due to the effect of long-range fluctuations in the electromagnetic field.

### Meissner effect

When a superconductor is placed in a weak external magnetic field H, the field penetrates for only a short distance λ, called the penetration depth, after which it decays rapidly to zero. This is called the Meissner effect. For most superconductors, the penetration depth is on the order of a hundred nm.

The Meissner effect is sometimes confused with the "perfect diamagnetism" one would expect in a perfect electrical conductor: according to Lenz's law, when a changing magnetic field is applied to a conductor, it will induce an electrical current in the conductor that creates an opposing magnetic field. In a perfect conductor, an arbitrarily large current can be induced, and the resulting magnetic field exactly cancels the applied field. The Meissner effect is distinct from perfect diamagnetism because a superconductor expels all magnetic fields, not just those that are changing. Suppose we have a material in its normal state, containing a constant internal magnetic field. When the material is cooled below the critical temperature, we would observe the abrupt expulsion of the internal magnetic field, which we would not expect based on Lenz's law. A conductor in a static field, such as the dome of a Van de Graaff generator, will have a field within itself, even if there is no net charge in the interior.

The Meissner effect was explained by London and London, who showed that the electromagnetic free energy in a superconductor is minimized provided

$\nabla^2\mathbf{H} = \lambda^{-2} \mathbf{H}$

where H is the magnetic field and λ is the penetration depth. This equation, which is known as the London equation, predicts that the magnetic field in a superconductor decays exponentially from whatever value it possesses at the surface.

The Meissner effect breaks down when the applied magnetic field is too large. Superconductors can be divided into two classes according to how this breakdown occurs. In Type I superconductors, superconductivity is abruptly destroyed when the strength of the applied field rises above a critical value Hc. Depending on the geometry of the sample, one may obtain an intermediate state consisting of regions of normal material carrying a magnetic field mixed with regions of superconducting material containing no field. In Type II superconductors, raising the applied field past a critical value Hc1 leads to a mixed state in which an increasing amount of magnetic flux penetrates the material, but there remains no resistance to the flow of electrical current as long as the current is not too large. At a second critical field strength Hc2, superconductivity is destroyed. The mixed state is actually caused by vortices in the electronic superfluid, sometimes called "fluxons" because the flux carried by these vortices is quantized. Most pure elemental superconductors (except niobium) are Type I, while almost all impure and compound superconductors are Type II.
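In one dimension the London equation above reduces to d²H/dx² = H/λ², whose decaying solution in a half-space x ≥ 0 is H(x) = H(0) e^(−x/λ). A small illustrative Python sketch of my own, taking the ~100 nm penetration depth mentioned above as the assumed value:

```python
import numpy as np

lam = 100e-9   # assumed penetration depth, ~100 nm as in the text
H0 = 1.0       # field at the surface, arbitrary units

# H(x) = H0 * exp(-x / lam) solves d^2H/dx^2 = H / lam^2 for x >= 0.
for x in np.linspace(0.0, 5 * lam, 6):
    print(f"x = {x * 1e9:5.0f} nm  ->  H/H0 = {np.exp(-x / lam):.4f}")
```

Within a few penetration depths of the surface the field has essentially vanished, which is the quantitative content of the Meissner effect.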
[Figure: Variation of internal magnetic field (B) with applied external magnetic field (H) for Type I and Type II superconductors.]

## Theories of superconductivity

Since the discovery of superconductivity, great efforts have been devoted to finding out how and why it works. During the 1950s, theoretical condensed matter physicists arrived at a solid understanding of "conventional" superconductivity, through a pair of remarkable and important theories: the phenomenological Ginzburg-Landau theory (1950) and the microscopic BCS theory (1957). Generalisations of these theories form the basis for understanding the closely related phenomenon of superfluidity, but the extent to which similar generalisations can be applied to unconventional superconductors as well is still controversial.

## History of superconductivity

Superconductivity was discovered in 1911 by Heike Kamerlingh Onnes, who was studying the resistivity of solid mercury at cryogenic temperatures using the recently discovered liquid helium as a refrigerant. At the temperature of 4.2 K, he observed that the resistivity abruptly disappeared. For this discovery, he was awarded the Nobel Prize in Physics in 1913. In subsequent decades, superconductivity was found in several other materials. In 1913, lead was found to superconduct at 7 K, and in 1941 niobium nitride was found to superconduct at 16 K.

The next important step in understanding superconductivity occurred in 1933, when Meissner and Ochsenfeld discovered that superconductors expelled applied magnetic fields, a phenomenon which has come to be known as the Meissner effect. In 1935, F. and H. London showed that the Meissner effect was a consequence of the minimization of the electromagnetic free energy carried by superconducting current.

In 1950, the phenomenological Ginzburg-Landau theory of superconductivity was devised by Landau and Ginzburg. This theory, which combined Landau's theory of second-order phase transitions with a Schrödinger-like wave equation, had great success in explaining the macroscopic properties of superconductors. In particular, Abrikosov showed that Ginzburg-Landau theory predicts the division of superconductors into the two categories now referred to as Type I and Type II. Abrikosov and Ginzburg were awarded the 2003 Nobel Prize for their work (Landau having died in 1968).

Also in 1950, Maxwell and Reynolds et al. found that the critical temperature of a superconductor depends on the isotopic mass of the constituent element. This important discovery pointed to the electron-phonon interaction as the microscopic mechanism responsible for superconductivity.

The complete microscopic theory of superconductivity was finally proposed in 1957 by Bardeen, Cooper, and Schrieffer. This BCS theory explained the superconducting current as a superfluid of Cooper pairs, pairs of electrons interacting through the exchange of phonons. For this work, the authors were awarded the Nobel Prize in 1972. The BCS theory was set on a firmer footing in 1958, when Bogoliubov showed that the BCS wavefunction, which had originally been derived from a variational argument, could be obtained using a canonical transformation of the electronic Hamiltonian. In 1959, Gor'kov showed that the BCS theory reduces to the Ginzburg-Landau theory close to the critical temperature.

In 1962, the first commercial superconducting wire, a niobium-titanium alloy, was developed by researchers at Westinghouse.
In the same year, Josephson made the important theoretical prediction that a supercurrent can flow between two pieces of superconductor separated by a thin layer of insulator. This phenomenon, now called the Josephson effect, is exploited by superconducting devices such as SQUIDs. It is used in the most accurate available measurements of the magnetic flux quantum h/2e, and thus (coupled with the quantum Hall resistivity) for Planck's constant h. Josephson was awarded the Nobel Prize for this work in 1973.

In 1986, Bednorz and Mueller discovered superconductivity in a lanthanum-based cuprate perovskite material, which had a transition temperature of 35 K (Nobel Prize in Physics, 1987). It was shortly found that replacing the lanthanum with yttrium, i.e. making YBCO, raised the critical temperature to 92 K, which was important because liquid nitrogen could then be used as a refrigerant (at atmospheric pressure, the boiling point of nitrogen is 77 K). This is important commercially because liquid nitrogen can be produced cheaply on-site with no raw materials, and is not prone to some of the problems (solid air plugs, etc.) of helium in piping. Many other cuprate superconductors have since been discovered, and the theory of superconductivity in these materials is one of the major outstanding challenges of theoretical condensed matter physics.

## Technological applications of superconductivity

There have been many technological innovations based on superconductivity. Superconductors are used to make the most powerful electromagnets known to man, including those used in MRI machines and the beam-steering magnets used in particle accelerators. They are also used to make SQUIDs (superconducting quantum interference devices), the most sensitive magnetometers known. Superconductors have also been used to make digital circuits (e.g. based on the Rapid Single Flux Quantum technology) and microwave filters for mobile phone base stations.

Many promising applications of superconductivity have been stalled by the impracticality of maintaining large systems (e.g. long stretches of cable) at cryogenic temperatures. These problems may soon be alleviated with the development of high temperature superconductors suitable for use in industry, as these can be cooled using liquid nitrogen rather than liquid helium (which is more expensive and difficult to handle). Promising future applications include high-performance transformers, power storage devices, electric power transmission, electric motors (e.g. for vehicle propulsion), and magnetic levitation devices.

## Superconductors in science fiction

Superconductivity has long been a staple of science fiction. One of the first mentions of the phenomenon occurred in Robert Heinlein's novel Beyond This Horizon (1942). Notably, the use of a fictional room temperature superconductor was a major plot point in the Ringworld novels by Larry Niven, first published in 1970. Superconductivity is a popular device in science fiction due to the simplicity of the underlying concept - zero electrical resistance - and the rich technological possibilities. For example, superconducting magnets could be used to generate the powerful magnetic fields used by Bussard ramjets, a type of spacecraft commonly encountered in science fiction. The most troublesome property of real superconductors, the need for cryogenic cooling, is often circumvented by postulating the existence of room temperature superconductors.
Many stories attribute additional properties to their fictional superconductors, ranging from infinite heat conductivity in Niven's novels (real superconductors conduct heat poorly, though superfluid helium has immense but finite heat conductivity) to teleportation in the Stargate movie and TV series.

## Links and references

### Selected references

Papers

• H. K. Onnes, Commun. Phys. Lab. 12, 120 (1911)
• W. Meissner and R. Ochsenfeld, Naturwiss. 21, 787 (1933)
• F. London and H. London, Proc. R. Soc. London A149, 71 (1935)
• V. L. Ginzburg and L. D. Landau, Zh. Eksp. Teor. Fiz. 20, 1064 (1950)
• E. Maxwell, Phys. Rev. 78, 477 (1950)
• C. A. Reynolds et al., Phys. Rev. 78, 487 (1950)
• J. Bardeen, L. N. Cooper, and J. R. Schrieffer, Phys. Rev. 108, 1175 (1957)
• N. N. Bogoliubov, Zh. Eksp. Teor. Fiz. 34, 58 (1958)
• L. P. Gor'kov, Zh. Eksp. Teor. Fiz. 36, 1364 (1959)
• B. D. Josephson, Phys. Lett. 1, 251 (1962)
• J. G. Bednorz and K. A. Mueller, Z. Phys. B64, 189 (1986)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.92167067527771, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/214469/are-projective-modules-over-an-artinian-ring-free?answertab=votes
# Are projective modules over an artinian ring free?

Quoting a comment to this question: By a theorem of Serre, if $R$ is a commutative artinian ring, every projective module [over $R$] is free. (The theorem states that for any commutative noetherian ring $R$ and projective module $P$ [over $R$], if $\operatorname{rank}(P) > \dim(R)$, then there exists a projective [$R$-module] $Q$ with $\operatorname{rank}(Q)=\dim(R)$ such that $P\cong R^k \oplus Q$ where $k=\operatorname{rank}(P)-\dim(R)$.)

When $R$ is a PID, this is in Lang's Algebra (Section III.7), and when $R$ is local this is a famous theorem of Kaplansky. But in spite of a reasonable effort, I can't seem to find any other reference to this theorem of Serre. Does anyone know of one? Is there any other way to show that every projective module over an artinian ring is free?

- Let $R=\mathbb{Z}_6$ and $P=\hat{2}\mathbb{Z}_6$. Then $R$ is artinian, $P$ is projective and not free. – YACP Oct 15 '12 at 22:02
- – Hamish Oct 15 '12 at 22:28
- For any (non-zero) commutative ring $R$, you can construct finite projective modules on $R\times R$ which aren't free by gluing two $R$-modules of different ranks. In particular you can do this for $k\times k$ where $k$ is any field, which is an Artinian ring. – Keenan Kidwell Oct 15 '12 at 22:54

## 2 Answers

Let $R$ be any commutative ring whose projective modules are all free, and let $e\notin \{0,1\}$ be an idempotent of $R$. Then $eR$ and $(1-e)R$ are both projective, hence free of some rank 1 or more, and $eR\oplus(1-e)R=R$, so that we have $R^n\cong R$ as $R$-modules for some natural number $n\geq 2$. This is absurd since commutative rings have IBN. This shows that $R$ cannot have any nontrivial idempotents. Since an Artinian ring without nontrivial idempotents is local, you can now see the dramatic failure of Artinian rings to have the "projective implies free" property, except in the "good" local case.

- @navigetor23 Commutative implies IBN is hardly a theorem: it can even be proved here. It's apparent that $R^n\cong R^m$ iff there are matrices of appropriate sizes such that $AB=I_n$ and $BA=I_m$. Let $R$ be nonzero commutative and suppose $R^m\cong R^n$ with $n\neq m$. Project with $\pi$ onto $R/M=F$ for $M$ a maximal ideal. If the above $A,B$ exist, then projecting all the entries with $\pi$ says that the field $F$ does not have IBN, an absurdity. I know you'll probably be able to provide a similar length proof of your own statement, but please don't spoil it for me :) – rschwieb Oct 16 '12 at 16:25

Sorry to show up late to this party, but you were quoting my comment and somehow I missed it. Yes, a condition was overlooked: we must presume that $P$ has constant rank; alternatively, that $\operatorname{Spec}(A)$ is connected, or that $A$ has no non-trivial idempotents, etc. This result is Serre's Splitting Theorem (SST), which states that a projective $A$-module $P$ of constant rank $r \geq d+1$, where $d=\dim(A)$, must contain a unimodular element [not to be left out: $P$ will also be cancellative under this condition (Bass's Cancellation Theorem)]. $P$ containing a unimodular element $p \in P$ is equivalent to a surjection $P \twoheadrightarrow A$, giving us a kernel $Q$ (which will also be projective), and because the surjection splits we have $P \simeq A \oplus Q$. Repeat this splitting until the rank of the resulting kernel matches the dimension of $A$. As a result, projective $A$-modules for which $\operatorname{rank}(P)=\dim(A)$ are called projective modules of top-rank.
For further reference, see T.Y. Lam's Serre's Problem on Projective Modules (esp. pp. 291-292).

- And since a connected commutative Artinian ring is local, this unfortunately fell under the conditions originally listed in that post (local, PID), and nothing new is gained. I'm glad to learn about this theorem and reference, though! – rschwieb Feb 22 at 17:41
- @YACP Thanks, I went ahead and deleted all the unhelpful ones pointed at your self-deleted posts. I'm not sure why you would discourage edits to improve posts, so I'll just guess you didn't know how to start otherwise and leave it at that. – rschwieb Feb 22 at 17:52
- Indeed, this result becomes rather trivial when $\dim(A) = 0$. Also, I'm happy to be reminded of the full list of my standard assumptions. – Andrew Parker Feb 22 at 17:52
- – YACP Feb 22 at 17:56
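As a small numerical companion to the idempotent argument above (my own sketch, not part of the thread), one can enumerate the idempotents of $\mathbb{Z}/n\mathbb{Z}$ and list the resulting direct summands; for $n = 6$, the summand for $e = 4$ is exactly the module $\hat{2}\mathbb{Z}_6$ from the first comment.

```python
# Sketch (not from the thread): nontrivial idempotents of Z/nZ give direct
# summands eR that are projective but not free, since |eR| < |R|.
def idempotents(n):
    """All e in Z/nZ with e*e = e (mod n)."""
    return [e for e in range(n) if (e * e) % n == e]

n = 6
for e in idempotents(n):
    if e in (0, 1):
        continue  # trivial idempotents give the zero module or R itself
    eR = sorted({(e * r) % n for r in range(n)})
    # R = eR + (1-e)R as an internal direct sum, so eR is projective.
    print(f"e = {e}: eR = {eR}  (size {len(eR)} < {n}, hence not free)")
```

Running this prints the two nontrivial summands {0, 3} and {0, 2, 4} of $\mathbb{Z}/6\mathbb{Z}$.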
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 61, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9339419603347778, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/183051/tate-models-and-subgroups-of-typem-m-re-silverman-hindry-88
# Tate models and subgroups of type $(m,m)$ - re Silverman, Hindry 88

I need help in understanding a passage in a paper by Hindry and Silverman, "The Canonical Height and Integral Points on Elliptic Curves" (re. page 439). Let $E(K)$ be an elliptic curve with multiplicative reduction at a valuation $\nu$, and $K$ a function field of characteristic zero. The torsion points of this curve are supposed to have the form $E(K)_\mathrm{tors} \cong \dfrac{\mathbb{Z}}{m\mathbb{Z}} \times \dfrac{\mathbb{Z}}{n\mathbb{Z}}$. They select a point $P$ so that it kills the right side of the product, i.e., $\dfrac{E(K)_\mathrm{tors}}{\langle P\rangle} \cong \dfrac{\mathbb{Z}}{m\mathbb{Z}} \times \dfrac{\mathbb{Z}}{m\mathbb{Z}}$. Then, they use the isogeny theorem to construct another elliptic curve $E'$ such that $E$ is isogenous with $E'$. Next, they construct the Tate models for these curves: $E(K_\nu) \cong \dfrac{K_\nu^{*}}{\langle q\rangle }$ and $E'(K_\nu) \cong \dfrac{K_\nu^{*}}{\langle q'\rangle}$, where $K_\nu$ is the completion at $\nu$, and $q$ and $q'$ have the property that $\nu(q) > 1$ and $\nu(q') > 1$. Up to this point everything is fine. However, in the next line they find that there are elements $q_1$ and $q'_1$ such that $q = q_1^m$ and $q' = q'_1^m$, the reason being that there is a subgroup of type $(m,m)$ in both curves. At first, I thought that one could see this by elementary group theory, but that only gives that $q^{k} = q_1^m$ for some power $k$ of $q$, and I could not continue from this point. Am I missing something? Can anyone help me understand this? Cheers.

- TeX is not so crude that you need to write $<p>$. I changed it to $\langle p\rangle$. – Michael Hardy Aug 16 '12 at 3:18

## 1 Answer

Fix a choice of $q^{1/m}$ and a primitive $m$-th root of unity $\xi_m$ in an algebraic closure $\bar{K}_\nu$ of $K_\nu$. When you have a parametrization $E(K_\nu)\cong K_\nu^{*}/\langle q\rangle$, the $m$-torsion points of $E(\bar{K}_\nu)$ are represented by $z\in \bar{K}_\nu^{*}$ such that $z^m=q^k$ for some $k\in \mathbb Z$. So $z=(q^{1/m})^{k}\, \xi_m^r$ for some $k, r\in\mathbb Z$, and there are at most $m^2$ solutions modulo $\langle q\rangle$. If this bound is reached inside $E(K_\nu)$, then $z\in K_\nu$ for all $k,r$; in particular $q^{1/m}\in K_\nu$, so setting $q_1 := q^{1/m}$ gives $q = q_1^m$ (and the same argument applies to $q'$).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 45, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9536693692207336, "perplexity_flag": "head"}
http://mathoverflow.net/questions/81958/complement-of-a-simply-connected-domain
## Complement of a simply connected domain [closed]

Is the unbounded connected component of the complement of the closure of a bounded simply connected domain in the extended complex plane $\overline C$ a Jordan domain (in $\overline C= S^2\subset R^3$)?

Explanation: If $G\subset \overline C$ is a bounded simply connected domain, then $H:=$ the unbounded connected component of $\operatorname{Interior}(\overline C\setminus G)$ is a simply connected domain in $\overline C$. My question is whether $H$ is a Jordan domain in $\overline C$. It is a very natural question! I am trying to find and define the smallest Jordan domain $G'$ containing a simply connected bounded domain $G$ in the complex plane (it will be equal to the complement of $H$, provided the answer to the question is affirmative)!

- The question you ask does not quite match the title of your post. Moreover, please see mathoverflow.net/howtoask and mathoverflow.net/faq#whatnot – Yemon Choi Nov 26 2011 at 19:47
- Looks like homework AND does not make any sense. – Igor Rivin Nov 26 2011 at 19:48
- I hope that the question is now clear. – Marijan Nov 26 2011 at 20:04
- You still don't say why you want to know the answer, or give evidence that you've done some special cases or tried to use existing results – Yemon Choi Nov 26 2011 at 21:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.906976580619812, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/289137/calculate-201234567-mod-251/289140
# Calculate $20^{1234567} \mod 251$

I need to calculate the following: $$20^{1234567} \mod 251$$ I am struggling with this because $251$ is a prime number, so I can't simplify anything, and I don't have a clue how to go on. Moreover, how do I figure out the period of $[20]_{251}$? Any suggestions, remarks, or nudges in the right direction are very much appreciated.

## 2 Answers

Hint: Use Fermat's Little Theorem to find the period of $[20]_{251}$. Then, use this to simplify $20^{1234567}$.

- ...and then you're left with $20^{67} \bmod 251$. – Fly by Night Jan 28 at 18:27
- Then do some exponentiation by squaring. – Joe Z. Jan 28 at 18:30
- How does that help? – Fly by Night Jan 28 at 18:37
- If you square 20 six times, then multiply by 400 and then by 20, taking the factor mod 251 each time, you will have calculated $20^{67}$ modulo 251. – Joe Z. Jan 28 at 19:00

If you do not know the little theorem, a painful but, I think, still workable method is to observe that $2^{10} = 1024 \equiv 20$, and $10^3 = 1000 \equiv -4$. Then, we may proceed like this: $20^{1234567} = 2^{1234567}10^{1234567} = 2^{123456\times 10 + 7}10^{411522\times 3 + 1} = 1280\times 1024^{123456}1000^{411522} \equiv 1280\times 20^{123456}4^{411522}$. Observe that after one pass, we are still left with powers of $2$ and powers of $20$ that we can handle with the same equivalences above. We still have to make some calculations, obviously (divisions etc.), but it is at least not as hopeless as before.
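For readers who want to check the arithmetic, here is a minimal sketch (my own addition, not from the thread) of the route suggested in the accepted answer: reduce the exponent modulo $250$ via Fermat's little theorem, then finish with exponentiation by squaring.

```python
# Since 251 is prime and gcd(20, 251) = 1, Fermat gives 20^250 = 1 (mod 251),
# so the exponent can be reduced modulo 250: 1234567 mod 250 = 67.
def power_mod(base, exp, mod):
    """Right-to-left binary exponentiation: O(log exp) multiplications."""
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:                 # current bit of the exponent is set
            result = (result * base) % mod
        base = (base * base) % mod  # square for the next bit
        exp >>= 1
    return result

p = 251
e = 1234567 % (p - 1)               # e = 67
assert power_mod(20, 1234567, p) == power_mod(20, e, p) == pow(20, 1234567, p)
print(power_mod(20, e, p))
```

The assertion cross-checks the hand-rolled routine against Python's built-in three-argument `pow`, which uses the same square-and-multiply idea.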
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9550809860229492, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/213336/noether-normalization-over-mathbbz
# Noether normalization over $\mathbb{Z}$

I would like to know what the correct analogue of the Noether normalization theorem is for rings finitely generated over $\mathbb Z$. Obviously, Noether normalization cannot hold "literally" in this case since, for example, the ring $\mathbb Z_2[X]$ does not contain a polynomial subring with coefficients in $\mathbb Z$ over which it is finite. I am asking this question to better understand the second part of the answer of Qing Liu to the question given here: http://mathoverflow.net/questions/57515/one-point-in-the-post-of-terence-tao-on-ax-grothendieck-theorem

## 1 Answer

Take a look at this: http://www.math.lsa.umich.edu/~hochster/615W10/supNoeth.pdf. It proves the generalized version of Noether Normalization, which is what you need (or rather what Qing Liu uses in his answer). In general I think Mel Hochster's notes are really good. The above link was in the following answer http://mathoverflow.net/questions/42276, which was a link in Qing Liu's answer that you mention in your question.

Sorry, I should mention what the general version of Noether Normalization is that Hochster proves in his notes: Let $D$ be a domain, and $R$ a finitely generated $D$-algebra. There exists a nonzero $f \in D$, and a finite injective ring map $D_f[X_1,\dots,X_n] \hookrightarrow R_f$. Here the $X_i$ are indeterminates.

Note how the above version implies Noether Normalization over a field. Although, if you know some basic scheme theory, I feel like Qing Liu's answer involving constructible sets is equally enlightening.

- Dear Rankeya, thank you for the answer and for affirming that Hochster's notes are good :) ! This is important information. – agleaner Oct 14 '12 at 12:43
- Also, I have one more question. Would you advise some (not too scary) place where to read the proof of Chevalet theorem on constructive sheaves? – agleaner Oct 14 '12 at 12:59
- If you meant Chevalley's Theorem on constructible sets, then Ravi Vakil's notes, "Foundations of Algebraic Geometry", available on his website has a nice section on constructible sheaves and Chevalley's theorem. I believe he proves the theorem in section 8.4 of his notes. Note, however, that Ravi leaves many things as exercises, which depending on your background might be time consuming. Also, when you want to know about any topic in AG (even CA), I recommend the Stacks Project. It has a wonderful new search feature, which allows you to go straight to what you want. – Rankeya Oct 14 '12 at 14:15
- More importantly, it has a section on constructible sets, and Chevalley's theorem. Most of these sources might appear scary the first time you use them. I was scared the first time I saw the Stacks Project. If you don't let your fears get to you, I guarantee that you will be rewarded and learn some beautiful math. – Rankeya Oct 14 '12 at 14:15
- It is interesting, I never heard about Stacks Project... I see it has over 3000 pages :)... Will try to see if I will be able to use it. – agleaner Oct 14 '12 at 14:29
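To see why inverting some nonzero $f \in D$ is unavoidable in the generalized statement, here is a standard small example (my own addition, not from the thread), written out in LaTeX:

```latex
% Example: why localizing at some f is necessary.
% Take D = \mathbb{Z} and R = \mathbb{Z}[x]/(2x - 1) \cong \mathbb{Z}[1/2],
% a finitely generated \mathbb{Z}-algebra. R is not finite over any
% polynomial subring \mathbb{Z}[X_1,\dots,X_n]: its fraction field is
% \mathbb{Q}, so R contains no element transcendental over \mathbb{Z}
% (forcing n = 0), and R is not even finitely generated as a
% \mathbb{Z}-module. After inverting f = 2, however, we get
% D_f = \mathbb{Z}[1/2] = R_f, and the inclusion D_f \hookrightarrow R_f
% is finite (indeed an equality, with n = 0 indeterminates).
```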
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9335014820098877, "perplexity_flag": "middle"}
http://ams.org/bookstore?fn=20&arg1=ficseries&ikey=FIC-59
# Geometric Representation Theory and Extended Affine Lie Algebras

Edited by: Erhard Neher and Alistair Savage, University of Ottawa, ON, Canada, and Weiqiang Wang, University of Virginia, Charlottesville, VA

A co-publication of the AMS and Fields Institute.

Fields Institute Communications, 2011; 213 pp; hardcover. Volume: 59. ISBN-10: 0-8218-5237-X. ISBN-13: 978-0-8218-5237-8. List Price: US\$99. Member Price: US\$79.20. Order Code: FIC/59.

Lie theory has connections to many other disciplines such as geometry, number theory, mathematical physics, and algebraic combinatorics. The interaction between algebra, geometry and combinatorics has proven to be extremely powerful in shedding new light on each of these areas. This book presents the lectures given at the Fields Institute Summer School on Geometric Representation Theory and Extended Affine Lie Algebras held at the University of Ottawa in 2009. It provides a systematic account by experts of some of the exciting developments in Lie algebras and representation theory in the last two decades. It includes topics such as geometric realizations of irreducible representations in three different approaches, combinatorics and geometry of canonical and crystal bases, finite $$W$$-algebras arising as the quantization of the transversal slice to a nilpotent orbit, structure theory of extended affine Lie algebras, and representation theory of affine Lie algebras at level zero. This book will be of interest to mathematicians working in Lie algebras and to graduate students interested in learning the basic ideas of some very active research directions. The extensive references in the book will be helpful to guide non-experts to the original sources.

Titles in this series are co-published with the Fields Institute for Research in Mathematical Sciences (Toronto, Ontario, Canada).

Readership: Graduate students and research mathematicians interested in Lie algebras and algebraic combinatorics.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8160952925682068, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/entropy?sort=faq&pagesize=15
# Tagged Questions

An important property of all systems in thermodynamics and statistical mechanics. Entropy characterizes the degree to which the energy of the system is *not* available to do useful work.

3 answers · 1k views

### How do you prove $S=-\sum p\ln p$?

How does one prove the formula for entropy $S=-\sum p\ln p$? Obviously systems on the microscopic level are fully determined by the microscopic equations of motion. So if you want to introduce a law ...

3 answers · 915 views

### Maxwell's Demon Constant (Information-Energy equivalence)

New Scientist article: Summon a 'demon' to turn information into energy. The speed of light c converts between space and time and also appears in e=mc^2. Maxwell's Demon can turn information supplied ...

6 answers · 658 views

### How can it be that the beginning universe had a high temperature and a low entropy at the same time?

The Big Bang theory assumes that our universe started from a very/infinitely dense and extremely/infinitely hot state. But on the other side, it is often claimed that our universe must have been ...

3 answers · 750 views

### How efficient is a desktop computer?

As I understand it (and admittedly it's a weak grasp), a computer processes information irreversibly (AND gates, for example), and therefore has some minimum entropy increase associated with its ...

6 answers · 623 views

### Does the scientific community consider the Loschmidt paradox resolved? If so what is the resolution?

Does the scientific community consider the Loschmidt paradox resolved? If so what is the resolution? I have never seen dissipation explained, although what I have seen a lot is descriptions of ...

5 answers · 2k views

### Why was the universe in an extraordinarily low-entropy state right after the big bang?

Let me start by saying that I have no scientific background whatsoever. I am very interested in science though and I'm currently enjoying Brian Greene's The Fabric of the Cosmos. I'm at chapter 7 and ...

4 answers · 945 views

### What is information?

We're all familiar with basic tenets such as "information cannot be transmitted faster than light" and ideas such as information conservation in scenarios like Hawking radiation (and in general, ...

7 answers · 598 views

### How is $\frac{dQ}{T}$ a measure of the randomness of a system?

I am studying entropy and it's hard for me to grasp what exactly entropy is. Many articles and books write that entropy is the measure of randomness or disorder of the system. They say when a gas ...

3 answers · 1k views

### Why does the low entropy at the big bang require an explanation? (cosmological arrow of time)

I have read Sean Carroll's book. I have listened to Roger Penrose talk on "Before the Big Bang". Both are offering to explain the mystery of low entropy, highly ordered state, at the Big Bang. Since ...

3 answers · 4k views

### Do magnets lose their magnetism?

I recently bought some buckyballs, considered to be the world's best selling desk toy. Essentially, they are little, spherical magnets that can form interesting shapes when a bunch of them are used ...

3 answers · 295 views

### Chance of objects going against greater entropy

My book uses the argument that the multiplicities of a few macrostates in a macroscopic object take up an extraordinarily large share of all possible microstates, such that even over the entire ...

3 answers · 1k views

### Is there any proof for the 2nd law of thermodynamics?

Are there any analytical proofs for the 2nd law of thermodynamics? Or is it based entirely on empirical evidence?
5 answers · 1k views

### Can a single classical particle have any entropy?

Recently I have had some exchanges with @Marek regarding entropy of a single classical particle. I always believed that to define entropy one must have some distribution. In Quantum theory, a single ...

1 answer · 264 views

### Motivation for maximum Renyi/Tsallis entropy

The Conditional limit theorem of Van Campenhout and Cover gives a physical reason for maximizing (Shannon) entropy. Nowadays, in statistical mechanics, people talk about maximum Renyi/Tsallis entropy ...

4 answers · 584 views

### Ignorance in statistical mechanics

Consider this penny on my desk. It is a particular piece of metal, well described by statistical mechanics, which assigns to it a state, namely the density matrix $\rho_0=\frac{1}{Z}e^{-\beta H}$ ...

2 answers · 538 views

### About Susskind's claim "information is indestructible"

I really can't understand what Leonard Susskind means when he says that information is indestructible. Is that information really recoverable? He himself said that entropy is hidden information. ...

4 answers · 852 views

### Entropy of radiation emitted into space

In several papers I see something equivalent to the following expression for the entropy of radiation given by an astronomical object such as the Sun (assuming the object can be approximated as a ...

3 answers · 319 views

### Is there a mechanism for time symmetry breaking?

Excluding Thermodynamics' arrow of time, all mathematical descriptions of time are symmetric. We know the arrow of time is real and we know the equations describing physics are real, so is there any ...

1 answer · 158 views

### Why is (von Neumann) entropy maximized for an ensemble in thermal equilibrium?

Consider a quantum system in thermal equilibrium with a heat bath. In determining the density operator of the system, the usual procedure is to maximize the von Neumann entropy subject to the ...

4 answers · 615 views

### What are the arguments towards the Life-and-Entropy relation?

I've heard it from a few people, and I've seen it pop up here on the site a couple of times. There seems to be speculation (and studies?) towards this idea, and this is what I've picked up so far: ...

2 answers · 248 views

### Connection between entropy and energy

An isolated system $A$ has entropy $S_a>0$. Next, the isolation of $A$ is temporarily violated, and it has entropy reduced $$S_b ~=~ S_a - S,\space\space\space S\leq S_a.$$ Is it true to say: the ...

2 answers · 410 views

### Energy of unmixing

Mixing of two different fluids is associated with an increase of entropy. Conversely, separation of two gases must be associated with a decrease of the entropy of the two fluids. Is there a minimum ...

2 answers · 224 views

### Black hole no-hair theorems vs. entropy and surface area

I was revisiting some old popular science books a while ago and two statements struck me as incompatible. No-hair theorems: a black hole is fully described by just a few numbers (mass, spin, etc.) ...

2 answers · 440 views

### Experiments that measure the time a gas takes to reach equilibrium

If you take two ideal gases at different temperatures, and allow them to share energy through heat, they'll eventually reach a thermodynamic equilibrium state that has higher entropy than the ...

5 answers · 1k views

### Second law of Thermodynamics: Why is it only "almost" always true that entropy is non-decreasing?

The venerable Wikipedia states the second law of Thermodynamics as such: ...
2 answers · 366 views

### Does entropy apply to Newton's First Law, or does "acted upon" always require an external factor?

First law: Every body remains in a state of rest or uniform motion (constant velocity) unless it is acted upon by an external unbalanced force. This means that in the absence of a non-zero net ...

3 answers · 379 views

### How bright can we make a sun jar?

A sun jar is an object that stores solar energy in a battery and then releases it during dark hours through an LED. Assume: a $65cm^2$ solar panel, a 12h/12h light/dark cycle, insolation of ...

1 answer · 348 views

### Does the heat death of the universe really imply a maximum entropy state *all* of the time? Or most of the time?

Statistically speaking, you're going to still encounter deviations from equilibrium, even though the expected value is equilibrium. But these rare deviations from equilibrium - which are inevitable - ...

1 answer · 47 views

### Would the universe get consumed by black holes because of entropy?

Since the total entropy of the universe is increasing because of spontaneous processes, black holes form because of entropy (correct me if I'm wrong), and the universe is always expanding, would the ...

0 answers · 44 views

### Entropy calculation, erasing bits? [closed]

Erasing 2 bits, information loss

2 answers · 349 views

### What is the physical or mathematical meaning of the Gibbs-Duhem equation?

The Gibbs-Duhem equation states $$0~=~SdT-VdP+\sum(N_i d\mu_i),$$ where $\mu$ is the chemical potential. Does it have any mathematical (about intensive parameters) or physical meaning?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9290016889572144, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/26903?sort=oldest
## When are entire functions surjective?

Is there some useful criterion to determine whether or not an entire function is surjective?

## 1 Answer

Maybe Picard's theorem is of help: http://en.wikipedia.org/wiki/Picard_theorem

- Indeed. And it is surjective if and only if it is not of the form $e^{h(z)}+\alpha$ for a suitable constant $\alpha$ and a suitable entire function $h(z)$. – Roland Bacher Jun 3 2010 at 10:13
- +1. And to show this, it's probably worth looking at en.wikipedia.org/wiki/… – dke Jun 3 2010 at 10:24
- I don't see how Picard's theorem, or Roland Bacher's remark, is useful in practice to determine whether an entire function is surjective. – Pete L. Clark Jun 3 2010 at 12:36
- @Pete L. Clark: Hence the 'maybe' in the post. I was thinking about a useful/practical criterion but nothing came to mind. – babubba Jun 3 2010 at 12:51
- But certainly if there is no $\alpha \in \mathbb{C}$ such that $\frac{f'(z)}{f(z) - \alpha}$ is entire, we can conclude that $f$ is surjective. – Saul Glasman Jun 3 2010 at 20:59
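For completeness, here is the standard argument behind Roland Bacher's criterion (my own sketch, not taken from the thread):

```latex
% Claim: an entire f is NOT surjective iff f(z) = e^{h(z)} + \alpha
% for some entire h and some constant \alpha.
%
% (<=) e^{h(z)} never vanishes, so f(z) = e^{h(z)} + \alpha never takes
% the value \alpha.
%
% (=>) Suppose f omits the value \alpha. Then g := f - \alpha is entire
% and zero-free; since \mathbb{C} is simply connected, g admits a
% holomorphic logarithm h (a primitive of g'/g), giving g = e^{h} and
% hence f = e^{h} + \alpha. By the little Picard theorem, a non-constant
% entire function omits at most one value, so such an \alpha is unique.
```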
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8722923398017883, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Confidence_intervals
# Confidence interval

See also: Confidence distribution

In statistics, a confidence interval (CI) is a type of interval estimate of a population parameter and is used to indicate the reliability of an estimate. It is an observed interval (i.e. it is calculated from the observations), in principle different from sample to sample, that frequently includes the parameter of interest if the experiment is repeated. How frequently the observed interval contains the parameter is determined by the confidence level or confidence coefficient. More specifically, the meaning of the term "confidence level" is that, if confidence intervals are constructed across many separate data analyses of repeated (and possibly different) experiments, the proportion of such intervals that contain the true value of the parameter will match the confidence level; this is guaranteed by the reasoning underlying the construction of confidence intervals.[1][2][3] Whereas two-sided confidence limits form a confidence interval, their one-sided counterparts are referred to as lower or upper confidence bounds.

Confidence intervals consist of a range of values (an interval) that act as good estimates of the unknown population parameter; however, in infrequent cases, none of these values may cover the value of the parameter. The level of confidence of the confidence interval indicates the probability that the confidence range captures the true population parameter given a distribution of samples; it does not describe any single sample. This value is represented by a percentage, so when we say "we are 99% confident that the true value of the parameter is in our confidence interval", we mean that 99% of the observed confidence intervals will contain the true value of the parameter. Once a sample is taken, the population parameter either lies in the computed interval or it does not; there is no probability about it. The desired level of confidence is set by the researcher (not determined by data). If a corresponding hypothesis test is performed, the confidence level is the complement of the respective level of significance, i.e. a 95% confidence interval reflects a significance level of 0.05.[citation needed] The confidence interval contains the parameter values that, when tested, should not be rejected with the same sample. Greater levels of variance yield larger confidence intervals, and hence less precise estimates of the parameter. Confidence intervals for difference parameters that do not contain 0 imply a statistically significant difference between the populations.

In applied practice, confidence intervals are typically stated at the 95% confidence level.[4] However, when presented graphically, confidence intervals can be shown at several confidence levels, for example 50%, 95% and 99%. Certain factors may affect the confidence interval size, including sample size, level of confidence, and population variability. A larger sample size normally leads to a better estimate of the population parameter.

A confidence interval does not predict that the true value of the parameter has a particular probability of being in the confidence interval given the data actually obtained. (An interval intended to have such a property, called a credible interval, can be estimated using Bayesian methods; but such methods bring with them their own distinct strengths and weaknesses.)
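To make the coverage interpretation above concrete, here is a small simulation sketch (not part of the original article; the values μ = 10, σ = 2 and n = 25 are illustrative assumptions): across many repeated samples, roughly 95% of the computed z-intervals contain the true mean.

```python
import numpy as np

# Hypothetical population parameters and sample size (illustrative only).
mu, sigma, n = 10.0, 2.0, 25
z = 1.96                        # two-sided 95% critical value of N(0, 1)
half = z * sigma / np.sqrt(n)   # half-width of each interval

rng = np.random.default_rng(42)
trials = 10_000
covered = 0
for _ in range(trials):
    # The sample mean is normally distributed with standard error sigma/sqrt(n).
    xbar = rng.normal(mu, sigma / np.sqrt(n))
    if xbar - half <= mu <= xbar + half:
        covered += 1
print(covered / trials)         # comes out near 0.95
```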
## Conceptual basis

In this bar chart, the top ends of the bars indicate observation means and the red line segments represent the confidence intervals surrounding them. Although the bars are shown as symmetric in this chart, they do not have to be symmetric.

### Introduction

Interval estimates can be contrasted with point estimates. A point estimate is a single value given as the estimate of a population parameter that is of interest, for example the mean of some quantity. An interval estimate specifies instead a range within which the parameter is estimated to lie. Confidence intervals are commonly reported in tables or graphs along with point estimates of the same parameters, to show the reliability of the estimates.

For example, a confidence interval can be used to describe how reliable survey results are. In a poll of election voting-intentions, the result might be that 40% of respondents intend to vote for a certain party. A 90% confidence interval for the proportion in the whole population having the same intention on the survey date might be 38% to 42%. From the same data one may calculate a 95% confidence interval, which in this case might be 36% to 44%. A major factor determining the length of a confidence interval is the size of the sample used in the estimation procedure, for example the number of people taking part in a survey.

### Meaning and interpretation

For users of frequentist methods, various interpretations of a confidence interval can be given.

• The confidence interval can be expressed in terms of samples (or repeated samples): "Were this procedure to be repeated on multiple samples, the calculated confidence interval (which would differ for each sample) would encompass the true population parameter 90% of the time."[1] Note that this does not refer to repeated measurement of the same sample, but repeated sampling.[2]
• The explanation of a confidence interval can amount to something like: "The confidence interval represents values for the population parameter for which the difference between the parameter and the observed estimate is not statistically significant at the 10% level".[5] In fact, this relates to one particular way in which a confidence interval may be constructed.
• The probability associated with a confidence interval may also be considered from a pre-experiment point of view, in the same context in which arguments for the random allocation of treatments to study items are made. Here the experimenter sets out the way in which they intend to calculate a confidence interval and knows, before they do the actual experiment, that the interval they will end up calculating has a certain chance of covering the true but unknown value.[3] This is very similar to the "repeated sample" interpretation above, except that it avoids relying on considering hypothetical repeats of a sampling procedure that may not be repeatable in any meaningful sense. See Neyman construction.

In each of the above, the following applies: if the true value of the parameter lies outside the 90% confidence interval once it has been calculated, then an event has occurred which had a probability of 10% (or less) of happening by chance.

#### Philosophical issues

The principle behind confidence intervals was formulated to provide an answer to the question raised in statistical inference of how to deal with the uncertainty inherent in results derived from data that are themselves only a randomly selected subset of a population.
There are other answers, notably that provided by Bayesian inference in the form of credible intervals. Confidence intervals correspond to a chosen rule for determining the confidence bounds, where this rule is essentially determined before any data are obtained, or before an experiment is done. The rule is defined such that over all possible datasets that might be obtained, there is a high probability ("high" is specifically quantified) that the interval determined by the rule will include the true value of the quantity under consideration. That is a fairly straightforward and reasonable way of specifying a rule for determining uncertainty intervals. The Bayesian approach appears to offer intervals that can, subject to acceptance of an interpretation of "probability" as Bayesian probability, be interpreted as meaning that the specific interval calculated from a given dataset has a certain probability of including the true value, conditional on the data and other information available. The confidence interval approach does not allow this, since in this formulation and at this same stage, both the bounds of the interval and the true value are fixed, and there is no randomness involved.

For example, in the poll example outlined in the introduction, to be 95% confident that the actual number of voters intending to vote for the party in question is between 36% and 44% should not be interpreted in the common-sense way as meaning that there is a 95% probability that the actual number of voters intending to vote for the party in question is between 36% and 44%. The actual meaning of confidence levels and confidence intervals is rather more subtle. In the above case, a correct interpretation would be as follows: if the polling were repeated a large number of times, each time generating about a 95% confidence interval from the poll sample, then 95% of the generated intervals would contain the true percentage of voters who intend to vote for the given party. Each time the polling is repeated, a different confidence interval is produced; hence, it is not possible to make absolute statements about probabilities for any one given interval. For more information, see the section on meaning and interpretation.

The questions concerning how an interval expressing uncertainty in an estimate might be formulated, and of how such intervals might be interpreted, are not strictly mathematical problems and are philosophically problematic.[6] Mathematics can take over once the basic principles of an approach to inference have been established, but it has only a limited role[original research?] in saying why one approach should be preferred to another.[citation needed]

### Relationship with other statistical topics

#### Statistical hypothesis testing

Confidence intervals are closely related to statistical significance testing. For example, if for some estimated parameter θ one wants to test the null hypothesis that θ = 0 against the alternative that θ ≠ 0, then this test can be performed by determining whether the confidence interval for θ contains 0.
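As a concrete sketch of this duality (my own illustration, not from the article; the sample below is simulated and the parameter values are arbitrary assumptions), the two-sided one-sample t-test of θ = 0 at level α rejects exactly when the 1 − α t-interval for the mean excludes 0:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=0.3, scale=1.0, size=30)  # hypothetical sample (assumption)
alpha = 0.05

# 95% t-interval for the mean
n, xbar, s = len(x), x.mean(), x.std(ddof=1)
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)
half = t_crit * s / np.sqrt(n)
ci = (xbar - half, xbar + half)

# Two-sided one-sample t-test of the null hypothesis mu = 0
res = stats.ttest_1samp(x, popmean=0.0)

# The test rejects at level alpha exactly when 0 lies outside the interval.
assert (res.pvalue < alpha) == (not (ci[0] <= 0.0 <= ci[1]))
print(ci, res.pvalue)
```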
More generally, given the availability of a hypothesis testing procedure that can test the null hypothesis θ = θ0 against the alternative that θ ≠ θ0 for any value of θ0, a confidence interval with confidence level γ = 1 − α can be defined as containing any number θ0 for which the corresponding null hypothesis is not rejected at significance level α.[7]

In consequence,[clarification needed] if the estimates of two parameters (for example, the mean values of a variable in two independent groups of objects) have confidence intervals at a given γ value that do not overlap, then the difference between the two values is significant at the corresponding value of α. However, this test is too conservative: if two confidence intervals overlap, the difference between the two means may still be significantly different.[8][9]

While the formulations of the notions of confidence intervals and of statistical hypothesis testing are distinct, they are in some senses related and to some extent complementary. While not all confidence intervals are constructed in this way, one general purpose approach to constructing confidence intervals is to define a 100(1 − α)% confidence interval to consist of all those values θ0 for which a test of the hypothesis θ = θ0 is not rejected at a significance level of 100α%. Such an approach may not always be available since it presupposes the practical availability of an appropriate significance test. Naturally, any assumptions required for the significance test would carry over to the confidence intervals. It may be convenient to make the general correspondence that parameter values within a confidence interval are equivalent to those values that would not be rejected by a hypothesis test, but this would be dangerous. In many instances the confidence intervals that are quoted are only approximately valid, perhaps derived from "plus or minus twice the standard error", and the implications of this for the supposedly corresponding hypothesis tests are usually unknown.

It is worth noting that the confidence interval for a parameter is not the same as the acceptance region of a test for this parameter, as is sometimes thought. The confidence interval is part of the parameter space, whereas the acceptance region is part of the sample space. For the same reason, the confidence level is not the same as the complementary probability of the level of significance.

#### Confidence region

Main article: Confidence region

Confidence regions generalize the confidence interval concept to deal with multiple quantities. Such regions can indicate not only the extent of likely sampling errors but can also reveal whether (for example) it is the case that if the estimate for one quantity is unreliable, then the other is also likely to be unreliable.

#### Confidence band

Main article: Confidence band

## Statistical theory

### Definition

Let X be a random sample from a probability distribution with statistical parameters θ, which is a quantity to be estimated, and φ, representing quantities that are not of immediate interest.
A confidence interval for the parameter θ, with confidence level or confidence coefficient γ, is an interval with random endpoints (u(X), v(X)), determined by the pair of random variables u(X) and v(X), with the property:

${\Pr}_{\theta,\varphi}(u(X)<\theta<v(X))=\gamma\text{ for all }(\theta,\varphi).$

The quantities φ in which there is no immediate interest are called nuisance parameters, as statistical theory still needs to find some way to deal with them. The number γ, with typical values close to but not greater than 1, is sometimes given in the form 1 − α (or as a percentage 100%·(1 − α)), where α is a small non-negative number, close to 0.

Here Prθ,φ indicates the probability distribution of X characterised by (θ, φ). An important part of this specification is that the random interval (u(X), v(X)) covers the unknown value θ with a high probability no matter what the true value of θ actually is. Note that here Prθ,φ need not refer to an explicitly given parameterised family of distributions, although it often does. Just as the random variable X notionally corresponds to other possible realizations of x from the same population or from the same version of reality, the parameters (θ, φ) indicate that we need to consider other versions of reality in which the distribution of X might have different characteristics.

In a specific situation, when x is the outcome of the sample X, the interval (u(x), v(x)) is also referred to as a confidence interval for θ. Note that it is no longer possible to say that the (observed) interval (u(x), v(x)) has probability γ to contain the parameter θ. This observed interval is just one realization of all possible intervals for which the probability statement holds.

#### Approximate confidence intervals

In many applications, confidence intervals that have exactly the required confidence level are hard to construct. But practically useful intervals can still be found: the rule for constructing the interval may be accepted as providing a confidence interval at level γ if

${\Pr}_{\theta,\phi}(u(X)<\theta<v(X))\approx\gamma\text{ for all }(\theta,\phi)$

to an acceptable level of approximation. Alternatively, some authors[10] simply require that

${\Pr}_{\theta,\phi}(u(X)<\theta<v(X))\ge\gamma\text{ for all }(\theta,\phi),$

which is useful if the probabilities are only partially identified or imprecise.

### Desirable properties

When applying standard statistical procedures, there will often be standard ways of constructing confidence intervals. These will have been devised so as to meet certain desirable properties, which will hold given that the assumptions on which the procedure relies are true. These desirable properties may be described as: validity, optimality and invariance. Of these, "validity" is most important, followed closely by "optimality". "Invariance" may be considered as a property of the method of derivation of a confidence interval rather than of the rule for constructing the interval. In non-standard applications, the same desirable properties would be sought.

• Validity. This means that the nominal coverage probability (confidence level) of the confidence interval should hold, either exactly or to a good approximation.
• Optimality. This means that the rule for constructing the confidence interval should make as much use of the information in the data-set as possible. Recall that one could throw away half of a dataset and still be able to derive a valid confidence interval. One way of assessing optimality is by the length of the interval, so that a rule for constructing a confidence interval is judged better than another if it leads to intervals whose lengths are typically shorter.
• Invariance. In many applications the quantity being estimated might not be tightly defined as such. For example, a survey might result in an estimate of the median income in a population, but it might equally be considered as providing an estimate of the logarithm of the median income, given that this is a common scale for presenting graphical results. It would be desirable that the method used for constructing a confidence interval for the median income would give equivalent results when applied to constructing a confidence interval for the logarithm of the median income: specifically, the values at the ends of the latter interval would be the logarithms of the values at the ends of the former interval.

### Methods of derivation

For non-standard applications, there are several routes that might be taken to derive a rule for the construction of confidence intervals. Established rules for standard procedures might be justified or explained via several of these routes. Typically a rule for constructing confidence intervals is closely tied to a particular way of finding a point estimate of the quantity being considered.

• Descriptive statistics: This is closely related to the method of moments for estimation. A simple example arises where the quantity to be estimated is the mean, in which case a natural estimate is the sample mean. The usual arguments indicate that the sample variance can be used to estimate the variance of the sample mean. A naive confidence interval for the true mean can be constructed centered on the sample mean with a width which is a multiple of the square root of the sample variance.
• Likelihood theory: Where estimates are constructed using the maximum likelihood principle, the theory for this provides two ways of constructing confidence intervals or confidence regions for the estimates.[citation needed]
• Estimating equations: The estimation approach here can be considered as both a generalization of the method of moments and a generalization of the maximum likelihood approach. There are corresponding generalizations of the results of maximum likelihood theory that allow confidence intervals to be constructed based on estimates derived from estimating equations.[citation needed]
• Via significance testing: If significance tests are available for general values of a parameter, then confidence intervals/regions can be constructed by including in the 100p% confidence region all those points for which the significance test of the null hypothesis that the true value is the given value is not rejected at a significance level of (1 − p).[7]
• Bootstrapping: In situations where the distributional assumptions for the above methods are uncertain or violated, resampling methods allow construction of confidence intervals or prediction intervals. The observed data distribution and the internal correlations are used as the surrogate for the correlations in the wider population.

## Examples

### Practical example

A machine fills cups with a liquid, and is supposed to be adjusted so that the content of the cups is 250 g of liquid. As the machine cannot fill every cup with exactly 250 g, the content added to individual cups shows some variation, and is considered a random variable X.
This variation is assumed to be normally distributed around the desired average of 250 g, with a standard deviation of 2.5 g. To determine if the machine is adequately calibrated, a sample of n = 25 cups of liquid is chosen at random and the cups are weighed. The resulting measured masses of liquid are X1, ..., X25, a random sample from X.

To get an impression of the expectation μ, it is sufficient to give an estimate. The appropriate estimator is the sample mean:

$\hat \mu=\bar X = \frac{1}{n}\sum_{i=1}^n X_i.$

The sample shows actual weights x1, ..., x25, with mean:

$\bar x=\frac {1}{25} \sum_{i=1}^{25} x_i = 250.2\,\text{grams}.$

If we take another sample of 25 cups, we could easily expect to find mass values like 250.4 or 251.1 grams. A sample mean value of 280 grams, however, would be extremely rare if the mean content of the cups is in fact close to 250 grams. There is a whole interval around the observed value 250.2 grams of the sample mean within which, if the whole population mean actually takes a value in this range, the observed data would not be considered particularly unusual. Such an interval is called a confidence interval for the parameter μ. How do we calculate such an interval? The endpoints of the interval have to be calculated from the sample, so they are statistics, functions of the sample X1, ..., X25 and hence random variables themselves.

In our case we may determine the endpoints by considering that the sample mean $\bar X$ from a normally distributed sample is also normally distributed, with the same expectation μ, but with a standard error of:

$\frac {\sigma}{\sqrt{n}}=\frac {2.5~\text{g}}{\sqrt{25}}=0.5\ \text{grams}.$

By standardizing, we get a random variable:

$Z = \frac {\bar X-\mu}{\sigma/\sqrt{n}} =\frac {\bar X-\mu}{0.5}$

dependent on the parameter μ to be estimated, but with a standard normal distribution independent of the parameter μ. Hence it is possible to find numbers −z and z, independent of μ, between which Z lies with probability 1 − α, a measure of how confident we want to be. We take 1 − α = 0.95, for example. So we have:

$P(-z\le Z\le z) = 1-\alpha = 0.95.$

The number z follows from the cumulative distribution function, in this case the cumulative normal distribution function:

$\begin{align} \Phi(z) & = P(Z \le z) = 1 - \tfrac{\alpha}2 = 0.975,\\[6pt] z & = \Phi^{-1}(0.975) = 1.96, \end{align}$

and we get:

$\begin{align} 0.95 & = 1-\alpha=P(-z \le Z \le z)=P \left(-1.96 \le \frac {\bar X-\mu}{\sigma/\sqrt{n}} \le 1.96 \right) \\[6pt] & = P \left( \bar X - 1.96 \frac{\sigma}{\sqrt{n}} \le \mu \le \bar X + 1.96 \frac{\sigma}{\sqrt{n}}\right). \end{align}$

In other words, the lower endpoint of the 95% confidence interval is:

$\text{Lower endpoint} = \bar X - 1.96 \frac{\sigma}{\sqrt{n}},$

and the upper endpoint of the 95% confidence interval is:

$\text{Upper endpoint} = \bar X + 1.96 \frac{\sigma}{\sqrt{n}}.$

With the values in this example, the confidence interval is:

$\begin{align} 0.95 & = P\left(\bar X - 1.96 \times 0.5 \le \mu \le \bar X + 1.96 \times 0.5\right) \\[6pt] & = P \left( \bar X - 0.98 \le \mu \le \bar X + 0.98 \right). \end{align}$

This might be interpreted as: with probability 0.95 we will find a confidence interval in which we will meet the parameter μ between the stochastic endpoints $\bar X - 0.98$ and $\bar X + 0.98$. This does not mean that there is 0.95 probability of meeting the parameter μ in the interval obtained by using the currently computed value of the sample mean, $(\bar{x}-0.98,\, \bar{x}+0.98)$. Instead, every time the measurements are repeated, there will be another value for the mean $\bar X$ of the sample. In 95% of the cases μ will be between the endpoints calculated from this mean, but in 5% of the cases it will not be. The actual confidence interval is calculated by entering the measured masses into the formula. Our 0.95 confidence interval becomes:

$(\bar x - 0.98,\ \bar x + 0.98) = (250.2 - 0.98,\ 250.2 + 0.98) = (249.22,\ 251.18).$

In other words, the 95% confidence interval is between the lower endpoint 249.22 g and the upper endpoint 251.18 g. As the desired value 250 of μ is within the resulting confidence interval, there is no reason to believe the machine is wrongly calibrated.

The calculated interval has fixed endpoints, where μ might be in between (or not). Thus this event has probability either 0 or 1. One cannot say: "with probability (1 − α) the parameter μ lies in the confidence interval." One only knows that by repetition, in 100(1 − α)% of the cases μ will be in the calculated interval; in 100α% of the cases it will not be, and unfortunately one does not know in which of the cases this happens. That is why one can say: "with confidence level 100(1 − α)%, μ lies in the confidence interval." The margin of error here is 0.98: the half-width of the interval, i.e. the difference between the sample mean and either endpoint.

The figure on the right shows 50 realizations of a confidence interval for a given population mean μ (the vertical line segments represent the individual intervals). If we randomly choose one realization, the probability is 95% we end up having chosen an interval that contains the parameter; however, we may be unlucky and have picked the wrong one. We will never know; we are stuck with our interval.

### Theoretical example

Suppose {X1, ..., Xn} is an independent sample from a normally distributed population with (parameters) mean μ and variance σ². Let

$\bar{X}=(X_1+\cdots+X_n)/n\,,$

$S^2=\frac{1}{n-1}\sum_{i=1}^n\left(X_i-\bar{X}\,\right)^2,$

where $\bar X$ is the sample mean and $S^2$ is the sample variance. Then

$T=\frac{\bar{X}-\mu}{S/\sqrt{n}}$

has a Student's t-distribution with n − 1 degrees of freedom.[11] Note that the distribution of T does not depend on the values of the unobservable parameters μ and σ²; i.e., it is a pivotal quantity. Suppose we wanted to calculate a 95% confidence interval for μ. Then, denoting c as the 97.5th percentile of this distribution,

$\Pr\left(-c\le T \le c\right)=0.95.$

(Note: "97.5th" and "0.95" are correct in the preceding expressions. There is a 2.5% chance that T will be less than −c and a 2.5% chance that it will be larger than +c. Thus, the probability that T will be between −c and +c is 95%.)

Consequently

$\Pr\left(\bar{X} - \frac{cS}{\sqrt{n}} \le \mu \le \bar{X} + \frac{cS}{\sqrt{n}} \right)=0.95$

and we have a theoretical (stochastic) 95% confidence interval for μ. After observing the sample we find values $\bar x$ for $\bar X$ and s for S, from which we compute the confidence interval

$\left[ \bar{x} - \frac{cs}{\sqrt{n}}, \bar{x} + \frac{cs}{\sqrt{n}} \right],$

an interval with fixed numbers as endpoints, of which we can no longer say there is a certain probability it contains the parameter μ; either μ is in this interval or it isn't.
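The arithmetic of both examples is easy to check numerically. Below is a small sketch (my own, not part of the article) that reproduces the z-based interval for the cups data, and then computes the t-based interval of the theoretical example on a simulated sample; the simulated weights are an illustrative assumption.

```python
import numpy as np
from scipy import stats

# Known-sigma z-interval from the practical example:
# n = 25 cups, sigma = 2.5 g, observed sample mean 250.2 g.
n, sigma, xbar = 25, 2.5, 250.2
z = stats.norm.ppf(0.975)            # 1.96
half = z * sigma / np.sqrt(n)        # 0.98
print(xbar - half, xbar + half)      # (249.22, 251.18)

# t-interval from the theoretical example, for a hypothetical sample in
# which sigma is unknown and estimated by the sample standard deviation s.
rng = np.random.default_rng(1)
x = rng.normal(250.0, 2.5, size=n)   # simulated cup weights (assumption)
c = stats.t.ppf(0.975, df=n - 1)     # 97.5th percentile of t(n-1)
s = x.std(ddof=1)
half_t = c * s / np.sqrt(n)
print(x.mean() - half_t, x.mean() + half_t)
```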
## Alternatives and critiques

Main article: Interval estimation See also: Prediction interval, Tolerance interval, and Credible interval

Confidence intervals are one method of interval estimation, and the most widely used in frequentist statistics. An analogous concept in Bayesian statistics is the credible interval, while an alternative frequentist method is that of prediction intervals which, rather than estimating parameters, estimate the outcome of future samples. For other approaches to expressing uncertainty using intervals, see interval estimation. There is disagreement about which of these methods produces the most useful results: the mathematics of the computations is rarely in question (confidence intervals being based on sampling distributions, credible intervals being based on Bayes' theorem), but the application of these methods, and the utility and interpretation of the produced statistics, is debated. Users of Bayesian methods, if they produced an interval estimate, would, in contrast to users of confidence intervals, want to say "My degree of belief that the parameter is in fact in this interval is 90%,"[12] while users of prediction intervals would instead say "I predict that the next sample will fall in this interval 90% of the time."[citation needed]

Confidence intervals are an expression of probability and are subject to the normal laws of probability. If several statistics are presented with confidence intervals, each calculated separately on the assumption of independence, that assumption must be honoured or the calculations will be rendered invalid. For example, if a researcher generates a set of statistics with intervals and selects some of them as significant, the act of selecting invalidates the calculations used to generate the intervals.

### Comparison to prediction intervals

A prediction interval for a random variable is defined similarly to a confidence interval for a statistical parameter. Consider an additional random variable Y which may or may not be statistically dependent on the random sample X. Then (u(X), v(X)) provides a prediction interval for the as-yet-unobserved value y of Y if ${\Pr}_{\theta,\phi}(u(X) < Y < v(X)) = \gamma\text{ for all }(\theta,\phi).\,$ Here Prθ,φ indicates the joint probability distribution of the random variables (X, Y), where this distribution depends on the statistical parameters (θ, φ).

### Comparison to Bayesian interval estimates

A Bayesian interval estimate is called a credible interval. Using much of the same notation as above, the definition of a credible interval for the unknown true value of θ is, for a given γ,[13] $\Pr(u(x)<\Theta<v(x) | X = x)=\gamma. \,$ Here Θ is used to emphasize that the unknown value of θ is being treated as a random variable. The definitions of the two types of intervals may be compared as follows.

• The definition of a confidence interval involves probabilities calculated from the distribution of X for given (θ, φ) (or conditional on these values) and the condition needs to hold for all values of (θ, φ).
• The definition of a credible interval involves probabilities calculated from the distribution of Θ conditional on the observed values of X = x and marginalised (or averaged) over the values of Φ, where this last quantity is the random variable corresponding to the uncertainty about the nuisance parameters in φ.

Note that the treatment of the nuisance parameters above is often omitted from discussions comparing confidence and credible intervals, but it is markedly different between the two cases.
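To make the credible-interval definition concrete, here is a minimal sketch under an assumed conjugate model: Bernoulli data with a uniform Beta(1, 1) prior, with hypothetical counts.

```python
from scipy import stats

k, n = 7, 20                              # hypothetical: 7 successes in 20 trials
posterior = stats.beta(1 + k, 1 + n - k)  # Beta posterior under a Beta(1, 1) prior
# Equal-tailed 95% credible interval for the success probability:
print(posterior.ppf(0.025), posterior.ppf(0.975))
```

Unlike a confidence interval, the probability statement attached to this interval is conditional on the observed data and refers to the parameter itself.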
In some simple standard cases, the intervals produced as confidence and credible intervals from the same data set can be identical. They are very different if informative prior information is included in the Bayesian analysis, and may be very different for some parts of the space of possible data even if the Bayesian prior is relatively uninformative.

### Confidence intervals for proportions and related quantities

An approximate confidence interval for a population mean can be constructed for random variables that are not normally distributed in the population, relying on the central limit theorem, if the sample sizes and counts are big enough. The formulae are identical to the case above (where the sample mean is actually normally distributed about the population mean). The approximation will be quite good with only a few dozen observations in the sample if the probability distribution of the random variable is not too different from the normal distribution (e.g. its cumulative distribution function does not have any discontinuities and its skewness is moderate).

One type of sample mean is the mean of an indicator variable, which takes on the value 1 for true and the value 0 for false. The mean of such a variable is equal to the proportion that have the variable equal to one (both in the population and in any sample). This is a useful property of indicator variables, especially for hypothesis testing. To apply the central limit theorem, one must use a large enough sample. A rough rule of thumb is that one should see at least 5 cases in which the indicator is 1 and at least 5 in which it is 0. Confidence intervals constructed using the above formulae may include negative numbers or numbers greater than 1, but proportions obviously cannot be negative or exceed 1. Additionally, sample proportions can only take on a finite number of values, so the central limit theorem and the normal distribution are not the best tools for building a confidence interval. See "Binomial proportion confidence interval" for better methods which are specific to this case.
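A sketch of the normal-approximation (Wald) interval for a proportion described above; the counts are hypothetical and satisfy the at-least-5 rule of thumb. For proportions near 0 or 1 this construction can indeed spill outside [0, 1], which is one reason the binomial-specific methods are preferred:

```python
import math
from scipy.stats import norm

k, n = 40, 150                            # hypothetical: 40 "successes" out of 150
p_hat = k / n
se = math.sqrt(p_hat * (1 - p_hat) / n)   # estimated standard error of the proportion
z = norm.ppf(0.975)
print(p_hat - z * se, p_hat + z * se)     # approximate 95% interval for the proportion
```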
## References

1. Cox D.R., Hinkley D.V. (1974) Theoretical Statistics, Chapman & Hall, p49, p209
2. Kendall, M.G. and Stuart, D.G. (1973) The Advanced Theory of Statistics. Vol 2: Inference and Relationship, Griffin, London. Section 20.4
3. Neyman, J. (1937). "Outline of a Theory of Statistical Estimation Based on the Classical Theory of Probability". Philosophical Transactions of the Royal Society of London A 236: 333–380.
4. Zar, J.H. (1984) Biostatistical Analysis. Prentice Hall International, New Jersey. pp 43–45
5. Cox D.R., Hinkley D.V. (1974) Theoretical Statistics, Chapman & Hall, p214, 225, 233
6. Seidenfeld, T. (1979) Philosophical Problems of Statistical Inference: Learning from R.A. Fisher, Springer-Verlag
7. Cox D.R., Hinkley D.V. (1974) Theoretical Statistics, Chapman & Hall, Section 7.2(iii)
8. Goldstein, H.; Healey, M.J.R. (1995). "The graphical presentation of a collection of means". Journal of the Royal Statistical Society 158: 175–77.
9. Wolfe R, Hanley J (Jan 2002). "If we're so different, why do we keep overlapping? When 1 plus 1 doesn't make 2". CMAJ 166 (1): 65–6. PMC 99228. PMID 11800251.
10. George G. Roussas (1997) A Course in Mathematical Statistics, 2nd Edition, Academic Press, p397
11. Rees, D.G. (2001) Essential Statistics, 4th Edition, Chapman and Hall/CRC. ISBN 1-58488-007-4 (Section 9.5)
12. Cox D.R., Hinkley D.V. (1974) Theoretical Statistics, Chapman & Hall, p390
13. Bernardo JE, Smith, Adrian (2000). Bayesian theory. New York: Wiley. p. 259. ISBN 0-471-49464-X.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 25, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8787822127342224, "perplexity_flag": "head"}
http://mathoverflow.net/questions/16031?sort=votes
## Suppose C and D are Morita equivalent fusion categories, can you say anything about R I: C->Z(C)=Z(D)->D?

If C and D are (higher) Morita equivalent fusion categories, then the Drinfel'd centers Z(C) and Z(D) are braided equivalent. Given any fusion category C we have a restriction functor Z(C)->C (by forgetting the "half-braiding"), and adjoint to that an induction functor C->Z(C). If C and D are Morita equivalent then you can compose the induction and restriction to get a functor C->Z(C)=Z(D)->D. (Actually, now that I think about it, you may need to fix the Morita equivalence in order to actually identify Z(C) and Z(D)?) Is there anything nice one can say about this composition? If C=D then Etingof-Nikshych-Ostrik says that $R \circ I(V) = \sum_X X \otimes V \otimes X^*$. The reason that I ask is that Izumi calculated the induction and restriction graphs for the Drinfel'd center of one of the even parts of the Haagerup subfactor, and I would like to understand the same picture for the other even part. - What does the 'higher' mean in this context? – Mariano Suárez-Alvarez Feb 22 2010 at 5:59 Well, "Morita equivalence" concerns algebras and invertible bimodules. Here I was referring to tensor categories and invertible bimodule categories over them. This is one categorical level up, and so sometimes it's called a "higher Morita equivalence" (coined by Mueger, I think?) and sometimes just called a "Morita equivalence." – Noah Snyder Feb 22 2010 at 6:49

## 1 Answer

I think the answer should depend on the particular choice of Morita equivalence between C and D. So let M be the (bi)module category connecting C and D. My first guess would be that $R\circ I(V)=\sum_X \underline{\operatorname{Hom}}(X,V\otimes X)$ (sum over simple objects of M; $\underline{\operatorname{Hom}}$ is the internal $\operatorname{Hom}$). -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9128605127334595, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/193284-application-derivative.html
# Thread:

1. ## Application of derivative

Hi guys, I have a question that needs help before my exams next week. The surface area of a cube with side x cm is increasing at a rate of 16 cm²/s. When the total surface area of the cube reaches 216 cm²: a) show that the volume of the cube is 216 cm³; b) find the rate at which its volume is increasing. I completed part a), but for part b) they want dV/dt, so I figured that since dA/dt is given I need to find dV/dA. But I am stuck! Could anyone explain this to me? THANKS IN ADVANCE!

2. ## Re: Application of derivative

Let $A$ be the surface area and $s$ be the side of the cube. You have $\frac{dA}{dt}=16$. So $A=16t+C$. Taking $C=0$ (which just fixes the origin of the time axis), $A=16t$. $\begin{align*} A &=16t \\ \implies 6s^2 &=16t \\ \implies s^2 &=\frac{8}{3}t \\ \implies s^3 &=\left( \frac{8}{3}t \right)^{\frac{3}{2}} \\ \implies V &=\left( \frac{8}{3}t \right)^{\frac{3}{2}} \\ \implies \frac{dV}{dt} &=\frac{d\left( \frac{8}{3}t \right)^{\frac{3}{2}}}{dt} \\ \implies \frac{dV}{dt}&= \left(\frac{8\sqrt{2}}{\sqrt{3}} \right)\sqrt{t} \end{align*}$

3. ## Re: Application of derivative

$A = 6x^2$ $\frac{dA}{dt} = 12x \cdot \frac{dx}{dt}$ when $A = 216$ , $x = 6$ $16 = 12(6) \cdot \frac{dx}{dt}$ $\frac{dx}{dt} = \frac{2}{9}$ $V = x^3$ $\frac{dV}{dt} = 3x^2 \cdot \frac{dx}{dt}$ finish it
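For completeness, carrying skeeter's outline to the end (the step left to the reader):

$$\frac{dV}{dt} = 3x^2\,\frac{dx}{dt} = 3(6)^2\cdot\frac{2}{9} = 24\ \text{cm}^3/\text{s}.$$

This agrees with reply 2: when $A = 216$ we have $t = 216/16 = 13.5$, and $\frac{8\sqrt{2}}{\sqrt{3}}\sqrt{13.5} = 8\sqrt{\frac{2\cdot 13.5}{3}} = 8\sqrt{9} = 24.$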
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 15, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9011811017990112, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/36667/how-can-i-find-all-increasing-sequences-a-i-i-1-infty-such-that-dx
# How can I find all increasing sequences $\{a_i\}_{i=1}^{\infty}$ such that $d(x_1+x_2+\cdots+x_k)=d(a_{x_{1}}+a_{x_{2}}+\cdots + a_{x_{k}})$?

How can one find all increasing sequences $\{a_i\}_{i=1}^{\infty}$ such that $$d(x_1+x_2+\cdots+x_k)=d(a_{x_{1}}+a_{x_{2}}+\cdots + a_{x_{k}})$$ holds for all $k$-tuples $(x_1,x_2,\cdots,x_k)$ of positive integers, where $d(n)$ is the number of divisors of a positive integer $n$, and $k \geq 3$ is a fixed integer? A special case of this problem was given in this year's Iran Olympiad: Find all increasing sequences $a_1,a_2,a_3,...$ of natural numbers such that for each $i,j\in \mathbb N$, the numbers of divisors of $i+j$ and $a_i+a_j$ are equal. - Do you have even one example? (other than $1,2,3,4,\dots$) – Gerry Myerson May 4 '11 at 2:12 1 – Amir Hossein May 4 '11 at 9:40 Well then maybe that would have been a more honest way of stating the question. Why not make it as easy as possible for people to answer, instead of making it hard? – Gerry Myerson May 4 '11 at 13:22 3 Why not add a description of your proof for the Olympiad problem and say why you think it does not generalize? The more information you present, the better. – Matthew Conroy May 4 '11 at 20:06 3 @Amir, sure, you can ask questions when you don't know the answers, in fact, you're not supposed to ask questions when you do know the answers. But (I think) the right way to ask this question would be to state the Olympiad result, present the general question, and then ask whether the general question has the same answer as the Olympiad question. – Gerry Myerson May 5 '11 at 1:18

## 1 Answer

Well, for $k$ prime you can use the same argument as for $k = 2$: look at the indices $i_p = k^{p-2}$ for $p$ prime. Taking all $x_j = i_p$, the left-hand side is $d(k\cdot i_p)=d(k^{p-1})=p$, so $k \cdot a_{i_p}$ must have exactly $p$ divisors and must therefore be of the form $q^{p-1}$ for some prime $q$; but since it is divisible by $k$, $q=k$. So we have an infinite sequence of indices for which $a_n = n$, and we can use the fact that the sequence is increasing to prove that $a_n = n$ for all $n$. Non-prime $k$ is a little trickier, but I think you can do something similar. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 26, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9529391527175903, "perplexity_flag": "head"}
http://mathoverflow.net/questions/41653/a-special-residually-finite-group/41659
## A special residually finite group

Is there an example of a finitely generated (infinite) residually finite group $\Gamma$ for which every linear representation of $\Gamma$ has finite image? - By linear representation, do you mean finite-dimensional linear representation? – Yemon Choi Oct 10 2010 at 6:20 Yes, and over a field of characteristic zero. – anon Oct 10 2010 at 6:31 I erased my answer, since it had the same properties as the Grigorchuk group (and is based on unpublished work), but I wanted to point out that Mark Sapir commented that the original construction of an infinite torsion residually finite group was due to Golod. en.wikipedia.org/wiki/Periodic_group – Agol Oct 11 2010 at 17:42

## 1 Answer

A group $G$ is just infinite if it is infinite but every proper quotient is finite. Clearly a just infinite group which is not linear has the property that its image under any linear representation is finite. Thus any group which is finitely generated, residually finite, not linear and just infinite is an example of what you want: for instance, the Grigorchuk group. - 3 It's not hard to prove that the Grigorchuk grp is not linear. For instance, the Tits alternative says that every fg linear grp G either contains a solvable subgroup of finite index or contains a nonabelian free subgroup. This implies that the "growth function" f(n) of G (here f(n) is the number of elements of G of length at most n in a fixed genset) grows either polynomially (if G has a solvable subgrp of finite index) or exponentially (if G contains a nonabelian free subgrp). However, the first major thm about the Grigorchuk grp is that its growth fcn is superpolynomial but subexponential. – Andy Putman Oct 10 2010 at 20:54 1 By the way, I highly recommend reading the final chapter in Pierre de la Harpe's book "Topics in Geometric Group Theory", which is entirely devoted to the Grigorchuk group. It serves as a sort of "universal counterexample" to conjectures in geometric group theory. – Andy Putman Oct 10 2010 at 21:09 1 @Andy: solvable groups can have exponential growth. For example ${\mathbb Z}\wr {\mathbb Z}$ is solvable of class 2 and has exponential growth because it contains a free non-cyclic subsemigroup. – Mark Sapir Oct 10 2010 at 22:54 1 @Pete, thanks for the far better rewriting of my answer. – Mustafa Gokhan Benli Oct 10 2010 at 23:59 1 Andy - with a 'virtually' in front of 'nilpotent', that's the Milnor--Wolf Theorem. – HW Oct 11 2010 at 2:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9185716509819031, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/11012/characterizing-free-modules-by-exterior-power?answertab=votes
# characterizing free modules by exterior power

Assume $M$ is a (finitely generated) $A$-module such that $\wedge^n M$ is free of rank $1$ for some $n \geq 1$. Does it follow that $M$ is free of rank $n$? Or at least locally free of rank $n$? In general, is there a way of characterizing locally free modules via exterior powers and tensor products? -

## 3 Answers

This isn't a complete answer, but just an idea about your first question. Let $P$ be a rank one projective module over the commutative ring $A$, and $P^*=\mathrm{Hom}_A(P,A)$ be its dual. Then for $M=P\oplus P^*$, $\bigwedge^2 M\cong P\otimes_A P^*\cong A$ is free. There must be some $P$ for which $M$ isn't free, but I can't think of any off the top of my head. If $A$ is a Dedekind domain then $M$ is free. Taking $A=C^\infty(N)$ where $N$ is a smooth manifold, $P$ would correspond to a line bundle on $N$. If $M$ is free then the direct sum of this line bundle and its dual would be trivial. Surely there are manifolds and line bundles for which this isn't true? - 1 The direct sum of a real line bundle and its dual is always trivial, because of the following two facts. Fact A: a line bundle is isomorphic to its dual line bundle, by choosing a metric on it. Fact B: line bundles have 2-torsion because they are classified by Stiefel-Whitney classes, which are 2-torsion since they live in $H^1(M, \mathbb Z /(2))$. – evgeniamerkulova Nov 20 '10 at 14:00 Thanks Robin. I'm also sure that there is such an example. However, your $M$ is locally free of rank $2$. So this does not answer yet my 2nd question, which is probably more difficult ... – Martin Brandenburg Nov 20 '10 at 14:32

The following variation of Robin Chapman's idea works in algebraic geometry. Take $X$ to be an elliptic curve minus a point: it is an affine variety. For a general point $P \in X$ take $L=\mathcal O(P)$. Then $M=L\oplus L^*=\mathcal O(P) \oplus \mathcal O(-P)$ has the properties: A) $\Lambda ^2 (M)=\mathcal O$ is free of rank one. B) $M$ is not free because it has no sections except zero (since $\mathcal O(P)$ and $\mathcal O(-P)$ have no sections except zero). -

The conditions "locally free", "finitely generated projective" and "dualizable" are equivalent, and the latter one can be formulated in terms of tensor products (namely of the "unit" $A \to M \otimes M^*$ and the "counit" $M^* \otimes M \to A$ satisfying the two triangular identities). -
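Spelled out, the triangular identities mentioned in the last answer read as follows (writing the unit as a coevaluation $\mathrm{coev}\colon A \to M \otimes M^*$ and the counit as an evaluation $\mathrm{ev}\colon M^* \otimes M \to A$, and suppressing the associativity and unit isomorphisms of the tensor product):

$$(\mathrm{id}_M \otimes \mathrm{ev})\circ(\mathrm{coev} \otimes \mathrm{id}_M) = \mathrm{id}_M, \qquad (\mathrm{ev} \otimes \mathrm{id}_{M^*})\circ(\mathrm{id}_{M^*} \otimes \mathrm{coev}) = \mathrm{id}_{M^*}.$$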
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 36, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9483914375305176, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/275638/transformation-of-symplectic-structure-by-a-matrix?answertab=active
# transformation of symplectic structure by a matrix

Suppose that in the canonical symplectic basis $e_1,e_2,f_1,f_2$ we have $$\Omega=pf_1^*\wedge f_2^*+qe_1^*\wedge e_2^*+r(e_1^*\wedge f_2^*+e_2^*\wedge f_1^*)+s(e_1^*\wedge f_1^*-e_2^*\wedge f_2^*)$$ Let $A_t$ be a transformation of the symplectic structure that depends on the real parameter $t$ and in the basis $e_1,e_2, f_1, f_2$ has the following form: $\begin{pmatrix} 1& 0& 0& t\\ 0& 1& t& 0\\ 0& 0& 1& t\\ 0& 0& 0& 1\\ \end{pmatrix}$ So how does $A_t$ act on $\Omega$? In fact, how can we find the missing entries in $(p,q,r,s)\overset{A_t}{\rightarrow}(?,?,?,?)$? -
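One mechanical route, not an answer from the thread but a sketch of how one could compute this: represent $\Omega$ by its antisymmetric Gram matrix $M$ (so $\Omega(u,v)=u^\top M v$) and compute $A_t^\top M A_t$. The sketch assumes the convention $(\alpha\wedge\beta)(u,v)=\alpha(u)\beta(v)-\alpha(v)\beta(u)$ and reads "$A_t$ acts on $\Omega$" as the pullback $(A_t^*\Omega)(u,v)=\Omega(A_t u, A_t v)$; under a change-of-basis reading one would use $A_t^{-1}$ instead.

```python
import sympy as sp

p, q, r, s, t = sp.symbols('p q r s t')

# Gram matrix of Omega in the ordered basis (e1, e2, f1, f2): the coefficient c of
# a^* ∧ b^* contributes +c to M[a, b] and -c to M[b, a].
M = sp.Matrix([
    [ 0,  q,  s,  r],
    [-q,  0,  r, -s],
    [-s, -r,  0,  p],
    [-r,  s, -p,  0],
])

A = sp.Matrix([
    [1, 0, 0, t],
    [0, 1, t, 0],
    [0, 0, 1, t],
    [0, 0, 0, 1],
])

Mt = sp.expand(A.T * M * A)  # Gram matrix of the pulled-back form
print(Mt)                    # the six entries above the diagonal are the new coefficients
```

Reading off the entries above the diagonal of `Mt` gives the transformed coefficients; under these conventions the result need not stay inside the four-parameter $(p,q,r,s)$ family for every $t$, so it is worth double-checking which convention the problem intends.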
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8949316143989563, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2009/10/26/tensor-and-symmetric-algebras/?like=1&_wpnonce=122935e828
# The Unapologetic Mathematician

## Tensor and Symmetric Algebras

There are a few graded algebras we can construct with our symmetric and antisymmetric tensors, and at least one of them will be useful. Remember that we also have symmetric and alternating multilinear functionals in play, so the same constructions will give rise to even more algebras.

First and easiest we have the tensor algebra on $V$. This just takes all the tensor powers of $V$ and direct sums them up $\displaystyle T(V)=\bigoplus\limits_{n=0}^\infty V^{\otimes n}$ This gives us a big vector space (an infinite-dimensional one, in fact) but it's not an algebra until we define a bilinear multiplication. For this one, we'll just define the multiplication by the tensor product itself. That is, if $\mu\in V^{\otimes m}$ and $\nu\in V^{\otimes n}$ are two tensors, their product will be $\mu\otimes\nu\in V^{\otimes(m+n)}$, which is by definition bilinear. This algebra has an obvious grading by the number of tensorands. This is exactly the free algebra on a vector space, and it's just like we built the free ring on an abelian group. If we perform the construction on the dual space $V^*$ we get an algebra of functions. If $V$ has dimension $d$, then this is isomorphic to the algebra $T(V^*)\cong\mathbb{F}\{X^1,\dots,X^d\}$ of noncommutative polynomials in $d$ variables.

Next we consider the symmetric algebra on $V$, which consists of the direct sum of all the spaces of symmetric tensors $\displaystyle S(V)=\bigoplus\limits_{n=0}^\infty S^n(V)$ with a grading again given by the number of tensorands. Now, despite the fact that each $S^n(V)$ is a subspace of the tensor space $V^{\otimes n}$, this is not a subalgebra of $T(V)$. This is because the tensor product of two symmetric tensors may well not be symmetric itself. Instead, we will take the tensor product of $\mu\in S^m(V)$ and $\nu\in S^n(V)$, and then symmetrize it, to give $\mu\odot\nu\in S^{m+n}(V)$. This will be bilinear, and it will work with our choice of grading, but will it be associative?

If we have three symmetric tensors $\lambda\in S^l(V)$, $\mu\in S^m(V)$, and $\nu\in S^n(V)$, then we could multiply them by $(\lambda\odot\mu)\odot\nu$ or by $\lambda\odot(\mu\odot\nu)$. To get the first of these, we tensor $\lambda$ and $\mu$, symmetrize the result, then tensor with $\nu$ and symmetrize that. But since symmetrizing $\lambda\otimes\mu$ consists of adding up a number of shuffled versions of this tensor, we could tensor with $\nu$ first and then symmetrize only the first $l+m$ tensorands, before finally symmetrizing the entire thing. I assert that these two symmetrizations (the first one on only part of the whole term) are equivalent to simply symmetrizing the whole thing. Similarly, symmetrizing the last $m+n$ tensorands followed by symmetrizing the whole thing is equivalent to just symmetrizing the whole thing. And so both orders of multiplication are the same, and the operation $\odot$ indeed defines an associative multiplication.

To see this, remember that symmetrizing the whole term involves a sum over the symmetric group $S_{l+m+n}$, while symmetrizing over the beginning involves a sum over the subgroup $S_{l+m}\subseteq S_{l+m+n}$ consisting of those permutations acting on only the first $l+m$ places. This will be key to our proof. We consider the collection of left cosets of $S_{l+m}$ within $S_{l+m+n}$.
For each one, we can pick a representative element (this is no trouble since there are only a finite number of cosets with a finite number of elements each) and collect these representatives into a set $\Gamma$. Then the whole group $S_{l+m+n}$ is the disjoint union $\displaystyle S_{l+m+n}=\biguplus\limits_{\gamma\in\Gamma}\gamma S_{l+m}$ This will let us rewrite the symmetrizer in such a way as to make our point. So let's write down the product of the two group algebra elements we're interested in $\displaystyle\begin{aligned}\left(\frac{1}{(l+m+n)!}\sum\limits_{\pi\in S_{l+m+n}}\pi\right)\left(\frac{1}{(l+m)!}\sum\limits_{\hat{\pi}\in S_{l+m}}\hat{\pi}\right)&=\left(\frac{1}{(l+m+n)!}\sum\limits_{\gamma\in\Gamma}\sum\limits_{\pi\in\gamma S_{l+m}}\pi\right)\left(\frac{1}{(l+m)!}\sum\limits_{\hat{\pi}\in S_{l+m}}\hat{\pi}\right)\\&=\left(\frac{1}{(l+m+n)!}\sum\limits_{\gamma\in\Gamma}\sum\limits_{\pi\in S_{l+m}}\gamma\pi\right)\left(\frac{1}{(l+m)!}\sum\limits_{\hat{\pi}\in S_{l+m}}\hat{\pi}\right)\\&=\left(\frac{1}{(l+m+n)!}\left(\sum\limits_{\gamma\in\Gamma}\gamma\right)\left(\sum\limits_{\pi\in S_{l+m}}\pi\right)\right)\left(\frac{1}{(l+m)!}\sum\limits_{\hat{\pi}\in S_{l+m}}\hat{\pi}\right)\\&=\left(\frac{1}{(l+m+n)!}\sum\limits_{\gamma\in\Gamma}\gamma\right)\left(\frac{1}{(l+m)!}\sum\limits_{\pi\in S_{l+m}}\sum\limits_{\hat{\pi}\in S_{l+m}}\pi\hat{\pi}\right)\\&=\frac{1}{(l+m+n)!}\left(\sum\limits_{\gamma\in\Gamma}\gamma\right)\left(\sum\limits_{\pi\in S_{l+m}}\pi\right)\\&=\frac{1}{(l+m+n)!}\sum\limits_{\pi\in S_{l+m+n}}\pi\end{aligned}$ Essentially, because the symmetrization of the whole term subsumes symmetrization of the first $l+m$ tensorands, the smaller symmetrization can be folded in, and the resulting sum counts the whole sum exactly $(l+m)!$ times, which cancels out the normalization factor. And this proves that the multiplication is, indeed, associative.

This multiplication is also commutative. Indeed, given $\mu\in S^m(V)$ and $\nu\in S^n(V)$, we can let $\tau_{m,n}$ be the permutation which moves the last $n$ slots to the beginning of the term and the first $m$ slots to the end. Then we write $\displaystyle\begin{aligned}\mu\odot\nu&=\left(\frac{1}{(m+n)!}\sum\limits_{\pi\in S_{m+n}}\pi\right)(\mu\otimes\nu)\\&=\left(\frac{1}{(m+n)!}\sum\limits_{\pi\in S_{m+n}}\pi\tau_{m,n}\right)(\mu\otimes\nu)\\&=\left(\frac{1}{(m+n)!}\sum\limits_{\pi\in S_{m+n}}\pi\right)\left(\tau_{m,n}(\mu\otimes\nu)\right)\\&=\left(\frac{1}{(m+n)!}\sum\limits_{\pi\in S_{m+n}}\pi\right)(\nu\otimes\mu)\\&=\nu\odot\mu\end{aligned}$ because right-multiplication by $\tau_{m,n}$ just shuffles around the order of the sum.

The symmetric algebra $S(V)$ is the free commutative algebra on the vector space $V$. And so it should be no surprise that the symmetric algebra on the dual space is isomorphic to the algebra of polynomial functions on $V$, where the grading is the total degree of a monomial. If $V$ has finite dimension $d$, we have $S(V^*)\cong\mathbb{F}[X^1,\dots,X^d]$.
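As a concrete illustration of the symmetrizer and of the product $\odot$, here is a minimal numerical sketch (assuming numpy; the random symmetric matrices stand in for elements of $S^2(V)$ with $\dim V = 2$):

```python
import math
import itertools
import numpy as np

def symmetrize(T):
    """Apply the symmetrizer: average T over all permutations of its tensorands."""
    n = T.ndim
    total = sum(np.transpose(T, perm) for perm in itertools.permutations(range(n)))
    return total / math.factorial(n)

def sym_product(mu, nu):
    """The symmetric product mu ⊙ nu: tensor first, then symmetrize."""
    return symmetrize(np.tensordot(mu, nu, axes=0))

# Commutativity check on random symmetric 2-tensors:
a = np.random.rand(2, 2); a = a + a.T
b = np.random.rand(2, 2); b = b + b.T
assert np.allclose(sym_product(a, b), sym_product(b, a))
```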
Posted by John Armstrong | Algebra, Linear Algebra

## 7 Comments »

1. Those direct sums should start at n=0 if you want the algebra to have an identity. Comment by | October 26, 2009 | Reply
2. Thanks for catching that. Comment by | October 26, 2009 | Reply
3. [...] Let's continue yesterday's discussion of algebras we can construct from a vector space. Today, we consider the "exterior [...] Pingback by | October 27, 2009 | Reply
4. [...] of Tensor Algebras The three constructions we've just shown — the tensor, symmetric tensor, and exterior algebras — were all asserted to be the "free" constructions. This [...] Pingback by | October 28, 2009 | Reply
5. [...] going on that I learned from Todd Trimble, which is that "the exterior algebra is the symmetric algebra of a purely odd supervector [...] Pingback by | November 9, 2009 | Reply
6. Is it possible to provide an explicit example as given in http://unapologetic.wordpress.com/2008/12/22/symmetric-tensors/ ? Suppose we have two symmetric tensors $a = \begin{pmatrix} a_1 & b_1 \\ b_1 & c_1 \end{pmatrix}$ and $b = \begin{pmatrix} a_2 & b_2 \\ b_2 & c_2 \end{pmatrix}$ (written in matrix form). What would we get on symmetrizing $a \otimes b$? Comment by Datta | March 12, 2012 | Reply
7. Sorry, in the above I meant the symmetric tensors a = [a_1, b_1, b_1, c_1] and b = [a_2, b_2, b_2, c_2], which can also be written as 2 by 2 symmetric matrices. Comment by Datta | March 12, 2012 | Reply
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 56, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9196535348892212, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/tagged/definite-integral+integration
# Tagged Questions 0answers 61 views ### Why Riemann integration is needed? [closed] What is the necessity of the notion of "Riemann Integration" ? Why is normal definite integral is not good enough ? 0answers 51 views ### Integral involving exponential, power and Bessel function Is there any formula for calculating the following definite integral, including exponential and Bessel function? $$\int_0^{a}x^{-1} e^{x}I_2(bx)dx$$ Thanks in advance 2answers 58 views ### Explanation for these transformations of integrals I've recently found the following transformations: $$\int _{a} ^{\infty} \frac{\ln x}{x^2 + a^2}\,dx = \int _{a} ^{0} \frac{\ln x}{x^2 + a^2}\,dx$$ \int _{0} ^{\pi/2} ... 2answers 79 views ### Find the following integral (most likely substitution) $$\int_0^1 \frac{\ln(1+x^2)}{1+x^2} \ dx$$ I tried letting $x^2=\tan \theta$ but it didn't work. What should I do? Please don't give full solution, just a hint and I will continue. 2answers 69 views ### Definite trig integral [duplicate] How do I evaluate: $$\int_{0}^{\pi} \sin (\sin x) \ dx$$ I have seen a similar question here but can't find it. 3answers 94 views ### How to prove that $\lim\limits_{n\to\infty}\int\limits _{a}^{b}\sin\left(nt\right)f\left(t\right)dt=0\text { ? }$ Let $f:\left[a,b\right]\to\mathbb{R}$ be a function that is derivative so that $f'$ is continuous then $$\lim_{n\to\infty}\int\limits _{a}^{b}\sin\left(nt\right)f\left(t\right)dt=0$$ My attempt: I ... 3answers 81 views ### Integrating a school homework question. Show that $$\int_0^1\frac{4x-5}{\sqrt{3+2x-x^2}}dx = \frac{a\sqrt{3}+b-\pi}{6},$$ where $a$ and $b$ are constants to be found. Answer is: $$\frac{24\sqrt3-48-\pi}{6}$$ Thank you in advance! 1answer 75 views ### Evaluating the following integral: I am trying to evaluate this integral: $$\int_{0}^{\infty }\frac{\cos(x)}{1+x^{2}}dx$$ My attempt: \int_0^{\infty}\frac{\cos(x)}{(x+i)(x-i)}dx=1/2 \int_{-\infty}^{\infty} ... 3answers 84 views ### Is it possible to evaluate this integral? [duplicate] Is it possible to evaluate this integral: $$\int_{0}^{\frac{\pi }{2}}\ln(\sin 2x){\rm d}x$$ 3answers 146 views ### Tricky elementary integral $$\int_{0}^{\frac{\pi }{2}}x\cot(x)dx$$ I tried integration by parts and got $\frac{1}{2}\int_{0}^{\frac{\pi }{2}}x^{2} \csc^{2}x dx$ which doesn't help at all. I don't really know what to do. Any ... 0answers 48 views ### Integral of product of normal cdf and pdf What do you think, is there a closed form solution of the following Integral $\textbf{ }$ $$\int_{-\infty}^{a-y}n(x)\, N(b-2y-x)\, dx,$$ where $N(x)=\int_{-\infty}^x n(z)\, dz\quad$ and \$\quad ... 3answers 35 views ### Confusing Triple Integral i'm having trouble with this integral the integral is $\int_0^9\int_{\sqrt z}^3\int_0^y z\cos(y^6)\,dx\,dy\,dz$. We aren't given any more information and i'm a bit stuck as to where to start. I don't ... 2answers 53 views ### Triple integral problem involving a sphere Let $R = \{(x,y,z)\in \textbf{R}^3 :x^2+y^2+z^2\le\pi^2\}$ How do I integrate this triple integral $$\int\int\int_R \cos x\, dxdydz,$$ where $R$ is a sphere of radius $\pi$? I have trouble ... 2answers 121 views ### Trig Fresnel Integral $$\int_{0}^{\infty }\sin(x^{2})dx$$ I'm confused with this integral because the square is on the x, not the whole function. How can I integrate it? Thank you. I have not done complex analysis (only ... 1answer 80 views ### What is the proof that anti-derivative gives function = area under curve? 
For many years now I have thought about this but have not been able to get a clear answer. We all know that $\displaystyle \lim_{h\to 0}\frac{f(x+h)-f(x)}{h}$ gives us a function we call as the ... 2answers 28 views ### Is it true that $\int_1^ba^{\log_b x}dx> \log_eb$ Is it true that $\int_1^ba^{\log_b x}dx> \log_eb$ $\forall a,b>0\ and\ b\not = 1$ 1answer 25 views ### definite integral negative variable Man, it's been so long since I did this. I am trying to do this: NB: limits are $-\pi$ and 0, but I can't get the minus in the limits. If anybody knows how do to that please let me know, the $\pi$ ... 3answers 86 views ### Is this definite integral impossible? From my understanding when you integrate $f(x)$ you get $F(x)+C$, and when finding a definite integral the $C's$ cancels out due to subtraction. However, I came across an example where the $C$ doesn't ... 0answers 26 views ### Complex Fourier series of a function [duplicate] I need to find the complex Fourier series of this function, and I'm having problems calculating these integers: $$|a|<1$$ $$x\in [-\pi,\pi]$$ $$f(x)=\frac{1-a\cos(x)}{1-2a\cos(x)+a^2}$$ ... 2answers 181 views ### Complex Fourier series I need to find the complex Fourier series of this function, and I'm having problems calculating these integers: $$|a|<1$$ $$x\in [-\pi,\pi]$$ $$f(x)=\frac{1-a\cos(x)}{1-2a\cos(x)+a^2}$$ ... 3answers 95 views ### Is this integral right? $$\pi\int_0^{x}\left(\cot(\pi t)-\frac{1}{\pi t}\right)dt=\log\frac{\sin(\pi x)}{\pi x}$$ (original image) Is this integral right? Regardless of whether it's right or not, please give me a procedure ... 2answers 86 views ### Evaluating $\int_{\mathbb{R}}\frac{\exp(-x^2)}{1+x^2}\,\mathrm{d}x$ I would like to evaluate in a closed form the integral $$\int_{\mathbb{R}}\frac{\exp(-x^2)}{1+x^2}\,\mathrm{d}x$$ I tried various methods : integration by parts some changes of variables ... 1answer 54 views ### Prove that $\int_0^x \int_0^y \int_0^z f(t) dt dz dy = \frac{1}{2} \int_0^x (x-t)^2 f(t) dt$ Prove that $$\int_0^x \int_0^y \int_0^z f(t) dt dz dy = \frac{1}{2} \int_0^x (x-t)^2 f(t) dt$$ Came across this problem and I'm not even sure how to start it. I figured that if the end goal is ... 2answers 50 views ### Integration solving problem A integration is given $$x-x_0 = \pm \int_{0}^{\phi(x)}\frac{d\Phi}{\sqrt\frac{\lambda}{2}(\Phi^2-\frac{m^2}{\lambda})} \tag{1}$$ The author said that, equation (2) can be written from equation (1) by ... 3answers 235 views ### integral with $\log\left(\frac{x+1}{x-1}\right)$ I encountered a tough integral and I am wondering if anyone has any ideas on how to evaluate it. \displaystyle ... 2answers 161 views ### Let $f:[a,b]\to\mathbb R$ be Riemann integrable and $f>0$. Prove that $\int_a^bf>0$. (No Measure theory) [closed] Is the Riemann integral of a strictly positive function positive? This is not a duplicate. I'm specifically interested in a proof not involving Measure Theory. The thread above uses the fact that $f$ ... 0answers 105 views ### Riemann sums vs Darboux sums Let we speak of the tagged partitions of an interval and a bounded function defined on it. I think that the tags give rise to particular Riemann sums which may be quite different in value that the ... 3answers 77 views ### Integral with Bessel functions of the First Kind. I'd like to solve the following integral: $I = \int_0^\infty J_0(at) J_1(bt) e^{-t} dt\$ where $J_n$ is an $n^{th}$ order Bessel Function of the First Kind and $a$ and $b$ are both positive real ... 
3answers 55 views ### Integration question I have trouble in integrating the following integral. I would appreciate any help :D $$\int_0^1 \sqrt{-\log x}\, a\, x^{a-1}dx$$ Thanks heaps :D The answer is $\sqrt{\pi}/2(\sqrt{a})$. 2answers 78 views ### Integrating by substitution I'm embarrassed to ask this question, but what's the flaw in the following evaluation? $\displaystyle\int_{0}^{\pi} \sin (\sin x) \ dx = \int_{0}^{0} \frac{\sin u}{\sqrt{1-u^{2}}} \ du = 0$. 2answers 163 views ### Is the Riemann integral of a strictly positive function positive? In the proof here a strictly positive function in $(0,\pi)$ is integrated over this interval and the integral is claimed as a positive number. It seems intuitively obvious as the area enclosed by a ... 0answers 56 views ### Simplify the integral with error function $\newcommand{\erf}{\operatorname{erf}}$ I have the following integral and I need to simplify the solution. I have written first two steps. I don't know what is the value of $$\erf(\infty)$$ I ... 0answers 30 views ### Solving an complex Integration with complex exp and other terms I am trying to solve a partial differential equation and while solving I need to solve the following integral. If anyone could help me solve this integral that would be great. y(x,t) = \int_{c-i ... 2answers 40 views ### Show that the integrals are equivalent Show that: $$\int_o^{\infty}\frac{\cos(x)}{1+x}dx=\int_o^{\infty}\frac{\sin(x)}{(1+x)^2}dx$$ I have no idea how to approach. The only thing I can think is substitution $y=\pi/2-x$ or integration by ... 1answer 66 views ### How to find limits of integration on a convolution of CRVs In finding the convolution of two independent and continuous random variables, I am struggling with limits of integration. I cannot seem to figure out over what intervals the probability density ... 0answers 30 views ### Fundamental theorem of calculus 1 where integrand is a 2nd order partial derivative I have a function $b(x,y)$ such that $b(x,0)=0$. Now, suppose I wish to evaluate the following integral: (Note that $b$ is continuous almost everywhere but it is assumed that it is integrable. Also, ... 1answer 114 views ### Improper integral evaluation I'm looking for a method to evaluate the following integral: $\displaystyle \int_0^{\infty} \left( \frac{1}{e^x - 1} - \frac{1}{x} + \frac{e^{-x}}{2} \right) \frac{1}{x} dx$ EDIT: Using the link, ... 3answers 61 views ### Definite Integral with a discontinuty I have the next integral: $$\int^{\pi/2}_0{\frac{\ln(\sin(x))}{\sqrt{x}}}dx$$ I have no clue how to start. At $x=0$ there is a clear discontinuity and I don't know how to solve the integral. The main ... 2answers 56 views ### Need to prove $\frac{3}{5}(2^{\frac{1}{3}}-1)\le\int_0^1\frac{x^4}{(1+x^6)^{\frac{2}{3}}}dx\le1$ I need to show that $$\frac{3}{5}(2^{\frac{1}{3}}-1)\le\int_0^1\frac{x^4}{(1+x^6)^{\frac{2}{3}}}dx\le1$$ I just know that if in $[a,b]$, $f(x)\le g(x)\le h(x)$, then ... 1answer 87 views ### Evaluating the integral $\int_0^1\arctan(1-x+x^2)dx$ I need to evaluate $$\int_0^1\arctan(1-x+x^2)dx$$ What I did: First I assume $$I=\int_0^1\arctan(1-x+x^2)dx=\int_0^1\arctan((x-\frac{1}{2})^2+\frac{3}{4})dx$$ Since the function is symmetric about ... 2answers 60 views ### Definite integral of an exponential quotient I was wondering if someone could help me find the definite integral of this: $$\int\limits_{R1}^{R2} \frac{t\, dt}{(t^2 + K^2)^{3/2}}$$ Where $\,K,\, R1,\, R2\,$ are constants, $\,R2>R1\,$ , ... 
1answer 126 views ### A multiple integral question II We know from the previous post that \lim_{n\to\infty}\underbrace{\int_0^1 \int_0^1 \cdots \int_0^1}_{n \text{ times}}\frac{1}{(x_1\cdot x_2\cdots x_n)^2+1} ... 1answer 90 views ### A multiple integral question Proving that $$\lim_{n\to\infty}\underbrace{\int_0^1 \int_0^1 \cdots \int_0^1}_{n \text{ times}}\frac{1}{(x_1\cdot x_2\cdots x_n)^2+1} \mathrm{d}x_1\cdot\mathrm{d}x_2\cdots\mathrm{d}x_n=1$$ 2answers 54 views ### Definite integral with functions in the sides Im trying to resolve the next definite integral: $$\int_{1-x^2}^{1+x^2}{\ln(t^2)\ dt}$$ Im not sure if I can use the Barrow's theorem, I think I have to use the fundamental theorem of integral ... 0answers 89 views ### Integral representation of Euler's constant Prove that : $$\gamma=-\int_0^{1}\ln \ln \left ( \frac{1}{x} \right) \ \mathrm{d}x.$$ where $\gamma$ is Euler's constant ($\gamma \approx 0.57721$). This integral was mentioned in Wikipedia as in ... 1answer 70 views ### Prove that $\int_a^cf(x)\mathrm{d}x+(c-a)g(c)=\int_c^bg(x)\mathrm{d}x+(b-c)f(c)$ Let $f$ , $g$ be real continuous functions in $[a,b]$. Prove that there is $c\in(a,b)$ such that $$\int_a^cf(x)\mathrm{d}x+(c-a)g(c)=\int_c^bg(x)\mathrm{d}x+(b-c)f(c)$$ What would you suggest me to ... 3answers 89 views ### Integration by parts question,, possibly a circular example [duplicate] I am having trouble figuring this out. $$\int_0^{1/3} \sec^3(\pi x) \, dx$$ We are currently doing integration by parts,, so I set $g(x)=\sec^3(\pi x)$ and $f'(x)=1$. I arrived at: x\sec^3(\pi x) ... 4answers 194 views ### Calculate : $\int_1^{\infty} \frac{1}{x} -\sin^{-1} \frac{1}{x}\ \mathrm{d}x$ Find : $\displaystyle \int_1^{\infty} \frac{1}{x} -\sin^{-1} \frac{1}{x}\ \mathrm{d}x$. I've done some work but I've got stuck, you may try to help me continue or give me another way , in both cases ... 1answer 124 views ### Characteristic function of the Smith-Volterra-Cantor set Let the characteristic function of the SVC set be denoted by $\beta$. Does the Riemann integral $\displaystyle \int_{0}^{1} \beta ~ d{x}$ exist? I think it does since $\beta$ is bounded, but I ... 3answers 366 views ### Evaluating the integral $\int_{-\infty}^\infty \frac {dx}{\cos x + \cosh x}$ Many recent questions have been asked here similar to this integral $$\int_{-\infty}^\infty \frac {dx}{\cos x + \cosh x} = 2.39587\dots$$ whose "closed form" I cannot seem to figure out. I have ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 72, "mathjax_display_tex": 37, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9277710914611816, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/86495/recurrence-relations-binary-substrings/86499
# Recurrence relations - binary substrings

Let $S_n$ be the number of binary strings of length $n$ which do not contain the sub-string $010$. Find a recurrence relation for $S_n$. edit: I tried for $n=4$. There are two positions in the string where $010$ can be placed: []010 or 010[]. So the last place is chosen from 2 possible digits -> $2^1$, and this is multiplied by $2!$ because of the permutation of the two items. The number of all strings is $2^4$. The number of strings with $010$ for $n=4$ is $2^1\cdot 2! = 4$. The result is 12 strings that do not contain the substring. This works for $n=3$ but fails for $n=5$. - 1 What have you tried? – Henning Makholm Nov 28 '11 at 19:59 I've tried to use combinatorics to determine how many strings contain this substring and then subtract. – Andrew Nov 28 '11 at 20:09 Please edit your question to show some of this work, and some thoughts about how that would have led to a recurrence relation. – Henning Makholm Nov 28 '11 at 20:12 Ok, check my question again, it is edited. – Andrew Nov 28 '11 at 20:24

## 2 Answers

The comment at the OEIS entry looks pretty easy to me. Maybe this is a tiny bit simpler. Any such string must end in exactly one of the following: 1; 110; 1100; 11000; etc. So $$a_n=a_{n-1}+a_{n-3}+a_{n-4}+a_{n-5}+\cdots$$ Now replace $n$ everywhere with $n-1$, and subtract the new equation from the old one. EDIT: Maybe I should expand on this somewhat. Suppose we have a string of length $n$ with no 010. It could end in 1. In that case, it's a string of length $n-1$ with no 010, with a 1 tacked on at the end. There are $a_{n-1}$ such things. Or, it could end in a 0. In that case, it might end in any number of zeros. If it isn't all zeros, then it ends in a 1 followed by those ending zeros, and it can't end in 01 followed by those zeros (since that would give a 010), so it must end in 11 then zeros; it must end in 110, or 1100, or 11000, etc., etc. (The strings consisting of all zeros, or of a single 1 followed only by zeros, escape this classification; they contribute a constant 2 to each count, which cancels in the subtraction below, so the final recurrence is unaffected.) If it ends in 110, it's a sequence of length $n-3$ with no 010, with 110 tacked on at the end; the number of these is $a_{n-3}$. If it ends in 1100, it's a sequence of length $n-4$ with no 010, with 1100 tacked on at the end; there are $a_{n-4}$ of these. And so on. So we get $$a_n=a_{n-1}+a_{n-3}+a_{n-4}+a_{n-5}+\cdots$$ But $n$ is arbitrary. Replacing it with $n-1$ everywhere, we get $$a_{n-1}=a_{n-2}+a_{n-4}+a_{n-5}+a_{n-6}+\cdots$$ Now subtracting the last equation from the one before, we get $$a_n-a_{n-1}=a_{n-1}-a_{n-2}+a_{n-3}$$ All the other terms cancel. So we are left with the recurrence, $$a_n=2a_{n-1}-a_{n-2}+a_{n-3}$$ Let's check it. It's easy to see $a_1=2,a_2=4,a_3=7$. Also, $a_4=12$, because there are 16 strings of length 4 of which we must omit 4, namely, 0100, 0101, 0010, and 1010. Putting $n=4$ into the formula yields $$12=(2)(7)-4+2$$ which is correct, so maybe the answer is right. -

From http://oeis.org/A000253: $a_n = 2a_{n-1}-a_{n-2}+a_{n-3}+2^{n-1}$ ... number of binary strings of length $n+2$ containing the pattern $010$. And subtract.
- This answer assumes the asker is actually seeking a recurrence relation: it provides one. If, OTOH, the asker is actually seeking help with homework (as I suspect), then he'll (presumably) need to explain why the recurrence relation holds. In that case, he can check the proof in the OEIS comment. – msh210 Nov 28 '11 at 20:11 I do not need help with homework, I just missed the last seminar and I have absolutely no idea how to solve a problem like this. – Andrew Nov 28 '11 at 20:14 @Andrew: Like I said: the OEIS comment indicates how to do so. – msh210 Nov 28 '11 at 20:16 I think there should be an easier solution. Still, it is the first example from the seminar. The second is much more complicated. – Andrew Nov 28 '11 at 20:37
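As a sanity check on the recurrence $a_n=2a_{n-1}-a_{n-2}+a_{n-3}$, here is a tiny brute-force verification (a sketch in Python):

```python
from itertools import product

def count_avoiding(n):
    """Count binary strings of length n with no occurrence of '010'."""
    return sum('010' not in ''.join(bits) for bits in product('01', repeat=n))

a = [count_avoiding(n) for n in range(1, 10)]   # a[0] = a_1, a[1] = a_2, ...
for i in range(3, len(a)):                      # checks a_4 through a_9
    assert a[i] == 2 * a[i-1] - a[i-2] + a[i-3]
print(a)   # starts [2, 4, 7, 12, 21, ...]
```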
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 44, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9359041452407837, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/24927/what-are-the-ramifications-of-the-fact-that-the-first-homotopy-group-can-be-non
# What are the ramifications of the fact that the first homotopy group can be non-commutative, whilst the higher homotopy groups can't be? Does this mean that the first homotopy group in some sense contains more information than the higher homotopy groups? Is there another generalization of the fundamental group that can give rise to non-commutative groups in such a way that these groups contain more information than the higher homotopy groups? - 2 Interesting question. I don't think that there's necessarily less information in an abelian group: the commutativity of the higher groups is really a consequence of a geometrical fact about spheres, and not really a restriction. Might also be worth considering the relative homotopy groups: $\pi_1$ becomes a set and $\pi_2$ non-abelian. But these are just thoughts, not a real answer. – Paul VanKoughnett Mar 4 '11 at 2:52 ## 4 Answers Thinking about the higher homotopy groups as just groups is in some sense missing the point. The higher homotopy groups are not just abelian groups: they are $\pi_1$-modules, for one thing. More loftily, from the n-categorical point of view, the homotopy groups are really just a convenient stand-in for a more fundamental structure, the fundamental $\infty$-groupoid of a space. Roughly speaking, the fundamental $\infty$-groupoid is a gadget that incorporates information about the paths between points, homotopies between paths, homotopies between homotopies between paths, and so forth. It is possible to truncate the fundamental $\infty$-groupoid into a collection of easier-to-understand objects, the fundamental $n$-groupoids $\Pi_n$: • The fundamental $0$-groupoid $\Pi_0$ is just the set of connected components. • The fundamental $1$-groupoid is the groupoid of homotopy classes of paths between points; it is a generalization of the fundamental group that is independent of basepoint. If the space is connected, the fundamental $1$-groupoid is equivalent to the category with a single object whose morphisms are the elements of the fundamental group $\pi_1$. • The fundamental $2$-groupoid is the $2$-groupoid of paths and homotopy classes of homotopies between them; it is a generalization of the action of $\pi_1$ on $\pi_2$ that is independent of basepoint. And so forth: more generally the fundamental $n$-groupoid is a generalization of the relationship between the first $n$ homotopy groups. Unfortunately I can't think of a nice reference to these ideas off the top of my head; I've gleaned them from several sources. The references in Baez and Shulman's Lectures on n-categories and cohomology might be a good start. - Although, if the space is simply connected the $\pi_1$-module structure doesn't give any information. – Grumpy Parsnip Mar 5 '11 at 1:43 2 A generalization of the $\pi_1$-action is the graded quasi-Lie algebra structure coming from the Whitehead product, which takes the form $\pi_m \otimes \pi_n \rightarrow \pi_{m+n-1}$. This structure (tensored with $\mathbb{Q}$) is what Sullivan exploits for his fundamental theorem of rational homotopy theory, that taking a simply-connected space (up to rational homotopy equivalence) to its commutative $\mathbb{Q}$-dga induces an equivalence of categories. – Aaron Mazel-Gee Mar 5 '11 at 19:39 The notion of a fundamental 2-groupoid of a space is more delicate; see the paper Hardie, K. A.; Kamps, K. H.; Kieboom, R. W. A homotopy 2-groupoid of a Hausdorff space. Papers in honour of Bernhard Banaschewski (Cape Town, 1996). Appl. Categ. Structures 8 (2000), no. 1-2, 209–234. 
and the papers which follow from this. However one can define a fundamental 2-groupoid of a pair of spaces, or more generally a map of spaces. – Ronnie Brown Apr 13 '12 at 14:21 Let $\mathcal{C}$ be a category with finite limits and a final object. In general, if $Y$ is an object in $\mathcal{C}$ such that $\hom(X, Y)$ is naturally a group for each $X \in \mathcal{C}$, then $Y$ is called a "group object" in $\mathcal{C}$; that is, there is a multiplication map $Y \times Y \to Y$ and an inversion $Y \to Y$ and an identity $\ast \to Y$ (for $\ast$ the final object) that satisfy a categorical version of the usual group axioms (stated arrow-theoretically). In the case of interest here, $\mathcal{C}$ is the homotopy category of pointed topological spaces, and the statement that the homotopy groups are groups is the statement that the spheres $S^n$ are group objects in the opposite category -- in other words, $S^n$ is a so-called "H cogroup." When one writes out the arrows, one ends up with a "comultiplication map" $S^n \to S^n \vee S^n$ and a map $S^n \to \ast$ ($\ast$ the point) that satisfy the dual of the usual group axioms, up to homotopy. The reason that the higher homotopy groups are abelian and $\pi_1$ is not is that $S^n$ is an abelian H cogroup for $n \geq 2$ and not for $n=1$. This is basically a consequence of the Eckmann-Hilton argument (namely, there are two natural and mutually distributive ways of defining the H cogroup structure of $S^n$, depending on which coordinate one chooses; they must be equal and both commutative). Now to your more general question. One can define covariant functors from the pointed homotopy category to the category of groups: just pick any H cogroup object and consider maps from it into the given space. An easy way of getting these is to take the reduced suspension of any space $X$, $\Sigma X$, and to note that $\Sigma X$ can be made into an H cogroup (in kind of the same way as $S^n$ is---actually, $S^n$ is a special case of this). One may object that considering suspensions is not really anything new, because homotopy classes of maps of $\Sigma X = S^1 \wedge X$ into a space $Y$ are the same as homotopy classes of maps $S^1 \to Y^X$ when $X$ is reasonable (say locally compact and Hausdorff), so we really have a variant of the fundamental group. Finally, there is the question of whether all functors from the pointed homotopy category to the category of groups can be expressed in this way, that is, whether every such functor is representable. On the pointed homotopy category of CW complexes, there are fairly weak conditions that will ensure representability. - For $hom(X,Y)$ to be a group, isn't it $Y$ that you need to be a group object (or $X$ to be a cogroup object)? By definition of a product, for $f,g\in hom(X,Y)$ you get $X\stackrel{f\times g}{\rightarrow} Y\times Y \stackrel{\mu}{\rightarrow} Y$. – Aaron Mazel-Gee Mar 4 '11 at 18:23 @Aaron: Dear Aaron, thanks for the correction. – Akhil Mathew Mar 5 '11 at 1:21 These problems puzzled the early topologists: in fact Čech's paper on higher homotopy groups was rejected for the 1932 Int. Cong. Math. at Zurich by Hopf and Alexandroff, who quickly proved they were abelian. We now know this is because group objects in the category of groups are abelian groups. However group objects in the category of groupoids are NOT just abelian groups, but are equivalent to crossed modules, which occurred in the 1940s in relation to second relative homotopy groups, $\pi_2(X,A,x)$.
It turns out that there is a nice double groupoid $\rho_2(X,A,x)$ consisting of homotopy classes of maps of a square $I^2$ to $X$ which map the edges to $A$ and the vertices to $x$. (The proof that the compositions are well defined is not quite trivial!) Using this, Philip Higgins and I proved a 2-d van Kampen Theorem, published in Proc. LMS 1978, i.e. 34 years ago, from which one can deduce new results on the nonabelian second relative homotopy groups, as crossed modules over the fundamental group. This is the start of using strict higher homotopy groupoids for obtaining nonabelian calculations in higher homotopy theory -- see the web page in my comment, and references there. This idea came from examining in 1965 a proof of the 1-dim van Kampen theorem for the fundamental groupoid, and observing that it ought to generalise to higher dimensions if one had the right homotopical gadgets. It took years to get the idea that this could be done for pairs of spaces, filtered spaces, or $n$-cubes of spaces, but apparently not easily just for spaces, or spaces with base point. - One geometrical fact from which the non-commutativity of the fundamental group arises is the following: two objects on a line can't switch relative position (i.e. left and right) through homotopy, as they are unable to "pass" each other. Two objects in a higher-dimensional space can, however; so intuitively, it seems that a homotopy theory based on mappings of $I^n$ will naturally be abelian for $n \ge 2$. - 1 this seems off-topic. OP is not looking for an intuitive explanation for why $\pi_n$ is abelian for $n \ge 2$. – Soarer Mar 4 '11 at 5:40 @Soarer This is not off topic: see my answer in preparation, on using higher homotopy groupoids of filtered spaces or $n$-cubes of spaces, and my page Higher dimensional group theory – Ronnie Brown Apr 13 '12 at 11:57
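For reference, here is the algebraic core of the Eckmann-Hilton argument that the answers invoke (a standard two-line computation, stated here for convenience): if a set carries two binary operations $\ast$ and $\circ$ with a common unit $e$ satisfying the interchange law $(a \circ b) \ast (c \circ d) = (a \ast c) \circ (b \ast d)$, then $$\begin{aligned} a \ast b &= (a \circ e) \ast (e \circ b) = (a \ast e) \circ (e \ast b) = a \circ b,\\ b \circ a &= (e \ast b) \circ (a \ast e) = (e \circ a) \ast (b \circ e) = a \ast b, \end{aligned}$$ so the two operations coincide and are commutative. For $\pi_n(X)$ with $n \ge 2$, concatenation of maps of $I^n$ in two different coordinate directions supplies the two operations.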
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 72, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9224221706390381, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/54704/finding-the-smallest-positive-integer-a
# Finding the smallest positive integer a Can we find the smallest positive integer $a$ such that $1971|50^n+a.23^n$ where $n$ is odd? Source: Problem Solving Strategies by Arthur Engel - Presumably you want this to hold for all odd values of $n$? Is that period between $a$ and $23^n$ a multiplication sign? The TeX-code \cdot gives you that. – Jyrki Lahtonen Jul 31 '11 at 6:32 I added an IMHO relevant tag. – Jyrki Lahtonen Jul 31 '11 at 6:42 ## 5 Answers HINT: 1971 = 27 * 73. Use modular arithmetic and congruences. - My answer is 512. I used Diophantine equations to minimize $a$. Thank you for the suggestion. – Eisen Jul 31 '11 at 7:02 Hint: $50^2\equiv 23^2\pmod{1971}$ - Nicer idea than the solution in Engel's book. – André Nicolas Jul 31 '11 at 6:56 @Andre: I don't have the book. What is the solution? – mixedmath♦ Jul 31 '11 at 7:20 @mixedmath: $50^n+23^n a\equiv (-4)^n +(-4)^na\equiv -4^n(a+1) \pmod{27}$, $50^n+23^n a \equiv (-23)^n +23^n a \equiv 23^n(a-1)\pmod{73}$. Then standard solving of $a\equiv -1\pmod{27}$, $a\equiv 1\pmod{73}$. – André Nicolas Jul 31 '11 at 12:41 For coprime $\rm\: b,c\in \mathbb Z\:,\ \ a\: =\: -(b/c)^{2\:k+1} =\: -(b^2/c^2)^{k}\ b/c\: \equiv\: -b/c \pmod{b^2-c^2}\:.\:$ So the extended Euclidean algorithm will efficiently compute $\rm\:a \equiv -b/c\pmod{b^2-c^2}\:.$ Alternatively note $\rm\:a\equiv -1\pmod{b-c}$ since then $\rm\ b\equiv c\ \Rightarrow\ a = -b/c \equiv -c/c\equiv -1\:.\:$ Similarly we infer $\rm\ a\:\equiv\ 1\ \pmod{b+c}\:.\:$ When $\rm\:b,c\:$ have opposite parity, $\rm\:b-c,\ b+c\:$ are coprime, so we may employ $\rm CRT$ to efficiently compute the unique solution $\rm\: (mod\ \ b^2-c^2)\:.$ Such nontrivial $(\ne \pm 1)$ square-roots of $1\:$ exist modulo composite $\rm\:m\:$ that are not prime powers. In fact, given such a nontrivial square root $\rm\:a\:$ one may compute a factor of $\rm\:m\:$ by $\rm\:gcd(a\pm1,m)\:,\:$ e.g. above $\rm\ a = 512,\ \ gcd(511,1971) = 73,\ \ gcd(513,1971) = 27\:.\:$ This is the way many integer factoring algorithms work, e.g. Fermat's method of difference of squares and its generalizations, e.g. MPQS. See here for more on relations between factorization, nontrivial sqrts and idempotents. - Find the smallest positive integer $n$ for: $$\left(\frac{1+j}{1-j}\right)^n =1;\quad (j^2 =-1)$$ - How is this relevant to the question? – robjohn♦ Mar 27 at 9:19 Since $50^2\equiv23^2\equiv529\pmod{1971}$ and $(529,1971)=1$, we have $$\begin{align} 50^{2n+1}+a\cdot23^{2n+1}&\equiv0\pmod{1971}\\ 50\cdot529^n+a\cdot23\cdot529^n&\equiv0\pmod{1971}\\ 50+a\cdot23&\equiv0\pmod{1971} \end{align}$$ Using the Euclid-Wallis Algorithm $$\begin{array}{r} &&85&1&2&3&2\\\hline 1&0&1&-1&3&-10&23\\ 0&1&-85&86&-257&857&-1971\\ 1971&23&16&7&2&1&0\\ \end{array}$$ we get that $857\cdot23\equiv1\pmod{1971}$. Therefore, $$a\equiv-50\cdot857\equiv512\pmod{1971}$$ -
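A quick brute-force confirmation of the value $a=512$, as a sketch in Python (the helper name is mine):

```python
# Check that 1971 | 50^n + 512*23^n for odd n, and that 512 is minimal.
M = 1971

def works(a, n_max=201):
    """True if 50^n + a*23^n is divisible by M for every odd n < n_max."""
    return all((pow(50, n, M) + a * pow(23, n, M)) % M == 0
               for n in range(1, n_max, 2))

assert works(512)
# Minimality: no smaller positive a passes even the single case n = 1.
assert not any((50 + a * 23) % M == 0 for a in range(1, 512))
```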
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 30, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8366126418113708, "perplexity_flag": "middle"}
http://nrich.maths.org/5627/index?nomenu=1
## 'Quaternions and Rotations' printed from http://nrich.maths.org/ In this question we see how quaternions are used to give rotations of ${\bf R^3}$. (1) Consider the quaternion $$q = {1\over \sqrt 2} + {1\over \sqrt 2}{\bf i} + 0{\bf j} + 0 {\bf k}.$$ (a) Show that the multiplicative inverse of $q$ is given by $$q^{-1} = {1\over \sqrt 2} - {1\over \sqrt 2}{\bf i}$$ (b) Show that for all scalar multiples $x = t{\bf i}$ of the vector ${\bf i}$, $q x = x q$ and hence $q x q^{-1} = x$. This proves that the map $F(x) = q x q^{-1}$ fixes every point on the x axis. (c) What happens to points on the y axis under the mapping $F$? To answer this, work out $F({\bf j})$. Also compute $F({\bf k})$ and show that ${\bf k} \to {\bf -j}.$ (2) Consider the quaternion $q = \cos \theta + \sin \theta {\bf k}$ (a) Show that $\cos \theta - \sin \theta {\bf k}$ is the multiplicative inverse of $q$. (b) Show that $q{\bf k}q^{-1}={\bf k}$. (c) Show that $$q v q^{-1}= r(\cos (2\theta + \phi) {\bf i} + \sin (2\theta + \phi){\bf j})$$ where $v = r\cos \phi\, {\bf i} + r\sin \phi\, {\bf j}+0{\bf k}$ and hence that the map $G(v)= q v q^{-1}$ is a rotation about the z axis by an angle $2\theta$. To read about number systems, where quaternions fit in, why there are no three-dimensional numbers, and numbers in higher dimensions, see the NRICH article What Are Numbers? If you want to know how quaternions are used in computer graphics and animation in film making, read the Plus article Maths goes to the movies.
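A numerical sketch of part (2)(c) (quaternions represented as $(w,x,y,z)$ tuples; the helper function and sample values are mine), confirming that conjugation by $q=\cos\theta+\sin\theta\,{\bf k}$ rotates the $xy$-plane by $2\theta$:

```python
import math

def qmul(a, b):
    # Hamilton product of quaternions (w, x, y, z).
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

theta, phi, r = 0.3, 0.7, 2.0
q     = (math.cos(theta), 0, 0,  math.sin(theta))
q_inv = (math.cos(theta), 0, 0, -math.sin(theta))
v     = (0, r*math.cos(phi), r*math.sin(phi), 0)

w, x, y, z = qmul(qmul(q, v), q_inv)
# Expect r*(cos(2*theta + phi), sin(2*theta + phi)) in the (i, j) slots.
assert abs(x - r*math.cos(2*theta + phi)) < 1e-12
assert abs(y - r*math.sin(2*theta + phi)) < 1e-12
```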
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8994830250740051, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/tagged/convolution?page=1&sort=unanswered&pagesize=30
# Tagged Questions Questions on the (continuous or discrete) convolution of two functions. 0answers 183 views ### Is there a closed form solution of $f(x)^2+(g*f)(x)+h(x)=0$ for $f(x)$? When $g(x)$ and $h(x)$ are given functions, can $f(x)^2+(g*f)(x)+h(x)=0$ be solved for $f(x)$ in closed form (at least with some restrictions to $g,h$)? (The $*$ is not a typo, it really means ... 0answers 58 views ### Properties of a continued fraction convolution operation Usually the partial numerators of a continued fraction are all 1s. Has anyone considered the operation where you convolve 1 continued fraction with another, in other words, make a new continued ... 0answers 87 views ### Infinite self-convolution for a function I have a mathematical problem that leads me to a particular necessity. I need to calculate the convolution of a function for itself for a certain amount of times. So consider a generic function \$f : ... 0answers 74 views ### The norm of an operator Let $\rho(x)$ be a weight function in a unit sphere, such that \begin{equation} \begin{array}{l} \displaystyle 1. \rho(x)\ge 0,\int_{\mathbb{R}^n}\rho(x)=1\\ \displaystyle 2. \rho(x)\in ... 0answers 68 views ### Convolution Exercise Homework Put $\varphi(t)= 1- \cos \;t\;\;\;$ if $\;\;\;0 \leq t \leq 2 \pi$, $\varphi(t) = 0$ for all other real $t$. For $-\infty < x < \infty$, define f(x)= 1,\;\;\;\;\;\;\;\;\;\;g(x) = ... 0answers 235 views ### A special case of Young's inequality for convolutions The problem: Suppose $f,g\in L^1(\mathbb{R})$. Let $x\in \mathbb{R}$ and $\phi_x(y) = f(y)g(x-y)$. Show that for almost all $x$, $\phi_x$ is integrable. For such $x$ let \$\psi(x) = ... 0answers 22 views ### changing the parameters of a function Lets say we have $h[n] = ((1/2)^n )(u[n])$ now if we are ask, find h[k-n], then isn't it we should just swapped every 'n' with 'k-n'. So it turns out $h[k-n] = ((1/2)^{k-n})(u[k-n])$ But why here ... 0answers 74 views ### clarification asked for 'difference between convolution and crosscorrelation?' I don't understand answer formulated in ways like this "Thus, $p\ast q$ is the distribution of $X+Y$. The cross-correlation $p\circ q$ is the distribution $c=(c_n)_n$ defined by ... 0answers 52 views ### Why matrix representation of convolution cannot explain the convolution theorem? A record saying that Convolution Theorem is trivial since it is identical to the statement that convolution, as Toeplitz operator, has fourier eigenbasis and, therefore, is diagonal in it, has ... 0answers 51 views ### Convolution and Smoothness Conditions Suppose $f(x),g(x)\in L_1(\mathbb{R})$, with both $|f(x)| \leq 1$, $|g(x)| \leq 1$ and $|f(x)| \rightarrow 0$, $|g(x)| \rightarrow 0$ for $|x| \rightarrow \infty$. Given that we have two other ... 0answers 45 views ### Show compactness of an evolution operator Consider the heat equation $$u_{t}=u_{xx},~~~~~u_0(x)=u(0,x)$$ with $u\colon [0,T]\times\mathbb{R}\to\mathbb{R}, (t,x)\mapsto u(t,x)$ and the evolution operator $E(T)$ with $E(T)u_0=u(T,x)$. 1.) ... 0answers 68 views ### When does $|f*g|_{p}=|f|_{1}|g|_{p}$? From Rudin, Real and Complex Analysis, 1st edition, Chapter 7, Problem 4 Suppose $1\le p\le \infty$, $f\in L^{1}(\mathbb{R}^{1})$, $g\in L^{p}(\mathbb{R}^{1})$. Show that the the integral defining ... 0answers 45 views ### bound on Hilbert transform Consider $\widehat{Tf(\xi)}=m(\xi)\hat{f}(\xi)$, where $m(\xi)=(1-\vert\xi\vert)1_{[-1,1]}$, i.e. $T$ is the operation of taking Fourier transform and multiplying with the function $m(\xi)$. I am ... 
0answers 41 views ### Fubini theorem for integrating 1 dimension of a 3d convolution I have a 3D volume that is convolved with a 3D blur function. Both are positive and integrate to a finite value. I can see experimentally (meaning playing with matlab) that this is true: \$\int_{-a}^{a ... 0answers 28 views ### Cancellation of summations I am working on some stuff related to the convolution property of the discrete Fourier transform. If we consider: \sum_{p = 0}^{N-1}\hat{s}_{p}e^{ik_{p}x_{m}} = \sum_{p = ... 0answers 81 views ### convolution of L1 function with a harmonic oscillation I have to show that the convolution of a function $f \in L^1(\mathbf{R})$ with the harmonic oscillation $\phi_\omega (t) = \exp(2 \pi i t \omega)$ is equal to the Fourier Transform of $f$, ... 0answers 48 views ### Convolutions of Path Integrals of Gaussian Functions I was looking at a question on a physics forum (http://physics.stackexchange.com/questions/45955/splitting-light-into-colors-mathematical-expression-fourier-transforms) and I wanted a more ... 0answers 41 views ### Maximum value of discrete convolution I'm trying to calculate the maximum possible short-term energy $E[n]$ of a sampled signal $s$ in terms of $N$ and $\text{bitdepth}$. $$E[n] =\sum_{m=-\infty}^{\infty} s^2[n]w[n-m]$$ where w(n) ... 0answers 72 views ### Poisson exponentiation distribution family and convolution Assume $\xi_i \sim \mathbb{F}_{\lambda_i}(x)$ are random variables from a Poisson distribution. Consider random variables $\eta_i \sim \tilde{F}_{\lambda_i,t}(x)$, where \$\tilde{F}_{\lambda_i,t}(x) = ... 0answers 70 views ### Convolution with a special approximation to the identity function I'm working my way through Stein and Shakarchi's Real Analysis, and I'm having some trouble figuring this exercise out. Given the function $K_\delta$ that satisfies the normal approximation to the ... 0answers 65 views ### Consider the correlation of two functions, what is the derivative of the result with respect to one of those functions? I have a problem that comes up from time to time in signal processing applications. Let $f(x)\geq0\, \forall x$ and $g(x)$ be real functions with finite range and support. Let \$I(f(x),g(x)) = ... 0answers 54 views ### Integrability and differentiability of convolution of the fundamental solution and an integrable function Define a function $\Gamma(\cdot)$ as $$\Gamma(x-y)=\frac{1}{2\pi}\log\|x-y\|,\quad x\neq y$$ where $x=(x_1,x_2),y=(y_1,y_2)\in R^2$, and $\|x-y\|^2=(x_1-y_1)^2+(x_2-y_2)^2$. Note that $\Gamma(x-y)$ ... 0answers 113 views ### Cross Correlation The cross-correlation function is defined as follows if $\bar{f}$ is the complex conjugate of $f$ and we assume that $f$ is real, such that $\bar{f} = f$. \begin{align} f \star g &= ... 0answers 102 views ### n-th self discrete convolution Let's define discrete $f_N(i) = 1,\space i = 1...N$ I need to find $G_N^m = \underbrace {f_N * f_N * ... * f_N}_{m}$ For example $G_6^3$ has the values (1,3,6,10,15,21,25,27,27,25,21,15,10,6,3,1) , ... 0answers 176 views ### Convolution of two functions for example I have something like that, \begin{align*} f(x) &= \begin{cases} \frac{1}{3}x - \frac{2}{3} &\text{where }2 < x \leq 4, \\ \frac{-2}{3}x + \frac{10}{3} ... 0answers 68 views ### Solution for this Convolution We have $f(z)=z+ \sum_{n=2}^{\infty} a_{n}z^{n}$ where $a_{n}$ is a constant and $g(z)=z$, $(f*g)(z)$ is equal to what? I am still wondering how to confirm that $(f*g)(z)=z$.
0answers 190 views ### FFT signal post processing This is more a "post a suggestion" topic rather than a question. And thank you if you are willing to read this whole. I've been studying the code in the Nvidia Cuda SDK regarding how to operate a ... 0answers 19 views ### Support of the convolution of two test functions. If $g\in C^{\infty}_c$ defined on $\Bbb R^n$ and K is the support of the function $g$, I want to find the support of $g_\epsilon$, where $g_\epsilon$ is a regularization of $g$. Regularization of $g$ is ... 0answers 6 views ### Closeness of a family of functions under convolution. I'm interested in functions defined over the non-negative integers that are a product of an exponential function and a polynomial. So a standard term of such a function is something like f(k) = ... 0answers 21 views ### Convolution of logistic function and gaussian distribution I am trying to solve the following problem: $$\int \exp\left(-\frac{(x-u)^2}{2\sigma^2}\right) \log(1+\exp(ax + b)) \,dx$$ which I think is very complicated and there is no closed form solution(?) ... 0answers 21 views ### Convolution of Inverse Gaussian distribution I'm having problems with showing that the sum of two inverse gaussian distributed random variables is stable under convolution, i.e. let \$f_{T_a}(t) = \frac{a}{\sqrt{2 \pi t^3}} e^{-\frac{(a- \nu ... 0answers 24 views ### Inclusion regarding the support of the convolution of two functions Let $u \in L^1(\mathbb R^n)$ and $v \in L^p(\mathbb R^n)$ where $1 \le p \le \infty$. Show that $$\textrm{spt}(u \ast v) \subseteq \overline{\mathrm{spt}(u) + \mathrm{spt}(v)}$$ where the addition ... 0answers 34 views ### Discrete convolution: where do I go from there? I took this from the book Signals and Systems by Haykin. I have the following discrete system: $y[n] = u[n]*u[n-3]$, where $*$ is the discrete convolution and $u[n]$ is the unit step function, and ... 0answers 31 views ### Strange convolution equation In an article ( https://www.dropbox.com/s/3012v4s1ngpimvg/gridding_Schomberg_Trimmer.pdf ) about implementation of the Gridding method for parallel-beam tomography there's an equation (#47 in the article): ... 0answers 45 views ### Obtaining Impulse Response from Graph I want to know how to solve those types of problems.. is it by inspection ? Consider the linear system below. When the inputs to the system $x_1[n]$, $x_2[n]$ and $x_3[n]$, the responses of the ... 0answers 130 views ### Convolution of two functions (pdfs) I want to convolve two signals. The range of each signal is 0 to 1. ... 0answers 27 views ### Distribution function approximation: Poisson exponentiation I want to find a normal approximation of the Poisson exponentiation distribution. Okay, some introduction to the problem: Assume that $\xi_i \sim F_{\lambda_i}(x)$ are Poisson-distributed random variables ... 0answers 76 views ### What is the purpose and usage of convolution? I am curious of what the purpose and usage of convolution are. Why is convolution created? In layman's term (and in mathematical term), what defines convolution? 0answers 37 views ### Vestigial Filter, find modulated signal? I have been stuck on this question for a while now. It has to do with vestigial sideband. I wasn't sure if I should be dividing $H(\omega)$ graph values by 2 because only the positive side of the ... 0answers 39 views ### Extension of Convolution theorem Is it possible to extend the convolution theorem to convolve tensors, as we do with discrete matrices? 0answers 54 views ### Weighted convolution?
Suppose I have two arbitrary discrete probability distributions with the same domain. I want to convolve the two together to come up with a third distribution, however I want them to be weighted. ... 0answers 327 views ### how does one convolve two matrices so in OpenCV I retrieve a Gabor kernel for image processing which is a 10:10 matrix. I have a gray matrix of the original image. How do I convolve the two and get the output of the convolution? I'm ... 0answers 83 views ### Convolutions, Compact Support and the Divergence Theorem Ok, first off, this is a long question, so apologies for that. My LaTeX isn't up to par, so I've coded what I can and I've linked the rest. I've just proved this, and it probably leads on to the bit ... 0answers 63 views ### convolution related question: shifting? I was wondering, for convolution, when we do the graph shifting for h(t-tau) we flip the graph on the y axis and then if t = 0.5, then shouldn't we shift the graph left by 1/2? In the examples I am ... 0answers 198 views ### Numerical solution of an integro-differential equation with convolution I have an integro-differential equation that I need to solve numerically. The equation is of the form: $$\frac{dX}{dt} = cX - X\left(b + q\frac{dX}{dt}\right),$$ where $q\frac{dX}{dt}$ denotes the ... 0answers 238 views ### Circular to linear convolution with matrices I know how to perform a circular convolution with vectors (http://engineering-matlab.blogspot.it/2010/12/matlab-program-for-implementing_5864.html) and I know that circular convolution can be obtained ... 0answers 187 views ### Convolution between a kernel and an image with FFT In the FFT2D paper (Fast Fourier transform used for a convolution with a kernel in the frequency domain), I'm lost at the second page first picture: ... 0answers 26 views ### An argument for error accumulation during complex DFT I am doing FFT-based multiplication of polynomials with integer coefficients (long integers, in fact). The coefficients have a maximum value of $BASE-1, \quad BASE \in \mathbb{N},\quad BASE > 1$. ... 0answers 169 views ### Convolution problem Hi, I am really stuck trying to do this convolution in order to find the zero state response. The convolution table only contains $(e^t)u(t)$ not $u(-t)$; can someone show me the steps with some brief ... 0answers 14 views ### Uniform Convergence of Convolutions $\displaystyle Q_n(t) = ne^{-nt}$ $Q_n(t)*f(x) = \int_0^{\infty} Q_n(t)f(x-t) dt$ How do I prove that this converges uniformly? It seems to be similar to the proof of the Weierstrass ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 101, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8957637548446655, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/29805/richtmyer-meshkov-instability-in-mhd/29809
# Richtmyer Meshkov instability in MHD In magnetohydrodynamics, the Richtmyer Meshkov instability is found to get suppressed by application of a longitudinal magnetic field. Exactly what happens at the interface? Why does the instability get suppressed? (How can one get physical intuition for what is happening?) - Yes, I think that is correct. It is more of a physics question than computational science. I was studying a paper on RM instability in which numerical simulation of the MHD system was done (finite volume method applied to the system of equations in MHD). I could see the simulation results, but could not get any physical intuition. – Subodh Jun 4 '12 at 6:49 ## 1 Answer If you consider the case of ideal MHD (perfectly conducting fluid) we have the limiting case where the magnetic field is frozen into the fluid. Thus, manipulating the magnetic field yields manipulation of the fluid and vice versa. The Richtmyer Meshkov (RM) instability is suppressed in this limiting case by the application of a longitudinal magnetic field (one which is parallel to the fluid interface) due to the 'control' on the fluid motion provided by the frozen-in magnetic field. To be more mathematical: in ideal MHD the Maxwell Stress Tensor can be defined as $T_{ij} = [(P + B^{2}/{2\mu_{0}}) \delta_{ij} - B_{i}B_{j}/\mu_{0}]$ and the momentum equation can be written $\frac{\partial T_{ij}}{\partial r_{i}} = 0$. With a transformation to the principal axes, $T_{ij}$ can be reduced to diagonal form (with i, j running from 1 to 3). The principal axes are oriented so that the axis corresponding to i = 3 is parallel to $\mathbf{B}$ and the other two are perpendicular. So the eigenvalues for this system may be obtained via $|T_{ij} - \delta_{ij} \lambda| = 0$. The solution yields a stress tensor of the form $T_{ij} = \mathrm{diag}(P + B^{2}/2\mu_{0}, P + B^{2}/2\mu_{0}, P - B^{2}/2\mu_{0})$ From this we see that the stress caused by the magnetic field amounts to a pressure $B^{2}/2\mu_{0}$ in directions transverse to the field and a tension $B^{2}/2\mu_{0}$ along the lines of force. In other words, the total stress amounts to an isotropic pressure which is the sum of the fluid pressure and magnetic pressures, and tension $B^{2}/\mu_{0}$ along the lines of force. It is this tension that provides the suppression you are pondering about. To form the RM instability the configuration must be perturbed; this perturbation in the case of RM instabilities is provided by MHD shocks (the particular nature of each type of MHD shock is defined by using the Rankine-Hugoniot conditions and the relevant conservation laws). [Assuming the shock is parallel to the fluid interface] Not all MHD shocks will perturb the magnetic field orientation (perpendicular/parallel shocks) but most will (oblique shocks - Alfvén shocks, switch-on/switch-off shocks, fast/slow shocks). In these cases the magnetic field is abruptly altered, which provides the perturbation required for the system to enter a Rayleigh-Taylor instability phase. I hope this helps. -
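A small numerical sketch of the diagonalization step (numpy; the sample values and names are mine, not from the answer): build $T_{ij}$ as defined above for a field along $z$ and confirm the eigenvalues $P+B^2/2\mu_0$ (twice, transverse) and $P-B^2/2\mu_0$ (along the field):

```python
import numpy as np

mu0 = 4e-7 * np.pi            # vacuum permeability (SI)
P, Bmag = 1.0, 2.0e-3         # sample pressure and field strength
B = np.array([0.0, 0.0, Bmag])

# T_ij = (P + B^2/2mu0) delta_ij - B_i B_j / mu0, as in the answer above.
T = (P + B @ B / (2 * mu0)) * np.eye(3) - np.outer(B, B) / mu0

evals = np.sort(np.linalg.eigvalsh(T))
expected = np.sort([P + Bmag**2 / (2 * mu0),
                    P + Bmag**2 / (2 * mu0),
                    P - Bmag**2 / (2 * mu0)])
assert np.allclose(evals, expected)
```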
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9044796228408813, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/240635/examination-of-convergence-of-a-few-series/240706
# Examination of convergence of a few series Determine whether the following series converge and whether they even converge absolutely: 1. $\displaystyle\sum\limits_{k=2}^\infty\frac{(-1)^k}{(\log(k))^k}$ 2. $\displaystyle\sum\limits_{k=1}^\infty\left(\frac{1}{k!}-\frac{3}{(k+1)!}\right)$ 3. $\displaystyle\sum\limits_{k=1}^\infty\frac{2(k!)^2}{(2k)!}$ 4. $\displaystyle\sum\limits_{k=1}^\infty\frac{3^{2k-1}}{k^2+k}$ 5. $\displaystyle\sum\limits_{k=1}^\infty\left(\frac{2+(-1)^k}{4}\right)^k$ 6. $\displaystyle\sum\limits_{k=1}^\infty(-1)^k\frac{k}{(k+1)(k+2)}$ So I have been able to work through half of the series and I would like to know whether my current attempts are correct, and then I would like to get some hints on how to solve the other series I intentionally left out for now. My current solutions: 1. I know that I have to use the alternating series test and show that $1/(\log(k))^k$ is a null sequence, however I do not know whether it suffices to just mention the monotonicity of $\log$. 2. $\displaystyle\sum\limits_{k=1}^\infty\left(\frac{1}{k!}-\frac{3}{(k+1)!}\right) = \sum\limits_{k=1}^\infty\left(\frac{k!(k-2)}{k!(k+1)!}\right) = \sum\limits_{k=1}^\infty\left(\frac{k-2}{(k+1)!}\right)$. If we apply the ratio test we get: $$\large\left|\frac{\frac{k-1}{(k+2)!}}{\frac{k-2}{(k+1)!}}\right|\normalsize=\frac{k-1}{(k-2)(k+2)}=\frac{k-1}{k^2-4}\longrightarrow 0<1,\text{ hence the series converges absolutely.}$$ 3. The ratio test yields $$\large\left|\frac{\frac{2((k+1)!)^2}{(2(k+1))!}}{\frac{2(k!)^2}{(2k)!}}\right| \normalsize = \frac{2((k+1)!)^2(2k)!}{(2(k+1))!2(k!)^2}=\frac{(k+1)^2(2k)!}{(2(k+1))!}=\frac{(k+1)^2}{(2k+2)(2k+1)}=\frac{k^2+2k+1}{4k^2+6k+2}=\frac{k^2(1+2/k+1/k^2)}{k^2(4+6/k+2/k^2)}\longrightarrow \frac{1}{4}<1,\text{ hence the series converges absolutely.}$$ 4. The ratio test yields $$\large\left|\frac{\frac{3^{2(k+1)-1}}{(k+1)^2+(k+1)}}{\frac{3^{2k-1}}{k^2+k}}\right|\normalsize = \frac{3^{2k+1}(k^2+k)}{3^{2k-1}((k+1)^2+(k+1))} = \frac{9k^2+9k}{k^2+3k+2}=\frac{k^2(9+9/k)}{k^2(1+3/k+2/k^2)}\longrightarrow 9>1,\text{ hence the series diverges.}$$ 5. I have no idea at all... a hint might help me out. 6. Using the alternating series test, we have to show that $\frac{k}{(k+1)(k+2)}$ is a null sequence. This is easily done by $$\frac{k}{(k+1)(k+2)}=\frac{1}{k+3+2/k}\longrightarrow 0.$$ The series does not converge absolutely based on the comparison test where $$\sum\limits_{k=1}^\infty \frac{1}{k+3+2/k}\geq \sum\limits_{k=1}^\infty \frac{1}{k}.$$ Thanks for your time and reviewing my attempts. - For the first, note that very soon $(\log k)^k$ is awfully big. For the second, both halves converge trivially. Need not manipulate so much. – André Nicolas Nov 19 '12 at 15:43 This isn't true: $$\sum\limits_{k=1}^\infty \frac{1}{k+3+2/k}\geq \sum\limits_{k=1}^\infty \frac{1}{k}$$ Or rather, it isn't true that each term of the left side is $\geq$ each term of the right side. – Thomas Andrews Nov 19 '12 at 15:54 @ThomasAndrews: Would it be correct to remove the sums and to just look at the sequences rather than the series? – Christian Ivicevic Nov 19 '12 at 15:59 @ChristianIvicevic It's just not true that $$\frac{1}{k+3+2/k}\geq \frac{1}{k}$$, so the comparison test fails to show what you want. You can easily adjust it to show that your series is not absolutely convergent, but what you have here is wrong. – Thomas Andrews Nov 19 '12 at 16:01 1 @ChristianIvicevic Is that even true for $k=1$? Is $\frac{1}{6}\geq \frac{1}{2}$?
– Thomas Andrews Nov 19 '12 at 17:36 ## 2 Answers Good work. One observation: Because you are skilled at computation, perhaps you start computing too quickly. Below are some comments on the problems. $1.$ Very soon $\log k \gt 2$. After that point, the absolute value of the $k$-th term is $\lt \frac{1}{2^k}$. By comparison with the geometric series $\sum_{k=2}^\infty \frac{1}{2^k}$, our series converges absolutely, and hence converges. Your observation that our series is an alternating series is correct. The terms do indeed go to $0$. In fact they go to $0$ very fast, fast enough, by a lot, to ensure absolute convergence. $2.$ Before "simplifying," note that $\sum_1^\infty \frac{1}{n!}$ is a standard series that converges (absolutely). If you do not wish to assume that, do a Ratio Test on that. Similarly, $3\sum_{1}^\infty \frac{1}{(n+1)!}$ converges absolutely, so our series converges. Note that this "splitting" strategy is not always appropriate. $3, 4.$ Ratio Test is appropriate, and well executed. $5.$ The $k$-th term is positive, and $\le \left(\frac{3}{4}\right)^k$, so by comparison with the geometric series $\sum_{k=1}^\infty \left(\frac{3}{4}\right)^k$, our series converges. $6.$ The terms alternate in sign, and have limit $0$. If we can show that the terms (ultimately) go down steadily in absolute value, we will be able to conclude that our series is an alternating series, and therefore converges. One way to show that the terms are steadily decreasing in absolute value is to look at $\frac{k}{(k+1)(k+2)}-\frac{k+1}{(k+2)(k+3)}$. Bring to a common denominator. The numerator is positive, with the unimportant exception of the case $k=1$. Alternately, one can use calculus to show that after a while $\frac{x}{(x+1)(x+2)}$ is decreasing. The series does not converge absolutely. This is because, informally, in the long run the terms behave like $\frac{1}{k}$. This observation can be made formal: Look up the Limit Comparison Test. However, we can do a formal proof in a simpler way. Note that $k+1\le 2k$ and $k+2\le 3k$. It follows that $\frac{k}{(k+1)(k+2)}\ge \frac{k}{(2k)(3k)}=\frac{1}{6}\cdot \frac{1}{k}$. Since $\sum \frac{1}{k}$ diverges, so does $\sum \frac{1}{6}\cdot\frac{1}{k}$, and therefore so does $\sum \frac{k}{(k+1)(k+2)}$. So our series converges, but not absolutely. - Concerning 5 I do not understand what you mean by saying the $k$-th term is positive - shouldn't that include something like "for every even k"? Furthermore does that mean the series does not converge absolutely? – Christian Ivicevic Nov 19 '12 at 17:10 Look at $2+(-1)^k$. This is alternately $1$ and $3$, so it is always positive. Definitely the series converges absolutely. All the terms are positive. If you still doubt it, calculate the first $3$ or $4$ terms. – André Nicolas Nov 19 '12 at 17:19 After a break I will work through your comments and rethink my ideas. Thanks for your effort. – Christian Ivicevic Nov 19 '12 at 17:23 no.5 $\sum_{k=1}^{\infty}(\frac{2+(-1)^{2k}}{4})^{2k}+\sum_{k=1}^{\infty}(\frac{2+(-1)^{2k-1}}{4})^{2k-1}$ -
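Two of these conclusions are easy to sanity-check numerically; a small Python sketch (the closed form $5-2e$ for series 2 comes from splitting as in answer 2, using $\sum_{k\ge1}1/k!=e-1$ and $\sum_{k\ge1}1/(k+1)!=e-2$):

```python
from math import e, factorial

# Series 2: sum_{k>=1} (1/k! - 3/(k+1)!) = (e-1) - 3(e-2) = 5 - 2e.
s = sum(1/factorial(k) - 3/factorial(k+1) for k in range(1, 40))
assert abs(s - (5 - 2*e)) < 1e-12

# Series 3: consecutive-term ratios approach 1/4, as the Ratio Test showed.
a = lambda k: 2 * factorial(k)**2 / factorial(2*k)
print([round(a(k+1) / a(k), 4) for k in (1, 10, 100)])
# [0.3333, 0.2619, 0.2512]  -> tends to 1/4
```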
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 48, "mathjax_display_tex": 7, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9424118399620056, "perplexity_flag": "head"}
http://mathhelpforum.com/pre-calculus/23865-please-help-final-tom-word-problem.html
# Thread: 1. ## Please Help Final Tom!! Word Problem Can someone please explain how to set up the equation for the following word problem: -Express the surface area (S) of a rectangular box with a square base and of volume 100 ft$^3$ in terms of the length (x) of the square base. 2. Originally Posted by B1GG13 Can someone please explain how to set up the equation for the following word problem: -Express the surface area (S) of a rectangular box with a square base and of volume 100 ft$^3$ in terms of the length (x) of the square base. Hello, the volume of the box is calculated by: $V=x^2 \cdot h~\iff~\boxed{h=\frac{V}{x^2}}$ The surface area consists of 2 squares and 4 congruent rectangles: $s=2 \cdot x^2+4 \cdot x\cdot h~\implies~s(x)=2x^2+4\cdot x \cdot \frac{V}{x^2}~\iff~ s(x)=2x^2+4 \cdot \frac Vx$ With $V=100$ ft$^3$ this gives $s(x)=2x^2+\frac{400}{x}$.
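As a tiny sketch in code (the function name is mine), with the given volume plugged in:

```python
def surface_area(x, V=100.0):
    """S(x) = 2x^2 + 4V/x for a square-base box of side x and volume V."""
    return 2 * x**2 + 4 * V / x

assert surface_area(5) == 2 * 25 + 400 / 5  # = 130 square feet
```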
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8865541815757751, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?p=1874946
## Composition of Inverse Functions In Michael C. Gemignani, "Elementary Topology", section 1.1, there is the following exercise: 2) i) If $$f:S \rightarrow T$$ and $$g: T \rightarrow W$$, then $$(g \circ f)^{-1}(A) = f^{-1}(g^{-1}(A))$$ for any $$A \subset W$$. I think the above is only true if A is in the image of g, yet the book says to prove the above. I have what I believe is a counter example. Any comments? I will give people two days to prove the above or post a counter example. After this time I'll post my counter example for further comment. How is $$\mathbf{W}$$ used here - does $$g$$ map to all of $$\mathbf{W}$$ or only into $$\mathbf{W}$$? That could explain the possible confusion. The way the problem has been written you can only prove that f^{-1}[g^{-1}(A)] IS a subset of (g*f)^{-1}(A), i.e. the right-hand side of the above is a subset of the left-hand side. FOR the above to be equal we must have that f(S) = { f(x) : x ∈ S } MUST be a subset of g^{-1}(A). ## Composition of Inverse Functions Quote by John Creighto x ∈ $$f^{-1}(g^{-1}(A))$$ <====> x ∈ S & f(x) ∈ $$g^{-1}(A)$$ ====> x ∈ S & g(f(x)) ∈ A <====> x ∈ $$(g\circ f)^{-1}(A)$$, since $$g^{-1}(A)$$ = { y : y ∈ T & g(y) ∈ A }. In the above proof all arrows are double except one which is single, and for that arrow to become double we must have f(S) $$\subseteq g^{-1}(A)$$, and then we will have $$(g\circ f)^{-1}(A) = f^{-1}(g^{-1}(A))$$ Let $x \in (g \circ f)^{-1}(A)$. Then $x \in S$ with $g(f(x)) \in A$. This means $f(x) \in g^{-1}(A)$ and thus $x \in f^{-1}(g^{-1}(A))$. The other direction has been shown. How are those not all double arrows, evagelos? If $g(f(x)) \in A$, then certainly $f(x) \in g^{-1}(A)$ by definition. We already know that $f(x) \in T$. I'm curious as to what this supposed counter-example is. Quote by Moo Of Doom Your proof looks correct. There appears to be a mistake in my counter example. I'll spend a few futile minutes trying to think up a counterexample anyway. Quote by Moo Of Doom If $g(f(x)) \in A$, then certainly $f(x) \in g^{-1}(A)$ by definition. We already know that $f(x) \in T$. Quote by evagelos What definition? Write it down, please. $g^{-1}(A)$ is defined as the set of all x such that g(x) is in A.
If g(f(x)) is in A, then, by that definition, f(x) is in $g^{-1}(A)$. Write a proof where you justify each of your steps, if you wish. The above proof is not very clear.
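Incidentally, the identity itself is easy to test on finite sets; here is a minimal Python sketch (all names are mine), including target sets $A$ that are not contained in the image of $g$, which was the original worry:

```python
def preimage(f, domain, A):
    """f^{-1}(A) = {x in domain : f(x) in A}."""
    return {x for x in domain if f(x) in A}

S, T, W = {1, 2, 3}, {'a', 'b'}, {10, 20, 30}
f = {1: 'a', 2: 'a', 3: 'b'}.get   # f: S -> T
g = {'a': 10, 'b': 20}.get         # g: T -> W (30 is not in the image of g)

for A in [{10}, {20}, {10, 30}, {30}, set(), W]:
    lhs = preimage(lambda x: g(f(x)), S, A)   # (g o f)^{-1}(A)
    rhs = preimage(f, S, preimage(g, T, A))   # f^{-1}(g^{-1}(A))
    assert lhs == rhs
```

The equality goes through even when $A$ contains points outside the image of $g$: those points simply contribute nothing to either side.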
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 18, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9205405116081238, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2010/09/23/reducibility/?like=1&source=post_flair&_wpnonce=d0199d491e
# The Unapologetic Mathematician ## Reducibility We say that a module is “reducible” if it contains a nontrivial submodule. Thus our examples last time show that the left regular representation is always reducible, since it always contains a copy of the trivial representation as a nontrivial submodule. Notice that we have to be careful about what we mean by each use of “trivial” here. If the $n$-dimensional representation $V$ has a nontrivial $m$-dimensional submodule $W\subseteq V$ — $m\neq0$ and $m\neq n$ — then we can pick a basis $\{w^1,\dots,w^m\}$ of $W$. And then we know that we can extend this to a basis for all of $V$: $\{w^1,\dots,w^m,v^{m+1},\dots,v^n\}$. Now since $W$ is a $G$-invariant subspace of $V$, we find that for any vector $w\in W$ and $g\in G$ the image $\left[\rho(g)\right](w)$ is again a vector in $W$, and can be written out in terms of the $w^i$ basis vectors. In particular, we find $\left[\rho(g)\right](w^i)=\rho_j^iw^j$, and all the coefficients of $v^{m+1}$ through $v^n$ are zero. That is, the matrix of $\rho(g)$ has the following form: $\displaystyle\left(\begin{array}{c|c}\alpha(g)&\beta(g)\\\hline{0}&\gamma(g)\end{array}\right)$ where $\alpha(g)$ is an $m\times m$ matrix, $\beta(g)$ is an $m\times(n-m)$ matrix, and $\gamma(g)$ is an $(n-m)\times(n-m)$ matrix. And, in fact, this same form holds for all $g$. In fact, we can use the rule for block-multiplying matrices to find: $\displaystyle\begin{aligned}\left(\begin{array}{c|c}\alpha(gh)&\beta(gh)\\\hline{0}&\gamma(gh)\end{array}\right)&=\rho(gh)\\&=\rho(g)\rho(h)\\&=\left(\begin{array}{c|c}\alpha(g)&\beta(g)\\\hline{0}&\gamma(g)\end{array}\right)\left(\begin{array}{c|c}\alpha(h)&\beta(h)\\\hline{0}&\gamma(h)\end{array}\right)\\&=\left(\begin{array}{c|c}\alpha(g)\alpha(h)&\alpha(g)\beta(h)+\beta(g)\gamma(h)\\\hline{0}&\gamma(g)\gamma(h)\end{array}\right)\end{aligned}$ and we see that $\alpha(g)$ actually provides us with the matrix for the representation we get when restricting $\rho$ to the submodule $W$. This shows us that the converse is also true: if we can find a basis for $V$ so that the matrix $\rho(g)$ has the above form for every $g\in G$, then the subspace spanned by the first $m$ basis vectors is $G$-invariant, and so it gives us a subrepresentation. As an example, consider the defining representation $V$ of $S_3$, which is a permutation representation arising from the action of $S_3$ on the set $\{1,2,3\}$. This representation comes with the standard basis $\{\mathbf{1},\mathbf{2},\mathbf{3}\}$, and it’s easy to see that every permutation leaves the vector $\mathbf{1}+\mathbf{2}+\mathbf{3}$ — along with the subspace $W$ that it spans — fixed. Thus $W$ carries a copy of the trivial representation as a submodule of $V$. We can take the given vector as a basis and throw in two others to get a new basis for $V$: $\{\mathbf{1}+\mathbf{2}+\mathbf{3},\mathbf{2},\mathbf{3}\}$. Now we can take a permutation — say $(1\,2)$ — and calculate its action in terms of the new basis: $\displaystyle\begin{aligned}\left[\rho((1\,2))\right](\mathbf{1}+\mathbf{2}+\mathbf{3})&=\mathbf{1}+\mathbf{2}+\mathbf{3}\\\left[\rho((1\,2))\right](\mathbf{2})&=\mathbf{1}=(\mathbf{1}+\mathbf{2}+\mathbf{3})-\mathbf{2}-\mathbf{3}\\\left[\rho((1\,2))\right](\mathbf{3})&=\mathbf{3}\end{aligned}$ The others all work similarly. 
Then we can write these out as matrices: $\displaystyle\begin{aligned}\rho(e)&=\begin{pmatrix}1&0&0\\{0}&1&0\\{0}&0&1\end{pmatrix}\\\rho((1\,2))&=\begin{pmatrix}1&1&0\\{0}&-1&0\\{0}&-1&1\end{pmatrix}\\\rho((1\,3))&=\begin{pmatrix}1&0&1\\{0}&1&-1\\{0}&0&-1\end{pmatrix}\\\rho((2\,3))&=\begin{pmatrix}1&0&0\\{0}&0&1\\{0}&1&0\end{pmatrix}\\\rho((1\,2\,3))&=\begin{pmatrix}1&0&1\\{0}&0&-1\\{0}&1&-1\end{pmatrix}\\\rho((1\,3\,2))&=\begin{pmatrix}1&1&0\\{0}&-1&1\\{0}&-1&0\end{pmatrix}\end{aligned}$ Notice that these all have the required form: $\displaystyle\left(\begin{array}{c|cc}1&\ast&\ast\\\hline{0}&\ast&\ast\\{0}&\ast&\ast\end{array}\right)$ Representations that are not reducible — those modules that have no nontrivial submodules — are called "irreducible representations", or sometimes "irreps" for short. They're also called "simple" modules, using the general term from category theory for an object with no nontrivial subobjects.
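A short numerical sketch of this computation (numpy; the construction is mine): build all six permutation matrices of $S_3$, change to the basis $\{\mathbf{1}+\mathbf{2}+\mathbf{3},\mathbf{2},\mathbf{3}\}$, and confirm that the first column is always $(1,0,0)^T$, i.e. the block form above:

```python
import numpy as np
from itertools import permutations

# Columns of P are the new basis vectors written in the standard basis.
P = np.array([[1., 0., 0.],
              [1., 1., 0.],
              [1., 0., 1.]])
P_inv = np.linalg.inv(P)

for perm in permutations(range(3)):
    M = np.zeros((3, 3))
    for i, j in enumerate(perm):
        M[j, i] = 1.0              # permutation matrix: e_i -> e_{perm(i)}
    R = P_inv @ M @ P              # the same transformation in the new basis
    assert np.allclose(R[:, 0], [1., 0., 0.])  # zeros below the 1x1 block
```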
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 54, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9146178364753723, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/26420/cup-product-well-definedness
# cup product well definedness So the cup product is not well defined over co-chain groups, but all the books claim it is well defined over co-homology groups. The only thing I am not clear on is invariance under ordering/re-ordering of simplices when we go to the co-homology level. Every book seems to gloss over this, and after doing a few examples, I can't seem to figure out how to get this to work out right. Can someone fill me in? - How is the cup product "not well defined over cochain groups"? – Mariano Suárez-Alvarez♦ Mar 14 '11 at 3:23 ## 2 Answers Strictly speaking, the cup product is not commutative, though it is commutative up to sign on the level of cohomology. There is an abstract way of seeing this: namely, we can use the method of acyclic models. Consider the following two functors from the category of spaces to the category of chain complexes. The first is $X \mapsto C_*(X \times X)$; the second is $X \mapsto C_*(X)\otimes C_*(X)$. Since these are free and acyclic functors on the subset of standard simplices (this means that a) both can be represented as a sum of free abelian groups on sets which are representable functors in $X$, represented by simplices, and b) evaluated on a simplex, they lead to acyclic complexes), there is a natural chain equivalence between the two: $$C_*(X \times X) \simeq C_*(X) \otimes C_*(X)$$ which itself is unique up to chain homotopy. This is the acyclic model theorem (as in Spanier, for instance). Now the category of chain complexes over a commutative ring is not just an abelian category; it is a monoidal category. We can tensor two chain complexes and get a new chain complex. Moreover, it is a symmetric monoidal category because there is an isomorphism $A_* \otimes B_* \simeq B_* \otimes A_*$ for chain complexes $A, B$. Thus if we are given one chain equivalence (fixed throughout the following) $C_*(X \times X) \simeq C_*(X) \otimes C_*(X)$, we get another by composing it with the swap map on the latter. These are both natural in $X$ and so, by the uniqueness (up to chain homotopy) in the acyclic model theorem, we find that the two maps $$C_*(X \times X) \rightrightarrows C_*(X) \otimes C_*(X)$$ are naturally chain homotopic. But the dual of this means that the two maps $$C^*(X) \otimes C^*(X) \rightrightarrows C^*(X \times X)$$ are naturally chain homotopic. On the level of cohomology, the two maps $H^*(X) \otimes H^*(X) \rightrightarrows H^*(X \times X)$ are thus equal. Now the map $C_*(X \times X) \to C_*(X) \otimes C_*(X)$ is precisely the homology cross product, and its dual is the cohomology cross product. So we have seen that if we consider the cross-product map $H^*(X) \otimes H^*(X) \to H^*(X \times X)$, it is invariant under switching the two factors. You might now object that I said that the cross-product (and thus the cup-product, which is obtained from the cross product by pulling back by the diagonal) is skew-commutative, not commutative. This comes from a feature of how the tensor product of complexes is actually defined: as a result, when you define the swap morphism, you have to introduce a sign (for it to be a chain map). - Akhil, thank you for the time, but I'm not sure you have answered my question. Very simply, for example a simplicial complex with maximal simplex [a,b,c] how is $(\alpha \cup \beta)([a,b,c]) = (\alpha \cup \beta)([c,a,b])$? Similarly if by $[a,b,c]$ I mean the cohomology class to which $[a,b,c]$ belongs.
I'm asking about this in the general sense, but I am looking for a specific meaningful computation to explain it. – rhl Mar 13 '11 at 15:39 @rhl: Dear rhl, for instance, if you mean that $[a,b,c]$ is the standard 2-simplex, then the cohomology groups are all zero. (In general, it will not be true that $(\alpha \cup \beta)([a, b, c]) = \pm (\alpha \cup \beta)([c, a, b])$; the point is that the two will differ by a coboundary, which can be constructed explicitly via the acyclic model business; of course, in the case of the standard 2-simplex both will already be coboundaries.) You can, however, get explicit examples in simplicial cohomology by using the torus. – Akhil Mathew Mar 13 '11 at 15:45 Right, ok. I asked about whether the cup product is well defined; here is why I am confused. You wrote: "Strictly speaking, the cup product is not commutative, though it is commutative up to sign on the level of cohomology. There is an abstract way of seeing this..." The cup product being commutative should have nothing to do with it being well defined. Commutativity of the product should be talking about whether $a \smile b = b \smile a$. This seems irrelevant for a conversation about the cup product being well defined. So again, what I want to know is why the cup product is well defined. What I am asking is: is it the case that $(\alpha \smile \beta)(a)$ is the same as $(\alpha \smile \beta)(b)$ whenever $a$ is equivalent to $b$, where equivalence is as co-homology classes? Now in your second comment, you have said that it is not the case that the cup product is well defined, and it's again unclear if you mean over co-chain groups or co-homology groups. In either case, what I want to know is: When is the cup product well defined? Does your definition of well defined differ from the classical one? I do not at this point want to underst - Dear rhl, I apologize for having misunderstood your question (I interpreted the discussion of "reordering" as an indication that you were confused about the commutativity). The cup product is defined as a natural transformation of functors $H^p(X, M) \otimes_R H^q(X, N) \to H^{p+q}(X, M \otimes_R N)$ whenever $M, N$ are modules over a commutative ring $R$. This follows because there is (as I discuss in my answer) a natural transformation of chain complexes $C_*(X, M) \otimes_R C_*(X, N) \to C_*(X \times X, M \otimes_R N)$ (which is actually a chain equivalence)... – Akhil Mathew Mar 14 '11 at 1:00 ...and if we dualize this, we get a map on the cochain complexes in the inverse direction. Since cohomology is functorial on the category of cochain complexes, we get the natural transformation that is the cup product. (Also we are using the fact that whenever $Q, Q'$ are complexes, there is a natural morphism $H^p(Q) \otimes H^q(Q') \to H^{p+q}(Q \otimes Q')$; this is easy to check.) – Akhil Mathew Mar 14 '11 at 1:02
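For reference, the standard cochain-level formula under discussion, for $\alpha \in C^p(X)$, $\beta \in C^q(X)$, and a singular simplex $\sigma : [v_0,\dots,v_{p+q}] \to X$, is $$(\alpha \smile \beta)(\sigma) = \alpha\left(\sigma|_{[v_0,\dots,v_p]}\right)\beta\left(\sigma|_{[v_p,\dots,v_{p+q}]}\right),$$ which visibly depends on the chosen ordering of the vertices. The content of the answer above is that different choices yield naturally chain-homotopic products, so the induced product on cohomology classes does not depend on the choice.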
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 26, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9473825693130493, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2009/08/17/the-singular-value-decomposition/?like=1&source=post_flair&_wpnonce=eb8d9d939c
# The Unapologetic Mathematician

## The Singular Value Decomposition

Now the real and complex spectral theorems give nice decompositions of self-adjoint and normal transformations, respectively. Each one is of a similar form

$\displaystyle\begin{aligned}S&=O\Lambda O^*\\H&=U\Lambda U^*\end{aligned}$

where $O$ is orthogonal, $U$ is unitary, and $\Lambda$ (in either case) is diagonal. What we want is a similar decomposition for any transformation. And, in fact, we'll get one that even works for transformations between different inner product spaces.

So let's say we've got a transformation $M:A\rightarrow B$ (we're going to want to save $U$ and $V$ to denote transformations). We also have its adjoint $M^*:B\rightarrow A$. Then $M^*M:A\rightarrow A$ is positive-semidefinite (and thus self-adjoint and normal), and so the spectral theorem applies. There must be a unitary transformation $V:A\rightarrow A$ (orthogonal, if we're working with real vector spaces) so that

$\displaystyle V^*M^*MV=\begin{pmatrix}D&0\\{0}&0\end{pmatrix}$

where $D$ is a diagonal matrix with strictly positive entries. That is, we can break $A$ up as the direct sum $A=A_1\oplus A_2$. The diagonal transformation $D:A_1\rightarrow A_1$ is positive-definite, while the restriction of $V^*M^*MV$ to $A_2$ is the zero transformation.

We will restrict $V$ to each of these subspaces, giving $V_1:A_1\rightarrow A$ and $V_2:A_2\rightarrow A$, along with their adjoints $V_1^*:A\rightarrow A_1$ and $V_2^*:A\rightarrow A_2$. Then we can write

$\displaystyle\begin{pmatrix}V_1^*\\V_2^*\end{pmatrix}M^*M\begin{pmatrix}V_1&V_2\end{pmatrix}=\begin{pmatrix}V_1^*M^*MV_1&V_1^*M^*MV_2\\V_2^*M^*MV_1&V_2^*M^*MV_2\end{pmatrix}=\begin{pmatrix}D&0\\{0}&0\end{pmatrix}$

From this we conclude both that $V_1^*M^*MV_1=D$ and that $MV_2=0$ (the latter because $(MV_2)^*(MV_2)=V_2^*M^*MV_2=0$).

We define $U_1=MV_1D^{-\frac{1}{2}}:A_1\rightarrow B$, where we get the last matrix by just taking the inverse of the square root of each of the diagonal entries of $D$ (this is part of why diagonal transformations are so nice to work with). Then we can calculate

$\displaystyle\begin{aligned}U_1D^\frac{1}{2}V_1^*&=MV_1D^{-\frac{1}{2}}D^\frac{1}{2}V_1^*\\&=MV_1V_1^*\\&=MV_1V_1^*+MV_2V_2^*\\&=M\left(V_1V_1^*+V_2V_2^*\right)\\&=MVV^*=M\end{aligned}$

This is good, but we don't yet have unitary matrices in our decomposition. We do know that $V_1^*V_1=1_{A_1}$, and we can check that

$\displaystyle\begin{aligned}U_1^*U_1&=\left(MV_1D^{-\frac{1}{2}}\right)^*MV_1D^{-\frac{1}{2}}\\&=D^{-\frac{1}{2}}V_1^*M^*MV_1D^{-\frac{1}{2}}\\&=D^{-\frac{1}{2}}DD^{-\frac{1}{2}}=1_{A_1}\end{aligned}$

Now we know that we can use $V_2:A_2\rightarrow A$ to "fill out" $V_1$ to give the unitary transformation $V$. That is, $V_1^*V_1=1_{A_1}$ (as we just noted), $V_2^*V_2=1_{A_2}$ (similarly), $V_1^*V_2$ and $V_2^*V_1$ are both the appropriate zero transformation, and $V_1V_1^*+V_2V_2^*=1_A$. Notice that these are exactly stating that the adjoints $V_1^*$ and $V_2^*$ are the projection operators corresponding to the inclusions $V_1$ and $V_2$ in a direct sum representation of $A$ as $A_1\oplus A_2$. It's clear from general principles that there must be some projections, but it's the unitarity of $V$ that makes the projections be exactly the adjoints of the inclusions.

What we need to do now is to supply a corresponding $U_2:B_2\rightarrow B$ that will similarly "fill out" $U_1$ to a unitary transformation $U$. But we know that we can do this!
Pick an orthonormal basis of $A_1$ and hit it with $U_1$ to get a bunch of orthonormal vectors in $B$ (orthonormal because $U_1^*U_1=1_{A_1}$). Then fill these out to an orthonormal basis of all of $B$. Just set $B_2$ to be the span of all the new basis vectors, which is the orthogonal complement of the image of $U_1$, and let $U_2$ be the inclusion of $B_2$ into $B$. We can then combine these to get a unitary transformation

$\displaystyle U=\begin{pmatrix}U_1&U_2\end{pmatrix}$

Finally, we define

$\displaystyle\Sigma=\begin{pmatrix}D^\frac{1}{2}&0\\{0}&0\end{pmatrix}$

where there are as many zero rows in $\Sigma$ as we needed to add to fill out the basis of $B$ (the dimension of $B_2$). I say that $U\Sigma V^*$ is our desired decomposition. Indeed, we can calculate

$\displaystyle\begin{aligned}U\Sigma V^*&=\begin{pmatrix}U_1&U_2\end{pmatrix}\begin{pmatrix}D^\frac{1}{2}&0\\{0}&0\end{pmatrix}\begin{pmatrix}V_1^*\\V_2^*\end{pmatrix}\\&=\begin{pmatrix}U_1D^\frac{1}{2}&0\end{pmatrix}\begin{pmatrix}V_1^*\\V_2^*\end{pmatrix}\\&=U_1D^\frac{1}{2}V_1^*=M\end{aligned}$

where $U$ and $V$ are unitary on $B$ and $A$, respectively, and $\Sigma$ is a "diagonal" transformation (not strictly speaking diagonal in the case where $A$ and $B$ have different dimensions).

Posted by John Armstrong | Algebra, Linear Algebra
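To see the construction in coordinates, here is a minimal numerical sketch in Python/NumPy that follows the steps above. The test matrix, the rank tolerance `1e-12`, and the use of `eigh` to stand in for the spectral theorem are all choices made purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 3))     # M : A -> B with dim A = 3, dim B = 5

# Spectral theorem for the positive-semidefinite M* M: V^* (M^* M) V is diagonal.
evals, V = np.linalg.eigh(M.T @ M)
order = np.argsort(evals)[::-1]     # sort descending so the positive block D comes first
evals, V = evals[order], V[:, order]

r = int(np.sum(evals > 1e-12))      # dim A_1: number of strictly positive eigenvalues
V1 = V[:, :r]
D_half = np.sqrt(evals[:r])         # diagonal entries of D^{1/2}

# U_1 = M V_1 D^{-1/2}; its columns are orthonormal since U_1^* U_1 = 1_{A_1}.
U1 = (M @ V1) / D_half

# Fill U_1 out to a unitary U on B: complete QR supplies an orthonormal basis
# of the orthogonal complement B_2 of the image of U_1.
Q, _ = np.linalg.qr(U1, mode='complete')
U = np.hstack([U1, Q[:, r:]])

# Sigma has D^{1/2} in the top-left block and zero rows below.
Sigma = np.zeros((5, 3))
Sigma[:r, :r] = np.diag(D_half)

print(np.allclose(U @ Sigma @ V.T, M))              # True: M = U Sigma V^*
print(np.allclose(U.T @ U, np.eye(5)),
      np.allclose(V.T @ V, np.eye(3)))              # True True: U and V are unitary
```

As a sanity check, `D_half` here agrees (up to ordering) with `np.linalg.svd(M, compute_uv=False)`, since the singular values are exactly the square roots of the eigenvalues of $M^*M$.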
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 72, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9138296246528625, "perplexity_flag": "head"}
http://mathhelpforum.com/statistics/27665-quick-coin-flip-question.html
# Thread:

1. ## quick coin flip question

A professor flips two balanced coins. Both fall to the floor and roll under his desk. A student informs the professor that he can see only one coin and it shows tails. What is the probability that the other coin is also tails?

Is it 1/2? I said it was 1/2 because they are independent events. If the question asked what is the probability of getting the combination T-T, it would be 1/4. Is this right?

2. Originally Posted by xfyz

A professor flips two balanced coins. Both fall to the floor and roll under his desk. A student informs the professor that he can see only one coin and it shows tails. What is the probability that the other coin is also tails? Is it 1/2? I said it was 1/2 because they are independent events. If the question asked what is the probability of getting the combination T-T, it would be 1/4. Is this right?

The probability of a combination is 25%, but the probability of the combination (T-T) given a T is 50%, because, as you say, the events are independent.

3. Hello, xfyz! This is a classic trick question . . .

A professor flips two balanced coins. Both fall to the floor and roll under his desk. A student informs the professor that he can see only one coin and it shows tails. What is the probability that the other coin is also tails?

When two coins are flipped, there are four possible outcomes: $HH,\:HT,\:TH,\:TT$

Since the student saw a Tail, there are only three possible situations: $HT,\:TH,\:TT$

Among them, only one has both Tails. The probability is ${\color{blue}\frac{1}{3}}$

~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~

This is the basis for a "Sucker Bet". I have four playing cards, one of each suit, face down on the table. You pick any two of them. You bet that the cards are of the same color; I bet that they have opposite colors. And we bet "even money". Are you being hustled? Let's reason it out . . .

There are only two outcomes: the colors match or they do not match. Since these outcomes are equally likely, the bet is "fair".

Okay, there are four outcomes: (Red, Red), (Red, Black), (Black, Red), (Black, Black). Since we each win half the time, the bet is fair.

The above explanations are comforting and reasonable . . . but wrong!

There are six outcomes:

$(\heartsuit\,\diamondsuit),\;(\heartsuit\,\spadesuit),\;(\heartsuit\,\clubsuit),\;(\diamondsuit\,\spadesuit),\;(\diamondsuit\,\clubsuit),\;(\spadesuit\,\clubsuit)$

And in only two of them, $(\heartsuit\,\diamondsuit),\;(\spadesuit\,\clubsuit)$, do the colors match. Your probability of winning is $\frac{2}{6}\:=\:\frac{1}{3}$.

4. Originally Posted by Soroban

This is a classic trick question . . .

you know a lot of those, don't you?

5. Originally Posted by Soroban

This is the basis for a "Sucker Bet". I have four playing cards, one of each suit, face down on the table. You pick any two of them. You bet that the cards are of the same color; I bet that they have opposite colors. And we bet "even money". Are you being hustled? Let's reason it out . . . There are only two outcomes: the colors match or they do not match. Since these outcomes are equally likely, the bet is "fair". Okay, there are four outcomes: (Red, Red), (Red, Black), (Black, Red), (Black, Black). Since we each win half the time, the bet is fair. The above explanations are comforting and reasonable . . . but wrong! There are six outcomes:
$(\heartsuit\,\diamondsuit),\;(\heartsuit\,\spadesuit),\;(\heartsuit\,\clubsuit),\;(\diamondsuit\,\spadesuit),\;(\diamondsuit\,\clubsuit),\;(\spadesuit\,\clubsuit)$

And in only two of them, $(\heartsuit\,\diamondsuit),\;(\spadesuit\,\clubsuit)$, do the colors match. Your probability of winning is $\frac{2}{6}\:=\:\frac{1}{3}$.

I'll keep that in mind next time I go gambling, thanks.

6. Hi, Jhevon!

You know a lot of those, don't you?

Yes, I do . . . Over the years, I've been surprised (or been had) by dozens of these "sucker" bets. The first was probably the "Birthday Paradox".

A similar bet can be made with a friend while standing on a street corner. Bet your friend that, among the next 15 license plates that go by, some two will end in the same two-digit number. [You might agree to disregard those ending in letters.]

Argument: "Hey, there are a hundred different two-digit numbers. What are the chances?"

Super Hustle: the same bet with 18 cars. The odds are about 4-to-1 in your favor.

7. Originally Posted by Soroban

Hello, xfyz! This is a classic trick question . . . When two coins are flipped, there are four possible outcomes: $HH,\:HT,\:TH,\:TT$. Since the student saw a Tail, there are only three possible situations: $HT,\:TH,\:TT$. Among them, only one has both Tails. The probability is ${\color{blue}\frac{1}{3}}$.

I was just posed this problem today, and a quick google search led me to this thread. I still can't seem to get my head around the explanation presented to me, both by the guy who asked the question and by Soroban. Can somebody explain where my train of thought is going wrong here?

According to Soroban's explanation, if the student can see one coin, which is tails, that eliminates the HH possibility, leaving HT, TH, and TT. Out of those possibilities, there is a 2/3 chance of the other coin being heads, and only 1/3 of it being tails. HOWEVER, to me this implies that you are choosing which coin you want to see AFTER they've been flipped. In other words, you will flip a HT/TH combo 50% of the time, and when this happens, the coin you see WILL ALWAYS BE THE ONE SHOWING TAILS. This simply does not make sense to me. You will flip a HT/TH combo 50% of the time, but when that happens, you will only see the tails coin half the time, and the other half you will see heads, so that situation doesn't count in this riddle.

The way I see it, 50% of the time the student will see heads, thus removing those situations from consideration (this would be the HH and HT flips). The other 50% of the time, the student would see tails, and this would be the TT and TH flips. Of those where the student sees tails, the other coin being tails would have a 50/50 chance. Can someone clearly explain to me how I'm misinterpreting this?

8. I was going with the fact that a known coin showed up as Tails... hmmm!
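The disagreement in this thread is really about what information the student's report carries, and a quick simulation makes the two readings concrete. A minimal Monte Carlo sketch in Python (the trial count and the two conditioning rules are the only modelling choices made here):

```python
import random

trials = 200_000
at_least_one = [0, 0]  # [trials where at least one coin is T, of those, # with both T]
seen_coin = [0, 0]     # [trials where the visible coin is T, of those, # hidden coin T]

for _ in range(trials):
    a, b = random.choice('HT'), random.choice('HT')

    # Reading 1 (post 3): we only learn "at least one of the coins shows tails".
    if 'T' in (a, b):
        at_least_one[0] += 1
        at_least_one[1] += (a == 'T' and b == 'T')

    # Reading 2 (post 7): the student sees one coin chosen at random, and it is tails.
    seen, hidden = random.sample([a, b], 2)
    if seen == 'T':
        seen_coin[0] += 1
        seen_coin[1] += (hidden == 'T')

print(at_least_one[1] / at_least_one[0])  # approx 1/3
print(seen_coin[1] / seen_coin[0])        # approx 1/2
```

Under "at least one coin shows tails" the conditional probability comes out near 1/3; under "a randomly visible coin shows tails" it comes out near 1/2, which is exactly the distinction post 7 is drawing.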
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9582725763320923, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/109443/fourier-transform-of-eit-xi-alpha/109457
## Fourier transform of $e^{it|\xi|^{\alpha}}$

Consider the Fourier transform of $e^{it|\xi|^{2\alpha}}$ ($\alpha>0$) in $\mathbb{R}^n$, and let $K_{\alpha}=\mathcal{F}(e^{it|\xi|^{2\alpha}})$, so $K_\alpha$ is a tempered distribution. Now I want to know if there is an explicit expression for $K_\alpha$. For the simplest case, namely $\alpha=1$, it is well known that
$$K_1=(4\pi it)^{-\frac{n}{2}}e^{-\frac{|x|^2}{4it}}.$$
Another special case is $\alpha=\frac{1}{2}$: since we know that $\mathcal{F}e^{-t|\xi|}=C_{n}\frac{t}{(t^{2}+|x|^2)^{\frac{n+1}{2}}}$ for $t>0$, substituting $-it$ for $t$ gives, at least formally,
$$K_{\frac{1}{2}}=C_{n}\frac{-it}{(|x|^2-t^{2})^{\frac{n+1}{2}}}.$$
My question is: what about general $\alpha$? So far I know that when $0<\alpha<\frac{1}{2}$, $\alpha=\frac{1}{2}$, and $\alpha>\frac{1}{2}$, the singularity of $K_\alpha$ lies at $0$, at $|x|=t$, and at $\infty$, respectively.

## 2 Answers

For $t=i$ there is a formula involving an integral of a Bessel function, so I doubt there is a simple closed formula for $K_{\alpha}$ in general. You can find the formula I mentioned on the first page of the article "Some theorems on stable processes" by Blumenthal and Getoor. Also see this related MO question.

- @Shanlin: I could not get your link to work when I click on it. – Abdelmalek Abdesselam Oct 12 at 13:57
- @Abdelmalek Abdesselam: sorry for that; see springerlink.com/content/h4g567q026617364, Proposition 2.1, for the transform of $e^{-|\xi|^\alpha}$ – Shanlin Huang Oct 12 at 14:25
- @Shanlin: the link works now. The paper you linked to has the asymptotic expansion, not the exact calculation, which I think goes back to Polya. – Abdelmalek Abdesselam Oct 12 at 15:44

We consider $t=1$ for simplicity and write
$$K_{\alpha}=\mathcal{F}\left(\eta(|\xi|)\, e^{i|\xi|^{2\alpha}}\right)+\mathcal{F}\left((1-\eta(|\xi|))\, e^{i|\xi|^{2\alpha}}\right),$$
where $\eta\in C^{\infty}(\mathbb{R})$, $\eta=0$ near $0$, and $\eta(t)=1$ when $t\ge 1$. The second term on the right-hand side is smooth and has good behaviour at $\infty$, so we look at the first term. A. Miyachi's paper "On some singular Fourier multipliers" (see http://repository.dl.itc.u-tokyo.ac.jp/dspace/bitstream/2261/6297/1/jfs280206.pdf) contains a thorough analysis of it. When $0<\alpha<\frac{1}{2}$, we have $K_\alpha\in C^{\infty}(\mathbb{R}^{n}\setminus\{0\})$ and
$$K_{\alpha}=C|x|^{\frac{n(\alpha-1)}{1-2\alpha}}e^{iB|x|^{-\frac{2\alpha}{1-2\alpha}}}+o\left(|x|^{\frac{n(\alpha-1)}{1-2\alpha}}\right)\quad \text{as } |x|\to 0.$$
When $\alpha>\frac{1}{2}$, $K_\alpha$ is smooth throughout $\mathbb{R}^{n}$, and
$$K_{\alpha}=C|x|^{\frac{n(\alpha-1)}{1-2\alpha}}e^{iB|x|^{-\frac{2\alpha}{1-2\alpha}}}+o\left(|x|^{\frac{n(\alpha-1)}{1-2\alpha}}\right)\quad \text{as } |x|\to\infty.$$
In this case we can see that, unlike the case $\alpha=1$, for $\alpha>1$ the kernel $K_{\alpha}$ has decay of order $|x|^{-\frac{n(\alpha-1)}{2\alpha-1}}$ at $\infty$.
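For what it's worth, the exponents in these asymptotics can be predicted by a standard stationary-phase heuristic (a sketch only, ignoring constants and the normalization of $\mathcal{F}$): with $t=1$ and $\beta=2\alpha$, up to normalization
$$K_{\alpha}(x)=\int_{\mathbb{R}^{n}} e^{i(|\xi|^{\beta}+x\cdot\xi)}\,d\xi .$$
The phase $\phi(\xi)=|\xi|^{\beta}+x\cdot\xi$ is stationary where $\beta|\xi|^{\beta-1}=|x|$, i.e. $|\xi_{c}|\sim|x|^{\frac{1}{\beta-1}}$, and its value there has size $|x|^{\frac{\beta}{\beta-1}}=|x|^{-\frac{2\alpha}{1-2\alpha}}$, which is the oscillating factor $e^{iB|x|^{-\frac{2\alpha}{1-2\alpha}}}$ above. The Hessian of $\phi$ at $\xi_{c}$ has $n$ eigenvalues of size $|\xi_{c}|^{\beta-2}$, so the stationary-phase amplitude is
$$\left|\det\nabla^{2}\phi(\xi_{c})\right|^{-1/2}\sim|\xi_{c}|^{-\frac{n(\beta-2)}{2}}\sim|x|^{-\frac{n(\beta-2)}{2(\beta-1)}}=|x|^{\frac{n(\alpha-1)}{1-2\alpha}},$$
matching the leading term in both regimes (as $|x|\to 0$ for $0<\alpha<\frac{1}{2}$, and as $|x|\to\infty$ for $\alpha>\frac{1}{2}$).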
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 40, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8926782011985779, "perplexity_flag": "middle"}