http://math.stackexchange.com/questions/tagged/vector-bundles+representation-theory
# Tagged Questions

### Representation of homogeneous vector bundle = induced representation (0 answers, 28 views)

Hello friends of mathematics :) I have a question about the induced representation. Suppose $G$ is a group and $H$ a subgroup of $G$. Suppose $\rho$ is a representation of $H$ on the vector space $V$, ...

### On the Frobenius reciprocity theorem (1 answer, 200 views)

The classical Frobenius reciprocity theorem asserts the following: if $W$ is a representation of $H$, and $U$ a representation of $G$, then (\chi_{Ind W},\chi_{U})_{G}=(\chi_{W},\chi_{Res ...
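For reference, the classical character-theoretic identity that the truncated preview is quoting is presumably the standard one: for $W$ a representation of $H \le G$ and $U$ a representation of $G$,

$$(\chi_{\operatorname{Ind} W},\chi_{U})_{G}=(\chi_{W},\chi_{\operatorname{Res} U})_{H}.$$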
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8815034627914429, "perplexity_flag": "head"}
http://mathoverflow.net/questions/81730/regular-homotopy
## regular homotopy

Hello. I am trying to give a seminar at my university about the Whitney-Graustein theorem. There are many elementary proofs of it, including Whitney's paper. The conclusion is that the set of connected components ($\pi_0$) of regular immersions $S^1 \rightarrow \mathbb{R}^2$, modulo regular homotopy, is in bijection with $\mathbb{Z}$. Is there an elementary way to find the fundamental group of the space of immersions? There are many books and papers that treat fundamental groups of mapping spaces, including Smale's and Michor's, but they are far from elementary and the audience consists of undergraduates. Any idea would be much appreciated.

- An obvious question: Whitney's paper is 8 pages long, and at the time there was probably no big machine available -- have you tried looking at it? – Igor Rivin Nov 23 2011 at 18:28
- Yes, I have read it, but maybe you misunderstood my question. In Whitney's paper, he computes the connected components of the space of immersions of the circle, not the fundamental group. – nikitas Nov 23 2011 at 18:57
- Perhaps knowing the answer will help you outline a proof. It is known by Smale's work that the space of immersed loops in the plane is homotopy equivalent to the free loop space of the unit tangent bundle of the plane. In other words, the homotopy type is that of $S^1\times\mathbb{Z}$. Therefore, each component of the space of immersions has $\mathbb{Z}=\pi_1(S^1)$ as its fundamental group. Moreover, given a loop $\gamma$, as you rotate the loop you generate a based loop $\Gamma$ in the space of immersed loops. This is a generator of the fundamental group of the component containing $\gamma$. – Somnath Basu Nov 23 2011 at 19:02
- Ah, ok, I did misread your question... – Igor Rivin Nov 23 2011 at 19:18
- @Basu, thank you, but I have read Smale's paper; the thing is I want to show it to the classroom with elementary methods if possible, not just say "in his paper Smale defined certain fibrations etc." It is the technical part, not the presentation. – nikitas Nov 23 2011 at 22:46

## 1 Answer

See Theorem 2.10 (with elementary proof) for the case of rotation index $\ne 0$ in the paper: Peter W. Michor; David Mumford: Riemannian geometries on spaces of plane curves. J. Eur. Math. Soc. (JEMS) 8 (2006), 1-48 (pdf). For rotation index $=0$ (with a somewhat surprising answer) see the paper: Hiroki Kodama, Peter W. Michor: The homotopy type of the space of degree 0 immersed curves. Revista Matemática Complutense 19 (2006), no. 1, 227-234 (pdf).
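For the record (a standard statement, not taken from the thread): the bijection in the Whitney-Graustein theorem is realized by the turning number

$$\tau(\gamma)\;=\;\frac{1}{2\pi}\oint_{S^1}\mathrm{d}\,\arg\gamma'(t)\;\in\;\mathbb{Z},$$

and two immersions $S^1\to\mathbb{R}^2$ are regularly homotopic if and only if their turning numbers agree.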
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.917669415473938, "perplexity_flag": "head"}
http://mathematica.stackexchange.com/questions/18235/simplification-of-double-symbolic-sums-containing-a-discretedelta-without-explic?answertab=active
# Simplification of double symbolic sums containing a DiscreteDelta without explicit summation range

I am trying to get Mathematica to automatically do simplifications like the following: $$\sum\limits_{q}^{q\in qV}\sum\limits_{q'}^{q'\in q'V}{f(q)g(q')\delta(q-q')}=\sum_{q}^{q\in qV}{f(q)g(q)},$$ where the ranges of values of $q$ and $q'$ are denoted by $qV$ and $q'V$ respectively, and are the same. With pen and paper it is really easy to do this kind of simplification, because I know that both variables $q$ and $q'$ have the same range and that the discrete delta function $\delta(q-q')$ kills one of the sums. I have solved it, but it seems to me that I haven't done it in the most elegant/efficient way. My solution is:

````Clear[qV, qpV, f, g, MyDiscreteDelta]
a = Sum[f[q] Sum[g[qp] MyDiscreteDelta[q, qp], {qp, qpV}], {q, qV}]
b = % /. {qpV -> {q, q1, q2, q3, q4}}
c = % /. {MyDiscreteDelta[x_, y_] -> HoldForm[If[x =!= y, 0, 1]]}
d = ReleaseHold[%]
````

$a$ defines the sum, and I deliberately leave the summation range undefined: even though I know the real range of numbers I will be summing over, it is too large to put in directly and, more importantly, at this stage of the calculation I don't care about it. I am just interested in simplifying the result as much as possible, i.e., getting rid of one of the sums by using the delta function. I am not using Mathematica's built-in DiscreteDelta because a) it takes too long and b) when used symbolically it is left unevaluated. The first thing I need to do to get rid of one of the sums is to give it a range to sum through; in doing so, I am actually giving it a list of possible values that it could take (assuming that in the general case I will have more than two summations and more than one delta function in many of the variables). That is done in $b$. $c$ changes the empty definition of MyDiscreteDelta, but I need to hold it, as otherwise it would all evaluate to $0$. The last step, $d$, evaluates the expression and gives the result I want. This works, but I was wondering if there is a simpler way of doing it. Thanks in advance.

- Why not use the built-in `KroneckerDelta`? – b.gatessucks Jan 22 at 12:56
- I did a test, changing `MyDiscreteDelta[q,qp]` to `KroneckerDelta[q,qp]` in (a) and only evaluating (a) and (b). It is very slow. It does simplify to 1 when q==qp, but in the other cases it is just left unevaluated, the same as DiscreteDelta. I can individually reproduce the expected behaviour if I write `Assuming[q != qp, Simplify[KroneckerDelta[q, qp]]]` or `Assuming[q != qp, Simplify[DiscreteDelta[q, qp]]]`. But it doesn't work if I do something like `Assuming[q != q1, Simplify[b]]`. – Paco Jan 22 at 13:57
- @Paco, use backticks for `code`! – Andrew Jaffe Jan 22 at 16:25

## 1 Answer

So your goal is to have a function similar to the delta function in that it can deal with purely symbolic variables, instead of having attribute `NumericFunction` as `KroneckerDelta` does. Here is a solution that doesn't require holding the expressions during the evaluation. What I do instead is to define `MyDiscreteDelta` only for situations where it occurs in an explicit sum, i.e., an expression whose `Head` has been converted from `Sum` to `Plus`. At that stage, you'll know that the dummy variable of `Sum` has been replaced by the actual values, which in your case includes symbolic names drawn from the set `qpV`, not just numbers.
I changed the definition slightly so that `MyDiscreteDelta` takes only one argument instead of two, more like the `DiracDelta` function. That way you can also insert more complicated expressions as a condition in `MyDiscreteDelta`, such as `MyDiscreteDelta[q^2 - qp^2]` etc. This is also what your initial $\LaTeX$ example looked like. To make the definitions active in the desired situations, I use `TagSetDelayed`. This can only be done using another auxiliary function that represents the multiplications in which `MyDiscreteDelta` might occur - which is a level below the summation:

````ClearAll[MyDiscreteDelta, DiscreteDeltaTimes]
MyDiscreteDelta /: Times[any_, MyDiscreteDelta[expr_]] := DiscreteDeltaTimes[any, expr]
MyDiscreteDelta /: Plus[any_, MyDiscreteDelta[expr_]] := If[expr === 0, any + 1, any]
DiscreteDeltaTimes /: Plus[DiscreteDeltaTimes[any_, expr_], other_] := If[expr === 0, any, 0] + other
DiscreteDeltaTimes /: MakeBoxes[DiscreteDeltaTimes[x_, y_], StandardForm] := RowBox[{ToBoxes[x], "\[ThinSpace]", "MyDiscreteDelta", "[", ToBoxes[y], "]"}]

Clear[qV, qpV, f, g]
a = Sum[f[q] Sum[g[qp] MyDiscreteDelta[q - qp], {qp, qpV}], {q, qV}]
````

$\displaystyle\sum _q^{\text{qV}}$ `f[q]` $\displaystyle\sum _{\text{qp}}^{\text{qpV}}$ `g[qp] MyDiscreteDelta[q-qp]`

````a /. {qpV -> {q, q1, q2, q3, q4}}
````

$\displaystyle\sum _q^{\text{qV}}$ `f[q] g[q]`

In the first output, the `MakeBoxes` definition above gets applied so that the auxiliary function `DiscreteDeltaTimes` is hidden from view.
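As an aside that is not part of the original thread: the basic step being automated here -- a Kronecker delta collapsing one of two nested sums -- can also be sketched in Python with SymPy. The symbols and the symbolic upper bound `N` below are my own choices, not anything from the question:

```python
from sympy import Function, KroneckerDelta, Sum, symbols

q, qp, N = symbols("q qp N", integer=True, positive=True)
g = Function("g")

# Inner sum over qp: the KroneckerDelta should pick out the qp == q term.
inner = Sum(g(qp) * KroneckerDelta(q, qp), (qp, 1, N))
print(inner.doit())
# Expect something like Piecewise((g(q), q <= N), (0, True)):
# the sum collapses to g(q) whenever q lies inside the summation range.
```

This only illustrates the contraction idea; it does not reproduce the symbolic-range behaviour of the Mathematica answer above.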
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8808386325836182, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/27402/what-are-the-justifying-foundations-of-statistical-mechanics-without-appealing-t/27403
# What are the justifying foundations of statistical mechanics without appealing to the ergodic hypothesis?

This question was listed as one of the questions in the proposal (see here), and I didn't know the answer. I don't know the ethics of blatantly stealing such a question, so if it should be deleted or changed to CW then I'll let the mods change it.

Most foundations of statistical mechanics appeal to the ergodic hypothesis. However, this is a fairly strong assumption from a mathematical perspective. There are a number of results frequently used in statistical mechanics that are based on ergodic theory. In every statistical mechanics class I've taken and nearly every book I've read, the assumption was made based solely on the justification that without it calculations become virtually impossible. Hence, I was surprised to see it claimed (in the first link) that the ergodic hypothesis is "absolutely unnecessary". The question is fairly self-explanatory, but for a full answer I'd be looking for a reference containing a development of statistical mechanics without appealing to the ergodic hypothesis, and in particular some discussion of what assuming the ergodic hypothesis gives you over other foundational schemes.

- I believe the term 'justifying foundations' is a misnomer, and this question arises only through the use of this term. My understanding is that experiments are the only foundation of any area of physics. The ergodic hypothesis is just a math trick one uses to show the rationale for the laws of statistics. These laws, within their applicability range, are quite good at explaining a number of observable thermodynamical phenomena. And this is the justification of statistical physics. Statistical mechanics is not `derived' from the ergodic hypothesis, even if Landau and Lifshitz make it seem so. – drlemon Sep 15 '11 at 0:42
- Maybe this should be an answer :) – Suresh Sep 15 '11 at 3:43
- I disagree with +drlemon. Statistical mechanics is not a phenomenological model, as drlemon claims. Statistical mechanics, as used by physicists, is a method for deriving properties of a system of a large (infinite, actually) number of constituents from the postulated (or measured) behaviour of the individual components. For example, it is a tool for deriving the thermodynamic gas laws from the laws of motion of the individual molecules. The fact that a gas of non-interacting particles that obey Newton's laws satisfies the ideal gas law is something one derives, not an experimental fact. – Gustav Delius Sep 15 '11 at 10:49
- @drlemon The phrase «justifying foundations» is grammatically incorrect in that context, too. I suppose the O.P. means just plain «foundations», since foundations are supposed to do some justifying even while they are at their other tasks. But your point of view, while widespread, a) is anti-foundational: experiments are not the foundations of a theory, they are the proof of a theory; your point of view in effect denies that physics has or needs any foundations (you are correct if the definition of physics is getting a grant); and b) ignores the problem of connecting theory with experiment: see below. – joseph f. johnson Feb 12 at 15:54
- @josephf.johnson Alas, while I used the words "justifying foundations," I must admit that particular turn of phrase is not my own, and I can't comment on the intent contained therein. The title of this question was copied from a question posed on the Area 51 proposal of the now-defunct Theoretical Physics site.
I agree with you that the phrase "justifying foundations" is a bit strange, but it seemed imprudent to copy the idea for the question but change the title; instead I tried as best I could to maintain the intent of the original asker and cited the location where I had found it. – Logan Maingi Feb 12 at 17:09

## 6 Answers

The ergodic hypothesis is not part of the foundations of statistical mechanics. In fact, it only becomes relevant when you want to use statistical mechanics to make statements about time averages. Without the ergodic hypothesis, statistical mechanics makes statements about ensembles, not about one particular system.

To understand this answer you have to understand what a physicist means by an ensemble. It is the same thing as what a mathematician calls a probability space. The "Statistical ensemble" Wikipedia article explains the concept quite well. It even has a paragraph explaining the role of the ergodic hypothesis.

The reason why some authors make it look as if the ergodic hypothesis were central to statistical mechanics is that they want to give you a justification for why they are so interested in the microcanonical ensemble. And the reason they give is that the ergodic hypothesis holds for that ensemble when you have a system for which the time it spends in a particular region of the accessible phase space is proportional to the volume of that region. But that is not central to statistical mechanics. Statistical mechanics can be done with other ensembles, and furthermore there are other ways to justify the canonical ensemble; for example, it is the ensemble that maximises entropy.

A physical theory is only useful if it can be compared to experiments. Statistical mechanics without the ergodic hypothesis, which makes statements only about ensembles, is only useful if you can make measurements on the ensemble. This means that it must be possible to repeat an experiment again and again, and the frequency of getting particular members of the ensemble should be determined by the probability distribution of the ensemble that you used as the starting point of your statistical mechanics calculations.

Sometimes, however, you can only experiment on one single sample from the ensemble. In that case statistical mechanics without an ergodic hypothesis is not very useful because, while it can tell you what a typical sample from the ensemble would look like, you do not know whether your particular sample is typical. This is where the ergodic hypothesis helps. It states that the time average taken in any particular sample is equal to the ensemble average. Statistical mechanics allows you to calculate the ensemble average. If you can make measurements on your one sample over a sufficiently long time, you can take the average and compare it to the predicted ensemble average and hence test the theory. So in many practical applications of statistical mechanics the ergodic hypothesis is very important, but it is not fundamental to statistical mechanics, only to its application to certain sorts of experiments.

In this answer I took the ergodic hypothesis to be the statement that ensemble averages are equal to time averages. To add to the confusion, some people say that the ergodic hypothesis is the statement that the time a system spends in a region of phase space is proportional to the volume of that region. These two are the same when the ensemble chosen is the microcanonical ensemble.

So, to summarise: the ergodic hypothesis is used in two places:
1. To justify the use of the microcanonical ensemble.
2. To make predictions about the time average of observables.

Neither is central to statistical mechanics, as 1) statistical mechanics can be and is done for other ensembles (for example those determined by stochastic processes) and 2) often one does experiments with many samples from the ensemble rather than with time averages of a single sample.

- That's a great explanation of why the ergodic hypothesis is not the best foundation for statistical mechanics, but the question seems to be more about what the right starting points (basic principles/postulates) are to define/choose physically correct ensembles. – Slaviks Sep 15 '11 at 13:18
- I appreciate the in-depth response, and it certainly answers most of my question. As Slaviks suggests, I was also interested in what the right starting points are. Anything along those lines (even if just pointing to a reference where foundations are discussed thoroughly) would be appreciated. I wasn't aware the ergodic hypothesis could mean two different things; I've always seen it as the statement you chose. For the moment I haven't accepted this yet, but I plan to do so later today. – Logan Maingi Sep 15 '11 at 14:23
- +Logan Maingi, I did not, however, address in my answer the question of how to choose the appropriate ensemble. That is a more difficult question than that about the fundamentals of statistical mechanics, because it requires knowledge of the particular domain where you want to apply statistical mechanics. My view of statistical mechanics is currently influenced by the domain in which I last encountered it, which is the statistical mechanics of random graphs; see next comment. – Gustav Delius Sep 16 '11 at 7:37
- In the context mentioned in my previous comment, rather than studying gases consisting of many particles, one studies graphs consisting of many nodes. There the ensemble of random graphs to work with is either simply postulated (for example, some people use the ensemble of random graphs with a given degree distribution after measuring the distribution in a real-world graph) or it is obtained by specifying a stochastic process for the assembly of the graph (for example, a process that attaches new nodes randomly by the rule of preferential attachment). – Gustav Delius Sep 16 '11 at 7:39
- +joseph f. johnson: Why do you say that every measurement is a long-time average? For example, in what sense does measuring the volume and pressure of a gas in a container involve a long-time average? – Gustav Delius Feb 11 at 16:36

As for references to other approaches to the foundations of statistical physics, you can have a look at the classical paper by Jaynes; see also, e.g., this paper (in particular Section 2.3) where he discusses the irrelevance of ergodic-type hypotheses as a foundation of equilibrium statistical mechanics. Of course, Jaynes' approach also suffers from a number of deficiencies, and I think one can safely say that the foundational problem in equilibrium statistical mechanics is still widely open. You may also find it interesting to look at this paper by Uffink, where most of the modern (and ancient) approaches to this problem are described, together with their respective shortcomings. This will provide you with many more recent references.
Finally, if you want a mathematically more thorough discussion of the role of ergodicity (properly interpreted) in the foundations of statistical mechanics, you should have a look at Gallavotti's Statistical Mechanics - A Short Treatise, Springer-Verlag (1999), in particular Chapters I, II and IX.

EDIT (June 22 2012): I just remembered this paper by Bricmont that I read long ago. It's quite interesting and a pleasant read (like most of what he writes): Bayes, Boltzmann and Bohm: Probabilities in Physics.

- Could you provide some references to critiques of Jaynes' approach? I think his thinking subtly changed over the years, and actually I think at one point or another he had a fully defensible theory... – genneth Oct 7 '11 at 16:03
- @genneth: There are several. I must confess to being somewhat biased (I find Jaynes' approach infinitely better than the ergodic one). That being said: one major criticism is somewhat philosophical. In Jaynes' approach, stat. mech. is not really a physical theory as usually meant, but rather a particular example of statistical inference. – Yvan Velenik Oct 7 '11 at 16:55
- Second, applications of MaxEnt are fine when the underlying configuration space is a finite set, but become much less convincing when dealing with more complicated situations. For example, if one wants to describe a gas (not a lattice model!), why should one favour Liouville measure? Things get even worse when particles have internal degrees of freedom: e.g., for diatomic molecules, why should we take the action-angle coordinates? One can find arguments, but these are pretty weak. Of course, similar difficulties are also present in the ergodic approach (initial conditions must be "typical"). – Yvan Velenik Oct 7 '11 at 16:59
- There are many other critiques, of course. See, e.g., Sklar's book (ref. given in Steve's answer). – Yvan Velenik Oct 7 '11 at 17:04
- Interesting; I leave my opinions on those points aside since it's off-topic, but at least the Amazon review of Sklar says that the critique of MaxEnt is not particularly thorough. I have to confess to having difficulty finding truly well-presented arguments against it --- again, like you, I am biased. Thanks for the replies. – genneth Oct 7 '11 at 17:38

I searched for "mixing" and didn't find it in other answers. But this is the key. Ergodicity is largely irrelevant, but mixing is the property that makes equilibrium statistical physics tick for many-particle systems. See, e.g., Sklar's Physics and Chance or Jaynes' papers on statistical physics. The chaotic hypothesis of Gallavotti and Cohen basically suggests that the same holds true for NESSs.

I do not agree with Marek's statement that "in many practical applications of statistical mechanics, the ergodic hypothesis is very important, but it is not fundamental to statistical mechanics, only to its application to certain sorts of experiments." The ergodic hypothesis is nowhere needed. See Part II of my book Classical and Quantum Mechanics via Lie Algebras for a treatment of statistical mechanics independent of assumptions of ergodicity or mixing, but still recovering the usual formulas of equilibrium thermodynamics.

You may be interested in these lectures: Entanglement and the Foundations of Statistical Mechanics and The Smallest Possible Thermal Machines and the Foundations of Thermodynamics, held by Sandu Popescu at the Perimeter Institute, as well as in this paper: Entanglement and the foundations of statistical mechanics. It is argued there that:
"the main postulate of statistical mechanics, the equal a priori probability postulate, should be abandoned as misleading and unnecessary" (the ergodic hypothesis is one way to ensure the equal a priori probability postulate) 2. instead, it is proposed a quantum basis for statistical mechanics, based on entanglement. In the Hilbert space, it is argued, almost all states are close to the canonical distribution. You may find in the paper some other interesting references on this subject. - It has been realised for a long time, even in classical mechanics and classical statistical mechanics, e.g., the theory of Brownian motion, that it should be possible, in principle, to dispense with the equal a priori probability postulate. The thermodynamic limits we get have been noticed to be largely independent of which initial probability distribution you impose on the phase space. A rigorous mathematical investigation of this robustness is felt to be like a Millenium Problem... but in physical terms, the intuition goes back to Sir James Jeans. – joseph f. johnson Feb 12 at 16:04 I have recently published an important paper, Some special cases of Khintchine's conjectures in statistical mechanics: approximate ergodicity of the auto-correlation function of an assembly of linearly coupled oscillators. REVISTA INVESTIGACIÓN OPERACIONAL VOL. 33, NO. 3, 99-113, 2012 http://rev-inv-ope.univ-paris1.fr/files/33212/33212-01.pdf which advances the state of knowledge as to the answer to this question. In a nutshell: one needs to justify the conclusion of the ergodic hypothesis, without assuming the ergodic hypothesis itself. The desirability of doing this has been realised for a long time, but rogorous progress has been slow. Terminology: the erdodic hypothesis is that every path wanders through (or at least near) every point. This hypothesis is almost never true. The conclusion of the ergodic hypothesis: almost always, infinite time averages of an observable over a trajectory are (at least approximately) equal to the average of that observable over the ensemble. (Even if the ergodic hypothesis holds good, the conclusion does not follow. Sorry, but this terminology has become standard, traditional, orthodox, and it's too late to change it.) The ergodic theorem: unless there are non-trivial distinct invariant subspaces, then the conclusions of the ergodic hypothesis hold. Darwin (http://www-gap.dcs.st-and.ac.uk/history/Obits2/Darwin_C_G_RAS_Obituary.html) and Fowler (http://www-history.mcs.st-andrews.ac.uk/Biographies/Fowler.html), important mathematical physicists (Fowler was Darwin's student and Dirac was Fowler's), found the correct foundational justification for Stat Mech in the 1920s, and showed that it agreed with experiment in every case usually examined up to that time, and also for stellar reactions. Khintchine, the great Soviet mathematician, re-worked the details of their proofs (The Introduction to his slim book on the subject has been posted on the web at http://www-history.mcs.st-andrews.ac.uk/Extras/Khinchin_introduction.html), made them accessible to a wider audience, and has been much studied by mathematicians and philosophers of science interested in the foundations of statistical mechanics or, indeed, any scientific inference (see, for one example, http://igitur-archive.library.uu.nl/dissertations/1957294/c7.pdf and, for another example, Jan von Plato Ergodic theory and the foundations of probability, in B. Skyrms and W.L. Harper, eds, Causation, Chance and Credence. 
Proceedings of the Irvine Conference on Probability and Causation, vol. 1, pp. 257-277, Kluwer, Dordrecht 1988). Khintchine's work went further: in some conjectures he hoped that any dynamical system with a sufficiently large number of degrees of freedom would have the property that the physically interesting observables approximately satisfy the conclusions of the ergodic theorem even though the dynamical system does not even approximately satisfy the hypotheses of the ergodic theorem. His arrest (he died in prison) interrupted the possible formation of a school to carry out his research program, but Ruelle and Lanford III made some progress. In my paper I was able to prove Khintchine's conjectures for basically all linear classical dynamical systems. For quantum mechanics the situation is much more controversial, of course. Nevertheless, Fowler actually based his theorems about classical statistical mechanics on quantum theory, although Khintchine did the reverse: first proving the classical case and then attempting, unsuccessfully, to deal with the modifications needed for QM. In my opinion, the quantum case does not introduce anything new.

**Why measurement is modelled by an infinite time-average in statistical mechanics.** This is the point d'appui for the ergodic theorem or its substitutes. Masani, P., and N. Wiener, "Non-linear Prediction," in Probability and Statistics, The Harald Cramér Volume, ed. U. Grenander, Stockholm, 1959, p. 197: «As indicated by von Neumann ... in measuring a macroscopic quantity $x$ associated with a physical or biological mechanism... each reading of $x$ is actually the average over a time-interval $T$ [which] may appear short from a macroscopic viewpoint, but it is large microscopically speaking. That the limit $\overline x$, as $T \rightarrow \infty$, of such an average exists, and in ergodic cases is independent of the microscopic state, is the content of the continuous-parameter $L_2$-Ergodic Theorem. The error involved in practice in not taking the limit is naturally to be construed as a statistical dispersion centered about $\overline x$.»

Cf. also Khintchine, A., op. cit., p. 44f.: «an observation which gives the measurement of a physical quantity is performed not instantaneously, but requires a certain interval of time which, no matter how small it appears to us, would, as a rule, be very large from the point of view of an observer who watches the evolution of our physical system. [...] Thus we will have to compare experimental data ... with time averages taken over very large intervals of time.» And not the instantaneous value or instantaneous state. Wiener, as quoted in Heims, op. cit., p. 138f.: «every observation ... takes some finite time, thereby introducing uncertainty.» Benatti, F., Deterministic Chaos in Infinite Quantum Systems, Berlin, 1993, Trieste Notes in Physics, p. 3: «Since characteristic times of measuring processes on macrosystems are greatly longer than those governing the underlying micro-phenomena, it is reasonable to think of the results of a measuring procedure as of time-averages evaluated along phase-trajectories corresponding to given initial conditions.» And Pauli, W., Pauli Lectures on Physics, volume 4, Statistical Mechanics, Cambridge, Mass., 1973, p. 28f.: «What is observed macroscopically are time averages...» Wiener, "Logique, Probabilité et Méthode des Sciences Physiques" (translated from the French): «All the known laws of probability are asymptotic in character...
asymptotic considerations have no other purpose in science than to allow us to know the properties of very numerous ensembles while avoiding seeing those properties vanish in the confusion resulting from the specificity of their infinitude. The infinite thus makes it possible to consider very large numbers without having to take account of the fact that they are distinct entities.»

**Why we need to replace time averages by phase (ensemble) averages.** This can be accomplished in different ways; the traditional way is to use the ergodic hypothesis. These quotations express the orthodox approach to classical statistical mechanics. The classical mechanical system is in a particular state, and a measurement of some property of that state is modelled by a long-time average over the trajectory of the system. We approximate this by taking the infinite time average. Our theory, however, cannot calculate this; anyway, we don't even know the initial conditions of the system, so we do not know which trajectory... What our theory calculates is the phase average or ensemble average. If we cannot justify some sort of approximate equality of the ensemble average with the time average, we cannot explain why the quantities our theory calculates agree with the quantities we measure. Some people, of course, do not care. That is to be anti-foundational.
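For definiteness, the identification these quotations appeal to can be written in the usual way (a standard statement, not taken from the thread): for a flow $\phi_t$ preserving a probability measure $\mu$ on phase space and an observable $f$, ergodicity gives

$$\lim_{T\to\infty}\frac{1}{T}\int_0^T f\big(\phi_t(x)\big)\,dt=\int f\,d\mu \qquad \text{for $\mu$-almost every initial condition } x,$$

i.e. the long-time average along almost every trajectory equals the phase (ensemble) average. The discussion above is about justifying this identification, at least approximately, without assuming ergodicity.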
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.929289698600769, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/26839/list
# How thick is the reciprocal of the squares

There is a unique nonempty set $B$ of nonnegative integers such that every positive integer can be written in the form $$b + s^2, \qquad b\in B,\ s\ge0$$ in an even number of ways.

$B = \{0, 1, 2, 3, 5, 7, 8, 9, 13, 17, 18, 23, 27, 29, 31, 32, 35, 37, 39, 41, 45, 47, 49, 50, 53, 55, 59, 61, 63, 71, 72, 73, 79, 81, 83, 87, 89, 91, 97, 98, 101, 103, 107, 109, 113, 115, 117, 121, 127, 128, 137, 139, 149, 151, 153, 157, 159, 162, 167, 171, 173, 181, 183, 191, 193, 197,\dots\}$

Does the set $B$ have positive density?

Now for some context. Every set $A$ of nonnegative integers that contains 0 has a unique set $B$ of nonnegative integers so that $$\left( \sum_{a\in A} q^a \right) \left( \sum_{b\in B} q^b \right) = 1$$ in the ring ${\mathbb F}_2[[q]]$ of binary power series. We call $B$ the reciprocal of $A$.

As a consequence of Euler's pentagonal number theorem, the reciprocal of the set $\{n(3n+1)/2 \colon n \in \mathbb{Z}\}$ is the set $\{ n \colon p(n)\equiv 1 \bmod 2\}$, where $p(n)$ is the ordinary partition function. Almost nothing interesting is known about the parity of the partition function, but computationally it seems to be even and odd with equal frequency. This question arises out of an effort to put the parity of the partition function into some context.

In this article (arXiv; Int. J. Number Theory 2 (2006), no. 4, 499-522), Josh Cooper, Dennis Eichhorn and I investigated the properties of $A$ that lead to $B$ having positive density, and all of our data and partial results can be summed up in the following conjecture:

Conjecture: If $A$ contains 0, is not periodic, and is uniformly distributed in every congruence class modulo every power of 2, then $B$ has positive density.

Letting $A$ be the set of squares, we were able to prove that the even numbers in $B$ are exactly $\{2k^2 \colon k\ge 0\}$, and we were able to classify the $1 \bmod 4$ elements of $B$.

Update: Greg Kuperberg's answer concerning the conjecture displayed above is, while not quite a disproof, utterly convincing. So convincing, I can no longer understand how I thought the conjecture could plausibly be true. In our paper, we described it as "the strongest conjecture that is consistent with our theorems, our experiments, and Conjecture 1.1", so I see we weren't too enthusiastic about its truth. We should have been even less so! The question directly asked, the density of the reciprocal of the squares, remains unanswered. Paul Monsky has introduced a new (to me, at least) approach, and has made striking progress both in the answer below and in his answer to this question. I love Greg's answer to the question I didn't dare ask, and want to accept it, but Paul's is more directly relevant to the question I did ask.

Here are some computational counts of the number of elements of $B\cap[0,2^{23}]$ in particular congruence classes.

````(1 mod 4, 371867), (3 mod 4, 760697)
(1 mod 8, 185336), (5 mod 8, 186531), (3 mod 8, 294045), (7 mod 8, 466652)
(1 mod 16, 92703), (5 mod 16, 93236), (9 mod 16, 92633), (13 mod 16, 93295), (3 mod 16, 147232), (11 mod 16, 146813), (7 mod 16, 204808), (15 mod 16, 261844)
(7 mod 32, 102487), (23 mod 32, 102321), (15 mod 32, 130895), (31 mod 32, 130949)
````

Since there was a specific request for 15 mod 32 data, here are the first 10 such numbers in $B$: (47, 79, 271, 559, 623, 687, 719, 815, 879, 911). Here are the last 10 that I've computed: (8388539, 8388551, 8388559, 8388563, 8388567, 8388571, 8388581, 8388591, 8388593, 8388603, 8388607).

The original question (before a later restatement for pithiness) phrased the setup as follows. For every pair of sets $A,B$ of nonnegative integers, set $$r_{A,B}(k)=\left|\{(a,b) \colon a+b=k,\ a\in A,\ b\in B\}\right|.$$ It is an elementary observation that if $0\in A$, then there is a unique set $B$ of nonnegative integers (the reciprocal of $A$) so that $r_{A,B}(k)$ is even for all positive $k$; another way of saying this is that $\sum_{a\in A} q^a$ is a unit in the ring $\mathbb{F}_2[[q]]$ of binary power series if and only if $0\in A$. Inverses are unique, so these two statements are truly the same thing. How thick can $B$ be? Must its counting function grow like $C n/\log(n)$?
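A short computational sketch (mine, not part of the question) of how $B$ can be generated directly from the power-series inversion over $\mathbb{F}_2$: writing $\sum_{b\in B} q^b = 1/\sum_{s\ge 0} q^{s^2}$ gives the mod-2 recurrence $b_0=1$ and $b_n \equiv \sum_{s\ge 1,\ s^2\le n} b_{n-s^2} \pmod 2$ for $n\ge 1$.

```python
def reciprocal_of_squares(N):
    """Elements up to N of the reciprocal B of the squares in GF(2)[[q]].

    b[n] is the coefficient of q^n in 1 / (sum_{s>=0} q^(s^2)) over GF(2);
    B = {n : b[n] = 1}.
    """
    b = [0] * (N + 1)
    b[0] = 1
    for n in range(1, N + 1):
        s, coeff = 1, 0
        while s * s <= n:
            coeff ^= b[n - s * s]  # arithmetic mod 2
            s += 1
        b[n] = coeff
    return [n for n in range(N + 1) if b[n]]

print(reciprocal_of_squares(50))
# should reproduce the start of the list above:
# [0, 1, 2, 3, 5, 7, 8, 9, 13, 17, 18, 23, 27, 29, 31, 32, 35, 37, 39, 41, 45, 47, 49, 50]
```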
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 77, "mathjax_display_tex": 8, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9171867966651917, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/45506/scalar-product-between-fock-states/45529
# Scalar product between Fock states

Suppose we have a chain (of size $L$) with bosons, and $\hat{a}_i^\dagger$, $\hat{a}_i$ are the associated creation and annihilation operators at site $i$. A Fock state can be written as: \begin{equation} | n_1 \dots n_L \rangle = \prod_{i} \frac{1}{\sqrt{n_i!}} \left( \hat{a}_i^\dagger \right)^{n_i} |\rangle \end{equation} where $|\rangle$ is the empty state. Now we define a new set of bosons: \begin{align} \hat{b}_{k} &= \frac{1}{\sqrt{L}} \sum_{j} e^{-ikj} \hat{a}_j & \hat{b}_{k}^\dagger &= \frac{1}{\sqrt{L}} \sum_{j} e^{ikj} \hat{a}_j^\dagger \end{align} where the $k$ are such that some boundary condition is fulfilled. Now a Fock state can be written as: \begin{equation} | \dots \tilde{n}_k \dots \rangle = \prod_{k} \frac{1}{\sqrt{\tilde{n}_k!}} \left( \hat{b}_k^\dagger \right)^{\tilde{n}_k} |\rangle \end{equation} The question is: is there a simple formula to express or compute the scalar product $\langle \dots \tilde{n}_k \dots | n_1 \dots n_L \rangle$?

- Do you mean the scalar product $\langle original | new\rangle$ or $\langle new | new\rangle$? Introduction of some different notation for the new bosons would help, e.g. $|n_1' \ldots n_k' \ldots \rangle$. – au700 Nov 30 '12 at 10:30
- You are right, I edited the question and changed the notation. Anyway, the question is about Fock states in different bases, otherwise the answer would be trivial. And of course the two states have the same number of bosons: $\sum_k \tilde{n}_k = \sum_j n_j$. – Hari Nov 30 '12 at 13:32
- You can easily make $\langle new | original \rangle = \sum c_{\ldots} \langle original | original \rangle$ by substituting the definition of the new creation/annihilation operators in $\langle new |$. – au700 Nov 30 '12 at 13:50
- If I use your recipe I get: \begin{align} \langle \dots \tilde{n}_k\dots |n_1 \dots n_L \rangle & = \langle | \prod_k \left( \hat{b}_k \right)^{\tilde{n}_k } |n_1 \dots n_L \rangle \\ & = \langle | \prod_k \left( \frac{1}{\sqrt{L}} \sum_j e^{-ikj} \hat{a}_j \right)^{\tilde{n}_k } |n_1 \dots n_L \rangle \end{align} Even if it is now quite straightforward which terms survive (the ones that do not annihilate $|n_1 \dots n_L \rangle$), I'm not able to find a "clean" and useful equation... :( – Hari Nov 30 '12 at 14:15

## 2 Answers

Define the operators $\hat a^\dagger(f)=\sum_j f_j\hat a_j^\dagger$ and $|f_1,...,f_n\rangle:=\hat a^\dagger(f_1)...\hat a^\dagger(f_n)|vac\rangle$. Then $\langle g_1,...,g_m|f_1,...,f_n\rangle$ vanishes for $m\ne n$ and is a sum of the products $\langle g_1|f_{j_1}\rangle...\langle g_n|f_{j_n}\rangle$ for all possible permutations $(j_1,...,j_n)$ of $1,...,n$. Note that the $\langle g|f\rangle$ are easy to compute. The formula you requested is a special case of this.

- Ok, thank you very much (this is actually the answer I would have got by expanding the equation in the comment I wrote just above). I have to deal with the computation of all the possible permutations... I will let the computer do it for me... :) – Hari Nov 30 '12 at 14:28

I realise the question has already been answered, but just a suggestion to the topic starter: finite sums of the form $(\sum_i x_i)^N$ are expanded using objects called multinomial coefficients. They should make your life a little easier.
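To make the first answer concrete for the basis change in the question, here is a small brute-force sketch (mine, not from the thread). It evaluates the overlap as a permanent of the single-particle overlaps $\langle\,|\hat b_k \hat a_j^\dagger|\,\rangle = e^{-ikj}/\sqrt{L}$, restoring the $1/\sqrt{n!}$ normalisations of the Fock states; the function names and the dictionary-based interface are my own choices.

```python
import numpy as np
from itertools import permutations
from math import factorial, prod, pi

def permanent(M):
    """Brute-force permanent; fine for the small matrices of a toy chain."""
    n = M.shape[0]
    total = 0.0
    for p in permutations(range(n)):
        term = 1.0
        for r in range(n):
            term *= M[r, p[r]]
        total += term
    return total

def overlap(k_occ, site_occ, L):
    """<... n~_k ...| n_1 ... n_L> for b_k = (1/sqrt(L)) sum_j exp(-i k j) a_j.

    k_occ    : dict {k: n~_k} of momentum occupation numbers
    site_occ : dict {j: n_j}  of site occupation numbers
    """
    ks = [k for k, n in k_occ.items() for _ in range(n)]     # momenta with multiplicity
    js = [j for j, n in site_occ.items() for _ in range(n)]  # sites with multiplicity
    if len(ks) != len(js):
        return 0.0  # different total particle number: the states are orthogonal
    M = np.array([[np.exp(-1j * k * j) / np.sqrt(L) for j in js] for k in ks])
    norm = np.sqrt(prod(factorial(n) for n in k_occ.values()) *
                   prod(factorial(n) for n in site_occ.values()))
    return permanent(M) / norm

# Example: L = 4 sites, two bosons on site 1 versus one boson in k = 0 and one in k = pi/2.
print(overlap({0: 1, pi / 2: 1}, {1: 2}, L=4))
```

The $n!$ growth of the permutation sum limits this to a handful of particles, in line with the comment above about letting the computer handle the permutations.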
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8602939248085022, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-algebra/189507-showing-ab-ba-finite-order-n.html
# Thread: Showing ab=ba finite order n

1. ## Showing ab=ba finite order n

I'm not quite sure what this question is asking. Problem: Let a and b be elements of a group G. Show that if ab has finite order n, then ba also has order n. It is my understanding that the order of a group is simply the number of elements in the group, and the order of a generator is the number of elements in the set it generates. But... what is the order of any old element in G, like the question asks?

2. ## Re: Showing ab=ba finite order n

> Originally Posted by tangibleLime: I'm not quite sure what this question is asking. Problem: Let a and b be elements of a group G. Show that if ab has finite order n, then ba also has order n. It is my understanding that the order of a group is simply the number of elements in the group, and the order of a generator is the number of elements in the set it generates. But... what is the order of any old element in G, like the question asks?

By definition, the order of $a\in G$ is the 'group order' of $\langle a\rangle$.

3. ## Re: Showing ab=ba finite order n

There are two equivalent definitions of order. One is: the order of the element a, |a|, is the cardinality of the underlying set of the group generated by a, |a| = |<a>|. But that is a "non-constructive" definition. A more explicit definition: the order of a, |a|, is the smallest positive integer m such that a^m = e. To see that these two are equivalent, note that if m is the smallest positive integer for which a^m = e, then {e, a, a^2, ..., a^(m-1)} are all distinct, and this set is <a> (the tricky part is showing a^-1 is one of these positive powers). Note that it may be the case that a is of infinite order; this is the case for the number 1 in the integers.

EDIT: to attack this problem, note that if n is the order of ab, then (ab)^n = e. Now (ab)^n = (ab)(ab)...(ab) (n times). Consider that (ba)^n = (ba)(ba)...(ba) (n times) = b(ab)^(n-1)a. Now multiply on the left by e = (a^-1)a.

4. ## Re: Showing ab=ba finite order n

Thanks for the reply! It cleared things up a lot. I was just wondering, is $ab$ in this case actually multiplication, or is it another one of those times in abstract algebra when it actually means something completely different? Because if it is multiplication, can't I just say that $ab=ba$, since multiplication is commutative, and use the order formula $\frac{n}{\gcd(n,m)}$, which calculates the number of elements generated by a generator?

EDIT: Er, actually, never mind. Multiplication isn't commutative for all sets... such as matrices.

5. ## Re: Showing ab=ba finite order n

"ab" is written multiplicatively when the implied operation is clear from the context. One definition of a group is a set G, together with an operation (say, *), such that... (axioms). You can call it "multiplication", but it could mean something very different when you aren't in a commutative (i.e. abelian) group.

6. ## Re: Showing ab=ba finite order n

For a group, "ab" is just notation meaning a (group operation) b. If the operation of (G,*) needs to be emphasized, a*b is often written instead. This is the hardest thing for most people to get used to in groups: that ab and ba are two different things (perhaps; in abelian groups they aren't... but unfortunately many groups are not abelian. I know, right?).
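Carrying the hint in post #3 one step further (this completion is mine, not spelled out in the thread): if $(ab)^n = e$, then

$$(ba)^n \;=\; a^{-1}a(ba)^n \;=\; a^{-1}(ab)^n a \;=\; a^{-1}ea \;=\; e,$$

so the order of $ba$ divides the order of $ab$; exchanging the roles of $a$ and $b$ gives the reverse divisibility, hence $ba$ has order exactly $n$.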
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9356485605239868, "perplexity_flag": "middle"}
http://www.sagemath.org/doc/thematic_tutorials/group_theory.html
# Group Theory and Sage

Author: Robert A. Beezer, University of Puget Sound

Changelog:

• 2009/01/30 Version 1.0, first complete release
• 2009/03/03 Version 1.1, added cyclic group size interact
• 2010/03/10 Version 1.3, dropped US on license, some edits.

This compilation collects Sage commands that are useful for a student in an introductory course on group theory. It is not intended to teach Sage or to teach group theory. (There are many introductory texts on group theory, and more information on Sage can be found via www.sagemath.org.) Rather, by presenting commands roughly in the order a student would learn the corresponding mathematics, they might be encouraged to experiment and learn more about mathematics and learn more about Sage. Not coincidentally, when Sage was the acronym SAGE, the “E” in Sage stood for “Experimentation.”

This guide is also distributed in PDF format and as a Sage worksheet. The worksheet version can be imported into the Sage notebook environment running in a web browser, and then the displayed chunks of code may be executed by Sage if one clicks on the small “evaluate” link below each cell, for a fully interactive experience. PDF and Sage worksheet versions of this tutorial are available at http://abstract.ups.edu/sage-aata.html.

## Basic properties of the integers

### Integer division

The command a % b will return the remainder upon division of $$a$$ by $$b$$. In other words, the value is the unique integer $$r$$ such that:

1. $$0 \leq r < b$$; and
2. $$a = bq + r$$ for some integer $$q$$ (the quotient).

Then $$(a - r) / b$$ will equal $$q$$. For example:

```sage: r = 14 % 3
sage: q = (14 - r) / 3
sage: r, q
(2, 4)
```

will return 2 for the value of r and 4 for the value of q. Note that “//” is integer division, where any remainder is cast away and the result is always an integer. So, for example, 14 // 3 will again equal 4, not 4.66666.

### Greatest common divisor

The greatest common divisor of $$a$$ and $$b$$ is obtained with the command gcd(a,b), where in our first uses, $$a$$ and $$b$$ are integers. Later, $$a$$ and $$b$$ can be other objects with a notion of divisibility and “greatness,” such as polynomials. For example:

```sage: gcd(2776, 2452)
4
```

### Extended greatest common divisor

The command xgcd(a, b) (“eXtended GCD”) returns a triple where the first element is the greatest common divisor of $$a$$ and $$b$$ (as with the gcd(a, b) command above), but the next two elements are the values of $$r$$ and $$s$$ such that $$ra + sb = \gcd(a, b)$$. For example, xgcd(633, 331) returns (1, 194, -371). Portions of the triple can be extracted using [ ] to access the entries of the triple, starting with the first as number 0. For example, the following should return the result True (even if you change the values of a and b). Studying this block of code will go a long way towards helping you get the most out of Sage’s output. (Note that “=” is how a value is assigned to a variable, while, as in the last line, “==” is how we determine equality of two items.)

```sage: a = 633
sage: b = 331
sage: extended = xgcd(a, b)
sage: g = extended[0]
sage: r = extended[1]
sage: s = extended[2]
sage: g == r*a + s*b
True
```

### Divisibility

A remainder of zero indicates divisibility. So (a % b) == 0 will return True if $$b$$ divides $$a$$, and will otherwise return False. For example, (9 % 3) == 0 is True, but (9 % 4) == 0 is False. Try predicting the output of the following before executing it in Sage.
```sage: answer1 = ((20 % 5) == 0)
sage: answer2 = ((17 % 4) == 0)
sage: answer1, answer2
(True, False)
```

### Factoring

As promised by the Fundamental Theorem of Arithmetic, factor(a) will return a unique expression for $$a$$ as a product of powers of primes. It will print in a nicely-readable form, but can also be manipulated with Python as a list of pairs $$(p_i, e_i)$$ containing primes as bases, and their associated exponents. For example:

```sage: factor(2600)
2^3 * 5^2 * 13
```

If you just want the prime divisors of an integer, then use the prime_divisors(a) command, which will return a list of all the prime divisors of $$a$$. For example:

```sage: prime_divisors(2600)
[2, 5, 13]
```

We can strip off other pieces of the prime decomposition using two levels of [ ]. This is another good example to study in order to learn about how to drill down into Python lists.

```sage: n = 2600
sage: decomposition = factor(n)
sage: print n, "decomposes as", decomposition
2600 decomposes as 2^3 * 5^2 * 13
sage: secondterm = decomposition[1]
sage: print "Base and exponent (pair) for second prime:", secondterm
Base and exponent (pair) for second prime: (5, 2)
sage: base = secondterm[0]
sage: exponent = secondterm[1]
sage: print "Base is", base
Base is 5
sage: print "Exponent is", exponent
Exponent is 2
sage: thirdbase = decomposition[2][0]
sage: thirdexponent = decomposition[2][1]
sage: print "Base of third term is", thirdbase, "with exponent", thirdexponent
Base of third term is 13 with exponent 1
```

With a bit more work, the factor() command can be used to factor more complicated items, such as polynomials.

### Multiplicative inverse, modular arithmetic

The command inverse_mod(a, n) yields the multiplicative inverse of $$a$$ mod $$n$$ (or an error if it doesn’t exist). For example:

```sage: inverse_mod(352, 917)
508
```

(As a check, find the integer $$m$$ such that 352*508 == m*917+1.) Then try

```sage: inverse_mod(4, 24)
Traceback (most recent call last):
...
ZeroDivisionError: Inverse does not exist.
```

and explain the result.

### Powers with modular arithmetic

The command power_mod(a, m, n) yields $$a^m$$ mod $$n$$. For example:

```sage: power_mod(15, 831, 23)
10
```

If $$m = -1$$, then this command will duplicate the function of inverse_mod().

### Euler $$\phi$$-function

The command euler_phi(n) will return the number of positive integers less than $$n$$ and relatively prime to $$n$$ (i.e. having greatest common divisor with $$n$$ equal to 1). For example:

```sage: euler_phi(345)
176
```

Experiment by running the following code several times:

```sage: m = random_prime(10000)
sage: n = random_prime(10000)
sage: euler_phi(m*n) == euler_phi(m) * euler_phi(n)
True
```

Feel a conjecture coming on? Can you generalize this result?

### Primes

The command is_prime(a) returns True or False depending on whether $$a$$ is prime or not. For example,

```sage: is_prime(117371)
True
```

while

```sage: is_prime(14547073)
False
```

since $$14547073 = 1597 * 9109$$ (as you could determine with the factor() command).

The command random_prime(a, True) will return a random prime between 2 and $$a$$. Experiment with:

```sage: p = random_prime(10^21, True)
sage: is_prime(p)
True
```

(Replacing True by False will speed up the search, but there will be a very small probability the result will not be prime.)

The command prime_range(a, b) returns an ordered list of all the primes from $$a$$ to $$b - 1$$, inclusive.
For example, ```sage: prime_range(500, 550) [503, 509, 521, 523, 541, 547] ``` The commands next_prime(a) and previous_prime(a) are other ways to get a single prime number of a desired size. Give them a try. ## Permutation groups¶ A good portion of Sage’s support for group theory is based on routines from GAP (Groups, Algorithms, and Programming at http://www.gap-system.org. Groups can be described in many different ways, such as sets of matrices or sets of symbols subject to a few defining relations. A very concrete way to represent groups is via permutations (one-to-one and onto functions of the integers 1 through $$n$$), using function composition as the operation in the group. Sage has many routines designed to work with groups of this type and they are also a good way for those learning group theory to gain experience with the basic ideas of group theory. For both these reasons, we will concentrate on these types of groups. ### Writing permutations¶ Sage uses “disjoint cycle notation” for permutations, see any introductory text on group theory (such as Judson, Section 4.1) for more on this. Composition occurs left to right, which is not what you might expect and is exactly the reverse of what Judson and many others use. (There are good reasons to support either direction, you just need to be certain you know which one is in play.) There are two ways to write the permutation $$\sigma = (1\,3) (2\,5\,4)$$: 1. As a text string (include quotes): "(1,3) (2,5,4)" 2. As a Python list of “tuples”: [(1,3), (2,5,4)] ### Groups¶ Sage knows many popular groups as sets of permutations. More are listed below, but for starters, the full “symmetric group” of all possible permutations of 1 through $$n$$ can be built with the command SymmetricGroup(n). Permutation elements Elements of a group can be created, and composed, as follows ```sage: G = SymmetricGroup(5) sage: sigma = G("(1,3) (2,5,4)") sage: rho = G([(2,4), (1,5)]) sage: rho^(-1) * sigma * rho (1,2,4)(3,5) ``` Available functions for elements of a permutation group include finding the order of an element, i.e. for a permutation $$\sigma$$ the order is the smallest power of $$k$$ such that $$\sigma^k$$ equals the identity element $$()$$. For example: ```sage: G = SymmetricGroup(5) sage: sigma = G("(1,3) (2,5,4)") sage: sigma.order() 6 ``` The sign of the permutation $$\sigma$$ is defined to be 1 for an even permutation and $$-1$$ for an odd permutation. For example: ```sage: G = SymmetricGroup(5) sage: sigma = G("(1,3) (2,5,4)") sage: sigma.sign() -1 ``` since $$\sigma$$ is an odd permutation. Many more available functions that can be applied to a permutation can be found via “tab-completion.” With sigma defined as an element of a permutation group, in a Sage cell, type sigma. (Note the “.”) and then press the tab key. You will get a list of available functions (you may need to scroll down to see the whole list). Experiment and explore! It is what Sage is all about. You really cannot break anything. ### Creating groups¶ This is an annotated list of some small well-known permutation groups that can be created simply in Sage. (You can find more in the source code file ```SAGE_ROOT/devel/sage/sage/groups/perm_gps/permgroup_named.py ``` • SymmetricGroup(n): All $$n!$$ permutations on $$n$$ symbols. • DihedralGroup(n): Symmetries of an $$n$$-gon. Rotations and flips, $$2n$$ in total. • CyclicPermutationGroup(n): Rotations of an $$n$$-gon (no flips), $$n$$ in total. • AlternatingGroup(n): Alternating group on $$n$$ symbols having $$n!/2$$ elements. 
• KleinFourGroup(): The non-cyclic group of order 4. ## Group functions¶ Individual elements of permutation groups are important, but we primarily wish to study groups as objects on their own. So a wide variety of computations are available for groups. Define a group, for example ```sage: H = DihedralGroup(6) sage: H Dihedral group of order 12 as a permutation group ``` and then a variety of functions become available. After trying the examples below, experiment with tab-completion. Having defined H, type H. (note the “.”) and then press the tab key. You will get a list of available functions (you may need to scroll down to see the whole list). As before, experiment and explore—it is really hard to break anything. Here is another couple of ways to experiment and explore. Find a function that looks interesting, say is_abelian(). Type H.is_abelian? (note the question mark) followed by the enter key. This will display a portion of the source code for the is_abelian() function, describing the inputs and output, possibly illustrated with example uses. If you want to learn more about how Sage works, or possibly extend its functionality, then you can start by examining the complete Python source code. For example, try H.is_abelian??, which will allow you to determine that the is_abelian() function is basically riding on GAP’s IsAbelian() command and asking GAP do the heavy-lifting for us. (To get the maximum advantage of using Sage it helps to know some basic Python programming, but it is not required.) OK, on to some popular command for groups. If you are using the worksheet, be sure you have defined the group $$H$$ as the dihedral group $$D_6$$, since we will not keep repeating its definition below. ### Abelian?¶ The command ```sage: H = DihedralGroup(6) sage: H.is_abelian() False ``` will return False since $$D_6$$ is a non-abelian group. ### Order¶ The command ```sage: H = DihedralGroup(6) sage: H.order() 12 ``` will return 12 since $$D_6$$ is a group of with 12 elements. ### All elements¶ The command ```sage: H = DihedralGroup(6) sage: H.list() [(), (2,6)(3,5), (1,2)(3,6)(4,5), (1,2,3,4,5,6), (1,3)(4,6), (1,3,5)(2,4,6), (1,4)(2,3)(5,6), (1,4)(2,5)(3,6), (1,5)(2,4), (1,5,3)(2,6,4), (1,6,5,4,3,2), (1,6)(2,5)(3,4)] ``` will return all of the elements of $$H$$ in a fixed order as a Python list. Indexing ([ ]) can be used to extract the individual elements of the list, remembering that counting the elements of the list begins at zero. ```sage: H = DihedralGroup(6) sage: elements = H.list() sage: elements[3] (1,2,3,4,5,6) ``` ### Cayley table¶ The command ```sage: H = DihedralGroup(6) sage: H.cayley_table() * a b c d e f g h i j k l +------------------------ a| a b c d e f g h i j k l b| b a d c f e h g j i l k c| c k a e d g f i h l b j d| d l b f c h e j g k a i e| e j k g a i d l f b c h f| f i l h b j c k e a d g g| g h j i k l a b d c e f h| h g i j l k b a c d f e i| i f h l j b k c a e g d j| j e g k i a l d b f h c k| k c e a g d i f l h j b l| l d f b h c j e k g i a ``` will construct the Cayley table (or “multiplication table”) of $$H$$. By default the table uses lowercase Latin letters to name the elements of the group. The actual elements used can be found using the row_keys() or column_keys() commands for the table. 
For example to determine the fifth element in the table, the element named e: ```sage: H = DihedralGroup(6) sage: T = H.cayley_table() sage: headings = T.row_keys() sage: headings[4] (1,3)(4,6) ``` ### Center¶ The command H.center() will return a subgroup that is the center of the group $$H$$ (see Exercise 2.46 in Judson). Try ```sage: H = DihedralGroup(6) sage: H.center().list() [(), (1,4)(2,5)(3,6)] ``` to see which elements of $$H$$ commute with every element of $$H$$. ### Cayley graph¶ For fun, try show(H.cayley_graph()). ## Subgroups¶ ### Cyclic subgroups¶ If G is a group and a is an element of the group (try a = G.random_element()), then ```a = G.random_element() H = G.subgroup([a]) ``` will create H as the cyclic subgroup of G with generator a. For example the code below will: 1. create G as the symmetric group on five symbols; 2. specify sigma as an element of G; 3. use sigma as the generator of a cyclic subgroup H; 4. list all the elements of H. In more mathematical notation, we might write $$\langle (1\,2\,3) (4\,5) \rangle = H \subseteq G = S_5$$. ```sage: G = SymmetricGroup(5) sage: sigma = G("(1,2,3) (4,5)") sage: H = G.subgroup([sigma]) sage: H.list() [(), (4,5), (1,2,3), (1,2,3)(4,5), (1,3,2), (1,3,2)(4,5)] ``` Experiment by trying different permutations for sigma and observing the effect on H. ### Cyclic groups¶ Groups that are cyclic themselves are both important and rich in structure. The command CyclicPermutationGroup(n) will create a permutation group that is cyclic with n elements. Consider the following example (note that the indentation of the third line is critical) which will list the elements of a cyclic group of order 20, preceded by the order of each element. ```sage: n = 20 sage: CN = CyclicPermutationGroup(n) sage: for g in CN: ... print g.order(), " ", g ... 1 () 20 (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20) 10 (1,3,5,7,9,11,13,15,17,19)(2,4,6,8,10,12,14,16,18,20) 20 (1,4,7,10,13,16,19,2,5,8,11,14,17,20,3,6,9,12,15,18) 5 (1,5,9,13,17)(2,6,10,14,18)(3,7,11,15,19)(4,8,12,16,20) 4 (1,6,11,16)(2,7,12,17)(3,8,13,18)(4,9,14,19)(5,10,15,20) 10 (1,7,13,19,5,11,17,3,9,15)(2,8,14,20,6,12,18,4,10,16) 20 (1,8,15,2,9,16,3,10,17,4,11,18,5,12,19,6,13,20,7,14) 5 (1,9,17,5,13)(2,10,18,6,14)(3,11,19,7,15)(4,12,20,8,16) 20 (1,10,19,8,17,6,15,4,13,2,11,20,9,18,7,16,5,14,3,12) 2 (1,11)(2,12)(3,13)(4,14)(5,15)(6,16)(7,17)(8,18)(9,19)(10,20) 20 (1,12,3,14,5,16,7,18,9,20,11,2,13,4,15,6,17,8,19,10) 5 (1,13,5,17,9)(2,14,6,18,10)(3,15,7,19,11)(4,16,8,20,12) 20 (1,14,7,20,13,6,19,12,5,18,11,4,17,10,3,16,9,2,15,8) 10 (1,15,9,3,17,11,5,19,13,7)(2,16,10,4,18,12,6,20,14,8) 4 (1,16,11,6)(2,17,12,7)(3,18,13,8)(4,19,14,9)(5,20,15,10) 5 (1,17,13,9,5)(2,18,14,10,6)(3,19,15,11,7)(4,20,16,12,8) 20 (1,18,15,12,9,6,3,20,17,14,11,8,5,2,19,16,13,10,7,4) 10 (1,19,17,15,13,11,9,7,5,3)(2,20,18,16,14,12,10,8,6,4) 20 (1,20,19,18,17,16,15,14,13,12,11,10,9,8,7,6,5,4,3,2) ``` By varying the size of the group (change the value of n) you can begin to illustrate some of the structure of a cyclic group (for example, try a prime). 
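For instance, when the order is a prime $$p$$ there is not much structure to see: every element other than the identity has order $$p$$, and so every non-identity element generates the whole group. Here is a quick check along these lines, using only commands introduced above:

```sage: C7 = CyclicPermutationGroup(7)
sage: sorted(g.order() for g in C7)
[1, 7, 7, 7, 7, 7, 7]
sage: all(C7.subgroup([g]).order() == 7 for g in C7 if g.order() != 1)
True
```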
We can cut/paste an element of order 5 from the output above (in the case when the cyclic group has 20 elements) and quickly build a subgroup: ```sage: C20 = CyclicPermutationGroup(20) sage: rho = C20("(1,17,13,9,5)(2,18,14,10,6)(3,19,15,11,7)(4,20,16,12,8)") sage: H = C20.subgroup([rho]) sage: H.list() [(), (1,5,9,13,17)(2,6,10,14,18)(3,7,11,15,19)(4,8,12,16,20), (1,9,17,5,13)(2,10,18,6,14)(3,11,19,7,15)(4,12,20,8,16), (1,13,5,17,9)(2,14,6,18,10)(3,15,7,19,11)(4,16,8,20,12), (1,17,13,9,5)(2,18,14,10,6)(3,19,15,11,7)(4,20,16,12,8)] ``` For a cyclic group, the following command will list all of the subgroups. ```sage: C20 = CyclicPermutationGroup(20) sage: C20.conjugacy_classes_subgroups() [Subgroup of (Cyclic group of order 20 as a permutation group) generated by [()], Subgroup of (Cyclic group of order 20 as a permutation group) generated by [(1,11)(2,12)(3,13)(4,14)(5,15)(6,16)(7,17)(8,18)(9,19)(10,20)], Subgroup of (Cyclic group of order 20 as a permutation group) generated by [(1,6,11,16)(2,7,12,17)(3,8,13,18)(4,9,14,19)(5,10,15,20)], Subgroup of (Cyclic group of order 20 as a permutation group) generated by [(1,5,9,13,17)(2,6,10,14,18)(3,7,11,15,19)(4,8,12,16,20)], Subgroup of (Cyclic group of order 20 as a permutation group) generated by [(1,3,5,7,9,11,13,15,17,19)(2,4,6,8,10,12,14,16,18,20)], Subgroup of (Cyclic group of order 20 as a permutation group) generated by [(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20)]] ``` Be careful, this command uses some more advanced ideas and will not usually list all of the subgroups of a group. Here we are relying on special properties of cyclic groups (but see the next section). If you are viewing this as a PDF, you can safely skip over the next bit of code. However, if you are viewing this as a worksheet in Sage, then this is a place where you can experiment with the structure of the subgroups of a cyclic group. In the input box, enter the order of a cyclic group (numbers between 1 and 40 are good initial choices) and Sage will list each subgroup as a cyclic group with its generator. The factorization at the bottom might help you formulate a conjecture. ```%auto @interact def _(n = input_box(default=12, label = "Cyclic group of order:", type=Integer) ): cyclic = CyclicPermutationGroup(n) subgroups = cyclic.conjugacy_classes_subgroups() html( "All subgroups of a cyclic group of order $%s$\n" % latex(n) ) table = "$\\begin{array}{ll}" for sg in subgroups: table = table + latex(sg.order()) + \ " & \\left\\langle" + latex(sg.gens()[0]) + \ "\\right\\rangle\\\\" table = table + "\\end{array}$" html(table) html("\nHint: $%s$ factors as $%s$" % ( latex(n), latex(factor(n)) ) )``` ### All subgroups¶ If $$H$$ is a subgroup of $$G$$ and $$g \in G$$, then $$gHg^{-1} = \{ghg^{-1} \mid h \in G\}$$ will also be a subgroup of $$G$$. If G is a group, then the command G.conjugacy_classes_subgroups() will return a list of subgroups of G, but not all of the subgroups. However, every subgroup can be constructed from one on the list by the $$gHg^{-1}$$ construction with a suitable $$g$$. As an illustration, the code below: 1. creates K as the dihedral group of order 24, $$D_{12}$$; 2. stores the list of subgroups output by K.conjugacy_classes_subgroups() in the variable sg; 3. prints the elements of the list; 4. selects the second subgroup in the list, and lists its elements. 
```sage: K = DihedralGroup(12) sage: sg = K.conjugacy_classes_subgroups() sage: print "sg:\n", sg sg: [Subgroup of (Dihedral group of order 24 as a permutation group) generated by [()], Subgroup of (Dihedral group of order 24 as a permutation group) generated by [(1,2)(3,12)(4,11)(5,10)(6,9)(7,8)], Subgroup of (Dihedral group of order 24 as a permutation group) generated by [(1,7)(2,8)(3,9)(4,10)(5,11)(6,12)], Subgroup of (Dihedral group of order 24 as a permutation group) generated by [(2,12)(3,11)(4,10)(5,9)(6,8)], Subgroup of (Dihedral group of order 24 as a permutation group) generated by [(1,5,9)(2,6,10)(3,7,11)(4,8,12)], Subgroup of (Dihedral group of order 24 as a permutation group) generated by [(2,12)(3,11)(4,10)(5,9)(6,8), (1,7)(2,8)(3,9)(4,10)(5,11)(6,12)], Subgroup of (Dihedral group of order 24 as a permutation group) generated by [(1,2)(3,12)(4,11)(5,10)(6,9)(7,8), (1,7)(2,8)(3,9)(4,10)(5,11)(6,12)], Subgroup of (Dihedral group of order 24 as a permutation group) generated by [(1,7)(2,8)(3,9)(4,10)(5,11)(6,12), (1,10,7,4)(2,11,8,5)(3,12,9,6)], Subgroup of (Dihedral group of order 24 as a permutation group) generated by [(1,3,5,7,9,11)(2,4,6,8,10,12), (1,5,9)(2,6,10)(3,7,11)(4,8,12)], Subgroup of (Dihedral group of order 24 as a permutation group) generated by [(1,2)(3,12)(4,11)(5,10)(6,9)(7,8), (1,5,9)(2,6,10)(3,7,11)(4,8,12)], Subgroup of (Dihedral group of order 24 as a permutation group) generated by [(2,12)(3,11)(4,10)(5,9)(6,8), (1,5,9)(2,6,10)(3,7,11)(4,8,12)], Subgroup of (Dihedral group of order 24 as a permutation group) generated by [(2,12)(3,11)(4,10)(5,9)(6,8), (1,7)(2,8)(3,9)(4,10)(5,11)(6,12), (1,10,7,4)(2,11,8,5)(3,12,9,6)], Subgroup of (Dihedral group of order 24 as a permutation group) generated by [(2,12)(3,11)(4,10)(5,9)(6,8), (1,3,5,7,9,11)(2,4,6,8,10,12), (1,5,9)(2,6,10)(3,7,11)(4,8,12)], Subgroup of (Dihedral group of order 24 as a permutation group) generated by [(1,2)(3,12)(4,11)(5,10)(6,9)(7,8), (1,3,5,7,9,11)(2,4,6,8,10,12), (1,5,9)(2,6,10)(3,7,11)(4,8,12)], Subgroup of (Dihedral group of order 24 as a permutation group) generated by [(1,2,3,4,5,6,7,8,9,10,11,12), (1,3,5,7,9,11)(2,4,6,8,10,12), (1,5,9)(2,6,10)(3,7,11)(4,8,12)], Subgroup of (Dihedral group of order 24 as a permutation group) generated by [(2,12)(3,11)(4,10)(5,9)(6,8), (1,2,3,4,5,6,7,8,9,10,11,12), (1,3,5,7,9,11)(2,4,6,8,10,12), (1,5,9)(2,6,10)(3,7,11)(4,8,12)]] sage: print "\nAn order two subgroup:\n", sg[1].list() An order two subgroup: [(), (1,2)(3,12)(4,11)(5,10)(6,9)(7,8)] ``` It is important to note that this is a nice long list of subgroups, but will rarely create every such subgroup. For example, the code below: 1. creates rho as an element of the group K; 2. creates L as a cyclic subgroup of K; 3. prints the two elements of L; and finally 4. tests to see if this subgroup is part of the output of the list sg created just above (it is not). ```sage: K = DihedralGroup(12) sage: sg = K.conjugacy_classes_subgroups() sage: rho = K("(1,4) (2,3) (5,12) (6,11) (7,10) (8,9)") sage: L = PermutationGroup([rho]) sage: L.list() [(), (1,4)(2,3)(5,12)(6,11)(7,10)(8,9)] sage: L in sg False ``` ## Symmetry groups¶ You can give Sage a short list of elements of a permutation group and Sage will find the smallest subgroup that contains those elements. We say the list “generates” the subgroup. We list a few interesting subgroups you can create this way. ### Symmetries of an equilateral triangle¶ Label the vertices of an equilateral triangle as 1, 2 and 3. 
Then any permutation of the vertices will be a symmetry of the triangle. So either SymmetricGroup(3) or DihedralGroup(3) will create the full symmetry group. ### Symmetries of an $$n$$-gon¶ A regular, $$n$$-sided figure in the plane (an $$n$$-gon) has $$2n$$ symmetries, comprised of $$n$$ rotations (including the trivial one) and $$n$$ “flips” about various axes. The dihedral group DihedralGroup(n) is frequently defined as exactly the symmetry group of an $$n$$-gon. ### Symmetries of a tetrahedron¶ Label the 4 vertices of a regular tetrahedron as 1, 2, 3 and 4. Fix the vertex labeled 4 and rotate the opposite face through 120 degrees. This will create the permutation/symmetry $$(1\,2\, 3)$$. Similarly, fixing vertex 1, and rotating the opposite face will create the permutation $$(2\,3\,4)$$. These two permutations are enough to generate the full group of the twelve symmetries of the tetrahedron. Another symmetry can be visualized by running an axis through the midpoint of an edge of the tetrahedron through to the midpoint of the opposite edge, and then rotating by 180 degrees about this axis. For example, the 1–2 edge is opposite the 3–4 edge, and the symmetry is described by the permutation $$(1\,2) (3\,4)$$. This permutation, along with either of the above permutations will also generate the group. So here are two ways to create this group: ```sage: tetra_one = PermutationGroup(["(1,2,3)", "(2,3,4)"]) sage: tetra_one Permutation Group with generators [(2,3,4), (1,2,3)] sage: tetra_two = PermutationGroup(["(1,2,3)", "(1,2)(3,4)"]) sage: tetra_two Permutation Group with generators [(1,2)(3,4), (1,2,3)] ``` This group has a variety of interesting properties, so it is worth experimenting with. You may also know it as the “alternating group on 4 symbols,” which Sage will create with the command AlternatingGroup(4). ### Symmetries of a cube¶ Label vertices of one face of a cube with 1, 2, 3 and 4, and on the opposite face label the vertices 5, 6, 7 and 8 (5 opposite 1, 6 opposite 2, etc.). Consider three axes that run from the center of a face to the center of the opposite face, and consider a quarter-turn rotation about each axis. These three rotations will construct the entire symmetry group. Use ```sage: cube = PermutationGroup(["(3,2,6,7)(4,1,5,8)", ... "(1,2,6,5)(4,3,7,8)", "(1,2,3,4)(5,6,7,8)"]) sage: cube.list() [(), (2,4,5)(3,8,6), (2,5,4)(3,6,8), (1,2)(3,5)(4,6)(7,8), (1,2,3,4)(5,6,7,8), (1,2,6,5)(3,7,8,4), (1,3,6)(4,7,5), (1,3)(2,4)(5,7)(6,8), (1,3,8)(2,7,5), (1,4,3,2)(5,8,7,6), (1,4,8,5)(2,3,7,6), (1,4)(2,8)(3,5)(6,7), (1,5,6,2)(3,4,8,7), (1,5,8,4)(2,6,7,3), (1,5)(2,8)(3,7)(4,6), (1,6,3)(4,5,7), (1,6)(2,5)(3,8)(4,7), (1,6,8)(2,7,4), (1,7)(2,3)(4,6)(5,8), (1,7)(2,6)(3,5)(4,8), (1,7)(2,8)(3,4)(5,6), (1,8,6)(2,4,7), (1,8,3)(2,5,7), (1,8)(2,7)(3,6)(4,5)] ``` A cube has four distinct diagonals (joining opposite vertices through the center of the cube). Each symmetry of the cube will cause the diagonals to arrange differently. In this way, we can view an element of the symmetry group as a permutation of four “symbols”—the diagonals. It happens that each of the 24 permutations of the diagonals is created by exactly one symmetry of the 8 vertices of the cube. So this subgroup of $$S_8$$ is “the same as” $$S_4$$. In Sage: ```sage: cube = PermutationGroup(["(3,2,6,7)(4,1,5,8)", ... "(1,2,6,5)(4,3,7,8)", "(1,2,3,4)(5,6,7,8)"]) sage: cube.is_isomorphic(SymmetricGroup(4)) True ``` will test to see if the group of symmetries of the cube are “the same as” $$S_4$$ and so will return True. 
Here is an another way to create the symmetries of a cube. Number the six faces of the cube as follows: 1 on top, 2 on the bottom, 3 in front, 4 on the right, 5 in back, 6 on the left. Now the same rotations as before (quarter-turns about axes through the centers of two opposite faces) can be used as generators of the symmetry group: ```sage: cubeface = PermutationGroup(["(1,3,2,5)", "(1,4,2,6)", "(3,4,5,6)"]) sage: cubeface.list() [(), (3,4,5,6), (3,5)(4,6), (3,6,5,4), (1,2)(4,6), (1,2)(3,4)(5,6), (1,2)(3,5), (1,2)(3,6)(4,5), (1,3)(2,5)(4,6), (1,3,2,5), (1,3,4)(2,5,6), (1,3,6)(2,5,4), (1,4,3)(2,6,5), (1,4,5)(2,6,3), (1,4,2,6), (1,4)(2,6)(3,5), (1,5,2,3), (1,5)(2,3)(4,6), (1,5,6)(2,3,4), (1,5,4)(2,3,6), (1,6,3)(2,4,5), (1,6,5)(2,4,3), (1,6,2,4), (1,6)(2,4)(3,5)] ``` Again, this subgroup of $$S_6$$ is “same as” the full symmetric group, $$S_4$$: ```sage: cubeface = PermutationGroup(["(1,3,2,5)", "(1,4,2,6)", "(3,4,5,6)"]) sage: cubeface.is_isomorphic(SymmetricGroup(4)) True ``` It turns out that in each of the above constructions, it is sufficient to use just two of the three generators (any two). But one generator is not enough. Give it a try and use Sage to convince yourself that a generator can be sacrificed in each case. ## Normal subgroups¶ ### Checking normality¶ The code below: 1. begins with the alternating group $$A_4$$; 2. specifies three elements of the group (the three symmetries of the tetrahedron that are 180 degree rotations about axes through midpoints of opposite edges); 3. uses these three elements to generate a subgroup; and finally 4. illustrates the command for testing if the subgroup H is a normal subgroup of the group A4. ```sage: A4 = AlternatingGroup(4) sage: r1 = A4("(1,2) (3,4)") sage: r2 = A4("(1,3) (2,4)") sage: r3 = A4("(1,4) (2,3)") sage: H = A4.subgroup([r1, r2, r3]) sage: H.is_normal(A4) True ``` ### Quotient group¶ Extending the previous example, we can create the quotient (factor) group of $$A_4$$ by $$H$$. The commands ```sage: A4 = AlternatingGroup(4) sage: r1 = A4("(1,2) (3,4)") sage: r2 = A4("(1,3) (2,4)") sage: r3 = A4("(1,4) (2,3)") sage: H = A4.subgroup([r1, r2, r3]) sage: A4.quotient(H) Permutation Group with generators [(1,2,3)] ``` returns a permutation group generated by (1,2,3). As expected this is a group of order 3. Notice that we do not get back a group of the actual cosets, but instead we get a group isomorphic to the factor group. ### Simple groups¶ It is easy to check to see if a group is void of any normal subgroups. The commands ```sage: AlternatingGroup(5).is_simple() True sage: AlternatingGroup(4).is_simple() False ``` prints True and then False. ### Composition series¶ For any group, it is easy to obtain a composition series. There is an element of randomness in the algorithm, so you may not always get the same results. (But the list of factor groups is unique, according to the Jordan-Hölder theorem.) Also, the subgroups generated sometimes have more generators than necessary, so you might want to “study” each subgroup carefully by checking properties like its order. An interesting example is: ```DihedralGroup(105).composition_series() ``` The output will be a list of 5 subgroups of $$D_{105}$$, each a normal subgroup of its predecessor. Several other series are possible, such as the derived series. Use tab-completion to see the possibilities. ## Conjugacy¶ Given a group $$G$$, we can define a relation $$\sim$$ on $$G$$ by: for $$a,b \in G$$, $$a \sim b$$ if and only if there exists an element $$g \in G$$ such that $$gag^{-1} = b$$. 
Since this is an equivalence relation, there is an associated partition of the elements of $$G$$ into equivalence classes. For this very important relation, the classes are known as “conjugacy classes.” A representative of each of these equivalence classes can be found as follows. Suppose G is a permutation group, then G.conjugacy_classes_representatives() will return a list of elements of $G$, one per conjugacy class. Given an element $$g \in G$$, the “centralizer” of $$g$$ is the set $$C(g) = \{h \in G \mid hgh^{-1} = g\}$$, which is a subgroup of $$G$$. A theorem tells us that the size of each conjugacy class is the order of the group divided by the order of the centralizer of an element of the class. With the following code we can determine the size of the conjugacy classes of the full symmetric group on 5 symbols: ```sage: G = SymmetricGroup(5) sage: group_order = G.order() sage: reps = G.conjugacy_classes_representatives() sage: class_sizes = [] sage: for g in reps: ... class_sizes.append(group_order / G.centralizer(g).order()) ... sage: class_sizes [1, 10, 15, 20, 20, 30, 24] ``` This should produce the list [1, 10, 15, 20, 20, 30, 24] which you can check sums to 120, the order of the group. You might be able to produce this list by counting elements of the group $$S_5$$ with identical cycle structure (which will require a few simple combinatorial arguments). ## Sylow subgroups¶ Sylow’s Theorems assert the existence of certain subgroups. For example, if $$p$$ is a prime, and $$p^r$$ divides the order of a group $$G$$, then $$G$$ must have a subgroup of order $$p^r$$. Such a subgroup could be found among the output of the conjugacy_classes_subgroups() command by checking the orders of the subgroups produced. The map() command is a quick way to do this. The symmetric group on 8 symbols, $$S_8$$, has order $$8! = 40320$$ and is divisible by $$2^7 = 128$$. Let’s find one example of a subgroup of permutations on 8 symbols with order 128. The next command takes a few minutes to run, so go get a cup of coffee after you set it in motion. ```sage: G = SymmetricGroup(8) sage: subgroups = G.conjugacy_classes_subgroups() # long time (9s on sage.math, 2011) sage: map(order, subgroups) # long time [1, 2, 2, 2, 2, 3, 3, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 5, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 7, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 9, 10, 10, 10, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 14, 15, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 18, 18, 18, 18, 18, 18, 18, 20, 20, 20, 21, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 30, 30, 30, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 36, 36, 36, 36, 36, 36, 36, 36, 36, 36, 36, 36, 40, 42, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 48, 56, 60, 60, 60, 60, 60, 64, 64, 64, 64, 64, 64, 64, 72, 72, 72, 72, 72, 72, 72, 72, 72, 72, 96, 96, 96, 96, 96, 96, 96, 96, 96, 96, 96, 96, 120, 120, 120, 120, 120, 120, 120, 128, 144, 144, 144, 168, 168, 168, 180, 192, 192, 192, 192, 192, 240, 240, 288, 288, 288, 336, 360, 360, 360, 360, 384, 576, 576, 576, 720, 720, 720, 720, 1152, 1344, 1440, 2520, 5040, 20160, 40320] ``` The map(order, subgroups) command will apply the order() method to each of the subgroups in the list subgroups. 
The output is thus a large list of the orders of many subgroups (296 to be precise). If you count carefully, you will see that the 259-th subgroup has order 128. You can retrieve this group for further study by referencing it as subgroups[258] (remember that counting starts at zero). If $$p^r$$ is the highest power of $$p$$ to divide the order of $$G$$, then a subgroup of order $$p^r$$ is known as a “Sylow $$p$$-subgroup.” Sylow’s Theorems also say any two Sylow $$p$$-subgroups are conjugate, so the output of conjugacy_classes_subgroups() should only contain each Sylow $$p$$-subgroup once. But there is an easier way, sylow_subgroup(p) will return one. Notice that the argument of the command is just the prime $p$, not the full power $$p^r$$. Failure to use a prime will generate an informative error message. ## Groups of small order as permutation groups¶ We list here constructions, as permutation groups, for all of the groups of order less than 16. ```--------------------------------------------------------------------------------------------- Size Construction Notes --------------------------------------------------------------------------------------------- 1 SymmetricGroup(1) Trivial 2 SymmetricGroup(2) Also CyclicPermutationGroup(2) 3 CyclicPermutationGroup(3) Prime order 4 CyclicPermutationGroup(4) Cyclic 4 KleinFourGroup() Abelian, non-cyclic 5 CyclicPermutationGroup(5) Prime order 6 CyclicPermutationGroup(6) Cyclic 6 SymmetricGroup(3) Non-abelian, also DihedralGroup(3) 7 CyclicPermutationGroup(7) Prime order 8 CyclicPermutationGroup(8) Cyclic 8 D1 = CyclicPermutationGroup(4) D2 = CyclicPermutationGroup(2) G = direct_product_permgroups([D1,D2]) Abelian, non-cyclic 8 D1 = CyclicPermutationGroup(2) D2 = CyclicPermutationGroup(2) D3 = CyclicPermutationGroup(2) G = direct_product_permgroups([D1,D2,D3])} Abelian, non-cyclic 8 DihedralGroup(4) Non-abelian 8 QuaternionGroup()} Quaternions, also DiCyclicGroup(2) 9 CyclicPermutationGroup(9) Cyclic 9 D1 = CyclicPermutationGroup(3) D2 = CyclicPermutationGroup(3) G = direct_product_permgroups([D1,D2]) Abelian, non-cyclic 10 CyclicPermutationGroup(10) Cyclic 10 DihedralGroup(5) Non-abelian 11 CyclicPermutationGroup(11) Prime order 12 CyclicPermutationGroup(12) Cyclic 12 D1 = CyclicPermutationGroup(6) D2 = CyclicPermutationGroup(2) G = direct_product_permgroups([D1,D2]) Abelian, non-cyclic 12 DihedralGroup(6) Non-abelian 12 AlternatingGroup(4) Non-abelian, symmetries of tetrahedron 12 DiCyclicGroup(3) Non-abelian Also semi-direct product $Z_3 \rtimes Z_4$ 13 CyclicPermutationGroup(13) Prime order 14 CyclicPermutationGroup(14) Cyclic 14 DihedralGroup(7) Non-abelian 15 CyclicPermutationGroup(15) Cyclic ----------------------------------------------------------------------------------------------``` ## Acknowledgements¶ The construction of Sage is the work of many people, and the group theory portion is made possible by the extensive work of the creators of GAP. However, we will single out three people from the Sage team to thank for major contributions toward bringing you the group theory portion of Sage: David Joyner, William Stein, and Robert Bradshaw. Thanks! ### Table Of Contents #### Previous topic Abelian Sandpile Model #### Next topic Lie Methods and Related Combinatorics in Sage ### Quick search Enter search terms or a module, class or function name.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 123, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8913223147392273, "perplexity_flag": "head"}
http://mathoverflow.net/questions/60856/hamilton-paths-in-k-2n/60859
## Hamilton Paths in $K_{2n}$

Hi, I am teaching graph theory this semester to undergraduate students, and we are currently discussing Hamilton paths in finite graphs. Last time we met, I presented the following theorem:

Theorem. For $n\geq 3$ the complete graph $K_n$ is decomposable into edge-disjoint Hamilton cycles iff $n$ is odd. For $n\geq 2$ the complete graph $K_n$ is decomposable into edge-disjoint Hamilton paths iff $n$ is even.

During the class I noted that my argument to prove this theorem was not complete. I started by proving that the second statement implies the first one, which is ok. But I did not have a correct argument to show that there exists an edge-disjoint decomposition of $K_n$ into $n/2$ Hamilton paths if $n$ is even. Can we explicitly construct such a decomposition, or can we only give an existence argument? -

## 1 Answer

We can explicitly construct such a decomposition. Label the vertices of the graph with $\{0,1,\ldots,n-1\}$, take the first path to be $0, n-1, 1, n-2, 2, \ldots, n/2$, and generate the other paths by addition modulo $n$ (the $n$ paths obtained this way come in pairs in which one is the reverse of the other). More generally, a symmetric sequencing in a group with a single involution is sufficient to construct the decomposition. -

Thank you Matt. – Leandro Apr 6 2011 at 23:51
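For readers who want to check the construction computationally, here is a short Python/Sage sketch (the helper names are ad hoc, chosen just for this illustration): it builds the zig-zag path above, takes its $n/2$ translates modulo $n$, and verifies that together they use every edge of $K_n$ exactly once.

```
# Illustrative check of the zig-zag construction described in the answer.
def hamilton_path_decomposition(n):
    """Return n/2 pairwise edge-disjoint Hamilton paths decomposing K_n (n even)."""
    assert n % 2 == 0
    base, lo, hi = [], 0, n - 1
    while lo < hi:                      # zig-zag: 0, n-1, 1, n-2, ..., n/2
        base.extend([lo, hi])
        lo, hi = lo + 1, hi - 1
    # translate the base path by 0, 1, ..., n/2 - 1 (mod n)
    return [[(v + i) % n for v in base] for i in range(n // 2)]

def edge_set(path):
    return {frozenset(e) for e in zip(path, path[1:])}

n = 8
paths = hamilton_path_decomposition(n)
covered = set().union(*(edge_set(p) for p in paths))
print(len(covered) == n * (n - 1) // 2)                      # every edge of K_n is covered
print(sum(len(edge_set(p)) for p in paths) == len(covered))  # and no edge is used twice
```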
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.916242241859436, "perplexity_flag": "head"}
http://lucatrevisan.wordpress.com/2008/01/
in theory "Marge, I agree with you - in theory. In theory, communism works. In theory." -- Homer Simpson # Monthly Archive You are currently browsing the monthly archive for January 2008. ## Overheard in San Francisco January 28, 2008 in San Francisco | 5 comments Young Homeless Guy is sitting on the floor with a cardboard sign. Another guy walks by, holding what look like large leftover bags from a restaurant. Guy With Bags: [stops and offers the bags] would you like something to eat? Young Homeless Guy: is there garlic or avocado in it? GWB: I don’t think so, why? YHG: I am allergic to both. Especially avocado: when I eat it, my throat gets all scratchy. ## An Unusual Recruiting Pitch January 27, 2008 in Larry Summers | 4 comments Women in their sophomore or junior year of college who are thinking about doing research and going to graduate school should read this article (via Andrew Sullivan). Living the life of the mind is very rewarding, and, apparently, the chances of dating male models are not bad either. (If the author could get some mileage out of being an undergrad at Harvard, just imagine what it can do for you to be a grad student at Berkeley!) ## Finally! After a hiatus of almost four year, the graduate computational complexity course returns to Berkeley. To get started, I proved Cook’s non-deterministic hierarchy theorem, a 1970s result with a beautifully clever proof, which I first learned from Sanjeev Arora. (And that is not very well known.) Though the full result is more general, say we want to prove that there is a language in NP that cannot be solved by non-deterministic Turing machines in time $o(n^3)$. (If one does not want to talk about non-deterministic Turing machines, the same proof will apply to other quantitative restrictions on NP, such as bounding the length of the witness and the running time of the verification.) In the deterministic case, where we want to find a language in P not solvable in time $o(n^3)$, it’s very simple. We define the language $L$ that contains all pairs $(\langle T\rangle,x)$ where: (i) $T$ is a Turing machine, (ii) $x$ is a binary string, (iii) $T$ rejects the input $(\langle T\rangle,x)$ within $|(\langle T\rangle,x)|^3$ steps, where $|z|$ denotes the length of a string $z$. It’s easy to see that $L$ is in P, and it is also easy to see that if a machine $M$ could decide this problem in time $\leq n^3$ on all sufficiently large inputs, then the behavior of $M$ on input $\langle M\rangle,x$, for every $x$ long enough, leads to a contradiction. We could try the same with NP, and define $L$ to contain pairs $(\langle T\rangle,x)$ such that $T$ is a non-deterministic Turing machine that has no accepting path of length $\leq |\langle T\rangle,x|^3$ on input $(\langle T\rangle,x)$. It would be easy to see that $L$ cannot be solved non-deterministically in time $o(n^3)$, but it’s hopeless to prove that $L$ is in NP, because in order to solve $L$ we need to decide whether a given non-deterministic Turing machine rejects, which is, in general, a coNP-complete problem. Here is Cook’s argument. Define the function $f(k)$ as follows: $f(1):=2$, $f(k):= 2^{(1+f(k-1))^3}$. Hence, $f(k)$ is a tower of exponentials of height $k$. Now define the language $L$ as follows. $L$ contains all pairs $\langle T \rangle,0^t$ where $\langle T\rangle$ is a non-deterministic Turing machine and $0^t$ is a sequence of $t$ zeroes such that one of the following conditions is satisfied 1. 
There is a $k$ such that $f(k)=t$, and $T$ has no accepting computation on input $\langle T\rangle,0^{1+f(k-1)}$ of running time $\leq (1+f(k-1))^3$; 2. $t$ is not of the form $f(k)$ for any $k$, and $T$ has an accepting computation on input $\langle T\rangle,0^{1+t}$ of running time $\leq (t+1)^3$.

Now let's see that $L$ is in NP. When we are given an input $\langle T\rangle,0^t$ we can first check if there is a $k$ such that $f(k)=t$. 1. If there is, we can compute $t':=f(k-1)$ and deterministically simulate all computations of $T$ on input $\langle T\rangle,0^{1+t'}$ up to running time $(1+t')^3$. This takes time $2^{O((1+t')^3)}$, which is polynomial in $t$. 2. Otherwise, we non-deterministically simulate $T$ on input $\langle T\rangle,0^{t+1}$ for up to $(t+1)^3$ steps. (And reject after time-out.) In either case, we are correctly deciding the language.

Finally, suppose that $L$ could be decided by a non-deterministic Turing machine $M$ running in time $o(n^3)$. In particular, for all sufficiently large $t$, the machine runs in time $\leq t^3$ on input $\langle M\rangle,0^t$. Choose $k$ to be sufficiently large so that for every $t$ in the interval $1+f(k-1),...,f(k)$ the above property is true. Now we can see that $M$ accepts $(\langle M\rangle,0^{f(k-1)+1})$ if and only if $M$ accepts $(\langle M\rangle,0^{f(k-1)+2})$ if and only if … if and only if $M$ accepts $(\langle M\rangle,0^{f(k)})$ if and only if $M$ rejects $(\langle M\rangle,0^{f(k-1)+1})$, and we have our contradiction.

## Please, no pigs in the subway

January 19, 2008 in China | 6 comments

And that includes you! I could not figure out what the item on the bottom left is. Incidentally, the recent spike in the price of pork was a major news item.

## Mmmm… Dangerously Delicious…

January 13, 2008 in China | 6 comments

## Pseudorandomness for Polynomials

January 11, 2008 in theory | Tags: Pseudorandomness | 3 comments

I am currently in Hong Kong for my second annual winter break visit to the Chinese University of Hong Kong. If you are around, come to CUHK on Tuesday afternoon for a series of back-to-back talks by Andrej Bogdanov and me.

First, I'd like to link to this article by Gloria Steinem. (It's old but I have been behind with my reading.) I believe this presidential campaign will bring up serious reflections on issues of gender and race, and I look forward to the rest of it.

Secondly, I'd like to talk about pseudorandomness against low-degree polynomials. Naor and Naor constructed in 1990 a pseudorandom generator whose output is pseudorandom against tests that compute affine functions in $\mathbb F_2$. Their construction maps a seed of length $O(\log (n/\epsilon))$ into an $n$-bit string in ${\mathbb F}_2^n$ such that if $L: {\mathbb F}_2^n \to {\mathbb F}_2$ is an arbitrary affine function, $X$ is the distribution of outputs of the generator, and $U$ is the uniform distribution over ${\mathbb F}_2^n$, we have

(1) $| Pr [ L(X)=1] - Pr [ L(U)=1] | \leq \epsilon$

This has numerous applications, and it is related to other problems. For example, if $C\subseteq {\mathbb F}_2^m$ is a linear error-correcting code with $2^k$ codewords, and if it is such that any two codewords differ in at least a $\frac 12 - \epsilon$ fraction of coordinates, and in at most a $\frac 12 + \epsilon$ fraction, then one can derive from the code a Naor-Naor generator mapping a seed of length $\log m$ into an output of length $k$. (It is a very interesting exercise to figure out how.)
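As a concrete sanity check on condition (1): for very small $n$ one can measure the bias of a candidate output multiset by brute force. The sketch below is purely illustrative, with ad hoc names; it tests a multiset $S$ of $n$-bit vectors against every nonzero linear test, and since replacing a test $L$ by $L+1$ does not change the distance from $\frac 12$, this also covers all non-constant affine tests.

```
# Brute-force bias of a multiset S of n-bit vectors (only feasible for tiny n).
from itertools import product

def max_bias(S, n):
    worst = 0.0
    for a in product([0, 1], repeat=n):
        if not any(a):
            continue                     # skip the zero (constant) test
        ones = sum(1 for x in S if sum(ai * xi for ai, xi in zip(a, x)) % 2 == 1)
        worst = max(worst, abs(ones / len(S) - 0.5))
    return worst

# The uniform distribution over all of F_2^3 is perfectly balanced:
print(max_bias(list(product([0, 1], repeat=3)), 3))   # 0 (no bias)
```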
Here is another connection: Let $S$ be the (multi)set of outputs of a Naor-Naor generator over all possible seeds, and consider the Cayley graph constructed over the additive group of ${\mathbb F}_2^n$ using $S$ as a set of generators. (That is, take the graph that has a vertex for every element of $\{0,1\}^n$, and edge between $u$ and $u+s$ for every $s\in S$, where operations are mod 2 and componentwise.) Then this graph is an expander: the largest eigenvalue is $|S|$, the degree, and all other eigenvalues are at most $\epsilon |S|$ in absolute value. (Here too it’s worth figuring out the details by oneself. The hint is that in a Cayley graph the eigenvectors are always the characters, regardless of what generators are chosen.) In turn this means that if we pick $X$ uniformly and $Y$ according to a Naor-Naor distribution, and if $A\subseteq {\mathbb F}_2^n$ is a reasonably large set, then the events $X\in A$ and $X+Y \in A$ are nearly independent. This wouldn’t be easy to argue directly from the definition (1), and it is an example of the advantages of this connection. There is more. If $f: \{0,1\}^n \rightarrow \{0,1\}$ is such that the sum of the absolute values of the Fourier coefficients is $t$, $X$ is a Naor-Naor distribution, and $U$ is uniform, we have $| Pr [ f(X)=1] - Pr [ f(U)=1] | \leq t \epsilon |$ and so a Naor-Naor distribution is pseudorandom against $f$ too, if $t$ is not too large. This has a number of applications: Naor-Naor distribution are pseudorandom against tests that look only at a bounded number of bits, it is pseudorandom against functions computable by read-once branching programs of width 2, and so on. Given all these wonderful properties, it is natural to ask whether we can construct generators that are pseudorandom against quadratic polynomials over ${\mathbb F}_2^n$, and, in general, low-degree polynomials. This question has been open for a long time. Luby, Velickovic, and Wigderson constructed such a generator with seed length $2^{(\log n)^{1/2}}$, using the Nisan-Wigderson methodology, and this was not improved upon for more than ten years. When dealing with polynomials, several difficulties arise that are not present when dealing with linear functions. One is the correspondence between pseudorandomness against linear functions and Fourier analysis; until the development of Gowers uniformity there was no analogous analytical tool to reason about pseudorandomness against polynomials (and even Gowers uniformity is unsuitable to reason about very small sets). Another difference is that, in Equation (1), we know that $Pr [L(U)=1] = \frac 12$, except for the constant function (against which, pseudorandomness is trivial). This means that in order to prove (1) it suffices to show that $Pr[L(X)=1] \approx \frac 12$ for every non-constant $L$. When we deal with a quadratic polynomial $p$, the value $Pr [p(U)=1]$ can be all over the place between $1/4$ and $3/4$ (for non-constant polynomials), and so we cannot simply prove that $Pr[p(X)=1]$ is close to a certain known value. A first breakthrough with this problem came with the work of Bogdanov on the case of large fields. (Above I stated the problem for ${\mathbb F}_2$, but it is well defined for every finite field.) 
I don’t completely understand his paper, but one of the ideas is that if $p$ is an absolutely irreducible polynomial (meaning it does not factor even in the algebraic closure of ${\mathbb F}$), then $p(U)$ is close to uniform over the field ${\mathbb F}$; so to analyze his generator construction in this setting one “just” has to show that $p(X)$ is nearly uniform, where $X$ is the output of his generator. If $p$ factors then somehow one can analyze the construction “factor by factor,” or something to this effect. This approach, however, is not promising for the case of small fields, where the absolutely irreducible polynomial $x_1 + x_2 x_3$ has noticeable bias. The breakthrough for the boolean case came with the recent work of Bogdanov and Viola. Their starting point is the proof that if $X$ and $Y$ are two independent Naor-Naor generators, then $X+Y$ is pseudorandom for quadratic polynomials. To get around the unknown bias problem, they divide the analysis into two cases. First, it is known that, up to affine transformations, a quadratic polynomial can be written as $x_1x_2 + x_3x_4 + \cdots + x_{k-1} x_k$, so, since applying an affine transformation to a Naor-Naor generator gives a Naor-Naor generator, we may assume our polynomial is in this form. • Case 1: if $k$ is small, then the polynomial depends on few variables, and so even just one Naor-Naor distribution is going to be pseudorandom against it; • Case 2: if $k$ is large, then the polynomial has very low bias, that is, $Pr[p(U)] \approx \frac 12$. This means that it is enough to prove that $Pr[p(X+Y)] \approx \frac 12$, which can be done using (i) Cauchy-Schwartz, (ii) the fact that $U$ and $U+X$ are nearly independent if $U$ is uniform and $X$ is Naor-Naor, and (iii) the fact that for fixed $x$ the function $y \rightarrow p(x+y) - p(x)$ is linear. Now, it would be nice if every degree-3 polynomial could be written, up to affine transformations, as $x_1x_2 x_3 + x_4x_5x_6 + \cdots$, but there is no such characterization, so one has to find the right way to generalize the argument. In the Bogdanov-Viola paper, they prove • Case 1: if $p$ of degree $d$ is correlated with a degree $d-1$ polynomial, and if $R$ is a distribution that is pseudorandom against degree $d-1$ polynomials, then $R$ is also pseudorandom against $p$; • Case 2: if $p$ of degree $d$ has small Gowers uniformity norm of dimension $d$, then $Pr [p(U)=1] \approx \frac 12$, which was known, and if $R$ is pseudorandom for degree $d-1$ and $X$ is a Naor-Naor distribution, then $Pr[p(R+X)=1] \approx \frac 12$ too. There is a gap between the two cases, because Case 1 requires correlation with a polynomial of degree $d-1$ and Case 2 requires small Gowers uniformity $U^d$. The Gowers norm inverse conjecture of Green Tao is that a noticeably large $U^d$ norm implies a noticeable correlation with a degree $d-1$ polynomial, and so it fills the gap. The conjecture was proved by Samorodnitsky for $d=3$ in the boolean case and for larger field and $d=3$ by Green and Tao. Assuming the conjecture, the two cases combine to give an inductive proof that if $X_1,\ldots X_d$ are $d$ independent Naor-Naor distributions then $X_1+\ldots+X_d$ is pseudorandom for every degree-$d$ polynomial. Unfortunately, Green and Tao and Lovett, Meshulam, and Samorodnitsky prove that the Gowers inverse conjecture fails (as stated above) for $d\geq 4$ in the boolean case. Lovett has given a different argument to prove that the sum of Naor-Naor generators is pseudorandom for low-degree polynomials. 
His analysis also breaks down in two cases, but the cases are defined based on the largest Fourier coefficient of the polynomial, rather than based on its Gowers uniformity. (Thus, his analysis does not differ from the Bogdanov-Viola analysis for quadratic polynomials, because the dimension-2 Gowers uniformity measures the largest Fourier coefficient, but it differs when $d\geq 3$.) Lovett’s analysis only shows that $X_1 +\cdots + X_{2^{d-1}}$ is pseudorandom for degree-$d$ polynomials, where $X_1,\ldots,X_{2^{d-1}}$ are $2^{d-1}$ independent Naor-Naor generators, compared to the $d$ that would have sufficed in the conjectural analysis of Bogdanov and Viola. The last word on this problem (for now) is this paper by Viola, where he shows that the sum of $d$ independent Naor-Naor generators is indeed pseudorandom for degree-$d$ polynomials. Again, there is a case analysis, but this time the cases depend on whether or not $Pr [p(U)=1] \approx \frac 12$. If $p(U)$ is noticeably biased (this corresponds to a small $k$ in the quadratic model case), then it follows from the previous Bogdanov-Viola analysis that a distribution that is pseudorandom against degree $d-1$ polynomials will also be pseudorandom against $p$. The other case is when $p(U)$ is nearly unbiased, and we want to show that $p(X_1+\ldots +X_d)$ is nearly unbiased. Note how weak is the assumption, compared to the assumption that $p$ has small dimension-$d$ Gowers norm (in Bogdanov-Viola) or that all Fourier coefficients of $p$ are small (in Lovett). The same three tools that work in the quadratic case, however, work here too, in a surprisingly short proof. ## Don Knuth is 70 January 10, 2008 in theory | 3 comments Alonzo Church and Alan Turing imagined programming languages and computing machines, and studied their limitations, in the 1930s; computers started appearing in the 1940s; but it took until the 1960s for computer science to become its own discipline, and to provide a common place for the logicians, combinatorialists, electrical engineers, operations researchers, and others, who had been studying the uses and limitations of computers. That was a time when giants were roaming the Earth, and when results that we now see as timeless classics were discovered. Don Knuth is one of the most revered of the great researchers of that time. A sort of pop-culture icon to a certain geek set (see for example these two xkcd comics here and here, and this story). Beyond his monumental accomplishments, his eccentricities, and humor are the stuff of legends. (Like, say, the fact that he does not use email, or how he optmized the layout of his kitchen.) As a member of a community whose life is punctuated by twice-yearly conferences, what I find most inspiring about Knuth is his dedication to perfection, whatever time it might take to achieve it. As the well known story goes, more than forty years ago Knuth was asked to write a book about compilers. As initial drafts started to run into the thousands of pages, it was decided the “book” would become a seven-volume series, The Art of Computer Programming, the first three of which appeared between 1968 and 1973. An unparalleled in-depth treatment of algorithms and data structures, the books defined the field of analysis of algorithms. At this point Knuth became frustrated with the quality of electronic typesetting systems, and decided he had to take matters in his own hands. In 1977 he started working on what would become TeX and METAFONT, a development that was completed only in 1989. 
Starting from scratch, he created a complete document preparation system (TeX) which became the universal standard for writing documents with mathematical content, along the way devising new algorithms for formatting paragraphs of texts. To generate the fonts to go with it, he created METAFONT, which is a system that converts a geometric description of a character into a bit-map representation usable by TeX. (New algorithmic work arose from METAFONT too.) And since he was not satisfied with the existing tools available to write a large program involving several non-trivial algorithms, he came up with the notion of “literate programming” and wrote an environment to support it. It is really too bad that he was satisfied enough with the operating system he was using. One now takes TeX for granted, but try to imagine a world without it. One shudders at the thought. We would probably be writing scientific articles in Word, and I would have probably spent the last month reading STOC submissions written in Comic Sans. Knuth has made mathematical exposition his life work. We may never see again a work of the breadth, ambition, and success of The Art of Computer Programming, but as theoretical computer science broadens and deepens, it is vital that each generation cherishes the work of accumulating, elaborating, systematizing and synthesizing knowledge, so that we may preserve the unity of our field. Don Knuth turns 70 tomorrow. I would send him my best wishes by email, but that wouldn’t work… [This post is part of a "blogfest" conceived and coordinated by Jeff Shallit, with posts by Jeff, Scott Aaronson, Mark Chu-Carroll, David Eppstein, Bill Gasarch, Suresh Venkatasubramanian, and Doron Zeilberger.]
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 192, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9499744176864624, "perplexity_flag": "head"}
http://mathoverflow.net/questions/41055?sort=oldest
## What is the Schouten bracket for the Chevalley-Eilenberg complex with coefficients in a nontrivial module?

Let $\mathfrak g$ be a Lie algebra. The Chevalley-Eilenberg complex is defined to be $\wedge^* \mathfrak g$ with differential $d\colon \wedge^* \mathfrak g\to \wedge^{*-1}\mathfrak g$ defined by $$d(a_1\wedge\cdots \wedge a_k)=\sum_{i<j}(-1)^{i+j-1}[a_i,a_j]\wedge a_1\wedge \cdots\wedge\hat{a}_i\wedge\cdots\wedge\hat{a}_j\wedge\cdots\wedge a_k.$$ The differential $d$ is not a derivation with respect to the exterior product $\wedge$, but the deviation from being a derivation is a binary operation which defines a graded Lie algebra structure on $\wedge^* \mathfrak g$: If $\underline{a},\underline{b}\in\wedge^*\mathfrak g$, let $$[\underline{a},\underline{b}]_{s}=d(\underline{a}\wedge\underline{b})-d\underline{a}\wedge \underline{b}+\underline{a}\wedge d\underline{b}$$ (I'm omitting some signs.) This bracket operation vanishes once you take homology, since if $d\underline{a}=d\underline{b}=0$ then it is obvious that $d(\underline{a}\wedge\underline{b})=[\underline{a},\underline{b}]_s$.

However, I was talking to Jim Stasheff several years ago, and he mentioned that the Schouten bracket doesn't necessarily vanish on Lie algebra homology if there are coefficients in a nontrivial $\mathfrak g$-module, $M$. However, I don't know what the definition of the Schouten bracket is in this case. The Chevalley-Eilenberg complex is easy enough to understand: $\wedge^*\mathfrak g\otimes M$, where the differential includes terms where the $a_i$ act on $M$, but the obvious generalization of the above construction fails since two elements of $M$ somehow need to get combined into one element.

So my basic question is how you define a Schouten bracket on the Chevalley-Eilenberg complex with coefficients in a nontrivial $\mathfrak g$-module? -

The analogous thing for associative algebras, which is the Gerstenhaber bracket, is not defined for non-regular values (which is the analogue of trivial values) – Mariano Suárez-Alvarez Oct 4 2010 at 20:06

By "values" I mean "coefficients"... Sometime ago I decided that homology takes coefficients in the module, and cohomology takes values in the module :) – Mariano Suárez-Alvarez Oct 4 2010 at 20:06

1 A truly terrible way to get at this bracket is as follows. If $\mathfrak g$ acts on $M$, then it also acts on the dual space $M^*$, which you should think of as a geometric space, and so there is a map $\mathfrak g \to \Gamma(\rm TM^*)$ (sections of tangent bundle). The Schouten bracket on $\wedge^\bullet\mathfrak g\otimes M$ is the pullback of said bracket on $\wedge^\bullet\Gamma(\rm TM^*)$ to $\mathfrak g$, and restricted to those sections that are linear in the base $M^*$. As I said, this is a terrible way to get at this bracket. – Theo Johnson-Freyd Oct 5 2010 at 2:46

## 1 Answer

Theo Johnson-Freyd: A truly terrible way to get at this bracket is as follows. If $\mathfrak g$ acts on $M$, then it also acts on the dual space $M^*$, which you should think of as a geometric space, and so there is a map $\mathfrak g\to\Gamma(TM^*)$ (sections of tangent bundle). The Schouten bracket on $\wedge^*\mathfrak g\otimes M$ is the pullback of said bracket on $\wedge^*\Gamma(TM^*)$ to $\mathfrak g$, and restricted to those sections that are linear in the base $M^*$. As I said, this is a terrible way to get at this bracket.

(JC: I'm trying to clear the unanswered question backlog.) -
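For reference, the differential on $\wedge^*\mathfrak g\otimes M$ that the question alludes to can be written out explicitly. With $M$ a left $\mathfrak g$-module and signs chosen to be compatible with the convention in the question (other references differ by an overall sign), one standard form is

$$d(a_1\wedge\cdots\wedge a_k\otimes m)=\sum_{i}(-1)^{i+1}\,a_1\wedge\cdots\wedge\hat{a}_i\wedge\cdots\wedge a_k\otimes a_i\cdot m+\sum_{i<j}(-1)^{i+j-1}\,[a_i,a_j]\wedge a_1\wedge\cdots\wedge\hat{a}_i\wedge\cdots\wedge\hat{a}_j\wedge\cdots\wedge a_k\otimes m.$$

Taking $M$ to be the trivial module kills the first sum and recovers the formula above; the question is precisely what should replace $[\cdot,\cdot]_s$ once the first sum is present, since the naive deviation-from-a-derivation recipe would have to merge two elements of $M$ into one.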
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 28, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9303131103515625, "perplexity_flag": "head"}
http://mathoverflow.net/questions/10947?sort=votes
## What’s the analogue of the Hilbert class field in the following analogy?

There's a wonderful analogy I've been trying to understand which asserts that field extensions are analogous to covering spaces, Galois groups are analogous to deck transformation groups, and algebraic closures are analogous to universal covering spaces, hence the absolute Galois group is analogous to the fundamental group. (My vague understanding is that the machinery around etale cohomology makes this analogy precise.)

Does the Hilbert class field (of a number field) fit anywhere into this analogy, and how? Phrased another way, what does the Hilbert class field of the function field of a nonsingular curve defined over $\mathbb{C}$ (say) look like geometrically?

## 3 Answers

This is a great question. Someone will come along with a better answer I'm sure, but here's a bit off the top of my head:

1) The Hilbert class field of a number field $K$ is the maximal everywhere unramified abelian extension of $K$. (Here when we say "$K$" we really mean "$\mathbb{Z}_K$", the ring of integers. That's important in the language of etale maps, because any finite separable field extension is etale.) In the case of a curve over $\mathbb{C}$, the "problem" is that there are infinitely many unramified abelian extensions. Indeed, the Galois group of such is the abelianization of the fundamental group, which is free abelian of rank $2g$ ($g$ = genus of the curve). Let me call this group $G$. This implies that the covering space of $C$ corresponding to $G$ has infinite degree, so is a non-algebraic Riemann surface. In fact, I have never really thought about what it looks like. Its fundamental group is the commutator subgroup of the fundamental group of $C$, which I believe is a free group of infinite rank. I don't think the field of meromorphic functions on this guy is what you want.

2) On the other hand, the Hilbert class group $G$ of $K$ can be viewed as the Picard group of $\mathbb{Z}_K$, which classifies line bundles on $\mathbb{Z}_K$. This generalizes nicely: the Picard group of $C$ is an extension of $\mathbb{Z}$ by a $g$-dimensional complex torus $J(C)$, which has exactly the same abelian fundamental group as $C$ does: indeed their first homology groups are canonically isomorphic. $J(C)$ is called the Jacobian of $C$.

3) It is known that every finite unramified abelian covering of $C$ arises by pulling back an isogeny from $J(C)$. So there are reasonable claims for calling either $G \cong \mathbb{Z}^{2g}$ or $J(C)$ the Hilbert class group of $C$. These two groups are -- canonically, though I didn't explain why -- Pontrjagin dual to each other, whereas a finite abelian group is (non-canonically) self-Pontrjagin dual. [This suggests I may have done something slightly wrong above.] As to what the Hilbert class field should be, the analogy doesn't seem so precise. Proceeding most literally you might take the direct limit of the function fields of all of the unramified abelian extensions of $C$, but that doesn't look like such a nice field.

Finally, let me note that things work out much more closely if you replace $\mathbb{C}$ with a finite field $\mathbb{F}_q$. Then the Hilbert class field of the function field of that curve is a finite abelian extension field whose Galois group is isomorphic to $J(C)(\mathbb{F}_q)$, the (finite!) group of $\mathbb{F}_q$-rational points on the Jacobian.
Comments:

- Minor point: Over a finite field, you still have arbitrarily large everywhere unramified extensions because you can always just enlarge the base field. This corresponds to the degree in the Picard group. But if you pick a way of ruling these out, then you'll get $\mathrm{Pic}_0$, which is finite. – James Borger Jan 7 2010 at 7:02
- I completely agree. I decided not to mention this for the sake of simplicity, since extending the base is a phenomenon which does not occur over $\mathbb{C}$. – Pete L. Clark Jan 7 2010 at 7:27
- Thanks for the answer! Do you have a reference for that last statement you made? – Qiaochu Yuan Jan 8 2010 at 5:08

The absolute Galois group is $\pi_1(\operatorname{Spec} K)$ and the Galois group of the maximal unramified extension of $K$ is $\pi_1(\operatorname{Spec} R)$, where $R$ is the ring of integers of $K$; the class group of $K$ is the abelianization of $\pi_1(\operatorname{Spec} R)$, and the Hilbert class field is the field corresponding to this quotient. A curve has a function field and therefore an absolute Galois group, which is much bigger than the fundamental group and describes all branched covers of the curve. The fundamental group corresponds of course to unramified covers. Working over $\mathbb{C}$ gives you infinite groups even for the abelianization. If you work over a finite field, then the analogy is more precise.

I guess you're looking for the maximal unramified abelian extension. The fundamental group of a genus $g$ curve has abelianization isomorphic to $\mathbb{Z}^{2g}$, so if $g$ is positive, you get an infinite extension. The corresponding cover is not an algebraic curve in the usual sense of the word, although I suppose you can write down a scheme of infinite type. Perhaps a better answer is: questions of this type fall into the domain of "geometric class field theory". If I'm not mistaken, Serre's Algebraic Groups and Class Fields covers a lot of the ideas.

- Don't you feel, though, that the Jacobian has got to be better than some crazy inverse limit scheme? – Pete L. Clark Jan 6 2010 at 20:04
- I totally agree, although the two objects serve different purposes. As far as I can tell, the crazy inverse limit scheme is the fiber product of the curve with the universal cover of the Jacobian (which is also pretty crazy). I think the difference in tractability is that the finite connected covers of the Jacobian are easier to see. – S. Carnahan♦ Jan 6 2010 at 21:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 37, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9280617833137512, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-algebra/89547-basis-dimension-vector-spaces-print.html
# Basis and Dimension of Vector Spaces

• May 18th 2009, 03:47 PM
Maccaman

Give a basis and the dimension of each of the following vector spaces...

(a) The space of 3 × 3 matrices which are invariant under a 90-degree clockwise rotation; that is, the matrices satisfying $\begin{pmatrix} a&b&c\\d&e&f\\g&h&i\end{pmatrix} = \begin{pmatrix} g&d&a\\h&e&b\\i&f&c\end{pmatrix}$

(b) The space of polynomials in $P_n(\mathbb{R})$ $(n\geq 2)$ which are divisible by $x^2 +1$ (i.e. they can be written as the product of $x^2 + 1$ with another polynomial).

• May 18th 2009, 06:24 PM
NonCommAlg

(a) We have $a=c=g=i$ and $b=d=f=h$, so an element of your vector space is of the form $aX + bY + eZ,$ where $X=\begin{pmatrix} 1 & 0 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & 1 \end{pmatrix}, \ \ Y=\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix},$ and $Z=\begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$ So the dimension is 3 and a basis for the space is $\{X,Y,Z \}.$

(b) An element of your vector space here is of the form $(x^2+1)(c_{n-2}x^{n-2} + \cdots + c_1 x + c_0)= c_{n-2}(x^n + x^{n-2}) + \cdots + c_1(x^3+x) + c_0(x^2+1).$ It's clear now that the dimension of your space is $n-1$ and a basis is $\{x^n + x^{n-2}, \cdots , x^3 + x , x^2 + 1 \}.$
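Part (a) lends itself to a quick numerical cross-check: the condition "the matrix equals its own 90-degree clockwise rotation" is linear, so the dimension of the space is 9 minus the rank of the corresponding linear map. The sketch below is my own illustration and assumes numpy is available; `np.rot90` with `k=-1` is taken to be the clockwise rotation.

```python
import numpy as np

# Build the linear map M -> M - rot_cw(M) on the 9-dimensional space of 3x3
# matrices and compute the dimension of its null space.
L = np.zeros((9, 9))
for k in range(9):
    E = np.zeros(9)
    E[k] = 1.0
    M = E.reshape(3, 3)
    diff = M - np.rot90(M, k=-1)   # k = -1 rotates 90 degrees clockwise
    L[:, k] = diff.reshape(9)

print(9 - np.linalg.matrix_rank(L))   # 3, matching the basis {X, Y, Z} above
```

The rank of the map is 6, reflecting the six independent constraints $a=c=g=i$ and $b=d=f=h$.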
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 17, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9358064532279968, "perplexity_flag": "head"}
http://mathhelpforum.com/pre-calculus/125412-more-complex-number.html
# Thread:

1. ## More Complex Number

1) Sketch on the Argand diagram the regions which satisfy $1\leq |z+3i| \leq 3$. So $1\leq x^2+(y+3)^2 \leq 9$. Now what do I do?

2) Indicate clearly on an Argand diagram the region of points that satisfy both the conditions $|z-3+4i|\leq4$, $|z|\geq |z-10|$.

Thanks as always!

2. Did you sketch these? 1) $x^2+(y+3)^2 = 1$ and $x^2+(y+3)^2 = 9$ are concentric circles. $1\leq x^2+(y+3)^2 \leq 9$ is the area in between them.

2) $|z-3+4i| \leq 4$ is a circle. $|z| = |z-10|$ is the line x = 5. $|z-3+4i|\leq4$ and $|z|\geq |z-10|$ is the area between the circle and the line (you will need to decide which of the two possible areas it is ....)

By the way, all these relations have a very simple geometric interpretation which makes it easy to get the cartesian equations ....

3. I guess I understand what you mean. But the thing is I am very bad at inequalities and geometry. I will post my answers later to check. Thanks!

4. Do you understand that |z| is the distance from the complex number z to the number 0 = 0 + 0i? From that it follows that |a - b| is the distance between the two complex numbers a and b. Saying that |z + 3i| = |z - (-3i)| = 1 means that the distance from z to -3i is 1: z can be any number on the circle with center at -3i and radius 1. Saying that $1\le |z+ 3i|$ means that z is on or outside that circle. Similarly |z + 3i| = |z - (-3i)| = 3 means that the distance from z to -3i is 3: z can be any number on the circle with center at -3i and radius 3. $|z+ 3i|\le 3$ says that z is on or inside that circle.

Similarly, $|z- 3+ 4i|= |z-(3-4i)|\le 4$ means that z is a point on or inside the circle with center at 3 - 4i and radius 4. $|z|= |z- 10|$ means that the distance from z to the origin is equal to the distance from z to 10. And it is an easy geometry theorem that the set of all points equidistant from two points is the perpendicular bisector of the segment between the two points. Here, that is the line perpendicular to the x-axis (real axis) at (5, 0), which is given by z = 5 + yi for any y. $|z|\ge |z- 10|$ is that line plus the set of points on the right side of that line, so that z is closer to 10 than to 0.

5. Would this be correct for Q2? Thanks! [attached sketch]

6. The line in your diagram looks nothing like the equation of the line I gave you. And did you read the reply by HallsofIvy who gave the geometric details of how I derived that equation?

7. Your circle is correct but I don't know how you got that line, and you don't say what line it is. The set of z such that $|z|\ge |z- 10|$ is, as I said, the perpendicular bisector of the line from 0 to 10. The line containing 0 and 10, the real axis, is horizontal, so the perpendicular bisector is the vertical line z = 5 + yi for all y.

8. The line is y = 5x?

9. As I said earlier, and which was subsequently explained by HoI, the equation of the line is x = 5. How do you get y = 5x?

10. Would it be like this then? [attached sketch]

11. Test a value of z taken from the red area. Does it satisfy the inequality?
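Post 11's suggestion can also be carried out mechanically: squaring $|z|\ge|z-10|$ gives $x\ge 5$, so a point belongs to the region exactly when it lies in the disc and to the right of that vertical line. The snippet below is a sketch with arbitrarily chosen test points (they are not from the thread).

```python
# Test candidate points against both conditions: |z - (3 - 4i)| <= 4 and |z| >= |z - 10| (i.e. x >= 5).
def in_region(z):
    return abs(z - 3 + 4j) <= 4 and abs(z) >= abs(z - 10)

print(in_region(6 - 4j))   # True: inside the circle and to the right of x = 5
print(in_region(3 - 4j))   # False: the centre of the circle lies to the left of x = 5
print(in_region(5 - 4j))   # True: on the boundary line x = 5, inside the circle
```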
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 29, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9643346071243286, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-algebra/29663-automorphism.html
# Thread:

1. ## automorphism

If $\sigma_{1}, \sigma_2, \ldots, \sigma_{n}$ is a group of automorphisms of a field $E$ and if $F$ is the fixed field of $\sigma_{1}, \sigma_{2}, \ldots, \sigma_{n},$ then $[E:F] = n$. How would I prove this?

2. There is a result due to Artin. Let $G$ be a finite group of automorphisms of $E$, and let $F=E^G$ be the fixed subfield. Then $E/F$ is finite and $[E:F] \leq |G|$. Note, you need the $\leq$ sign.
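A tiny concrete instance may help fix the statement in mind (my own illustration, not from the thread): take $E=\mathbb{Q}(i)$ and $G=\{\mathrm{id}, \text{complex conjugation}\}$, so $|G|=2$ and the fixed field is $F=\mathbb{Q}$. The degree $[E:F]$ is the degree of the minimal polynomial of $i$ over $\mathbb{Q}$, which sympy can compute.

```python
from sympy import I, Symbol, minimal_polynomial, degree

x = Symbol('x')
p = minimal_polynomial(I, x)        # x**2 + 1
print(p, degree(p, x))              # degree 2 = |G|; here Artin's bound [E:F] <= |G| holds with equality
```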
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 16, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9090736508369446, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/116169-area-between-functions.html
# Thread:

1. ## Area between two functions

Ok, so I was sick one lecture and couldn't make it, but now I'm stuck on my homework assignment and cannot reach anybody to fill me in on how to complete certain problems (and our book is just awful in giving examples). I've figured out how to do problems with just the two functions themselves [i.e. f(x) = 5-x^2 ; g(x) = x^2-3] but I don't understand what to do when two x values are already given.

Problem: f(x) = 5-x^2 ; g(x) = x^2-3 ; x = 0, x = 4. The answer is 32. I keep getting stuck when I reduce the equation to -2x^3/3 + 8x (evaluated from 0 to 4).

2. Did you sketch the graphs of f and g?

$A = \int_0^2 f(x) - g(x) \, dx + \int_2^4 g(x) - f(x) \, dx$

3. ## Interesting interpretation of bounded area

The 2 parabolas intersect at 2. From 0 to 2, the 5-x^2 parabola is on top of the other one. From 2 to 4 it's the other parabola that's on top. The question is asking for the area bounded by the 2 curves, so you need to split the problem into 2 parts, one integration from 0 to 2 and the other from 2 to 4. That gives you 32. Make sure you flip the sign of the integrand, otherwise the signed area will come out wrong.

4. Thanks a bunch... the sketching the graph part went over my head.
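The split computation from post 2 is easy to verify symbolically; the sketch below assumes sympy is available.

```python
from sympy import symbols, integrate

x = symbols('x')
f = 5 - x**2
g = x**2 - 3

# The parabolas cross at x = 2, so integrate f - g on [0, 2] and g - f on [2, 4].
area = integrate(f - g, (x, 0, 2)) + integrate(g - f, (x, 2, 4))
print(area)   # 32, matching the stated answer
```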
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.955769956111908, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/53306/what-can-we-learn-from-the-tropicalization-of-an-algebraic-variety
## What can we learn from the tropicalization of an algebraic variety?

I often hear people speaking of the many connections between algebraic varieties and tropical geometry and how geometric information about a variety can be read off from the associated tropical variety. Although I have seen some concrete examples of this, I am curious about how much we can get out of this correspondence in general. More precisely, my question is the following:

Which information of $X=V(I)$ can be read off its tropicalization $\mbox{Trop}(X)=\bigcap_{f\in I}\mbox{trop}(f)$?

As a very basic example, it is known that $\dim(X)=\dim_{\mathbb{R}}\mbox{Trop}(X)$.

## 4 Answers

By work of Matt Baker, the dimension of a linear system on a curve is bounded above by the dimension of the corresponding tropical linear system on the corresponding tropical curve -- see Lemma 2.8 of his paper, "Specialization of linear systems from curves to graphs."

- I have no idea whether anything like this is known in higher dimension, by the way, or even whether it's completely clear how to articulate the correct question! – JSE Jan 26 2011 at 15:26

I hope others will have lots more to say, but one nice property is $$g(X) \geq b_1(\mathrm{Trop}(X))$$ when $X$ is a (plane) curve, where $g$ is the genus and $b_1$ is the Betti number (= number of cycles) of the graph. (Thanks to quim for correcting the inequality.)

- Actually, $g(X)\ge b_1(\mathrm{Trop}(X))$. For elliptic curves, the Betti number of the tropicalization will be zero or one depending on the sign of the valuation of the j-invariant (see Speyer arXiv:0711.2677 and Katz-Markwig-Markwig DOI:10.1112/S1461157000001522) – quim Jan 26 2011 at 10:19
- Oops! Fixed it, but please leave your comment up with the references. – Dave Anderson Jan 26 2011 at 17:07

For general subvarieties of an algebraic torus, the tropicalization knows about the class of the subvariety in a suitable toric compactification of the algebraic torus. So you can compute intersection products of subvarieties of a torus tropically. See the recent preprint of Osserman-Payne for the state of the art.

For subvarieties $X$ that are schon (which is a natural smoothness condition), you can say a lot more. There is a natural dualizing complex $\Gamma_X$ which maps to $\mathrm{Trop}(X)$ whose homology reflects the lowest bit of the weight filtration on $X$. From this fact, you can get the natural generalization of $g(X)\geq b_1(\Gamma)$ (it is not in general true that $g(X)\geq b_1(\operatorname{Trop}(X))$ because the tropicalization map may have disconnected fibers, as was pointed out by Speyer).

There are two special cases where you can say a lot more: when $X$ is a schon hypersurface and when $\mathrm{Trop}(X)$ is smooth (smoothness here means $\mathrm{Trop}(X)$ is locally modeled on matroid fans). In this case, you can say things about the Hodge numbers of $X$. See my paper with Stapledon for details. Warning: the results of that paper require compactifying $X$ by completing the algebraic torus to a toric variety. We have a sequel in the works that will use more sophisticated Hodge theory to get around that problem.

- To JSE: higher dimensional analogs of Matt Baker's specialization lemma should hold. I've worked out some special cases for surfaces using tropical intersection theory. I think the right approach to the higher-dimensional analog should use rigid analytic geometry. – Eric Katz Feb 6 2011 at 4:06

In arxiv.org/0805.1916 Sam Payne shows that one can reconstruct the analytification of a quasiprojective variety over a nonarchimedean field as the inverse limit of its tropicalisations.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9238271117210388, "perplexity_flag": "head"}
http://en.wikiversity.org/wiki/Matrix_Algebra_for_Electrical_Engineers
# Matrix Algebra for Electrical Engineers

From Wikiversity

## Introduction - Linear Equations

Let us illustrate through examples what linear equations are. We will also be introducing new notation wherever appropriate. For example:

$3 x - y = 14$
$2 x + y = 11$

You could solve for $x$ or $y$ in one equation and substitute it into the other. But what if you had three variables, and three equations, such as:

$12 x + 2 y + 15 z = 25$
$23 x + 12 y + 45 z = 46$
$32 x + 68 y + 10 z = 8$

Substitution is still viable, but it would take a while, and you have many opportunities to make a mistake. In this page, we will talk about turning linear equations into a matrix, and then using properties of matrices to solve them. Note that we're skipping most of the theory involved, and are solely focusing on their practical usage.

Now let's examine those first two problems. The first thing we need to do is turn the systems of equations into matrices. To do that, we make each row correspond to an equation, each column correspond to a single variable, and the final column to the constants on the right-hand side. So for the first system:

$3 x - y = 14$
$2 x + y = 11$

we have:

$\left[\begin{array}{ccc} 3 & -1 & 14\\ 2 & 1 & 11\end{array}\right]$

where the first row is the first equation and the second row is the second. The first column corresponds to $x$, the second to $y$, and the third to the constants on the right-hand side. Let's do the same for the second system:

$\left[\begin{array}{cccc} 12 & 2 & 15 & 25\\ 23 & 12 & 45 & 46\\ 32 & 68 & 10 & 8\end{array}\right]$

## Matrices

Suppose that you have a linear system of equations

$\begin{align} a_{11} x_1 + a_{12} x_2 + a_{13} x_3 + a_{14} x_4 &= b_1 \\ a_{21} x_1 + a_{22} x_2 + a_{23} x_3 + a_{24} x_4 &= b_2 \\ a_{31} x_1 + a_{32} x_2 + a_{33} x_3 + a_{34} x_4 &= b_3 \\ a_{41} x_1 + a_{42} x_2 + a_{43} x_3 + a_{44} x_4 &= b_4 \end{align} ~.$

Matrices provide a simple way of expressing these equations. Thus, we can instead write $\mathbf{A}\mathbf{x} = \mathbf{b}$, that is,

$\begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \\ b_4 \end{bmatrix} ~.$

Here $\mathbf{A}$ is the $4\times 4$ matrix of coefficients while $\mathbf{x}$ and $\mathbf{b}$ are $4\times 1$ matrices. In general, an $m \times n$ matrix $\mathbf{A}$ is a set of numbers arranged in $m$ rows and $n$ columns.

$\mathbf{A} = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \dots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \dots & a_{2n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & a_{m3} & \dots & a_{mn} \end{bmatrix}~.$

## Determinant of a matrix

The next thing we will discuss is the meaning of the determinant of a matrix. Note that a determinant is only defined for square matrices.
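As an aside, the two example systems written out above can be checked numerically before any hand computation. The short sketch below assumes numpy is available; for the first system it reproduces the solution x = 5, y = 1 that is derived via Cramer's rule later on.

```python
import numpy as np

A1 = np.array([[3.0, -1.0], [2.0, 1.0]])
b1 = np.array([14.0, 11.0])
print(np.linalg.solve(A1, b1))   # [5. 1.]  -> x = 5, y = 1

A2 = np.array([[12.0, 2.0, 15.0],
               [23.0, 12.0, 45.0],
               [32.0, 68.0, 10.0]])
b2 = np.array([25.0, 46.0, 8.0])
print(np.linalg.solve(A2, b2))   # numerical solution of the three-variable system
```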
For a $2 \times 2$ matrix $\mathbf{A}$, we have

$\mathbf{A} = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \implies \det(\mathbf{A}) = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22}\end{vmatrix} = a_{11} a_{22} - a_{12} a_{21} ~.$

For an $n \times n$ matrix, the determinant is calculated by expanding into minors as

$\begin{align} &\det(\mathbf{A}) = \begin{vmatrix} a_{11} & a_{12} & a_{13} & \dots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \dots & a_{2n} \\ a_{31} & a_{32} & a_{33} & \dots & a_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & a_{n3} & \dots & a_{nn} \end{vmatrix} \\ &= a_{11} \begin{vmatrix} a_{22} & a_{23} & \dots & a_{2n} \\ a_{32} & a_{33} & \dots & a_{3n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n2} & a_{n3} & \dots & a_{nn} \end{vmatrix} - a_{12} \begin{vmatrix} a_{21} & a_{23} & \dots & a_{2n} \\ a_{31} & a_{33} & \dots & a_{3n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n3} & \dots & a_{nn} \end{vmatrix} + \dots \pm a_{1n} \begin{vmatrix} a_{21} & a_{22} & \dots & a_{2(n-1)} \\ a_{31} & a_{32} & \dots & a_{3(n-1)} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \dots & a_{n(n-1)} \end{vmatrix} \end{align}$

In short, expanding along any fixed row $i$, the determinant of a matrix $\mathbf{A}$ has the value

$\det(\mathbf{A}) = \sum^n_{j=1} (-1)^{i+j} a_{ij} M_{ij}$

where $M_{ij}$ is the determinant of the submatrix of $\mathbf{A}$ formed by eliminating row $i$ and column $j$ from $\mathbf{A}$.

## Cramer's Rule

Now we're going to unify our (basic) knowledge of turning a set of linear equations into a matrix and calculating their determinant into a method for solving linear equations. Cramer's rule is an elegant formula for the solutions of a system of linear equations.

A typical linear system (also known as a set of "simultaneous linear equations") is a set of N linear equations in N variables (or "unknowns"). For N = 3, it might look like this:

$A x + B y + C z = P\,$
$D x + E y + F z = Q\,$
$G x + H y + I z = R\,$

The numbers A...I (the "coefficient matrix") are given, as are the numbers P, Q, R (the "right-hand sides"). The values of x, y, z (the "unknowns") are to be found.

A system of N linear equations in N unknowns has a uniquely determined solution except in unusual circumstances (the coefficient matrix having determinant zero, as will be shown below). If the number of equations were greater than the number of unknowns, there would be no solutions except in unusual circumstances. If the number of equations were less than the number of unknowns, there would be infinitely many solutions. Neither of those cases is covered by Cramer's rule.

Cramer's rule gives the solution in terms of the determinants of the coefficient matrix and the coefficient matrix with individual columns replaced. It says that the value of the nth unknown is the quotient of the determinant of the coefficient matrix with its nth column replaced by the right-hand-side numbers, divided by the determinant of the unmodified coefficient matrix.
For the 3-equation example given above:

$x = \frac{\begin{vmatrix} \color{Red}P & B & C \\ \color{Red}Q & E & F \\ \color{Red}R & H & I\end{vmatrix}}{\begin{vmatrix} A & B & C \\ D & E & F \\ G & H & I\end{vmatrix}}\ \ \ \ y = \frac{\begin{vmatrix} A & \color{Red}P & C \\ D & \color{Red}Q & F \\ G & \color{Red}R & I\end{vmatrix}}{\begin{vmatrix} A & B & C \\ D & E & F \\ G & H & I\end{vmatrix}}\ \ \ \ z = \frac{\begin{vmatrix} A & B & \color{Red}P \\ D & E & \color{Red}Q \\ G & H & \color{Red}R\end{vmatrix}}{\begin{vmatrix} A & B & C \\ D & E & F \\ G & H & I\end{vmatrix}}$

This works for any number of equations.

## Basic examples

Now, let's try that with our previous two examples, starting with the simpler one:

$\left[\begin{array}{ccc} 3 & -1 & 14\\ 2 & 1 & 11\end{array}\right]$

The coefficient matrix is:

$\left[\begin{array}{cc} 3 & -1\\ 2 & 1\end{array}\right]$

As per Cramer's rule, the variable of the nth column can be solved for by taking the determinant of the matrix with the nth column replaced by the right-hand-side values, and dividing it by the determinant of the coefficient matrix. Let's do that.

Let's first write out the matrix for calculating the first column's variable, $x$. Substituting the right-hand-side values (14 and 11) into the first column gives us the matrix:

$\left[\begin{array}{cc} 14 & -1\\ 11 & 1\end{array}\right]$

Now, we divide the determinant of the matrix with the substitution by the determinant of the original coefficient matrix:

$\frac{\begin{vmatrix} 14 & -1\\ 11 & 1\end{vmatrix}}{\begin{vmatrix} 3 & -1\\ 2 & 1\end{vmatrix}}=\frac{14*1-(-1*11)}{3*1-(-1*2)}=\frac{25}{5}=5$

From this we can see that the value of the variable in the first column, $x$, is equal to 5. Now let's do the same for the second column. Replacing the -1 and 1 with the 14 and 11, and dividing the determinants:

$\frac{\begin{vmatrix} 3 & 14\\ 2 & 11\end{vmatrix}}{\begin{vmatrix} 3 & -1\\ 2 & 1\end{vmatrix}}=\frac{3*11-(14*2)}{3*1-(-1*2)}=\frac{5}{5}=1$

From this we can see that the value of the variable in the second column, $y$, is equal to 1.
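The recipe above translates directly into a few lines of Python. The sketch below is my own illustration: a cofactor-expansion determinant plus Cramer's rule, which reproduces x = 5, y = 1 for the worked example and then solves the three-variable system from the introduction. (In practice a library routine such as numpy.linalg.solve is the usual tool.)

```python
def det(M):
    """Determinant by expansion along the first row, as described above."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def cramer(A, b):
    """Solve A x = b by replacing one column at a time (Cramer's rule)."""
    d = det(A)
    solution = []
    for col in range(len(A)):
        A_col = [row[:col] + [b[i]] + row[col + 1:] for i, row in enumerate(A)]
        solution.append(det(A_col) / d)
    return solution

print(cramer([[3, -1], [2, 1]], [14, 11]))                     # [5.0, 1.0]
print(cramer([[12, 2, 15], [23, 12, 45], [32, 68, 10]], [25, 46, 8]))
```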
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 50, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9261263608932495, "perplexity_flag": "head"}
http://alanrendall.wordpress.com/2011/11/19/calcium-oscillations/
# Hydrobates

A mathematician thinks aloud

## Calcium oscillations

There is evidence to suggest that oscillations in levels of calcium inside and outside cells are used as a signalling mechanism. A variety of mathematical models have been introduced to study this phenomenon. Here I will discuss some aspects of the subject. A more general review can be found in this Scholarpedia article.

In the plasma membrane and the endoplasmic reticulum there are pumps which transport calcium ions out of the cytosol. The result is a huge concentration difference between the cytosol on the one hand and the extracellular space and the lumen of the endoplasmic reticulum on the other hand. This can be several orders of magnitude. There are also ion channels in these membranes which, when open, allow the calcium to flow down its gradient. This provides a way to change the calcium concentration in the cytosol very fast and this can cause rapid changes in the behaviour of a cell. In this context it is important that the endoplasmic reticulum has such a high surface area and is so widely distributed in the cell. One type of calcium channels in the ER reacts to the binding of the substance IP${}_3$ (inositol 1,4,5-trisphosphate) to the channel by opening. This effect is also modulated by the calcium concentration in the cytosol. There are calcium channels in the plasma membrane and there is also a certain amount of leakage through both membranes. Transport of calcium in and out of mitochondria can be an important effect.

Some combination of these features can lead to oscillations in the calcium concentration in the cytosol. This presents a challenge for mathematical modelling. Ideally a dynamical system consisting of ODEs for the concentrations of various substances would exhibit periodic solutions. Of course a system of this kind must have dimension at least two and several two-dimensional models have been proposed. It could be that several of these models are useful since calcium signalling in different cell types may use different mechanisms. The difficult thing is not to find a model exhibiting oscillations but to find the right model for a particular type of cell.

In what follows I consider one type of model. I have chosen this type for two reasons. The first is its simplicity. The second is that it may be relevant to explaining the role of calcium in the activation of T cells. I consider first a model due to Somogyi and Stucki (J. Biol. Chem. 266, 11068). It is a two-dimensional dynamical system. The two variables are the calcium concentrations in the lumen of the ER and the cytosol, call them $x$ and $y$. The concentration of IP${}_3$ is taken to be constant. The rates of change of $x$ and $y$ are given by $k'y-kx-\alpha f(y)x$ and $kx-k'y+\alpha f(y)x+\gamma-\beta y$. The quantities $k,k',\alpha,\beta,\gamma$ are positive constants while $f$ is a positive function which describes the behaviour of the IP${}_3$ receptor and must be further specified to get a definite model. The inventors of the model remark that setting $k=0$ and $f(y)=\frac{y^2}{a^2+y^2}$ causes this system to reduce to the famous Brusselator, which I have commented on elsewhere. Thus the model can be thought of as a kind of generalized Brusselator and indeed it exhibits similar qualitative behaviour. The choices which are suggested to be appropriate for the cells being studied (in this case hepatocytes) are that $k>0$ and $f$ is given by a Hill function, $f(y)=\frac{y^n}{a^n+y^n}$.
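The model as written down here is straightforward to integrate numerically. The sketch below is my own illustration using scipy; the parameter values are invented purely for experimentation and are not taken from the papers discussed in this post.

```python
# Somogyi-Stucki-type model as specified above:
#   x' = k'*y - k*x - alpha*f(y)*x
#   y' = k*x - k'*y + alpha*f(y)*x + gamma - beta*y,  with f(y) = y^n / (a^n + y^n).
import numpy as np
from scipy.integrate import solve_ivp

k, kp, alpha, beta, gamma, a, n = 0.1, 0.2, 5.0, 1.0, 1.0, 1.0, 4   # invented values

def f(y):
    return y**n / (a**n + y**n)

def rhs(t, u):
    x, y = u
    flux = kp * y - k * x - alpha * f(y) * x    # this is x'
    return [flux, -flux + gamma - beta * y]     # y' = -x' + gamma - beta*y

sol = solve_ivp(rhs, (0, 200), [10.0, 0.5], max_step=0.1)
print(sol.y[1].min(), sol.y[1].max())   # range of the cytosolic variable y over the run
# Whether a limit cycle appears depends on the (here invented) parameter values.
```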
Nice features of this system are that it has a unique stationary solution which can be written down explicitly and that it is also possible to get an explicit formula for the characteristic equation of the linearization at that stationary solution. In this way the stability of the stationary solution can be determined, with instability corresponding to the existence of a limit cycle. It is stated that a Hopf bifurcation occurs but there is no discussion of proving this. The general picture seems to be that oscillatory behaviour occurs at intermediate levels of IP${}_3$ stimulation and disappears at levels which are too low or too high. In this paper an alternative version of the model is introduced where in some places $x$ is replaced by $x-y$. This happens when modelling effects driven by the difference of concentrations in the two compartments. Given that $x$ is normally much larger than $y$ it is plausible to replace the difference of concentrations by the concentration in the ER.

The dephosphorylation of the transcription factor NFAT during the activation of T cells has been studied in a paper of Salazar and Höfer (J. Mol. Biol. 327, 31). An important step in the activation process is an influx of calcium caused by release of IP${}_3$. The calcium binds to calmodulin. It also binds to the phosphatase calcineurin which can then be activated by calmodulin. Finally calcineurin removes phosphate groups from NFAT. In this paper a model for calcium dynamics is used which is closely related to the (alternative) model of Somogyi and Stucki. There are three equations but two of them form a closed system which is more or less the Somogyi-Stucki model with a specific choice of receptor activity as a function of the concentration of IP${}_3$. The last equation essentially means that the calcium level is integrated in time to give the concentration of active calcineurin.

This entry was posted on November 19, 2011 at 11:59 am and is filed under dynamical systems, immunology, mathematical biology.

### 2 Responses to “Calcium oscillations”

1. hydrobates Says: December 6, 2011 at 11:59 am
   I now noticed that in order to get the Brusselator it is necessary to take $f(y)=y^2$ instead of the function I gave.

2. The NFAT signalling pathway « Hydrobates Says: January 6, 2012 at 6:19 am
   [...] signalling pathway, its connection to calcium and a paper on the subject by Salazar and Höfer in a previous post. Now I have written a paper where I look into mathematical aspects of the activation of NFAT by [...]
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 24, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9454312324523926, "perplexity_flag": "head"}
http://mathoverflow.net/questions/101841/is-sl-2q-isomorphic-to-pgl-2q
## Is $SL_2(q)$ isomorphic to $PGL_2(q)$?

Let $SL_2(q)$ be the group of all $2 \times 2$ invertible matrices with unit determinant, and let $PGL_2(q)$ be the quotient group $GL_2(q)/\{\text{scalar matrices over } \mathbb{F}_q\}$.

Comments:

- What is a root system? Is there a simple answer? – Sina Jul 10 at 12:57
- Order of $SL_2(q)$ equals $q(q^2-1)$, order of $PGL_2(q)$ is $q(q^2-1)(q+1)$. – Andrei Smolensky Jul 10 at 13:09
- They have the same order. $PGL_2(q)=GL_2(q)/\{\lambda I, \lambda\in F_q-\{0\}\}$ where $I$ is the identity matrix. – Sina Jul 10 at 13:21
- Dear Sina, It's not clear that the answer you have accepted actually answers your question. Indeed, as Jim Humphreys points out in his answer, the groups are not isomorphic if $q$ is odd (the first has a non-trivial centre while the second does not), but are isomorphic if $q$ is a power of $2$. Regards, – Emerton Jul 10 at 14:48
- Oops! I just thought that what I wrote above is wrong. The lesson I learned today: never try to reproduce a formula you believe to remember without checking. – Andrei Smolensky Jul 10 at 19:11

## 3 Answers

The question itself is natural, but it's fairly elementary and has a clearcut answer in the literature on finite simple groups including the series of books by Gorenstein-Lyons-Solomon (and for small order groups the Atlas). It's easiest to understand what is going on from the algebraic group viewpoint, summarized with references in Section 1.1 of my 2006 LMS Lecture Note volume Modular Representations of Finite Groups of Lie Type. Here Lang's theorem is crucial. It shows that whenever you have an isogeny (algebraic group epimorphism with finite kernel) from one connected algebraic group onto another over a finite field of $q$ elements, the corresponding groups of rational points over $\mathbb{F}_q$ have the same order. In your case, start with the natural map from a general linear group to the quotient by scalars, which restricts to an isogeny $\mathrm{SL}_2 \rightarrow \mathrm{PGL}_2$. Thus the two finite groups do have the same order for any $q$, even though the original map fails to be an algebraic group isomorphism. When $q$ is even, however, the finite groups are in fact isomorphic. But when $q$ is odd, the group on the left has a nontrivial center and the group on the right doesn't.

Not quite: $PGL(2, F_q) \cong PSL(2, F_q) \rtimes F_q^\times/ (F_q^{\times})^2$. Look here: http://en.wikipedia.org/wiki/File:PSL-PGL.svg

- Works for any field. – Marc Palm Jul 10 at 14:36
- But when $q$ is a power of $2,$ every non-zero element of the field is a square, and the ${\rm PSL}$ on the right is an ${\rm SL}.$ – Geoff Robinson Jul 10 at 15:50
- I do not understand. Is there a mistake, or are you saying the OP is correct in this special case? $F_q^\times$ modulo the squares is a trivial group when $2$ divides $q$. – Marc Palm Jul 10 at 16:41
- Dear Mrc, I think that Geoff Robinson is objecting to your statement "not quite". If $q$ is a power of $2$ then in fact the general isomorphism does reduce to an isomorphism between $PGL_2$ and $SL_2$. Regards, – Emerton Jul 10 at 17:25
- I was scared for a second here;) – Marc Palm Jul 10 at 18:39

$SL_2$ and $PGL_2$, seen as linear algebraic groups, have different root data. See Milne's notes on reductive groups, http://jmilne.org/math/CourseNotes/RG.pdf, p. 24. This implies that the algebraic groups are non-isomorphic, but not necessarily the statement you wanted.

- What is a root system and is there any simple answer? – Sina Jul 10 at 13:08
- If SINA does not know root systems, perhaps the fastest argument is that $\textbf{SL}_2(\mathbb{F}_q)$ has a nontrivial center, while $\textbf{PGL}_2(\mathbb{F}_q)$ has trivial center. – Jason Starr Jul 10 at 13:09
- @Jason Does $SL_2(2^k)$ have a nontrivial center? – Andrei Smolensky Jul 10 at 13:17
- @Andrei -- I guess I was assuming that $q$ is odd. – Jason Starr Jul 10 at 14:16
- Dear Timo, A couple of comments: (i) you probably mean root data rather than root systems; (ii) it's not clear (at least to me) that a statement about non-isomorphism of algebraic groups implies a corresponding statement about non-isomorphism of $\mathbb F_q$-points; (iii) related to point (ii), in fact the natural isogeny does induce an isomorphism if $q$ is a power of $2$ (this is easily checked, or see Jim Humphreys' answer), so point (ii) is a genuine concern, as far as I can tell. Regards, – Emerton Jul 10 at 14:44
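For small prime $q$ the order and center claims above can be confirmed by brute force. The snippet below is my own quick check; it only handles prime $q$, since it models $\mathbb{F}_q$ as the integers modulo $q$ (prime powers would need a genuine field implementation).

```python
from itertools import product

def check(q):
    # All invertible 2x2 matrices over Z/qZ (q prime), stored as tuples (a, b, c, d).
    gl = [(a, b, c, d) for a, b, c, d in product(range(q), repeat=4)
          if (a * d - b * c) % q != 0]
    sl = [m for m in gl if (m[0] * m[3] - m[1] * m[2]) % q == 1]
    pgl_order = len(gl) // (q - 1)                 # quotient by the q-1 nonzero scalars
    z_sl = [l for l in range(1, q) if (l * l) % q == 1]   # scalars lambda*I in SL_2
    print(q, len(sl), pgl_order, len(z_sl))

for q in (2, 3, 5, 7):
    check(q)
# |SL_2(q)| and |PGL_2(q)| agree (both equal q(q^2-1)) for every q listed, while the
# center of SL_2(q) has 2 elements for odd q and only 1 for q = 2.
```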
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 34, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9268746972084045, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/32971/how-to-prove-this-inequality/33132
# how to prove this inequality?

Given $x>0$, $y>0$ and $x + y =1$, how to prove that $\frac{1}{x}\cdot\log_2\left(\frac{1}{y}\right)+\frac{1}{y}\cdot\log_2\left(\frac{1}{x}\right)\ge 4$ ?

## 2 Answers

The function $t\mapsto\log_2{1\over t}$ is convex. Apply Jensen's inequality to $$f(x,y):={1\over x}\log_2{1\over y}+{1\over y}\log_2{1\over x}={1\over x y}\Bigl(y\ \log_2{1\over y}+x\ \log_2{1\over x}\Bigr)$$ and obtain $$f(x,y)\geq{1\over x y}\log_2 2={1\over x y}\geq 4\ .$$

Hint 1: Rewrite this inequality as: $$-x\log_2 x - (1-x)\log_2 (1-x) \geq 4 x (1-x)$$ Both sides of the inequality define concave functions on the interval $[0,1]$. Plot them. Can you show that the graph of the second is always lying below the graph of the other?

- Hint 2: raise both sides of the expression Raskolnikov wrote to the exponent with base 2. – Willie Wong♦ Apr 14 '11 at 16:38
- Thanks for comment. Can you provide some details, 'cause I didn't get it. – user9587 Apr 15 '11 at 2:30
- @brian: What I'd do from there is a full sign analysis (first and second derivative) of the difference between the two functions. It's easy to show that there are two inflexion points, and a bit more work to show that there are 3 extrema (2 maxima and 1 minimum). From there, you can find the shape of the curve of the difference and show that it is always positive. – Raskolnikov Apr 15 '11 at 9:21
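A quick numerical scan (not a proof, just a sanity check of the claim and of where equality occurs) confirms that the minimum over the constraint $x+y=1$ is 4, attained at $x=y=1/2$. The sketch assumes numpy is available.

```python
import numpy as np

x = np.linspace(1e-6, 1 - 1e-6, 200001)
y = 1 - x
f = np.log2(1 / y) / x + np.log2(1 / x) / y
print(f.min(), x[f.argmin()])   # minimum is 4 (up to rounding), attained near x = 0.5
```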
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9462400078773499, "perplexity_flag": "head"}
http://mathoverflow.net/questions/49244/give-an-example-of-monoid-with-property-m2-m3
## Give an example of a monoid with the property $m^2 = m^3$

Give an example of a finitely generated, infinite monoid $M$ with the property that for all $m \in M$ we have $m^2 = m^3$.

This question comes from a problem I was given during an algebraic language theory class at the CS department. I've got a construction that uses methods outlined during that class, but the structure of the monoid is not very clear. I thought someone could propose a more direct construction that would give better insight into methods of constructing such algebraic structures. In case there's no better solution I'm planning to share my own with a brief explanation of the methods used in that construction.

Comments:

- Questions on MathOverflow are generally expected to have some motivation, showing where the problem came from and how it would help you with your research. – Andrew Stacey Dec 13 2010 at 11:37
- Oh, sorry. I wasn't aware that I need to give some motivation. I'll edit my question. – Grzegorz Kossakowski Dec 13 2010 at 11:50
- This question is likely to get closed (as I write, there are three votes outstanding) not because it is homework, but because it is of a level lower than that usually expected on this site. If it is closed, I recommend that you take it to math.stackexchange.com instead. I would also recommend that you give your construction and ask if there is an alternative description that would help you see better what is going on: that would be a more focussed question and easier to answer than the current question. – Andrew Stacey Dec 13 2010 at 12:18
- It is not a homework, it grew up from a homework. But the OP needs to explain what is the construction they did in class (I suspect it is the same as in my answer), and what's unclear in this construction for him. There are many constructions of such monoids, Morse-Hadlund's is still the simplest one. – Mark Sapir Dec 13 2010 at 13:48

## 1 Answer

This is a classic result of Morse and Hedlund (they actually attribute it to Dilworth). Take the alphabet $\{a,b,c\}$ and an infinite word $W$ in that alphabet which does not contain subwords of the form $uu$ (such an infinite word was first constructed by Thue, search Google for Thue sequence, then by Morse-Hedlund, then by many others, all done independently). Now let $S$ be the set of all finite subwords of $W$ (including the empty word) together with a symbol $0$. The product of two words in $S$ is their concatenation, $u\cdot v=uv$, if $uv\in S$, and $0$ otherwise. That is a monoid satisfying the law $x^2=0$ (for all $x\ne 1$), hence $$x^2=x^3.$$

- I think this answer makes the question worth being open. – Richard Kent Dec 13 2010 at 16:24
- Semigroups like this, obtained by collapsing an ideal (here $\{a,b,c\}^*-S$) to a new 0 symbol, are called Rees quotient semigroups. By the way, it is Hedlund, not Hadlund. – Ale De Luca Apr 11 2011 at 17:15
- Yes, the set of words that are not subwords of the given infinite word is an ideal in the free monoid, and the monoid I described is the corresponding Rees quotient. – Mark Sapir Apr 11 2011 at 23:37
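The construction in the answer is easy to experiment with on a computer. The sketch below is my own illustration: it builds a long prefix of a square-free word over {a, b, c} using a classical square-free morphism attributed to Thue (a -> abc, b -> ac, c -> b), verifies by brute force that the prefix really contains no square subword, and then checks the law m² = m³ in the resulting Rees quotient.

```python
MORPHISM = {'a': 'abc', 'b': 'ac', 'c': 'b'}

def squarefree_prefix(iterations=6):
    w = 'a'
    for _ in range(iterations):
        w = ''.join(MORPHISM[ch] for ch in w)
    return w

def has_square(w):
    n = len(w)
    return any(w[i:i + L] == w[i + L:i + 2 * L]
               for L in range(1, n // 2 + 1) for i in range(n - 2 * L + 1))

W = squarefree_prefix()
assert not has_square(W), "the generated prefix should contain no square subword"

subwords = {W[i:j] for i in range(len(W)) for j in range(i, len(W) + 1)}  # includes ''

def mult(u, v):
    # Rees quotient product: 0 is absorbing, the empty word is the identity.
    if u == 0 or v == 0:
        return 0
    return u + v if (u + v) in subwords else 0

for m in list(subwords)[:500]:
    m2 = mult(m, m)
    assert m2 == mult(m2, m)    # m^2 = m^3 (both are 0 unless m is the empty word)
print(len(W), len(subwords))
```

Of course this only inspects a finite prefix; the point of the answer is that the same argument applies to all subwords of the infinite square-free word at once.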
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9638189673423767, "perplexity_flag": "head"}
http://mathhelpforum.com/pre-calculus/180162-maximising-quadratic-function.html
# Thread:

1. ## Maximising a quadratic function

Here is another one that I am having problems with.

The cost (in dollars) of producing x items is C(x) = 4000 - 3x + x^2/1000. If the items are sold for \$4 each, find the value of x that maximises the profit, and find the maximum profit.

I have figured out the equation: Profit = P = 4x - (4000 - 3x + x^2/1000).

I tried to put it into my graphics calculator and get the value for the max profit, but it's a linear equation.

Thanks in advance for any help. P.S. Apologies in advance, I do not know how to put maths equations into computers... maybe my next question should be about that!

2. You should know that since the x^2 term has a negative coefficient, there will be a maximum turning point, which you find by completing the square to put the equation into turning point form...

$\displaystyle \begin{align*}P &=4x - \left(4000 - 3x + \frac{x^2}{1000}\right)\\ &= 4x - 4000 + 3x - \frac{x^2}{1000}\\ &= -\frac{x^2}{1000} + 7x - 4000\\ &= -\frac{1}{1000}\left(x^2 - 7000x + 4\,000\,000\right)\\ &= -\frac{1}{1000}\left[x^2 - 7000x + \left(-3500\right)^2 - \left(-3500\right)^2 + 4\,000\,000\right]\\ &= -\frac{1}{1000}\left[\left(x - 3500\right)^2 - 12\,250\,000 + 4\,000\,000\right]\\ &= -\frac{1}{1000}\left[\left(x - 3500\right)^2 - 8\,250\,000\right]\\ &= -\frac{1}{1000}\left(x - 3500\right)^2 + 8250\end{align*}$

So what is the maximum profit and how many units do they have to sell?

3. Quote (jlee88): "I tried to put it into my graphics calculator and get the value for the max profit, but it's a linear equation." Not with that $x^2$ in it!
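The turning-point result can be double-checked symbolically; the sketch below assumes sympy is available and confirms the vertex found by completing the square.

```python
from sympy import symbols, Rational, solve, diff

x = symbols('x')
P = 4*x - (4000 - 3*x + x**2 * Rational(1, 1000))
xmax = solve(diff(P, x), x)[0]
print(xmax, P.subs(x, xmax))   # 3500, 8250 -> sell 3500 items for a maximum profit of $8250
```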
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9359073042869568, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/217846/h-normal-in-g-need-g-contain-a-subgroup-isomorphic-to-g-h
# $H$ normal in $G$. Need $G$ contain a subgroup isomorphic to $G/H$

If $H \trianglelefteq G$, need $G$ contain a subgroup isomorphic to $G/H$?

I worked out the isomorphism types of the quotient groups of $S_3, D_8, Q_8$.

For $S_3$:
1. $S_3/\{1\} \cong S_3$,
2. $S_3/\langle (1\ 2\ 3)\rangle \cong \mathbb Z_2$,
3. $S_3/S_3 \cong \{1\}$.

For $D_8$:
1. $D_8/\{1\} \cong D_8$,
2. $D_8/\langle r\rangle \cong \mathbb Z_2$,
3. $D_8/\langle s, r^2\rangle \cong \mathbb Z_2$,
4. $D_8/\langle sr^3, r^2\rangle \cong \mathbb Z_2$,
5. $D_8/\langle r^2\rangle \cong V_4$,
6. $D_8/D_8 \cong \{1\}$.

For $Q_8$:
1. $Q_8/\{1\} \cong Q_8$,
2. $Q_8/\{1, -1\} \cong V_4$,
3. $Q_8/\langle i \rangle \cong \mathbb Z_2$,
4. $Q_8/\langle j \rangle \cong \mathbb Z_2$,
5. $Q_8/\langle k \rangle \cong \mathbb Z_2$,
6. $Q_8/Q_8 \cong \{1\}$.

So I'm guessing that the statement is true, but I don't know how to prove it. And if it's not true, I haven't found a counterexample. Can someone give me a proof or counterexample? Or a HINT :D

EDIT: Ahhh. I feel stupid now. Given $\{1, -1\}$ normal in $Q_8$ there is no subgroup of $Q_8$ isomorphic to $V_4$. Correct? So the statement is false?

- Correct. – Julian Kuelshammer Oct 21 '12 at 6:48
- That G has a normal subgroup H says that G is an extension of H. We say that it is a split extension if there is a map G/H -> G compatible with the "short exact sequence" H -> G -> G/H (in the sense we have a commutative diagram), in which case G has a subgroup isomorphic to G/H and we say G is a semidirect product of H and G/H. In general, extensions need not split. – anon Mar 10 at 19:39

## 1 Answer

No. This statement is false. Observe that $\{1, -1\} \trianglelefteq Q_8$ and $Q_8/\{1, -1\} \cong V_4$ but no subgroup of $Q_8$ is isomorphic to $V_4$. (Immediately realized this right after posting, sorry!)

- You're right. – Martin Brandenburg Oct 21 '12 at 7:32
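The counterexample can also be confirmed mechanically: a copy of $V_4$ would require three elements of order 2, but $Q_8$ has only one. The sketch below is my own check, realizing $Q_8$ by the usual 2×2 complex matrices (an assumption of this sketch, not something the thread does); numpy is assumed available.

```python
import numpy as np

i = np.array([[1j, 0], [0, -1j]])
j = np.array([[0, 1], [-1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Generate the group from i and j by closing under right multiplication.
elements = [I2]
frontier = [I2]
while frontier:
    new = []
    for g in frontier:
        for h in (i, j):
            gh = g @ h
            if not any(np.allclose(gh, e) for e in elements):
                elements.append(gh)
                new.append(gh)
    frontier = new

order_two = [g for g in elements
             if not np.allclose(g, I2) and np.allclose(g @ g, I2)]
print(len(elements), len(order_two))   # 8 elements, and only 1 of order 2 (namely -I)
```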
http://math.stackexchange.com/questions/20593/calculate-variance-from-a-stream-of-sample-values
Calculate variance from a stream of sample values

I'd like to calculate a standard deviation for a very large (but known) number of sample values, with the highest accuracy possible. The number of samples is larger than can be efficiently stored in memory.

The basic variance formula is: $\sigma^2 = \frac{1}{N}\sum (x - \mu)^2$ ... but this formulation depends on knowing the value of $\mu$ already. $\mu$ can be calculated cumulatively -- that is, you can calculate the mean without storing every sample value. You just have to store their sum. But to calculate the variance, is it necessary to store every sample value?

Given a stream of samples, can I accumulate a calculation of the variance, without a need for memory of each sample? Put another way, is there a formulation of the variance which doesn't depend on foreknowledge of the exact value of $\mu$ before the whole sample set has been seen?

2 Answers

You can keep two running counters - one for $\sum_i x_i$ and another for $\sum_i x_i^2$. Since the variance can be written as $$\sigma^2 = \frac{1}{N} \left[ \sum_i x_i^2 - \frac{(\sum_i x_i)^2}{N} \right]$$ you can compute the variance of the data that you have seen thus far with just these two counters. Note that the $N$ here is not the total length of all your samples but only the number of samples you have observed in the past.

- Wait -- shouldn't the $N$ be inside the expected values, such that the right side is divided by $N^2$? – user6677 Feb 5 '11 at 22:05
- e.g. $$\sigma^2 = \frac{(\sum_i x_i^2)}{N} - (\frac{\sum_i x_i}{N})^2$$ – user6677 Feb 5 '11 at 22:27
- @user6677: You are indeed right. Thanks for the correction. – Dinesh Feb 5 '11 at 22:30

I'm a little late to the party, but it appears that this method is pretty unstable; there is, however, a method that allows for streaming computation of the variance without sacrificing numerical stability. Cook describes a method from Knuth, the punchline of which is to initialize $m_1 = x_1$ and $v_1 = 0$, where $m_k$ is the mean of the first $k$ values. From there, $$\begin{align*} m_k & = m_{k-1} + \frac{x_k - m_{k-1}}k \\ v_k & = v_{k-1} + (x_k - m_{k-1})(x_k - m_k) \end{align*}$$ The mean at this point is simply extracted as $m_k$, and the variance is $\sigma^2 = \frac{v_k}{k-1}$. It's easy to verify that it works for the mean, but I'm still working on grokking the variance.

- +1, excellent! I didn't feel a simple upvote would be sufficient to express my appreciation of this answer, so there's an extra 50 rep bounty coming your way in 24 hours. – Ilmari Karonen Mar 4 '12 at 20:12
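For reference, here is a small Python sketch of the Knuth/Welford update rule quoted in the last answer (the function name and test data are mine):

```python
def streaming_mean_var(samples):
    """One-pass mean and variance; only three numbers are kept (assumes >= 1 sample)."""
    n, mean, v = 0, 0.0, 0.0
    for x in samples:
        n += 1
        delta = x - mean
        mean += delta / n          # m_k = m_{k-1} + (x_k - m_{k-1}) / k
        v += delta * (x - mean)    # v_k = v_{k-1} + (x_k - m_{k-1})(x_k - m_k)
    return mean, v / n             # use v / (n - 1) for the sample variance v_k/(k-1)

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(streaming_mean_var(data))    # (5.0, 4.0), matching the direct two-pass formula
```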
http://mathhelpforum.com/advanced-algebra/205538-group-homomorphisms-kernel.html
# Thread:

1. ## Group Homomorphisms- kernel

I was working on a question on the no. of homomorphisms from S3 to Z/6Z. My approach was as follows: if f is a homomorphism from G -> G', ker(f) = {g: f(g) = e'} should be a normal subgroup of G. I recalled from a theorem that the only non-trivial normal subgroup of S3 is A3, the alternating group on 3 letters. So I guessed that there could only be one homomorphism.

I have two questions. Can we count the homomorphism which has {e} as its kernel? Would it not imply that it is also an isomorphism, i.e. that S3 is isomorphic to Z/6Z? I doubt that, because S3 cannot be isomorphic to Z/6Z: Z/6Z is in turn isomorphic to Z6, which is a cyclic group, but I guess S3 is not cyclic.

2. ## Re: Group Homomorphisms- kernel

Yes - your reasoning is correct as far as it goes. Any homomorphism will produce a kernel, and that kernel will be a normal subgroup. The kernel can't be {e}, because otherwise, since those two groups have the same (finite) order, a kernel of {e} would correspond to a homomorphism that's actually an isomorphism - and because one group is cyclic and the other isn't, they can't be isomorphic. You're absolutely right. (Aside: Another way to quickly see that they're not isomorphic is to observe that one is abelian while the other isn't.)

You've looked at the case of the kernel being A3, and the case of the kernel being {e}, which you eliminated. There's another normal subgroup of S3 that you haven't considered: S3 itself. Which homomorphisms, if any, would have that as a kernel? (Hint: don't overthink this - it's easy.)

In trying to find the homomorphisms of S3 to Z6, you've broken it down according to:

1) Kernel = {e}: No homomorphisms.
2) Kernel = S3: ??? homomorphisms.
3) Kernel = A3: ??? homomorphisms.

That's a good start, but you'll still need to then examine which homomorphisms, if any, exist in those cases (well, only case 3 requires any work). Just because A3 is a normal subgroup of S3 does *not* guarantee that there's a homomorphism from S3 to Z6 having A3 as a kernel. Also, at the other extreme, there might be several different homomorphisms from S3 to Z6 having A3 as their kernel.

Ex: Is there a homomorphism from S3 to Z3 having kernel A3? No. If there were, then the image of that homomorphism would be isomorphic to S3/A3, which is the group of order 2. But, by Lagrange's theorem, Z3 doesn't have a subgroup of order 2.

So your reasoning so far is excellent, but you've still got more work in front of you for the problem of finding the homomorphisms between those two groups.

3. ## Re: Group Homomorphisms- kernel

there's a "flip side" to the first isomorphism theorem (which is what you are appealing to by considering ker(f)) that is just as important as ker(f): im(f). that is, not only do we have to consider the normal subgroups N of G (to see what ker(f) might be), but we also have to see if G' contains a subgroup isomorphic to G/N. that's another way to look at what quotient groups are: homomorphic images (f(G) < G' is what we get when we "factor ker(f) out").

since sgn:S3-->{-1,1} is an onto homomorphism with kernel A3, and {-1,1} is a cyclic group of order 2 (because all groups of order 2 are cyclic, since 2 is prime), if we have a homomorphism f:S3-->Z6 with ker(f) = A3, the image must be a cyclic subgroup of Z6 of order 2. now Z6 is cyclic, and cyclic groups are SPECIAL.
in particular, cyclic groups have exactly one subgroup of any order dividing the order of the group. so Z6 has just ONE subgroup of order 2, and being a subgroup of a cyclic group, it is also cyclic. it is easy to see that this group is {0,3}. so if a homomorphism f:S3-->Z6 with ker(f) = A3 exists, it must be this one:

f(e) = 0
f((1 2)) = 3
f((2 3)) = 3
f((1 3)) = 3
f((1 2 3)) = 0
f((1 3 2)) = 0

now, one could verify for all 36 possible products a*b in S3 that f(a*b) = f(a) + f(b), or: since we know there is an isomorphism h:S3/A3-->{1,-1}, we could prove that {1,-1} and {0,3} are isomorphic (let's call the isomorphism k), in which case koh is an isomorphism S3/A3--->{0,3} (and we're done. why?).

*******

the point i'm trying to make here is that verifying maps are homomorphisms "element-by-element" is inefficient. homomorphisms have PROPERTIES, and we have nifty theorems about them we can use, which let us side-step any actual calculation (which is good, if you are lazy like me).

S3/{e} ---> H < Z6 leads to: H of order 6, H isomorphic to S3 and Z6, which can't be true since S3 and Z6 aren't isomorphic.

S3/A3 ---> H < Z6 leads to: H of order 2 (so H must be {0,3}), possible since 2 divides 6, and there is only one such H, so only one such homomorphism (which works because S3/A3 is cyclic, and any two cyclic groups of the same order are isomorphic).

S3/S3 ---> H < Z6 leads to: well, H must be a subgroup of Z6 of order 1...how many of these are there?

with that out of the way, we can simply count how many homomorphisms we found, and counting is easy.

4. ## Re: Group Homomorphisms- kernel

This approach to finding the homomorphisms between two groups depends on finding all the normal subgroups of one, and then, using whatever insights seem available, finding all the homomorphisms which have that normal subgroup as a kernel. This is rather hit or miss - it's not systematic at all.

There is another approach that, while also not completely systematic, is still more so than your approach. Recall that in linear algebra, one of the most used, perhaps the most used, observations is that once you've defined a set map on just a basis of one vector space into another vector space, then by linear extension you've actually uniquely defined a linear map between those two vector spaces. One might say "If you've defined it on a basis, then you've defined it everywhere."

Groups and their homomorphisms have a similar property, although it's more complicated. This approach requires that you've seen "group presentations" - meaning groups defined in terms of generators and relations. If you're looking to find all the homomorphisms between G and H, and if G is given in terms of a presentation < P | R > ( = < generators | relations >), then a *consistent* mapping from just the set of generators P into H will define a unique homomorphism from G to H, in a way exactly analogous to the linear maps & vector spaces case of "if you've defined it on a basis, then you've defined it everywhere."

The complication here is what "consistent" means. It means that if you have a set map from P to H, it can only be consistent with being a homomorphism between G and H if the relations R will hold as images in H. (I'll show a simple example at the bottom). But if you can find such a "consistent" set map from P to H, then you will have, by extension, defined a unique homomorphism from G to H.
Proving that requires a bit of work (looking at free groups and such), but conceptually it should be clear, since what is a group homomorphism but an identification of the elements of G with some in H that "multiply like they do in G." Since the generators and relations in G tell you *everything* there is to know about "how things multiply in G", then once you've identified elements in H multiplying the same way, then you've found a "shadow" of G in H, i.e. a homomorphism. (FYI - my use of "consistent" in this paragraph isn't some technical definition, so far as I'm aware. It's just me trying to be descriptive.)

To use this approach, finding all group homomorphisms from $S_3$ to $Z_6$ requires giving a group presentation of $S_3$. I'll give you that to get it started:

$S_3 = <a, b \ | \ a^3 = 1, b^2 = 1, ab = ba^2 >$. In terms of cycles & permutations, a = ( 1 2 3 ), b = ( 1 2 ).

$\text{Write } Z_6 = \{[0], [1], [2], [3], [4], [5] \}.$

$\text{So the "consistent" set maps } \phi \text{ from the generators }\{a, b \} \text{ of } S_3 \text{ to } Z_6 \text{ are those satisfying: }$

$\phi(a)^3 = \phi(1) = [0], \phi(b)^2 = \phi(1) = [0], \text{ and } \phi(a) \phi(b) = \phi(b) \phi(a)^2.$

$\text{Note that since } Z_6 \text{ is abelian, the last condition there implies } \phi(a) = [0].$

From there, can you enumerate all the "consistent" maps $\phi$? You need to determine $\phi(a), \phi(b)$ in $Z_6,$ and have that they're consistent with the relations of $S_3.$ You already know that $\phi(a) = 0$, and there's a severe restriction on $\phi(b)$, namely, $\phi(b)^2 = [0].$

If you can do this, then you'll have produced the list of homomorphisms between those two groups. Because Deveno has already completed this, you can check your discovered homomorphisms with his results.

----------------------------------------------------------------

Example: (Before I begin, remember that presentations use non-abelian notation, and that can get confusing with these abelian groups. 1 means the identity, but so does [0]. I've used bracket notation for elements to try to alleviate this confusing situation. Also, see * at bottom.)

$\text{Let }Z_n = <a \ | \ a^n = 1> = \text{ the cyclic group of order } n. \text{ Find all homomorphisms from } Z_4 \text{ into } Z_{12}:$

$\text{(I'm going to write the elements of the *group* } (Z_{12}, +, 0) \text{ as } Z_{12} = \{[0], [1], [2], \ldots, [10], [11] \}.)$
$\text{With } Z_4 = <a \ | \ a^4 = 1> \text{, if I define } \phi(a) = b \in Z_{12}, \text{ when will that be "consistent"?}$

$\text{Consistent means consistent with all the } Z_4 \text{ relations, so with } a^4 = 1.$

$\text{"} \phi \text{ is consistent with the relation } a^4 \text{" means } \phi(a)^4 = \phi(1) = [0] \in Z_{12}.$

$\text{(That uses that any potential group homomorphism must send the identity to the identity.)}$

$\text{It's hopefully easy to see that } \{ x \in Z_{12} \ | \ x^4 = [0] \} = \{ [0], [3], [6], [9] \}.$

$\text{(Also, see * at the bottom.)}$

$\text{Thus there are exactly 4 group homomorphisms, } \{ \phi_1, \phi_2, \phi_3, \phi_4 \}, \text{ from } Z_4 \text{ to } Z_{12},$

$\text{because there are exactly 4 "consistent" ways to define } \phi(a) \in Z_{12}, \text{ namely: }$

$\phi(a) \in \{ [0], [3], [6], [9] \}, \text{ so } \phi_1(a) = [0], \phi_2(a) = [3], \phi_3(a) = [6], \text{ and } \phi_4(a) = [9].$

$\text{If I now write } Z_4 \text{ as } Z_4 = \{[0], [1], [2], [3] \}, \text{ then } a = [1], \text{ and with this notation I have:}$

$\phi_1([k]) = [0], \ \phi_2([k]) = [3k], \ \phi_3([k]) = [6k], \text{ and } \phi_4([k]) = [9k].$

$\text{Now } \phi_1 \text{ is obviously the trivial homomorphism. What are the others? I'll do } \phi_4 \text{ : }$

$\phi_4([0]) = [0], \phi_4([1]) = [9], \phi_4([2]) = [18] = [6], \phi_4([3]) = [27] = [3].$

$\text{To explicitly check, first note the consequences of }\phi_4([0]) = [0] :$

$\phi_4([0]) + \phi_4([x]) = \phi_4([0] + [x]) \ \forall \ [x] \in Z_4.$

$\text{Now check the rest, using abelian-ness to reduce the work: }$

$\phi_4([1]) + \phi_4([1]) = [9] + [9] = [6] = \phi_4([2]) = \phi_4([1] + [1]),$

$\phi_4([1]) + \phi_4([2]) = [9] + [6] = [3] = \phi_4([3]) = \phi_4([1] + [2]),$

$\phi_4([1]) + \phi_4([3]) = [9] + [3] = [0] = \phi_4([0]) = \phi_4([1] + [3]),$

$\phi_4([2]) + \phi_4([2]) = [6] + [6] = [0] = \phi_4([0]) = \phi_4([2] + [2]),$

$\phi_4([2]) + \phi_4([3]) = [6] + [3] = [9] = \phi_4([1]) = \phi_4([2] + [3]),$

$\phi_4([3]) + \phi_4([3]) = [3] + [3] = [6] = \phi_4([2]) = \phi_4([3] + [3])$

$\text{Thus } \phi_4 \text{ is a homomorphism.}$

----------------------------------------------------------------

* Note that these groups are abelian, so "powers" are repeated additions: [7]^3 = [7] + [7] + [7] = [21] = [9] in $Z_{12}.$ So don't get that confused with [7]^3 = [7][7][7] = [-5][-5][-5] = -[125] = -[5] = [7] in the ring of residues mod 12. I'm only considering the group; I'm not mentioning the ring of residues mod 12 at all.

5. ## Re: Group Homomorphisms- kernel

slight addendum: instead of using relations (equations of the form: expression a in the generators (x,y,z etc.) = expression b), it is often more convenient to use relators: elements defined to be the identity. for example, in the presentation of S3 defined above, for the relation a^3 = 1, we have the relator a^3, and for the relation ab = ba^2, we have the relator aba^(-2)b^(-1) (which is equal to abab, or (ab)^2). so if we have a set of relators R, f:G-->H is a homomorphism for G = <P|R> iff f(G) = <f(P)> and f(R) = {e}. what we mean by G = <P|R> is that G is F(P)/N(R), where F(P) is the free group generated by the set of generators P (the structure of this group only depends on the cardinality of P: in other words, "it doesn't matter what "letters" you use as symbols for the generators") and N(R) is the smallest normal subgroup of F(P) containing R.
one caveat: there may be several possible presentations for a given group G, and it is not always possible to even tell when two presentations give the same group. this is because free groups (on more than one generator, at least) have a very finely-patterned and rich internal structure: for example, the free group on two letters has a subgroup isomorphic to the free group on 3 letters! nevertheless, this approach can save a great deal of calculation, especially for cyclic groups (which are all quotients of the free group on one generator, also known as "the integers" (well, ok, "isomorphic to the integers")).

if G is cyclic, we just look for where the generator might be sent, and the order of the image of the generator must divide the order of the generator: we can't send x in G = <x> with |x| = 12 to an element of order 5, 10 or 24 in another group G' (and have a homomorphism), for example. the only possible values for |f(x)| for a homomorphism f are 1, 2, 3, 4, 6 and 12.

in the case of S3 to Z6, we have the two generators a and b, and |f(a)|, |f(b)| must divide 3 and 2, respectively. in Z6, we can't have |f(a)| = 3 and |f(b)| = 2, or else 0 = f(e) = f((ab)^2) = f(ab) + f(ab) = f(a) + f(a) + f(b) + f(b), and f(a) must be 2 or 4, and f(b) = 3, giving: 2 + 2 + 3 + 3 = 2 + 2 = 4 ≠ 0 or: 4 + 4 + 3 + 3 = 4 + 4 = 2 ≠ 0.

so one of |f(a)|, |f(b)| must be 1 (possibly both). suppose f(b) = 0. then from f((ab)^2) = 0, we get: f(a) + f(a) = 0, which implies that |f(a)| = 1 or 2 (and in particular CANNOT be 3). since 2 does not divide 3, if f(b) = 0, f(a) = 0 as well. otherwise, if |f(b)| = 2, we must have f(a) = 0 (since |f(a)| = 1), which works since: f((ab)^2) = f(a) + f(a) + f(b) + f(b) = 0 + 0 + f(b) + f(b) = f(b) + f(b) = 0 (because |f(b)| = 2).

so:
f(a) = 0, f(b) = 0 implies f(S3) = 0
f(a) = 0, f(b) of order 2 implies f(S3) = ???, ker(f) = ???
f(a) ≠ 0, f(b) ≠ 0: not possible.

6. ## Re: Group Homomorphisms- kernel

Deveno and johnsomeone.. i was working on your previous replies before i got another pair of replies.... still working.. just wanted to let you know that my being silent doesn't mean i'm not active... i'm working and will get back to you... love ya both
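To back up the counting in the thread, here is a brute-force sketch (my own code, not from the posters): it tries every map from S3 to Z6 and keeps the ones that respect the group operation. It finds exactly two homomorphisms, the trivial one and the "sign" map onto {0, 3}.

```python
from itertools import permutations, product

S3 = list(permutations(range(3)))            # elements of S3 as permutations of {0, 1, 2}

def compose(p, q):
    return tuple(p[q[i]] for i in range(3))  # (p o q)(i) = p(q(i))

homs = []
for images in product(range(6), repeat=len(S3)):          # every map S3 -> Z6
    f = dict(zip(S3, images))
    if all(f[compose(p, q)] == (f[p] + f[q]) % 6 for p in S3 for q in S3):
        homs.append(f)

print(len(homs))                  # 2
for f in homs:
    print(sorted(f.values()))     # [0, 0, 0, 0, 0, 0] and [0, 0, 0, 3, 3, 3]
```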
http://math.stackexchange.com/questions/248385/what-is-the-physical-explanation-of-a-division-by-a-fraction?answertab=oldest
# What is the 'physical' explanation of a division by a fraction? For example, dividing by 2, means we cut something in two. But dividing by 0.5, can only be explained with multiplying something by 2. So, is there a "physical" explanation of dividing by 0.5? Is it "I divide by an entity that internally multiplies' or something as so bizarre? - 3 I once heard it explained as asking the question how many times does m/n fit into 1. For example, how many times does 1/2 fit into 1, that is 1/(1/2) = 2, or 2 times. – Amzoti Dec 1 '12 at 1:41 That's a very good answer, similar to what Peter Tamaroff gave below. – Lela Dax Dec 1 '12 at 1:43 – Amzoti Dec 1 '12 at 1:43 ## 3 Answers I once heard: "If you feed a kid $Y$ grams of chocolate he would have a density of $X$ $\rm gr/cm^3$ of chocolate in his blood. But if you feed half a boy chocolate he will have $2X$ $\rm gr/cm^3$ of chocolate in his blood". - It sounds good and simple enough. – Lela Dax Dec 1 '12 at 1:33 Chocolate gets into the stomach first, and the concentration of it will vary with time. I guess dissolving the same amount of table salt in water of different volumes is a better example. – FrenzY DT. Dec 1 '12 at 2:37 1 @FrenzYDT. It is odd you took my example seriously. – Peter Tamaroff Dec 1 '12 at 2:39 Well, as folks say $1+1=2$ doesn't always necessarily hold. AAMOF, I like your explanation. – FrenzY DT. Dec 1 '12 at 2:41 $$\frac{1}{0.5} = \frac{2(\not{.5})}{\not{.5}} = 2.$$ $$\frac{1}{\frac12} = \frac{2\cdot\not{\frac{1}{2}}}{\not{\frac12}} = 2.$$ In words, "how many times does one-half fit into the whole?" And more generally, "how many times does $\;\dfrac1n\;$ fit into $\;1\;$?" - It sounds good mathematically but I don't know if it's complete in a physical sense since it might have to explain the meaning of 0,5/0,5 = 1. – Lela Dax Dec 1 '12 at 1:39 I guess it can be "I multiply 2 by a number that was divided by that same number hence I multiply 2 by 1". – Lela Dax Dec 1 '12 at 1:49 Along the same lines as Peter Tamaroff's answer: If you have ten ounces of vodka and each cocktail requires one ounce, then you can make ten cocktails. If you have ten ounces of vodka and each cocktail requires only half an ounce, then you can make twenty cocktails. In other words, halving the vodka doubles the number of drinks. (Unfortunately, nobody will be happy with these drinks.) -
http://mathhelpforum.com/statistics/40323-probability-statistics.html
# Thread:

1. ## Probability and Statistics

Here's my problem:

A certain lottery has 49 numbers, six of which are the winning numbers for a particular game. To play the game each participant chooses six numbers. What is the probability of choosing exactly...

a) six correct numbers
b) five correct numbers
c) four correct numbers

Now I know how to find the answer to a): 49C6 equals 13,983,816, so the answer is 1/13,983,816. But what is the formula for finding b) and c)? I've tried 6C5/49C6 but that's not right. I don't need the answers, but does someone know the formula for figuring these out? This would be a huge help. Thanks!

2. Originally Posted by gwen01: "Here's my problem: A certain lottery has 49 numbers, six of which are the winning numbers for a particular game. To play the game each participant chooses six numbers. What is the probability of choosing exactly a) six correct numbers, b) five correct numbers, c) four correct numbers? Now I know how to find the answer to a): 49C6 equals 13,983,816, so the answer is 1/13,983,816. But what is the formula for finding b) and c)? I've tried 6C5/49C6 but that's not right. I don't need the answers, but does someone know the formula for figuring these out? This would be a huge help. Thanks!"

This should work for both b and c. I will do b:

$\frac{\binom{6}{5} \binom{43}{1}}{\binom{49}{6}}=\frac{258}{13983816}\approx\frac{1}{54201}$

There are $\binom{6}{5}\binom{43}{1} = 6 \cdot 43 = 258$ ways of picking exactly five correct numbers: choose 5 of the 6 winning numbers and 1 of the 43 non-winning numbers.
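These are hypergeometric probabilities, and they are easy to check numerically. A short Python sketch (the function name is mine; math.comb needs Python 3.8+):

```python
from math import comb

def p_exactly(k, picks=6, winners=6, total=49):
    """P(exactly k of the chosen numbers are winning numbers)."""
    return comb(winners, k) * comb(total - winners, picks - k) / comb(total, picks)

for k in (6, 5, 4):
    print(k, f"1 in {1 / p_exactly(k):,.0f}")
# 6: 1 in 13,983,816    5: 1 in 54,201    4: 1 in 1,032
```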
http://en.wikipedia.org/wiki/Dyadic_transformation
# Dyadic transformation xy plot where x = x0 ∈ [0, 1] is rational and y = xn for all n. The dyadic transformation (also known as the dyadic map, bit shift map, 2x mod 1 map, Bernoulli map, doubling map or sawtooth map[1][2]) is the mapping (i.e., recurrence relation) $d: [0, 1) \to [0, 1)^\infty$ $x \mapsto (x_0, x_1, x_2, \ldots)$ produced by the rule $x_0 = x$ $\forall n \ge 0, x_{n+1} = (2 \cdot x_n) \mod 1$. Equivalently, the dyadic transformation can also be defined as the iterated function map of the piecewise linear function $f(x)=\begin{cases}2x & 0 \le x < 0.5 \\2x-1 & 0.5 \le x < 1. \end{cases}$ The name bit shift map arises because, if the value of an iterate is written in binary notation, the next iterate is obtained by shifting the binary point one bit to the right, and if the bit to the left of the new binary point is a "one", replacing it with a zero. The dyadic transformation provides an example of how a simple 1-dimensional map can give rise to chaos. ## Relation to tent map and logistic map The dyadic transformation is topologically conjugate to : • the unit-height tent map • the chaotic r=4 case of the logistic map. The r=4 case of the logistic map is $z_{n+1}=4z_{n}(1-z_{n})$; this is related to the bit shift map in variable x by $z_{n}=\sin^{2}(2 \pi x_{n})$. There is semi-conjugacy between the dyadic transformation (here named doubling map) and the quadratic polynomial. ## Periodicity and non-periodicity Because of the simple nature of the dynamics when the iterates are viewed in binary notation, it is easy to categorize the dynamics based on the initial condition: If the initial condition is irrational (as almost all points in the unit interval are), then the dynamics are non-periodic—this follows directly from the definition of an irrational number as one with a non-repeating binary expansion. This is the chaotic case. If x0 is rational the image of x0 contains a finite number of distinct values within [0, 1) and the forward orbit of x0 is eventually periodic, with period equal to the period of the binary expansion of x0. Specifically, if the initial condition is a rational number with a finite binary expansion of k bits, then after k iterations the iterates reach the fixed point 0; if the initial condition is a rational number with a k-bit transient (k≥0) followed by a q-bit sequence (q>1) that repeats itself infinitely, then after k iterations the iterates reach a cycle of length q. Thus cycles of all lengths are possible. For example, the forward orbit of 11/24 is: $\frac{11}{24} \mapsto \frac{11}{12} \mapsto \frac{5}{6} \mapsto \frac{2}{3} \mapsto \frac{1}{3} \mapsto \frac{2}{3} \mapsto \frac{1}{3} \mapsto \cdots,$ which has reached a cycle of period 2. Within any sub-interval of [0,1), no matter how small, there are therefore an infinite number of points whose orbits are eventually periodic, and an infinite number of points whose orbits are never periodic. This sensitive dependence on initial conditions is a characteristic of chaotic maps. ## Solvability The dyadic transformation is an exactly solvable model in the theory of deterministic chaos. The square-integrable eigenfunctions of the associated transfer operator of the Bernoulli map are the Bernoulli polynomials. These eigenfunctions form a discrete spectrum with eigenvalues $2^{-n}$ for non-negative integers n. There are more general eigenvectors, which are not square-integrable, associated with a continuous spectrum. 
These are given by the Hurwitz zeta function; equivalently, linear combinations of the Hurwitz zeta give fractal, differentiable-nowhere eigenfunctions, including the Takagi function. The fractal eigenfunctions show a symmetry under the fractal groupoid of the modular group. ## Rate of information loss and sensitive dependence on initial conditions One hallmark of chaotic dynamics is the loss of information as simulation occurs. If we start with information on the first s bits of the initial iterate, then after m simulated iterations (m<s) we only have (s-m) bits of information remaining. Thus we lose information at the exponential rate of one bit per iteration. After s iterations, our simulation has reached the fixed point zero, regardless of the true iterate values; thus we have suffered a complete loss of information. This illustrates sensitive dependence on initial conditions—the mapping from the truncated initial condition has deviated exponentially from the mapping from the true initial condition. And since our simulation has reached a fixed point, for almost all initial conditions it will not describe the dynamics in the qualitatively correct way as chaotic. Equivalent to the concept of information loss is the concept of information gain. In practice some real-world process may generate a sequence of values {$x_n$} over time, but we may only be able to observe these values in truncated form. Suppose for example that $x_0$ = .1001101, but we only observe the truncated value .1001 . Our prediction for $x_1$ is .001 . If we wait until the real-world process has generated the true $x_1$ value .001101, we will be able to observe the truncated value .0011, which is more accurate than our predicted value .001 . So we have received an information gain of one bit. ## References 1. Wolf, A. "Quantifying Chaos with Lyapunov exponents," in Chaos, edited by A. V. Holden, Princeton University Press, 1986. • Dean J. Driebe, Fully Chaotic Maps and Broken Time Symmetry, (1999) Kluwer Academic Publishers, Dordrecht Netherlands ISBN 0-7923-5564-4 • Linas Vepstas, The Bernoulli Map, the Gauss-Kuzmin-Wirsing Operator and the Riemann Zeta, (2004)
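The eventually periodic rational orbits described in the "Periodicity and non-periodicity" section (for example the orbit of 11/24) are easy to reproduce exactly; the following short sketch, which is not part of the article, iterates the map with Python's fractions module:

```python
from fractions import Fraction

def doubling(x):
    return (2 * x) % 1              # the dyadic / bit-shift map on [0, 1)

x = Fraction(11, 24)
orbit = [x]
for _ in range(7):
    x = doubling(x)
    orbit.append(x)
print([str(f) for f in orbit])
# ['11/24', '11/12', '5/6', '2/3', '1/3', '2/3', '1/3', '2/3']:
# after a short transient the orbit settles into the period-2 cycle {1/3, 2/3}.
```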
http://math.stackexchange.com/questions/269898/correct-combination-of-differentiation-rules?answertab=active
correct combination of differentiation rules

I am trying to calculate the derivative of a rather complex function for my homework. I think I have found the solution, it just seems too bulky for my taste. See the bottom for specific questions I have regarding my solution.

$f(x)=\frac{\overbrace{\sin x}^\text{u(x)}\cdot \overbrace{e^x+x^3}^\text{v(x)}}{\underbrace{x^3+2x+2}_\text{w(x)}}$

$u'(x)=\cos x$, $v'(x)=e^x+3x^2$, $w'(x)=3x^2+4x$

$\begin{align*} u'v'(x)&=(\sin x\cdot e^x+3x^2)+(\cos x\cdot x^3)\\ &= e^x\cdot\sin x+3x\cdot\sin x+x^3\cdot\cos x \end{align*}$

$\begin{align*} f'(x)&= \frac{u'v'(x)\cdot w(x)-w'(x)\cdot uv(x)}{w(x)^2}\\ &= \frac{(e^x\cdot\sin x+3x\cdot\sin x+x^3\cdot\cos x)\cdot (x^3+2x^2+2)-(3x^2+4x)(\sin x\cdot e^x+x^3)}{(x^3+2x^2+2)^2} \end{align*}$

The obvious question: Is this derivative correct? Especially:

• Is it really possible to calculate the derivative by splitting the function into part-functions and calculating them together following the differentiation rules, the way I did it?
• Is it possible to reduce the summands by applying some rule I am not aware of? E.g. reducing $(e^x\cdot\sin x+3x\cdot\sin x+x^3\cdot\cos x)$ to something with just one $\sin$ or something?

- $\sin x\cdot e^x+x^3=e^x\sin x+x^3$, not $(e^x+x^3)\sin x$. – Brian M. Scott Jan 3 at 19:53
- This was what I was thinking. However, because of the product rule I thought I had to actually calculate the part-functions first, i.e. the derivatives of $a(x)=\sin x$, $b(x)=e^x$ and $c(x)=x^3$ (with $v(x)=a(x)+b(x)$). If that is not necessary, then @dirk5959's answer (using simply the quotient-rule) is of course correct and evident! – alex Jan 3 at 21:31
- If the expression is meant as $uv/w$ then use $(uv)'=uv'+u'v=\sin x (e^x+3x^2) + \cos x (e^x+x^3)$. Also, there are two errors: you have only $\cos x \cdot x^3$, missing an $e^x$ there, and you have written $3x$ instead of $3x^2$. – Maesumi Jan 3 at 21:43

1 Answer

Your solution is pretty close, but remember that you're not multiplying $\sin x$ by $(e^x + x^3)$ in your numerator, as you do when you try to multiply $u'(x)$ by $v'(x).$ Instead, try differentiating your numerator in the following form: $e^x\sin x + x^3.$ I believe the rest will follow from the quotient rule.

$n(x) = e^x\sin x + x^3$

$d(x) = x^3 + 2x + 2$

$n'(x) = e^x \cos x + e^x\sin x + 3x^2$

$d'(x) = 3x^2 + 2$

$\frac{d}{dx} \frac{n(x)}{d(x)} = \frac{n'(x)d(x) - d'(x)n(x)}{d(x)^2}$

$= \frac{(e^x\cos x + e^x\sin x + 3x^2)(x^3 + 2x + 2) - (3x^2+2)(e^x\sin x + x^3)}{(x^3 + 2x + 2)^2}$

which is messy but can be simplified.

-
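For a machine check (my own sketch, reading the numerator as $e^x\sin x + x^3$ as in the answer above), SymPy confirms the quotient-rule result:

```python
import sympy as sp

x = sp.symbols('x')
n = sp.exp(x) * sp.sin(x) + x**3                    # numerator
d = x**3 + 2*x + 2                                  # denominator

quotient_rule = (sp.diff(n, x) * d - sp.diff(d, x) * n) / d**2
print(sp.simplify(sp.diff(n / d, x) - quotient_rule))   # 0, so the two expressions agree
```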
http://physics.stackexchange.com/questions/tagged/equivalence-principle?sort=unanswered&pagesize=15
# Tagged Questions The equivalence-principle tag has no wiki summary. 1answer 237 views ### Why dynamic Casimir effect does not appear in static gravity field? Dynamic Casimir effect tells us that a constantly-accelerated mirror should emit radiation due to interaction with vacuum. Following principle of equivalence, a similar mirror placed in static ... 1answer 56 views ### Is the result of (every) research on acceleration equivalent to gravity? Is the result of an experiment on acceleration equivalent to another experiment in a gravitational field? If I have an experimental conclusion from research under uniform acceleration, can the ... 1answer 122 views ### Why does weak equivalence principle say gravity is equivalent to acceleration? I am told that the weak equivalent principle, that $m_i=m_g$ (inertial and gravitational masses are equivalent) is equivalent to the statement that in a small system you can't tell whether you are in ... 0answers 99 views ### Use of Principle of Equivalence Let $x^\mu$ be the coordinates of a reference frame, $K$, where all bodies feel the same constant and uniform acceleration $\textbf{a}=\textbf{g}=-\nabla\varphi$; let $\xi^\mu$ be the coordinates of a ... 0answers 40 views ### Switching from an accelerated frame of reference to a locally inertial reference system Using the equivalence principle, show that the interval for an accelerated observer ($\textbf{g}$ uniform and constant) has the form ds^2|_{\text{first order in ...
http://en.wikipedia.org/wiki/Bohr_model
# Bohr model 'Rutherford–Bohr model' and 'Bohr-Rutherford diagram' redirect to this page. 'Bohr model' is not to be confused with Bohr equation. The Rutherford–Bohr model of the hydrogen atom (Z = 1) or a hydrogen-like ion (Z > 1), where the negatively charged electron confined to an atomic shell encircles a small, positively charged atomic nucleus and where an electron jump between orbits is accompanied by an emitted or absorbed amount of electromagnetic energy (hν).[1] The orbits in which the electron may travel are shown as grey circles; their radius increases as n2, where n is the principal quantum number. The 3 → 2 transition depicted here produces the first line of the Balmer series, and for hydrogen (Z = 1) it results in a photon of wavelength 656 nm (red light). In atomic physics, the Bohr model, introduced by Niels Bohr in 1913, depicts the atom as small, positively charged nucleus surrounded by electrons that travel in circular orbits around the nucleus—similar in structure to the solar system, but with attraction provided by electrostatic forces rather than gravity. After the cubic model (1902), the plum-pudding model (1904), the Saturnian model (1904), and the Rutherford model (1911) came the Rutherford–Bohr model or just Bohr model for short (1913). The improvement to the Rutherford model is mostly a quantum physical interpretation of it. The Bohr model has been superseded, but the quantum theory remains sound. The model's key success lay in explaining the Rydberg formula for the spectral emission lines of atomic hydrogen. While the Rydberg formula had been known experimentally, it did not gain a theoretical underpinning until the Bohr model was introduced. Not only did the Bohr model explain the reason for the structure of the Rydberg formula, it also provided a justification for its empirical results in terms of fundamental physical constants. The Bohr model is a relatively primitive model of the hydrogen atom, compared to the valence shell atom. As a theory, it can be derived as a first-order approximation of the hydrogen atom using the broader and much more accurate quantum mechanics, and thus may be considered to be an obsolete scientific theory. However, because of its simplicity, and its correct results for selected systems (see below for application), the Bohr model is still commonly taught to introduce students to quantum mechanics, before moving on to the more accurate, but more complex, valence shell atom. A related model was originally proposed by Arthur Erich Haas in 1910, but was rejected. The quantum theory of the period between Planck's discovery of the quantum (1900) and the advent of a full-blown quantum mechanics (1925) is often referred to as the old quantum theory. ## Origin In the early 20th century, experiments by Ernest Rutherford established that atoms consisted of a diffuse cloud of negatively charged electrons surrounding a small, dense, positively charged nucleus.[2] Given this experimental data, Rutherford naturally considered a planetary-model atom, the Rutherford model of 1911 – electrons orbiting a solar nucleus – however, said planetary-model atom has a technical difficulty. The laws of classical mechanics (i.e. the Larmor formula), predict that the electron will release electromagnetic radiation while orbiting a nucleus. Because the electron would lose energy, it would gradually spiral inwards, collapsing into the nucleus. 
This atom model is disastrous, because it predicts that all atoms are unstable.[3] Also, as the electron spirals inward, the emission would gradually increase in frequency as the orbit got smaller and faster. This would produce a continuous smear, in frequency, of electromagnetic radiation. However, late 19th century experiments with electric discharges have shown that atoms will only emit light (that is, electromagnetic radiation) at certain discrete frequencies. To overcome this difficulty, Niels Bohr proposed, in 1913, what is now called the Bohr model of the atom. He suggested that electrons could only have certain classical motions: 1. Electrons in atoms orbit the nucleus. 2. The electrons can only orbit stably, without radiating, in certain orbits (called by Bohr the "stationary orbits"[4]): at a certain discrete set of distances from the nucleus. These orbits are associated with definite energies and are also called energy shells or energy levels. In these orbits, the electron's acceleration does not result in radiation and energy loss as required by classical electromagnetics. 3. Electrons can only gain and lose energy by jumping from one allowed orbit to another, absorbing or emitting electromagnetic radiation with a frequency ν determined by the energy difference of the levels according to the Planck relation: $\Delta{E} = E_2-E_1=h\nu \ ,$ where h is Planck's constant. The frequency of the radiation emitted at an orbit of period T is as it would be in classical mechanics; it is the reciprocal of the classical orbit period: $\nu = {1\over T}.$ The significance of the Bohr model is that the laws of classical mechanics apply to the motion of the electron about the nucleus only when restricted by a quantum rule. Although rule 3 is not completely well defined for small orbits, because the emission process involves two orbits with two different periods, Bohr could determine the energy spacing between levels using rule 3 and come to an exactly correct quantum rule: the angular momentum L is restricted to be an integer multiple of a fixed unit: $L = n{h \over 2\pi} = n\hbar$ where n = 1, 2, 3, ... is called the principal quantum number, and ħ = h/2π. The lowest value of n is 1; this gives a smallest possible orbital radius of 0.0529 nm known as the Bohr radius. Once an electron is in this lowest orbit, it can get no closer to the proton. Starting from the angular momentum quantum rule, Bohr[5] was able to calculate the energies of the allowed orbits of the hydrogen atom and other hydrogen-like atoms and ions. Other points are: 1. Like Einstein's theory of the Photoelectric effect, Bohr's formula assumes that during a quantum jump a discrete amount of energy is radiated. However, unlike Einstein, Bohr stuck to the classical Maxwell theory of the electromagnetic field. Quantization of the electromagnetic field was explained by the discreteness of the atomic energy levels; Bohr did not believe in the existence of photons. 2. According to the Maxwell theory the frequency ν of classical radiation is equal to the rotation frequency νrot of the electron in its orbit, with harmonics at integer multiples of this frequency. This result is obtained from the Bohr model for jumps between energy levels En and En−k when k is much smaller than n. These jumps reproduce the frequency of the k-th harmonic of orbit n. 
For sufficiently large values of n (so-called Rydberg states), the two orbits involved in the emission process have nearly the same rotation frequency, so that the classical orbital frequency is not ambiguous. But for small n (or large k), the radiation frequency has no unambiguous classical interpretation. This marks the birth of the correspondence principle, requiring quantum theory to agree with the classical theory only in the limit of large quantum numbers. 3. The Bohr-Kramers-Slater theory (BKS theory) is a failed attempt to extend the Bohr model which violates the conservation of energy and momentum in quantum jumps, with the conservation laws only holding on average. Bohr's condition, that the angular momentum is an integer multiple of ħ was later reinterpreted in 1924 by de Broglie as a standing wave condition: the electron is described by a wave and a whole number of wavelengths must fit along the circumference of the electron's orbit: $n \lambda = 2 \pi r.\,$ Substituting de Broglie's wavelength of h/p reproduces Bohr's rule. In 1913, however, Bohr justified his rule by appealing to the correspondence principle, without providing any sort of wave interpretation. In 1913, the wave behavior of matter particles such as the electron (i.e., matter waves) was not suspected. In 1925 a new kind of mechanics was proposed, quantum mechanics, in which Bohr's model of electrons traveling in quantized orbits was extended into a more accurate model of electron motion. The new theory was proposed by Werner Heisenberg. Another form of the same theory, wave mechanics, was discovered by the Austrian physicist Erwin Schrödinger independently, and by different reasoning. Schrödinger employed de Broglie's matter waves, but sought wave solutions of a three-dimensional wave equation describing electrons that were constrained to move about the nucleus of a hydrogen-like atom, by being trapped by the potential of the positive nuclear charge. ## Electron energy levels The Bohr model gives almost exact results only for a system where two charged points orbit each other at speeds much less than that of light. This not only includes one-electron systems such as the hydrogen atom, singly ionized helium, doubly ionized lithium, but it includes positronium and Rydberg states of any atom where one electron is far away from everything else. It can be used for K-line X-ray transition calculations if other assumptions are added (see Moseley's law below). In high energy physics, it can be used to calculate the masses of heavy quark mesons. To calculate the orbits requires two assumptions: • Classical mechanics The electron is held in a circular orbit by electrostatic attraction. The centripetal force is equal to the Coulomb force. ${m_e v^2\over r} = {Zk_e e^2 \over r^2}$ where me is the electron's mass, e is the charge of the electron, ke is Coulomb's constant and Z is the atom's atomic number. This equation determines the electron's speed at any radius: $v = \sqrt{ Zk_e e^2 \over m_e r}.$ It also determines the electron's total energy at any radius: $E= {1\over 2} m_e v^2 - {Z k_e e^2 \over r} = - {Z k_e e^2 \over 2r}.$ The total energy is negative and inversely proportional to r. This means that it takes energy to pull the orbiting electron away from the proton. For infinite values of r, the energy is zero, corresponding to a motionless electron infinitely far from the proton. The total energy is half the potential energy, which is true for noncircular orbits too by the virial theorem. 
For positronium, me is replaced by its reduced mass (μ = me/2). • Quantum rule The angular momentum L = mevr is an integer multiple of ħ: $m_e v r = n \hbar$ Substituting the expression for the velocity gives an equation for r in terms of n: $\sqrt{Zk_e e^2 m_e r} = n \hbar$ so that the allowed orbit radius at any n is: $r_n = {n^2\hbar^2\over Zk_e e^2 m_e}$ The smallest possible value of r in the hydrogen atom is called the Bohr radius and is equal to: $r_1 = {\hbar^2 \over k_e e^2 m_e} \approx 5.29 \times 10^{-11} \mathrm{m}$ The energy of the n-th level for any atom is determined by the radius and quantum number: $E = -{Zk_e e^2 \over 2r_n } = - { Z^2(k_e e^2)^2 m_e \over 2\hbar^2 n^2} \approx {-13.6Z^2 \over n^2}\mathrm{eV}$ An electron in the lowest energy level of hydrogen (n = 1) therefore has about 13.6 eV less energy than a motionless electron infinitely far from the nucleus. The next energy level (n = 2) is −3.4 eV. The third (n = 3) is −1.51 eV, and so on. For larger values of n, these are also the binding energies of a highly excited atom with one electron in a large circular orbit around the rest of the atom. The combination of natural constants in the energy formula is called the Rydberg energy (RE): $R_E = { (k_e e^2)^2 m_e \over 2 \hbar^2}$ This expression is clarified by interpreting it in combinations which form more natural units: $\, m_e c^2$ is the rest mass energy of the electron (511 keV) $\, {k_e e^2 \over \hbar c} = \alpha \approx {1\over 137}$ is the fine structure constant $\, R_E = {1\over 2} (m_e c^2) \alpha^2$ Since this derivation is with the assumption that the nucleus is orbited by one electron, we can generalize this result by letting the nucleus have a charge q = Z e where Z is the atomic number. This will now give us energy levels for hydrogenic atoms, which can serve as a rough order-of-magnitude approximation of the actual energy levels. So, for nuclei with Z protons, the energy levels are (to a rough approximation): $E_n = -{Z^2 R_E \over n^2}$ The actual energy levels cannot be solved analytically for more than one electron (see n-body problem) because the electrons are not only affected by the nucleus but also interact with each other via the Coulomb Force. When Z = 1/α (Z ≈ 137), the motion becomes highly relativistic, and Z2 cancels the α2 in R; the orbit energy begins to be comparable to rest energy. Sufficiently large nuclei, if they were stable, would reduce their charge by creating a bound electron from the vacuum, ejecting the positron to infinity. This is the theoretical phenomenon of electromagnetic charge screening which predicts a maximum nuclear charge. Emission of such positrons has been observed in the collisions of heavy ions to create temporary super-heavy nuclei.[citation needed] The Bohr formula properly uses the reduced mass of electron and proton in all situations, instead of the mass of the electron: $m_\text{red} = \frac{m_e m_p}{m_e + m_p} = m_e \frac{1}{1+m_e/m_p}$. However, these numbers are very nearly the same, due to the much larger mass of the proton, about 1836.1 times the mass of the electron, so that the reduced mass in the system is the mass of the electron multiplied by the constant 1836.1/(1+1836.1) = 0.99946. 
This fact was historically important in convincing Rutherford of the importance of Bohr's model, for it explained the fact that the frequencies of lines in the spectra for singly ionized helium do not differ from those of hydrogen by a factor of exactly 4, but rather by 4 times the ratio of the reduced mass for the hydrogen vs. the helium systems, which was much closer to the experimental ratio than exactly 4.0. For positronium, the formula uses the reduced mass also, but in this case, it is exactly the electron mass divided by 2. For any value of the radius, the electron and the positron are each moving at half the speed around their common center of mass, and each has only one fourth the kinetic energy. The total kinetic energy is half what it would be for a single electron moving around a heavy nucleus. $E_n = {R_E \over 2 n^2 }$ (positronium) ## Rydberg formula The Rydberg formula, which was known empirically before Bohr's formula, is now in Bohr's theory seen as describing the energies of transitions or quantum jumps between one orbital energy level, and another. Bohr's formula gives the numerical value of the already-known and measured Rydberg's constant, but now in terms of more fundamental constants of nature, including the electron's charge and Planck's constant. When the electron gets moved from its original energy level to a higher one, it then jumps back each level till it comes to the original position, which results in a photon being emitted. Using the derived formula for the different energy levels of hydrogen one may determine the wavelengths of light that a hydrogen atom can emit. The energy of a photon emitted by a hydrogen atom is given by the difference of two hydrogen energy levels: $E=E_i-E_f=R_E \left( \frac{1}{n_{f}^2} - \frac{1}{n_{i}^2} \right) \,$ where nf is the final energy level, and ni is the initial energy level. Since the energy of a photon is $E=\frac{hc}{\lambda}, \,$ the wavelength of the photon given off is given by $\frac{1}{\lambda}=R \left( \frac{1}{n_{f}^2} - \frac{1}{n_{i}^2} \right). \,$ This is known as the Rydberg formula, and the Rydberg constant R is $R_E/hc$, or $R_E/2\pi$ in natural units. This formula was known in the nineteenth century to scientists studying spectroscopy, but there was no theoretical explanation for this form or a theoretical prediction for the value of R, until Bohr. In fact, Bohr's derivation of the Rydberg constant, as well as the concomitant agreement of Bohr's formula with experimentally observed spectral lines of the Lyman ($n_f = 1$), Balmer ($n_f = 2$), and Paschen ($n_f = 3$) series, and successful theoretical prediction of other lines not yet observed, was one reason that his model was immediately accepted. To apply to atoms with more than one electron, the Rydberg formula can be modified by replacing "Z" with "Z - b" or "n" with "n - b" where b is constant representing a screening effect due to the inner-shell and other electrons (see Electron shell and the later discussion of the "Shell Model of the Atom" below). This was established empirically before Bohr presented his model. ## Shell model of the atom In 1922, Nobel prize winner Niels Bohr revised Rutherford's model by suggesting that • The electrons were confined into clearly defined orbits. • They could jump between these orbits, but could not freely spiral inward or outward in intermediate states. • An electron must absorb or emit specific amounts of energy for transition between these fixed orbits. 
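The energy-level and Rydberg formulas of the preceding sections are easy to check numerically. A minimal sketch (the constants R_E ≈ 13.6057 eV and h·c ≈ 1239.84 eV·nm are standard values supplied here, not taken from the article); it reproduces the -13.6/n^2 eV levels and the 656 nm line of the 3 -> 2 transition shown in the figure caption:

```python
RYDBERG_EV = 13.605693    # Rydberg energy R_E in eV (standard value, assumed)
HC_EV_NM = 1239.841984    # h*c in eV*nm (standard value, assumed)

def bohr_energy(n, Z=1):
    return -RYDBERG_EV * Z**2 / n**2          # E_n = -R_E * Z^2 / n^2

for n in (1, 2, 3):
    print(n, round(bohr_energy(n), 2), "eV")  # -13.61, -3.4, -1.51 eV

photon = bohr_energy(3) - bohr_energy(2)      # energy of the emitted 3 -> 2 photon
print(round(HC_EV_NM / photon, 1), "nm")      # about 656.1 nm, the first Balmer line
```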
Bohr extended the model of Hydrogen to give an approximate model for heavier atoms. This gave a physical picture which reproduced many known atomic properties for the first time. Heavier atoms have more protons in the nucleus, and more electrons to cancel the charge. Bohr's idea was that each discrete orbit could only hold a certain number of electrons. After that orbit is full, the next level would have to be used. This gives the atom a shell structure, in which each shell corresponds to a Bohr orbit. This model is even more approximate than the model of hydrogen, because it treats the electrons in each shell as non-interacting. But the repulsions of electrons are taken into account somewhat by the phenomenon of screening. The electrons in outer orbits do not only orbit the nucleus, but they also orbit the inner electrons, so the effective charge Z that they feel is reduced by the number of the electrons in the inner orbit. For example, the lithium atom has two electrons in the lowest 1S orbit, and these orbit at Z=2. Each one sees the nuclear charge of Z=3 minus the screening effect of the other, which crudely reduces the nuclear charge by 1 unit. This means that the innermost electrons orbit at approximately 1/4 the Bohr radius. The outermost electron in lithium orbits at roughly Z=1, since the two inner electrons reduce the nuclear charge by 2. This outer electron should be at nearly one Bohr radius from the nucleus. Because the electrons strongly repel each other, the effective charge description is very approximate; the effective charge Z doesn't usually come out to be an integer. But Moseley's law experimentally probes the innermost pair of electrons, and shows that they do see a nuclear charge of approximately Z-1, while the outermost electron in an atom or ion with only one electron in the outermost shell orbits a core with effective charge Z-k where k is the total number of electrons in the inner shells. The shell model was able to qualitatively explain many of the mysterious properties of atoms which became codified in the late 19th century in the periodic table of the elements. One property was the size of atoms, which could be determined approximately by measuring the viscosity of gases and density of pure crystalline solids. Atoms tend to get smaller toward the right in the periodic table, and become much larger at the next line of the table. Atoms to the right of the table tend to gain electrons, while atoms to the left tend to lose them. Every element on the last column of the table is chemically inert (noble gas). In the shell model, this phenomenon is explained by shell-filling. Successive atoms become smaller because they are filling orbits of the same size, until the orbit is full, at which point the next atom in the table has a loosely bound outer electron, causing it to expand. The first Bohr orbit is filled when it has two electrons, and this explains why helium is inert. The second orbit allows eight electrons, and when it is full the atom is neon, again inert. The third orbital contains eight again, except that in the more correct Sommerfeld treatment (reproduced in modern quantum mechanics) there are extra "d" electrons. The third orbit may hold an extra 10 d electrons, but these positions are not filled until a few more orbitals from the next level are filled (filling the n=3 d orbitals produces the 10 transition elements). 
The irregular filling pattern is an effect of interactions between electrons, which are not taken into account in either the Bohr or Sommerfeld models, and which are difficult to calculate even in the modern treatment. ## Moseley's law and calculation of K-alpha X-ray emission lines Niels Bohr said in 1962, "You see actually the Rutherford work [the nuclear atom] was not taken seriously. We cannot understand today, but it was not taken seriously at all. There was no mention of it any place. The great change came from Moseley." In 1913 Henry Moseley found an empirical relationship between the strongest X-ray line emitted by atoms under electron bombardment (then known as the K-alpha line) and their atomic number Z. Moseley's empirical formula was found to be derivable from Rydberg and Bohr's formula (Moseley actually mentions only Ernest Rutherford and Antonius Van den Broek in terms of models), given two additional assumptions: [1] that this X-ray line came from a transition between energy levels with quantum numbers 1 and 2, and [2] that the atomic number Z, when used in the formula for atoms heavier than hydrogen, should be diminished by 1, to (Z-1)². Moseley wrote to Bohr, puzzled about his results, but Bohr was not able to help. At that time, he thought that the postulated innermost "K" shell of electrons should have at least four electrons, not the two which would have neatly explained the result. So Moseley published his results without a theoretical explanation. Later, people realized that the effect was caused by charge screening, with an inner shell containing only 2 electrons. In the experiment, one of the innermost electrons in the atom is knocked out, leaving a vacancy in the lowest Bohr orbit, which contains a single remaining electron. This vacancy is then filled by an electron from the next orbit, which has n=2. But the n=2 electrons see an effective charge of Z-1, which is the value appropriate for the charge of the nucleus when a single electron remains in the lowest Bohr orbit to screen the nuclear charge +Z and lower it by -1 (due to the electron's negative charge screening the nuclear positive charge). The energy gained by an electron dropping from the second shell to the first gives Moseley's law for K-alpha lines: $E= h\nu = E_i-E_f=R_E (Z-1)^2 \left( \frac{1}{1^2} - \frac{1}{2^2} \right) \,$ or $f = \nu = R_v \left( \frac{3}{4}\right) (Z-1)^2 = (2.46 \times 10^{15} \operatorname{Hz})(Z-1)^2.$ Here, $R_v = R_E/h$ is the Rydberg constant expressed as a frequency, equal to $3.28 \times 10^{15}$ Hz. For values of Z between 11 and 31 this latter relationship had been empirically derived by Moseley, in a simple (linear) plot of the square root of X-ray frequency against atomic number (however, for silver, Z = 47, the experimentally obtained screening term should be replaced by 0.4). Notwithstanding its restricted validity,[6] Moseley's law not only established the objective meaning of atomic number (see Henry Moseley for detail) but, as Bohr noted, it also did more than the Rydberg derivation to establish the validity of the Rutherford/Van den Broek/Bohr nuclear model of the atom, with atomic number (place on the periodic table) standing for whole units of nuclear charge. The K-alpha line of Moseley's time is now known to be a pair of close lines, written as (Kα1 and Kα2) in Siegbahn notation. ## Shortcomings The Bohr model gives an incorrect value $\scriptstyle \mathbf{L} = \hbar$ for the ground state orbital angular momentum.
The angular momentum in the true ground state is known to be zero. Although mental pictures fail somewhat at these levels of scale, an electron in the lowest modern "orbital" with no orbital momentum, may be thought of as not to rotate "around" the nucleus at all, but merely to go tightly around it in an ellipse with zero area (this may be pictured as "back and forth", without striking or interacting with the nucleus). This is only reproduced in a more sophisticated semiclassical treatment like Sommerfeld's. Still, even the most sophisticated semiclassical model fails to explain the fact that the lowest energy state is spherically symmetric--- it doesn't point in any particular direction. Nevertheless, in the modern fully quantum treatment in phase space, Weyl quantization, the proper deformation (full extension) of the semi-classical result adjusts the angular momentum value to the correct effective one. As a consequence, the physical ground state expression is obtained through a shift of the vanishing quantum angular momentum expression, which corresponds to spherical symmetry. In modern quantum mechanics, the electron in hydrogen is a spherical cloud of probability which grows denser near the nucleus. The rate-constant of probability-decay in hydrogen is equal to the inverse of the Bohr radius, but since Bohr worked with circular orbits, not zero area ellipses, the fact that these two numbers exactly agree, is considered a "coincidence." (Though many such coincidental agreements are found between the semi-classical vs. full quantum mechanical treatment of the atom; these include identical energy levels in the hydrogen atom, and the derivation of a fine structure constant, which arises from the relativistic Bohr-Sommerfeld model (see below), and which happens to be equal to an entirely different concept, in full modern quantum mechanics). The Bohr model also has difficulty with, or else fails to explain: • Much of the spectra of larger atoms. At best, it can make predictions about the K-alpha and some L-alpha X-ray emission spectra for larger atoms, if two additional ad hoc assumptions are made (see Moseley's law above). Emission spectra for atoms with a single outer-shell electron (atoms in the lithium group) can also be approximately predicted. Also, if the empiric electron-nuclear screening factors for many atoms are known, many other spectral lines can be deduced from the information, in similar atoms of differing elements, via the Ritz-Rydberg combination principles (see Rydberg formula). All these techniques essentially make use of Bohr's Newtonian energy-potential picture of the atom. • the relative intensities of spectral lines; although in some simple cases, Bohr's formula or modifications of it, was able to provide reasonable estimates (for example, calculations by Kramers for the Stark effect). • The existence of fine structure and hyperfine structure in spectral lines, which are known to be due to a variety of relativistic and subtle effects, as well as complications from electron spin. • The Zeeman effect - changes in spectral lines due to external magnetic fields; these are also due to more complicated quantum principles interacting with electron spin and orbital magnetic fields. • The model also violates the uncertainty principle in that it considers electrons to have known orbits and definite radius, two things which can not be directly known at once. • Doublets and Triplets: Appear in the spectra of some atoms: Very close pairs of lines. 
Bohr’s model cannot say why some energy levels should be very close together. • Multi-electron Atoms: don’t have energy levels predicted by the model. It doesn’t work for (neutral) helium. • A rotating charge such as the electron classically orbiting around the nucleus, would constantly lose energy in form of electromagnetic radiation (via various mechanisms: dipole radiation, Bremsstrahlung,...). But such radiation is not observed. ## Refinements Elliptical orbits with the same energy and quantized angular momentum Several enhancements to the Bohr model were proposed; most notably the Sommerfeld model or Bohr-Sommerfeld model, which suggested that electrons travel in elliptical orbits around a nucleus instead of the Bohr model's circular orbits.[1] This model supplemented the quantized angular momentum condition of the Bohr model with an additional radial quantization condition, the Sommerfeld-Wilson quantization condition[7][8] $\int_0^T p_r \,dq_r = n h \,$ where pr is the radial momentum canonically conjugate to the coordinate q which is the radial position and T is one full orbital period. The integral is the action of action-angle coordinates. This condition, suggested by the correspondence principle, is the only one possible, since the quantum numbers are adiabatic invariants. The Bohr-Sommerfeld model was fundamentally inconsistent and led to many paradoxes. The magnetic quantum number measured the tilt of the orbital plane relative to the xy-plane, and it could only take a few discrete values. This contradicted the obvious fact that an atom could be turned this way and that relative to the coordinates without restriction. The Sommerfeld quantization can be performed in different canonical coordinates, and sometimes gives answers which are different. The incorporation of radiation corrections was difficult, because it required finding action-angle coordinates for a combined radiation/atom system, which is difficult when the radiation is allowed to escape. The whole theory did not extend to non-integrable motions, which meant that many systems could not be treated even in principle. In the end, the model was replaced by the modern quantum mechanical treatment of the hydrogen atom, which was first given by Wolfgang Pauli in 1925, using Heisenberg's matrix mechanics. The current picture of the hydrogen atom is based on the atomic orbitals of wave mechanics which Erwin Schrödinger developed in 1926. However, this is not to say that the Bohr model was without its successes. Calculations based on the Bohr-Sommerfeld model were able to accurately explain a number of more complex atomic spectral effects. For example, up to first-order perturbations, the Bohr model and quantum mechanics make the same predictions for the spectral line splitting in the Stark effect. At higher-order perturbations, however, the Bohr model and quantum mechanics differ, and measurements of the Stark effect under high field strengths helped confirm the correctness of quantum mechanics over the Bohr model. The prevailing theory behind this difference lies in the shapes of the orbitals of the electrons, which vary according to the energy state of the electron. The Bohr-Sommerfeld quantization conditions lead to questions in modern mathematics. Consistent semiclassical quantization condition requires a certain type of structure on the phase space, which places topological limitations on the types of symplectic manifolds which can be quantized. 
In particular, the symplectic form should be the curvature form of a connection of a Hermitian line bundle, which is called a prequantization. ## See also 1913 in science Balmer's Constant Basic concepts of quantum mechanics Franck-Hertz experiment provided early support for the Bohr model. Free-fall atomic model Inert pair effect is adequately explained by means of the Bohr model. Introduction to quantum mechanics Theoretical and experimental justification for the Schrödinger equation ## References ### Footnotes 1. ^ a b Akhlesh Lakhtakia (Ed.); Salpeter, Edwin E. (1996). "Models and Modelers of Hydrogen". American Journal of Physics (World Scientific) 65 (9): 933. Bibcode:1997AmJPh..65..933L. doi:10.1119/1.18691. ISBN 981-02-2302-1. 2. Niels Bohr (1913). "On the Constitution of Atoms and Molecules, Part I". Philosophical Magazine 26 (151): 1–24. doi:10.1080/14786441308634955. 3. Niels Bohr (1913). "On the Constitution of Atoms and Molecules, Part II Systems Containing Only a Single Nucleus". Philosophical Magazine 26 (153): 476–502. doi:10.1080/14786441308634993. 4. N. Bohr (1913). "I.On the constitution of atoms and molecules". 26 (151): 1–25. doi:10.1080/14786441308634955. 5. M.A.B. Whitaker (1999). "The Bohr-Moseley synthesis and a simple model for atomic x-ray energies". 20 (3): 213–220. Bibcode:1999EJPh...20..213W. doi:10.1088/0143-0807/20/3/312. 6. A. Sommerfeld (1916). "Zur Quantentheorie der Spektrallinien". 51 (17): 1. Bibcode:1916AnP...356....1S. doi:10.1002/andp.19163561702. 7. W. Wilson (1915). "The quantum theory of radiation and line spectra". 29 (174): 795–802. doi:10.1080/14786440608635362. ### Primary sources • Niels Bohr (1913). "On the Constitution of Atoms and Molecules, Part I". Philosophical Magazine 26 (151): 1–24. doi:10.1080/14786441308634955. • Niels Bohr (1913). "On the Constitution of Atoms and Molecules, Part II Systems Containing Only a Single Nucleus". Philosophical Magazine 26 (153): 476–502. doi:10.1080/14786441308634993. • Niels Bohr (1913). "On the Constitution of Atoms and Molecules, Part III Systems containing several nuclei". Philosophical Magazine 26: 857–875. • Niels Bohr (1914). "The spectra of helium and hydrogen". Nature 92 (2295): 231–232. Bibcode:1913Natur..92..231B. doi:10.1038/092231d0. • Niels Bohr (1921). "Atomic Structure". Nature 106 (2682): 104–107. Bibcode:1921Natur.107..104B. doi:10.1038/107104a0. • A. Einstein (1917). "Zum Quantensatz von Sommerfeld und Epstein". Verhandlungen der Deutschen Physikalischen Gesellschaft 19: 82–92.  Reprinted in The Collected Papers of Albert Einstein, A. Engel translator, (1997) Princeton University Press, Princeton. 6 p. 434. (provides an elegant reformulation of the Bohr-Sommerfeld quantization conditions, as well as an important insight into the quantization of non-integrable (chaotic) dynamical systems.) ## Further reading • Linus Carl Pauling (1970). "Chapter 5-1". General Chemistry (3rd ed.). San Francisco: W.H. Freeman & Co. • Reprint: Linus Pauling (1988). General Chemistry. New York: Dover Publications. ISBN 0-486-65622-5. • George Gamow (1985). "Chapter 2". Thirty Years That Shook Physics. Dover Publications. • Walter J. Lehmann (1972). "Chapter 18". Atomic and Molecular Structure: the development of our concepts. John Wiley and Sons. • Paul Tipler and Ralph Llewellyn (2002). Modern Physics (4th ed.). W. H. Freeman. ISBN 0-7167-4345-0. • Steven and Susan Zumdahl (2010). "Chapter 7.4". Chemistry (8th ed.). Brooks/Cole. ISBN 978-0-495-82992-8. • Helge Kragh (2011). 
"Conceptual objections to the Bohr atomic theory — do electrons have a "free will" ?". 36 (3): 327. Bibcode:2011EPJH...36..327K. doi:10.1140/epjh/e2011-20031-x.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 31, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9174007773399353, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/180310/evaluating-int-0a-frac-cosux-sqrta2-x2-mathrm-dx
Evaluating $\int_0^a \frac{\cos(ux)}{\sqrt{a^2-x^2}}\mathrm dx$ I believe this integral $$\int_0^a \frac{\cos(ux)}{\sqrt{a^2-x^2}}\mathrm dx$$ can not be computed exactly. However is there a method or transformation to express this integral in terms of the cosine integral or similar? I am referring to the integrals here. $a$ is real number; with the change of variable this integral becomes $$\int_0^a\cos(u\sin t) \ \mathrm dt$$ with $$x=a\sin t,$$ So, the new integral is $$\int_0^{\pi /2}\cos(ua\sin t) \ \mathrm dt$$ - Actually, it's expressible as a Bessel function... – J. M. Aug 8 '12 at 14:47 aja, thanks what bessel function if possible :) thanks again – Jose Garcia Aug 8 '12 at 14:48 1 Answer From $$\int_0^a \frac{\cos(ux)}{\sqrt{a^2-x^2}}\mathrm dx$$ you were able to transform it into $$\int_0^{\pi/2}\cos(au\sin\,t)\mathrm dt$$ which is expressible in terms of the Anger function $\mathscr{J}_\nu(z)$, which is equivalent to the more familiar Bessel function of the first kind $J_\nu(z)$ for integer orders: $$\int_0^{\pi/2}\cos(au\sin\,t)\mathrm dt=\frac12\int_0^\pi\cos(au\sin\,t)\mathrm dt=\frac{\pi}{2}J_0(au)$$ - shouldn't it be $\frac{\pi }{2} J_{0}(au/2)$ due to the change of variable $t \rightarrow t/2$ – Jose Garcia Aug 8 '12 at 18:58 anyway thank you all for your answers :D – Jose Garcia Aug 8 '12 at 18:58 @Jose, note the limits. :) – J. M. Aug 9 '12 at 1:03
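A quick numerical sanity check of the identity above (an illustration, not a proof): the original integral, the substituted form, and $\frac{\pi}{2}J_0(au)$ should all agree for any fixed real $a>0$ and $u$. The particular numbers below are arbitrary.

```python
import numpy as np
from scipy import integrate, special

a, u = 1.7, 2.3  # arbitrary test values

# original integral (the 1/sqrt singularity at x = a is integrable)
orig, _ = integrate.quad(lambda x: np.cos(u * x) / np.sqrt(a**2 - x**2), 0, a)
# substituted form, which has a smooth integrand
subst, _ = integrate.quad(lambda t: np.cos(a * u * np.sin(t)), 0, np.pi / 2)

print(orig, subst, (np.pi / 2) * special.j0(a * u))  # all three values agree
```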
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 7, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8769111633300781, "perplexity_flag": "head"}
http://mathematica.stackexchange.com/questions/2573/solve-in-terms-of-specific-variables/2575
# Solve in terms of specific variables As part of my physics homework (which I do not need the answer for, as I did it by hand, and got the correct $$\frac{m_1 g-m_2 g \sin(\theta)}{-m_1-m_2}$$ as the answer) I would have liked to have solved the following system of equations as shown (i.e. put $a_1$ as `Subscript[a,1]` and not `a1` or something similar. The equations that I wanted to solve were: 1. `Subscript[m, 2] Subscript[a, 2 x] == T - Subscript[m, 2] g Sin[\[Theta]]` 2. `Subscript[m, 1] Subscript[a, 1 y] == T - Subscript[m, 1] g` 3. `Subscript[a, 2 x] == -Subscript[a, 1 y]` I tried the following, and was given `{}` as the answer (which is clearly false): ```Solve[Subscript[m, 2] Subscript[a, 2 x] == T - Subscript[m, 2] g Sin[\[Theta]] && Subscript[m, 1] Subscript[a, 1 y] == T - Subscript[m, 1] g && Subscript[a, 2 x] == -Subscript[a, 1 y], Subscript[a, 1 y]]``` It seems that there at least two problems with the equations that I am asking it to solve. 1. I do not think that it is treating $a_{1y}$ as a variable. When I typed a in the part of solve that asks for the variables that I am solving for, all terms that have `a` in them light up as blue, but when I enter the subscript, it loses that highlighting. 2. It would seem (at least to me) that a perfectly valid answer to `Solve` would be $-a_{2x}$ but that would be a useless answer, as I want the answer in terms of $m_1 , m_2 , g, \theta$ and not in terms of other "compound variables" Note: When I tried a "subscript-less" version (```Solve[m2 a2x == T - m2 g Sin[\[Theta]] && m1 a1y == T - m1 g && a2x == -a1y, a1y]``` I still got `{}` as the answer. Using Mathematica 8. What is going on, and how can I enter the equations normally, and solve for them in terms of specific variables? - ## 1 Answer Well, I'm not an expert on this and I always fight when I do these stuff, but this is what I think. You are using symbols in your equations. To Mathematica, this probably means that they are something unkown but that something could be anything. If you put the symbols as last arguments, those are the ONLY symbols it will try to "generate conditions" to make the equations fit, for ANY value of the other symbols... (this is a generality. Read the long help then. You can set domain specifications, or quantifiers like `Exists`) So, for example, ````Solve[x == y && x == -y, x] ```` will give an empty list even though `y=0` is a solution. So in that case you have two options. Either specify `y` as a symbol to solve too ````In[22]:= Solve[x == y && x == -y, {x, y}] Out[22]= {{x -> 0, y -> 0}} ```` or use some version that will generate the conditions on `y` ````Solve[x == y && x == -y] ```` {{x -> 0, y -> 0}} or ````Reduce[x == y && x == -y] ```` y == 0 && x == 0 You could also explicitly ask solve to eliminate `y` ````Solve[x == y && x == -y, x, {y}] ```` which is equivalent to ````Solve[Exists[y, x == y && x == -y], x] ```` {{x -> 0}} Back to your case. Conclusion: either use reduce or add more symbols to the variable list. I can definitely find values for `a2x`, `T`, `m1`, `g`, that make the last two equations impossible to be satisfied. - what should I add to the list of variables to get the answer at the top of my question? I cannot seem to figure it out (Just replacing `Solve` with `Reduce` shows many answers, but none of them are in the or that I want). – soandos Mar 4 '12 at 4:00 @soandos, well, you clearly don't want your solution in terms of T or a2x, so add those – Rojo Mar 4 '12 at 4:03 lang-mma
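For readers without Mathematica, here is the same fix sketched in Python with SymPy rather than in Mathematica itself: listing $T$ and $a_{2x}$ among the unknowns (so they get eliminated) yields the answer purely in terms of $m_1$, $m_2$, $g$ and $\theta$. The plain names `a1y`, `a2x` stand in for the subscripted symbols of the question.

```python
import sympy as sp

m1, m2, g, theta, T, a1y, a2x = sp.symbols('m1 m2 g theta T a1y a2x')

eqs = [
    sp.Eq(m2 * a2x, T - m2 * g * sp.sin(theta)),  # m2*a_2x == T - m2*g*sin(theta)
    sp.Eq(m1 * a1y, T - m1 * g),                  # m1*a_1y == T - m1*g
    sp.Eq(a2x, -a1y),
]

sol = sp.solve(eqs, [a1y, a2x, T], dict=True)[0]
print(sp.simplify(sol[a1y]))
# equivalent to (m1*g - m2*g*sin(theta))/(-m1 - m2), the answer found by hand
```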
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9559407830238342, "perplexity_flag": "middle"}
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Homomorphism
# Homomorphism This word should not be confused with homeomorphism. In abstract algebra, a homomorphism is a map from one algebraic structure to another of the same type that preserves all the relevant structure. N.B. Some authors use the word homomorphism in a larger context than that of algebra. Some take it to mean any kind of structure preserving map (such as continuous maps in topology), or even a more abstract kind of map—what we term a morphism—used in category theory. This article only treats the algebraic context. For more general usage see the morphism article. For example, if one considers sets with a single binary operation defined on them (an algebraic structure known as a magma), a homomorphism is a map $\phi: X \rightarrow Y$ such that $\phi(u \cdot v) = \phi(u) \circ \phi(v)$ where $\cdot$ is the operation on X and $\circ$ is the operation on Y. Each type of algebraic structure has its own type of homomorphism. The notion of a homomorphism can be given a formal definition in the context of universal algebra, a field which studies ideas common to all algebraic structures. In this setting, a homomorphism $\phi: A \rightarrow B$ is a map between two algebraic structures of the same type such that $\phi(f_A(x_1, \ldots, x_n)) = f_B(\phi(x_1), \ldots, \phi(x_n))$ for each n-ary operation f and for all xi in A. ### Types of homomorphisms • An isomorphism is a bijective homomorphism. Two objects are said to be isomorphic if there is an isomorphism between them. Isomorphic objects are completely indistinguishable as far as the structure in question is concerned. • An epimorphism is a surjective homomorphism. • A monomorphism is an injective homomorphism. • A homomorphism from an object to itself is called an endomorphism. • An endomorphism which is also an isomorphism is called an automorphism. The above terms are used in an analogous fashion in category theory, however, the definitions in category theory are more subtle; see the article on morphism for more details. Note that in the larger context of structure preserving maps, it is generally insufficient to define an isomorphism as a bijective morphism. One must also require that the inverse is a morphism of the same type. In the algebraic setting (at least within the context of universal algebra) this extra condition is automatically satisfied. ### Kernel of a homomorphism Any homomorphism f : X → Y defines an equivalence relation ~ on X by a ~ b iff f(a) = f(b). The relation ~ is called the kernel of f. It is a congruence relation on X. The quotient set X/~ can then be given an object-structure in a natural way, e.g., [x] * [y] = [x * y]. In that case the image of X in Y under the homomorphism f is necessarily isomorphic to X/~; this fact is one of the isomorphism theorems. Note in some cases (e.g. groups or rings), a single equivalence class K suffices to specify the structure of the quotient, so we write it X/K. Also in these cases, it is K, rather than ~, that is called the kernel of f (cf. normal subgroup, ideal).
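To make the definition and the kernel construction concrete, here is a tiny illustrative check in Python (a hypothetical example, not taken from the article): the reduction map from the integers to the integers mod 6 is a homomorphism of additive structures, and its kernel relation identifies two integers exactly when their difference is a multiple of 6.

```python
n = 6
phi = lambda x: x % n  # the reduction map Z -> Z/6Z

sample = range(-20, 21)

# homomorphism property: phi(u + v) equals phi(u) + phi(v) computed in Z/6Z
assert all(phi(u + v) == (phi(u) + phi(v)) % n for u in sample for v in sample)

# kernel as an equivalence relation: u ~ v iff phi(u) == phi(v) iff n divides u - v
assert all((phi(u) == phi(v)) == ((u - v) % n == 0) for u in sample for v in sample)

print("phi is a homomorphism; its kernel relation is congruence mod", n)
```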
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 6, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8930649757385254, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/120613/need-help-with-this-primitive-roots-question
# Need help with this primitive roots question Question: If p and q are odd primes and $({a^p+1})/q$, show that either $(a+1)/q$ or $q= 2kp + 1$ for some integer $k$ I read the theorem that says If p and q are odd primes and $({a^p-1})/q$, show that either $(a-1)/q$ or $q= 2kp + 1$ for some integer $k$. I tried to follow a similar method as the proof of that theorem, but I can't seem to be able to come up with a solution. - 1 Sorry: what do you mean by "and $(a^p+1)/q$"? And $\frac{a^p+1}{q}$ what? Or is that supposed to mean "$q$ divides $a^p+1$"? (I guessed it might mean "$a^p+1$ divides $q$, but that would be silly: it would mean $a=0$ or $a^p+1=q$). – Arturo Magidin Mar 15 '12 at 19:31 1 Apparently Google thinks you mean $q \mid (a^p + 1).$ This exact problem is verbatim `If p and q are odd primes and q | a^(p) + 1, Show that either q | a+1 or q=2kp +1 for some integer k.?` here on: answers.yahoo.com/question/index?qid=20090714022447AAXZQrK – user2468 Mar 15 '12 at 19:36 Isn't this just a matter of replacing $a$ with $-a$? – anon Mar 15 '12 at 19:36 1 I think the notation may be supposed to mean that the fractions are integers - but it needs clarifying to make sense of the problem. – Mark Bennet Mar 15 '12 at 19:37 ## 2 Answers Suppose that $q$ divides $a^p+1$. Then $a^p \equiv -1 \pmod q$. Note that $(a^p)^2=a^{2p}$. So $a^{2p}\equiv 1 \pmod q$. Let $e$ be the smallest positive integer such that $a^e\equiv 1 \pmod q$. Then $e$ divides $2p$. In principle, there are $4$ possibilities to examine: (i) $e=1$; (ii) $e=2$; (iii) $e=p$; (iv) $e=2p$. Possibility (i): If $e=1$, then $a \equiv 1\pmod{q}$. This cannot happen, because we were told that $a^p \equiv -1\pmod{q}$. But $1\not\equiv -1\pmod{q}$, since $q$ is odd. Possibility (ii): If $e=2$, then $a\equiv -1\pmod{q}$, so $q$ divides $a+1$. Possibility (iii): If $e=p$, we reach a contradiction, since $a^p\equiv -1 \pmod{q}$, and $1\not\equiv -1\pmod{q}$. Possibility (iv): Recall that by the minimality of $e$, the number $e$ must divide any $k$ such that $a^k\equiv 1\pmod{q}$. Since $a$ cannot be divisible by $q$, we have, by Fermat's Theorem, $a^{q-1}\equiv 1\pmod{q}$. Thus if $e=2p$, the number $2p$ must divide $q-1$. Let $q-1=(2p)k$. Then $q=2kp+1$. So the only live possibilities are (ii) and (iv). One yields that $q$ divides $a+1$, and the other yields that $q$ is of the shape $2kp+1$. Both can happen. - Suppose by hypothesis that $a^p+1\equiv0 \;(q)$. Then $(-a)^p-1\equiv0 \;(q)$, so by the original theorem we may conclude that either $(-a)-1\equiv 0 \;(q)$, which is $a+1\equiv0\;(q)$, or $q=2kp+1$, as desired. -
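As a quick empirical check of the statement (not a substitute for the proofs above), one can brute-force small cases: whenever an odd prime $q$ divides $a^p+1$ for an odd prime $p$, either $q \mid a+1$ or $q \equiv 1 \pmod{2p}$. The ranges below are arbitrary.

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

odd_primes = [n for n in range(3, 60, 2) if is_prime(n)]

for p in odd_primes:
    for q in odd_primes:
        for a in range(1, 40):
            if (a**p + 1) % q == 0:
                # either q | a+1 or q = 2kp + 1 for some integer k
                assert (a + 1) % q == 0 or (q - 1) % (2 * p) == 0, (a, p, q)

print("no counterexamples in the tested range")
```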
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 66, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9368992447853088, "perplexity_flag": "head"}
http://nrich.maths.org/6624/solution?nomenu=1
## 'Cyclic Quadrilaterals' printed from http://nrich.maths.org/ This problem invited students to consider the relationship between opposite angles of cyclic quadrilaterals. Often, with these types of problems, it is helpful to draw diagrams; several students submitted diagrams as part of their solution - well done. The problem is divided into two parts: the first part contains questions that form "building blocks" to help meet the final challenge in the second part. Nick, from St Stephen's at Carramar summed up his solution: The sum of the angles at opposite vertices of a cyclic quadrilateral is $180^\circ$. This is the same for all cyclic quadrilaterals, regardless of the positioning of the centre dot. Click here to see his full solution with diagrams. Well done also to the following students, who also submitted similar (and correct!) answers to this problem: Andre, Laura, Sascha, Chris and Sailesh from St. Stephen's School, Marcus and Kye from St Philip's Primary School, and Natasha. Now that you have completed this problem, you could try the following problems as an extension: Subtended Angles and/or Right Angles.
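A small numerical illustration of the result (the circle and the vertex positions below are arbitrary choices): pick four points in order around a circle, compute the interior angles of the quadrilateral they form, and check that opposite angles sum to $180^\circ$.

```python
import math, random

def interior_angle(prev_pt, vertex, next_pt):
    """Interior angle at `vertex` of the convex quadrilateral, in degrees."""
    ax, ay = prev_pt[0] - vertex[0], prev_pt[1] - vertex[1]
    bx, by = next_pt[0] - vertex[0], next_pt[1] - vertex[1]
    c = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

r, cx, cy = 3.0, 1.0, -2.0                                     # any circle
ts = sorted(random.uniform(0, 2 * math.pi) for _ in range(4))  # four points in circular order
P = [(cx + r * math.cos(t), cy + r * math.sin(t)) for t in ts]

A = interior_angle(P[3], P[0], P[1])
B = interior_angle(P[0], P[1], P[2])
C = interior_angle(P[1], P[2], P[3])
D = interior_angle(P[2], P[3], P[0])
print(round(A + C, 6), round(B + D, 6))  # both are 180.0 up to rounding
```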
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9503036737442017, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/55447/maximal-euler-characteristic-of-surfaces-bounding-two-fixed-curves
## Maximal euler characteristic of surfaces bounding two fixed curves Let $\gamma_0$ and $\gamma_1$ be two simple closed curves in a closed surface $S$. What is the maximum Euler characteristic of a compact properly embedded surface $\Sigma \subset S\times [0,1]$ such that $\partial \Sigma = \gamma_0 \times \{0\} \cup \gamma_1 \times \{1\}$? Of course, in order for such a surface $\Sigma$ to exist, the two curves $\gamma_0$ and $\gamma_1$ must represent the same class in $H_1(S,\mathbb Z_2)$. Note that $\Sigma$ may be non-orientable. If $2n$ is the minimum geometric intersection number of $\gamma_0$ and $\gamma_1$, it is easy to construct a $\Sigma$ with $\chi(\Sigma)\geqslant \chi(S) - n$. Is there a converse estimate of this kind? Do we have $\chi(\Sigma) \leqslant -n$ when $S$ is a torus? - what is the "geometric intersection number" and why is its minimum even? – Vivek Shende Feb 14 2011 at 23:48 1 @Vivek : The geometric intersection number of two curves $x$ and $y$ on a surface is the minimum of the set of numbers $\#(x' \cap y')$, where $x'$ and $y'$ range over curves homotopic to $x$ and $y$. In the case at hand, this has to be even because the mod $2$ algebraic intersection number of the two curves in question is $0$, so mod $2$ there have to be the same number of positive intersections as negative intersections. – Andy Putman Feb 15 2011 at 2:21 ## 3 Answers For orientable surfaces of genus at least $2$, pretty sharp bounds are obtained in the unpublished PhD thesis of Ingrid Irmer, which is available here. - Nice reference, thanks! – Bruno Martelli Feb 15 2011 at 8:24 The thesis deals with oriented curves and surfaces, but probably the same techniques apply in the non-oriented case. – Bruno Martelli Feb 15 2011 at 14:00 Let $a$, $b$ and $x$ be curves on $S$ such that • $a$ and $x$ intersect once, • $b$ and $x$ intersect once, and • $a$ and $b$ have a large algebraic intersection number. (I assume that $S$ is orientable.) Then there is a surface $\Sigma_a$ connecting $x$ and $x+2a$ with $\chi(\Sigma_a) = -1$ and a similar surface $\Sigma_b$ connecting $x$ and $x+2b$. Combining $\Sigma_a$ and $\Sigma_b$ we get a surface of Euler characteristic $-2$ connecting $x+2a$ and $x+2b$. Thus the geometric intersection number of two curves does not give a lower bound on the complexity of a surface joining them. (In your original question you ask if $\chi(\Sigma) \le n$, but $n$ is positive and $\chi(\Sigma)$ is negative, so I assume you meant $|\chi(\Sigma)|$.) - Thank you (I have corrected the sign) – Bruno Martelli Feb 15 2011 at 8:24 Here is a graph $\mathcal{G}(S,x)$ associated to a surface and homology class (similar to the 1-skeleton of the curve complex): For a fixed class $x\in H_1(S;\mathbb{Z}_2)$, consider isotopy classes of embedded multicurves representing the homology class $x$ (one may assume no components are parallel and there are no trivial components). Make this collection the vertices of the graph $\mathcal{G}(S,x)$.
Connect two vertices $A, B$ to be adjacent in the graph $\mathcal{G}(S,x)$ if $A\cup B$ are disjointly embedded, and after removing all parallel curves of $A\cup B$, the remaining components bound a pair of pants or a twice-punctured projective plane (this second can only happen if $S$ is non-orientable). I'm not sure if such a complex has been defined before, but there is a somewhat analogous complex defined in the integral homology case by Bestvina, Bux, and Margalit. Also, this is related to a technique of Hatcher-Thurston to undertand surfaces in two-bridge knot complements. I claim that the maximal Euler characteristic of a surface bounding $\gamma_0\times 0 \cup \gamma_1\times 1$ is the negative of the distance between $\gamma_0$ and $\gamma_1$ in $\mathcal{G}(S,x)$. Put the product metric on $S\times [0,1]$, and make $\Sigma$ into a minimal surface with respect to this metric (Theorem 6.12 of Hass-Scott). Then $S\times t, t\in [0,1]$ gives a foliation by totally geodesic surfaces, and by the maximum principle, they can be tangent to $\Sigma$ in only saddle tangencies (see an argument of Hass, we will assume things are perturbed to be generaic). Thus, for all but finitely many $t$, $S\times t$ meets $\Sigma$ in a finite collection of curves, giving rise to a vertex of $\mathcal{G}(S,x)$. As one passes through a tangency point $S\times t_0$ (assuming things are generic), the intersection changes by a saddle move, giving a surface in $S$ of Euler characteristic $-1$ bounding the curves before and after the tangency. There can never be a closed trivial curve occurring, because this would give rise to a center tangency. Thus, each saddle tangency gives an edge between the adjacent vertices of $\mathcal{G}(S,x)$, and therefore $\Sigma$ gives a path between $\gamma_0$ and $\gamma_1$ in $\mathcal{G}(S,x)$. Conversely, any such path gives rise to a surface. Of course, there will be many geodesics connecting $\gamma_0$ and $\gamma_1$ in $\mathcal{G}(S,x)$, given by any Morse function on $(\Sigma, \gamma_0,\gamma_1)$ with only index 1 critical points, and I don't expect the distance function to be easy to compute (probably one should use normal surface theory to compute it). - In your definition of the graph is it equivalent (and perhaps easier) to say that $A$ and $B$ and joined by an edge iff they are related by a saddle move? – Kevin Walker May 18 2011 at 21:59 yes, that's equivalent - in fact, as one varies the parameter t, one will see a movie isotoping multicurves with finitely many saddle moves. – Agol May 18 2011 at 22:07 Thanks Ian. Does $\Sigma$ need to be $\partial$-incompressible to guarantee that one can find a minimal representative, or is incompressibility enough? In fact my question was motivated precisely by normal surface theory: in some cases normal surfaces of highest $\chi$ are "seen" by quantum invariants by using the techniques described in a (nice) paper of Frohman - Bartoszynska arxiv.org/abs/math/0310273 and I was wondering whether this minimal distance between curves could be computed by using some Turaev-Viro invariant. – Bruno Martelli May 19 2011 at 14:35 This is Theorem 6.12 of Hass-Scott: ams.org/journals/tran/1988-310-01/… – Agol May 19 2011 at 15:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 93, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9305217862129211, "perplexity_flag": "head"}
http://physics.aps.org/articles/print/v5/132
# Viewpoint: New Temperature Probe for Quark-Gluon Plasma Lawrence Livermore National Laboratory, Livermore, CA 94551, USA and Physics Department, University of California at Davis, Davis, CA 95616, USA Published November 26, 2012  |  Physics 5, 132 (2012)  |  DOI: 10.1103/Physics.5.132 The population of Upsilon mesons in quark-gluon plasma can be used to measure the plasma’s temperature. At the Large Hadron Collider (LHC), lead ions are collided at several tera-electron-volts to produce a soup of quarks and gluons called the quark-gluon plasma. Measuring the temperature of the plasma, which only survives for $\sim 10^{-23}$ seconds and is hotter than the Sun’s interior, would be impossible with any normal thermometer. Instead, researchers gauge the plasma’s temperature by measuring its effects on other particles created in the collision. Interpreting these “particle thermometers,” however, has not been entirely straightforward. Now, the collaboration running the CMS experiment at the LHC reports in Physical Review Letters that the detected yields of Upsilon mesons (bound states of a bottom quark and its antiparticle) in the plasma provide a comparatively clean measure of the plasma’s temperature. The CMS researchers have made the first statistically significant detection of the suppression of the first two excited states of the mesons, a suppression that becomes more pronounced at higher plasma temperatures [1]. Upsilon mesons (denoted $Υ$) are a kind of quarkonium, a short-lived bound state between a heavy quark and its antiparticle that can be created in high-energy particle collisions. Structurally, quarkonium is similar to the bound state that forms between an electron and a positron (positronium), but in addition to a Coulomb-like potential between the two quarks, $α_c/r$, where $α_c$ is a coupling based on one-gluon exchange, there is also a potential, $σ r$, that is similar to the elastic tension in a rigid string. In the presence of the “deconfined” quarks and gluons that make up the quark-gluon plasma, both of these terms become weaker: the surrounding plasma effectively screens the force between the quarks in the bound state. In the mid-80s, the theorists Tetsuo Matsui and Helmut Satz predicted that, as a result of this screening effect, the number of quarkonium states detected in a heavy-ion collision would be a lot lower, or “suppressed,” if the collision produced a quark-gluon plasma [2]. Suppression would not imply the absence of produced quarkonium, but rather that the observed yields were depleted relative to the expected yield, either because the quark-antiquark pair failed to form a quarkonium state, or the state itself was destroyed through its subsequent interactions. So far, researchers have studied two quarkonium “families” in heavy-ion collisions: charmonium, bound states of a charm and anticharm quark, and bottomonium, bound states of a bottom and antibottom quark. (Bottomonium includes the Upsilon mesons.) Since the charm quark is lighter than the bottom quark, and more copiously produced in heavy-ion collisions, early experiments at CERN’s Super Proton Synchrotron focused on charmonium, and more specifically the suppression of its lowest-lying state, known as $J/ψ$ [3]. But the $J/ψ$ is an imperfect temperature probe: For one, the lepton pairs produced in $J/ψ$ decays (which are what experiments actually measure) have insufficient momentum to make it past the high magnetic fields of the CMS particle spectrometer and into the detector.
Second, other, non-quark-gluon plasma effects can cause the suppression of $J/ψ$. In contrast, CMS can more cleanly detect the lepton pairs produced by Upsilon decays. Partly, this is because each lepton carries half the Upsilon meson’s mass, giving it enough momentum to shoot through the magnetic fields and into the detector. But another important factor is the CMS detector’s excellent dilepton mass resolution. Prior to the LHC, the separation of the three mass peaks associated with the lowest-energy Upsilon state ($Υ(1S)$) and its first two excited states ($Υ(2S)$ and $Υ(3S)$) had only been observed in proton-antiproton collisions at the Tevatron. The CMS experiment is the first to cleanly separate these peaks in heavy ion collisions – a unique accomplishment. The clear separation of the peaks that CMS observes makes it much easier to interpret Upsilon suppression and how it relates to the plasma’s temperature [4]. The main sensitivity to temperature comes from comparing the suppressions of the different Upsilon states: The more highly excited states are progressively less tightly bound and have a larger effective radius than the ground state, and are therefore more sensitive to the plasma’s temperature. For each state, the screening length – a measure of the distance beyond which a quark’s color charge is screened – decreases with increasing temperature and can be calculated with lattice quantum chromodynamics [5]. When the temperature is high enough that the screening length is the same as the radius of a given Upsilon state, the state will no longer remain bound in the medium, and its final-state yield will be reduced. Thus Upsilon states with larger radii and smaller binding energies will break up first, while those with smaller radii and larger binding energies require higher temperatures to be suppressed (Fig. 1). The suppression of the $Υ(3S)$ state first, then $Υ(2S)$, and then $Υ(1S)$ is called sequential suppression [6], and the LHC is the first machine at which it is possible to see it for bottomonium states. Experimentally, CMS determines the amount of suppression of the individual states in the lead-lead collisions by comparing them to yields from proton-proton collisions of the same energy. The suppression levels also depend on the centrality of the collisions: head-on collisions involve most of the nucleons in both nuclei and make a hotter quark-guon plasma, while more peripheral collisions involve fewer nucleons and yield a smaller, cooler plasma. All of this is contained in a number called the suppression factor, $RAA$, which is the ratio of a particular $Υ$ state in the lead-lead collision relative to that in proton-proton collisions, normalized to account for the centrality of the collision. In terms of actual numbers, CMS finds that $Υ(3S)$, with its low binding energy, is completely suppressed by the medium ($RAA∼0$). The $Υ(2S)$ is also highly suppressed: $RAA(Υ(2S))$ is less than $0.3$ in peripheral collisions and less than $0.1$ in more central collisions. CMS sees the suppression of $Υ(1S)$, too, but this is not because the quark-gluon plasma suppresses $Υ(1S)$ production (the temperature is not high enough to see a direct suppression of the most tightly bound $Υ(1S)$ state), but because the excited states are suppressed and don’t feed down. The $Υ(2S)$ and $Υ(3S)$ are suppressed relative to the $Υ(1S)$ with a greater than five standard deviation significance, the gold standard for discovery physics. Some work remains to be done, however. 
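To make the definition of the suppression factor concrete, here is a minimal sketch. The numbers are entirely hypothetical, and the normalization shown (dividing by the average number of binary nucleon-nucleon collisions for the centrality class) is one common convention; the Letter's exact normalization is not reproduced here.

```python
def r_aa(n_pbpb, n_pp, n_coll):
    """Suppression factor: PbPb yield over the N_coll-scaled pp yield (one common convention)."""
    return n_pbpb / (n_coll * n_pp)

# hypothetical Upsilon(2S) yields in a central event class
print(r_aa(n_pbpb=160.0, n_pp=2.0, n_coll=1000.0))  # 0.08, i.e. strongly suppressed
```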
There are non-quark-gluon plasma, or “cold matter” effects that may be affecting the apparent sequential suppression. These effects can be studied in the forthcoming proton-lead collision run at the LHC. Since the proton-proton baseline is the limiting factor in CMS’s current statistical analysis, another proton-proton run at $2.76$ tera-electron-volts (the same energy as the heavy-ion collisions) should further reduce the uncertainties. Finally, future lead-lead collision data with better statistics should reveal the $Υ(nS)/Υ(1S)$ ratios as a function of transverse momentum [7]. Ultimately, these refinements would allow researchers to use Upsilon suppression as a way to more fully characterize the quark-gluon plasma. ## Acknowledgment This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. ### References 1. S. Chatrchyan et al. (CMS Collaboration), “Observation of Sequential Υ Suppression in PbPb Collisions,” Phys. Rev. Lett. 109, 222301 (2012). 2. T. Matsui and H. Satz, “J/ψ Suppression by Quark-Gluon Plasma Formation,” Phys. Lett. B 178, 416 (1986). 3. L. Kluberg, “20 years of J/ψ suppression at the CERN SPS, Results from Experiments NA38, NA51 and NA50,” Eur. Phys. J. C 43, 145 (2005); L. Kluberg and H. Satz, “Color Deconfinement and Charmonium Production in Nuclear Collisions,” arXiv:0901.3831 (hep-ph). 4. A. D. Frawley, T. Ullrich, and R. Vogt, “Heavy Flavor in Heavy-Ion Collisions at RHIC and RHIC II,” Phys. Rep. 462, 125 (2008). 5. H. Satz, “Colour Deconfinement and Quarkonium Binding,” J. Phys. G 32, R25 (2006). 6. S. Digal, P. Petreczky, and H. Satz, “Quarkonium Feed-Down and Sequential Suppression,” Phys. Rev. D 64, 094015 (2001). 7. J. F. Gunion and R. Vogt, “Determining the Existence and Nature of the Quark-Gluon Plasma by Upsilon Suppression at the LHC,” Nucl. Phys. B 492, 301 (1997). ### Highlighted article #### Observation of Sequential Υ Suppression in PbPb Collisions S. Chatrchyan et al. (CMS Collaboration) Published November 26, 2012
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 38, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8947197198867798, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/25026/list
## Return to Answer 2 added 269 characters in body Doing all exercises in Atiyah-MacDonald, like BCnrd suggested, is surely the ideal way to learn about this and much more. Let me offer a couple of practical tips to get you started: A surprisingly effective example to keep in mind when you deal with any question about submodules of a module $M$ is to take $M=R$. Then the submodules of $R$ are just the ideals of $R$, which are concrete enough to check your intuition, but still possess a very rich structure so that not much is lost. Also, since many properties of modules fail to pass to submodules in higher dimension, it usually suffices to consider some small example, say $R= k[x,y]$. As an example, let says you are trying to understand the following question: Over what Noetherian ring $R$ is a submodule of any free module free? (this is of course true for vector spaces). If you take $M=R$, it follows that all ideals $I$ have to be free. If $R=k[x]$, this is true, and already an interesting exercise, but if $R=k[x,y]$, just take $I=(x,y)$. $I$ is not free because the generators have a non-zero relation: $xy-yx=0$. This example also suggests that all ideals in $R$ have to be principal, otherwise similar counter-examples can be found. So you naturally gets to principal ideal rings. If you want to play with it a bit more, since $R/I$ fits into an exact sequence: $$0 \to I \to R \to R/I \to 0$$ This says that $R/I$ has projective dimension at most $1$ for any ideal $I$. This leads you to some serious restriction on $R$, which will point you to the right condition, from a different perspective. You can replace "free" by "locally free" and play the same game, it will naturally leads you to all sort of interesting things worth learning about commutative rings, for examples projective modules or Quillen-Suslin theorem, etc. (There are, of course, other ways to approach this particular question, my point is by considering $M=R$ you can already get quite far). I hope you will have some fun! 1 Doing all exercises in Atiyah-MacDonald, like BCnrd suggested, is surely the ideal way to learn about this and much more. Let me offer a couple of practical tips to get you started: A surprisingly effective example to keep in mind when you deal with any question about submodules of a module $M$ is to take $M=R$. Then the submodules of $R$ are just the ideals of $R$, which are concrete enough to check your intuition, but still have a very rich structure so that not much is lost. Also, since many properties of modules fail to pass to submodules in higher dimension, it usually suffices to consider some small example, say $R= k[x,y]$. As an example, let says you are trying to understand the following question: Over what Noetherian $R$ is a submodule of any free module free? (this is of course true for v.spaces). If you take $M=R$, it follows that all ideals $I$ have to free. If $R=k[x]$, this is true, and already an interesting exercise, but if $R=k[x,y]$, just take $I=(x,y)$. $I$ is not free because the generators have a non-zero relation: $xy-yx=0$. If you want to play with it a bit more, since $R/I$ fits into an exact sequence: $$0 \to I \to R \to R/I \to 0$$ This says that $R/I$ has projective dimension at most $1$ for any ideal $I$. This leads you to some serious restriction on $R$, which will point you to the right condition.
You can replace "free" by "locally free" and play the same game, it will naturally leads you to all sort of interesting things worth learning about commutative rings, for examples projective modules or Quillen-Suslin theorem, etc. (There are, of course, other ways to approach this particular question, my point is by considering $M=R$ you can already get very far). I hope you will have some fun!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 39, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9467967748641968, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/186435/how-to-precisely-distinguish-vectors-and-points/186466
# How to precisely distinguish vectors and points? [duplicate] Possible Duplicate: Distinction between vectors and points I have a question about the distinction between points and vectors. I know there's already a topic about that here on the site, but I thought the correct thing to do was to create a new one. Well, the question is: in Euclidean space we identify both points and vectors with elements of $\mathbb{R}^n$, but I know they're different things. And I know that when dealing with general manifolds the situation gets worse and one needs to define precisely the notion of a tangent space at each point of the manifold. So my question is: how is it possible to define precisely the distinction between points and vectors, first in Euclidean space and then in general manifolds? I've seen a book on differential geometry where the author introduces the operation of addition of points and multiplication of a point by a scalar, but I thought these operations were geometrically meaningless. I've heard about the notion of an affine space; is that the correct way to make a rigorous distinction between vectors and points? Thanks in advance for the help. ## marked as duplicate by Rahul Narain, sdcvvc, Michael Greinecker, William, J. M. Aug 31 '12 at 10:03 ## 5 Answers I don't know that point/vector is the best way to make this distinction. Think about position and velocity. We can think of both of these things as vectors. Position only has meaning when it is measured relative to something; we can call position the displacement from that location (usually the origin). If we measure relative to ourselves, the position changes when we move. However, the velocity of an object is the same no matter where we are relative to it. We would say that velocity is translation invariant. (Please do not bring up physics) When we say "point" we usually mean something like position. When we say "vector" we usually mean something that isn't dependent on a distinguished point like the origin. Or at least, we make clear what the distinguished point is. - At a very high level, mathematicians tend not to bother distinguishing between things when they're basically the same. It's quite hard to define precisely what I mean by 'basically the same', but it's a bit like having two isomorphic groups. Most mathematicians would - even if they didn't say so explicitly - have in the back of their mind the notion that the group of symmetries of a triangle is the same as the group of permutations of three elements. All you're doing is giving a different label to the elements of the group and possibly rearranging them. Similarly, it's easy to show that equivalence classes on a set X form a partition of X. But we can go in the other direction, and take any partition of X, and call two elements of X equivalent if they are in the same part of the partition. So it makes sense not to distinguish between equivalence relations on X and partitions of X - there's a clear bijection between them, and it's useful to use tools from one area to help with the other. You mention that elements of $\mathbb{R}^n$ can be thought of either as vectors or as ordered $n$-tuples (or points).
Since these are clearly basically the same thing, there is no point differentiating between them - if we treat them at the same object then we can use all the linear-algebra properties of vectors and the finite-sequence properties of ordered $n$-tuples at the same time. There's nothing to be gained from treating them as distinct objects. - I'm guessing you can tell a point and an arrow apart if I draw tham on a paper. If that really is the case, then you understand the geometric difference between the two. My guess is that you notice we use ordered tuples to write both, and you're confused because they look the same. Well they do look exactly the same sometimes! In fact, you can use points to represent vectors! If you plot a point on an x-y chart, and draw an arrow from the origin to the point, you call that the position vector of the point. In this case, the components for the vector and the coordinates for the point are identical, and its easy to confuse. But the point itself is really just that dimensionless point, and the vector itself is the 1 dimensional oriented line segment (=arrow) from the point to the origin. The bottom line is that you'll have to desensitize yourself to this, because things that look alike but aren't the same abound in mathematics. You can tell ordered pairs and open intervals in the real line apart, right? Just hang on to the geometric interpretation, and you should be OK. A point has no dimensions, but we keep track of its location with those coordinates. A vector is extant in a certain direction, with a certain length. The way we represent vectors is to slide their sources to the origin, and then write down what point their arrowhead lands on. The numbers we record vectors with record their length and direction, but not location. For vectors in ordinary Euclidean space, location is not important: just direction and length. So you see, even though the tuple of numbers looks the same, it's actually keeping track of different types of information. - rschwieb, thanks for your answer. I understand the affine structure, where we can add point to vector and subtract points. But for example, if I have a function $F : U \subset \mathbb{R}^n \to \mathbb{R}^m$, should I treat the objects of the image as points or vectors? Because if I treat as points I won't be able to add two of these functions for example, because addition of points is not defined. Thanks once more, and sorry if I said something silly. – Leonardo Aug 24 '12 at 19:40 @Leonardo No no, I'm sure this question and sentiment is very common. Actually I won't be able to help you as much with affine space: I don't have a good feel for it. My feeling is that affine space is another layer of intuition to develop, separate from points and vectors (or from your intuition of vectors). I only know affine space is like "a vector space where you can't remember where you put the origin". – rschwieb Aug 24 '12 at 19:44 The words "point" and "vector" are quite mathematically overloaded. If your mean "point" and "vector" to be terms with geometric content, then "vector" most likely means to you "an element of a finite dimensional real vector space." Although one can attach geometric significance to elements of infinite dimensional spaces and elements of vector spaces defined over other fields. But let's stick with the finite dimensional real case. "Point" is a bit easier. It almost certainly means to you "an element of a manifold" (smooth or otherwise). In this context, every vector is a point. 
But not all points are vectors. A vector is a point plus more structure. Specifically, to each point in a smooth manifold, $p \in M$, one can attach a tangent space, $T_pM$. Then putting these tangent spaces together (say disjoint union) one gets $TM = \cup_{p\in M} T_pM$ (the tangent bundle). It turns out that the tangent bundle itself has the structure of a smooth manifold. Thus the elements of $TM$ are simultaneously points (of the tangent bundle manifold) and vectors (each element lives in a tangent (vector) space $T_pM$ for some $p\in M$). So the "points" in $TM$ have more structure than the points of $M$. If we choose an element $p \in M$, it may or may not make sense to perform "scalar multiplication". However, this always makes sense for the points in $TM$ (since each "point" belongs to some vector space). Next, given two points in $M$ it may or may not make sense to "add" them. This is also generally true in $TM$. However, if we carefully select $v,w \in TM$ so that $v \in T_pM$ and $w \in T_qM$ where $p=q$, then $v+w$ is defined (since both $v$ and $w$ belong to the same vector space). Now when we start dealing with $\mathbb{R}^n$ everything collapses down. If you let ${\bf x} \in \mathbb{R}^n$. Then $T_{\bf x}\mathbb{R}^n$ is canonically isomorphic to $\mathbb{R}^n$ itself. Since everything is "flat" one can "parallel transport" all of the vectors tangent at ${\bf x}$ back to the origin. So $T_{\bf x}\mathbb{R}^n$ is essentially the same as $T_{\bf 0}\mathbb{R}^n$ (the tangent space at the origin). This in turn can be identified with $\mathbb{R}^n$ (identifying terminal points of vectors based at the origin with the terminal point itself). So in the end we essentially have $T_{\bf x}\mathbb{R}^n=T_{\bf 0}\mathbb{R}^n=\mathbb{R}^n$. So for most purposes we can totally ignore the distinction between "point" and "vector" when working with $\mathbb{R}^n$. - Bill Cook, thanks for you answer. It was this kind of doubt I had. There's still one thing I'd like to ask: if I have a map $F : M \to N$ where $M$ and $N$ are manifolds, as I see elements of manifolds as points then this map receives a point and produces a new point. But what about $F : \mathbb{R}^n \to \mathbb{R}^m$ ? Should I treat the elements of the image as points or vectors? Because once treated as vectors I can add and multiply by scalars while if I treat as points those operations are geometrically meaningless if I understood well. Thanks again for your support. – Leonardo Aug 25 '12 at 16:38 @Leonardo there's really no way to answer your question except, "It depends." Most of the time the difference between treating elements as either points or vectors is just a matter of taste. For your map $F$, there are cases where it makes most sense to think of it as a map from points to vectors and other times from vectors to points or points to points or vectors to vectors...it depends on the map! – Bill Cook Aug 27 '12 at 0:55 Yes, the affine space is the correct concept to look at for the distinction of points and vectors. However, there's a simple way you can get an uniform definition of vectors and points: Define your space as $S = \mathbb R^n\times\{0,1\}\subset\mathbb R^{n+1}$. Now you define the ordinary vector space operations on $R^{n+1}$, but with a restriction: The operations are only defined ibn $S$, and only if the result again ends up in the set $S$. You'll find that for the elements $v\in S$ with $v_{n+1}=0$ you basically face no restrictions; indeed, they are just forming the vector space $\mathbb R^n$. 
However, if $v_{n+1}=1$, there are severe restrictions: For example, you cannot multiply them with an arbitrary number $\lambda$, because $\lambda v_{n+1}=\lambda\notin\{0,1\}$. You also cannot add them (because then $v_{n+1}+w_{n+1}=2$). However, you can add to it a vector with $v_{n+1}=0$. You can also do affine combinations of them, that is, for two vectors $v,w$ you can create $\lambda v+(1-\lambda)w$ (because, again, the last component becomes $1$), and similarly for more than two vectors (note that this is now an "atomic" operation because the intermediate steps are not defined; however, you can rewrite it so that all intermediate steps are defined, too, as $v + \lambda(w-v)$). You also can subtract two of those vectors with last component $1$, which gives you a vector with last component $0$. Now you probably can already guess what those vectors with $v_{n+1}=1$ represent: They represent the points in $\mathbb R^n$! You can add an $\mathbb R^n$-vector (i.e. an $S$-vector with last component $0$) to a point, and get another point. You can subtract two points, and get a vector. And the affine combinations of two points form the straight line through those points (and similarly, if you have three points not on a straight line, you get a plane from their affine combinations, and so on). Also note that there's a $1:1$ relation between points and vectors: For each vector $(v,0)$ there's a point $(v,1)$ which you get by adding the vector $(v,0)$ to the origin $(0,1)$. OK, now it looks as if the origin was somehow special: After all, we've got a unique association starting from the (special) null vector. However this is not really the case: We could do the same construction using any other point $(o,1)$ as origin; then we get the $1:1$ association $(v,0)\leftrightarrow(v+o,1)$. Note that all the above constructions are completely unaffected by this choice, because all the vector space operations which leave the last component fixed are actually operations which leave the constant vector fixed; also, where the last component cancels out, so does the fixed vector. Note that by changing the basis, using $(o,1)$ instead of $(0,1)$ as the $n+1$st basis vector, you even recover the original form $(0,1)$ for the new origin expressed in the new basis. -
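The construction in the answer above is easy to play with numerically. Here is a minimal sketch of it (my own, assuming numpy; the helper names `point` and `vector` are made up for illustration): points are encoded with last component 1 and vectors with last component 0.

```python
import numpy as np

def point(*coords):
    # a point of R^n: last component 1
    return np.array([*coords, 1.0])

def vector(*coords):
    # a displacement vector of R^n: last component 0
    return np.array([*coords, 0.0])

p, q = point(2.0, 3.0), point(5.0, 1.0)
v = vector(1.0, -1.0)

print(p + v)                    # point + vector -> point   [3.  2.  1.]
print(q - p)                    # point - point  -> vector  [ 3. -2.  0.]
lam = 0.25
print(lam * p + (1 - lam) * q)  # affine combination of points -> point (last entry 1)
print(p + q)                    # last entry 2: leaves S, so neither a point nor a vector
```

This is essentially the same bookkeeping that homogeneous coordinates perform in computer graphics, where the last component distinguishes positions from directions.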
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 74, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9547950625419617, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/27315/applications-of-the-feynman-vernon-influence-functional
# Applications of the Feynman-Vernon Influence Functional I am looking for a reference where the Feynman-Vernon influence functional was defined and used in the context of relativistic quantum field theory. This functional is one method to describe non-equilibrium dynamics for open systems (e.g. coupled to noise) which seems (naively, as an outsider) to be particularly well-suited for field theories where path integral methods are more intuitive. As a consolation prize, I'd be also interested in applications to other areas of physics (such as dissipative quantum systems), or for more effective or popular methods to describe non-equilibrium dynamics (of open or closed systems) in the context of relativistic quantum field theory (preferably in the path integral language). - One of the standard framworks to define and develop non-equilibirum QFT is to use Keldysh Green functions defined on a countour going from $t=-\infty$ to now and back to $t=+\infty$. Is your search related to that? – Slaviks Sep 21 '11 at 18:38 I am familiar with the close path contours in the context of real time correlators in thermal equilibrium. I am more interested in systems far from equilibrium and open systems, and how you deal with them using QFT. This is kinda vague, I know... – user566 Sep 21 '11 at 18:48 The point of Keldysh is to be able to use arbitrary density matrix as the asysmptotic initial condition, not neccessarily equilibrium. The whole thing was designed to define non-equilibrium rigorously. That's one of the directions to check out for answers. – Slaviks Sep 21 '11 at 18:54 Thanks @Slaviks. – user566 Sep 21 '11 at 18:56 ## 2 Answers The book Quantum dissipative systems by Weiss dedicates a subsection to the Feynman Vernon method, see also the original reference. See also this article and chapter 18.8 of the book by Kleinert. It's applied to the Caldeira-Leggett model, which is a toy model for a particle in contact with a heat bath. There are a number of mesoscopic systems out there in which a Feynman-Vernon functional of similar type pops up. I don't have any references, but tunneling junctions in fractional quantum Hall edges, impurities in Luttinger liquids and SQUID devices form three examples. I'm sure the book by Weiss has some references as well. The Keldysh-Schwinger or real-time formalism is required to treat systems out of equilibrium. For a list of references see this thread here. But this formalism by itself is not enough. You need to make some assumptions regarding the degrees of freedom of the heat bath, the coupling between the subsystem and the external heat bath and also the initial (untangled or not) state of the system as a whole. The idea as follows: you model the system under consideration in contact with a heat bath. In the Caldeira-Leggett model the heat bath is a macroscopic number of harmonic oscillators, each of which is in contact with the degrees of freedom of the system under consideration. The Feynman-Vernon functional is obtained by integrating out the degrees of freedom associated with the heat bath, all by using a path integral formalism. We can think of this functional as describing the time evolution of the reduced density matrix. - One of the avenues to search for an answer is the so-called Keldysh formalism which is used extensively in condensed matter, in particular in mescopic physics, to define and study steady-state and time-dependent quantum phenomena in systems with infinitely many degrees of freedom. 
A recent comprehensive review is given by Kamenev and Levchenko, arXiv:0901.3586. The general idea is as follows: time evolution is defined an a real-time contour going from $t=-\infty$ to $t=+\infty$ and then back, to avoid reference to an unknown final state. The two-time Green functions $G^{ab}(t',t)$ acquire indices $a,b=\pm$ denoting the forward- ($+$) or backward- ($-$) propagating branches of the contour. This gives extra matrix structure to correlators, but many QFT techniques can be adopted to handle this generalization. I'm not aware of relativistic applications but almost sure it has been done somewhere. -
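As a toy illustration of the phrase "integrating out the degrees of freedom associated with the heat bath": for purely Gaussian (quadratic) models, eliminating the bath variables is just a Schur complement of the coupled quadratic form, which is the finite-dimensional analogue of how an effective kernel for the system variable alone arises. The sketch below is my own construction (a static Gaussian model, not the actual Feynman-Vernon functional), assuming numpy.

```python
import numpy as np

rng = np.random.default_rng(0)
N  = 6                              # number of (static, toy) bath modes
k0 = 2.0                            # bare system stiffness
kb = rng.uniform(1.0, 3.0, N)       # bath stiffnesses
c  = rng.uniform(-0.5, 0.5, N)      # system-bath couplings

# Full quadratic form E = (1/2) z^T A z with z = (q, x_1, ..., x_N)
A = np.zeros((N + 1, N + 1))
A[0, 0]  = k0
A[0, 1:] = c
A[1:, 0] = c
A[1:, 1:] = np.diag(kb)

# "Integrate out" the bath: the effective stiffness of q is the Schur
# complement of the bath block, k_eff = k0 - sum_i c_i^2 / kb_i.
k_eff = k0 - c @ (c / kb)

# Cross-check: in the Gaussian weight exp(-E), the marginal precision of q
# is exactly this Schur complement, i.e. Var(q) = 1/k_eff = (A^{-1})[0,0].
print(k_eff, 1.0 / np.linalg.inv(A)[0, 0])
```

In the Caldeira-Leggett setting the analogous elimination is carried out mode by mode along the real-time (Keldysh) contour, which is what produces the time-nonlocal kernels of the influence functional.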
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9219823479652405, "perplexity_flag": "head"}
http://www.physicsforums.com/showpost.php?p=4173713&postcount=1
## How can we measure entropy using experiments?

A friend asked me this. Starting from the equation $\int\frac{dQ}{T}$, it is technically feasible to express the entropy change in terms of measurable physical quantities like temperature and specific heat, so a precise value for the entropy change can be worked out. But is there a more economical way? I think the Clausius entropy is too phenomenological to be observed directly in experiments, and the Boltzmann definition is not suitable for experiments either. The above concerns the entropy change; my friend also asks how to determine the entropy of a system, for example a tank of CO2. If a perfect crystal has zero entropy, does that mean that in order to calculate the entropy we have to construct possible quasi-static processes from a perfect crystal to the present compound and work out the entropy change, which seems very uneconomical?
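In practice this is exactly what calorimetry does: measure the heat capacity over a temperature range and integrate $C_p/T$ numerically, adding $L/T_t$ terms at phase transitions. A minimal sketch (mine, with invented placeholder numbers rather than real CO2 data, assuming numpy):

```python
import numpy as np

# Entropy change from calorimetric data: Delta S = int C_p(T)/T dT
# (constant pressure, no phase change inside the interval).
# The heat-capacity "data" below are invented placeholders, not real CO2 values.
T  = np.linspace(250.0, 350.0, 101)          # K
Cp = 30.0 + 0.02 * (T - 250.0)               # J/(mol K), made-up smooth fit

integrand = Cp / T
dS = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(T))   # trapezoid rule

print(f"Delta S ({T[0]:.0f} K -> {T[-1]:.0f} K): {dS:.2f} J/(mol K)")
# A first-order phase transition at T_t would add a latent-heat term L/T_t.
```

Absolute (third-law) entropies in thermodynamic tables are built up the same way, starting from a temperature low enough that a low-temperature extrapolation of the heat capacity can be used.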
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9121866226196289, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/115866?sort=oldest
Homotopy $\pi_4(SU(2))=Z_2$ Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) I am a physics student, recently I read a paper using Homotopy $\pi_4(SU(2))=Z_2$, I guess mathematicians have some visualization or explanation of this result. So I come here ask for help. CROSS-POST from http://physics.stackexchange.com/questions/46284/homotopy-pi-4su2-z-2 - <google.com/search?q=Π4(S3)> – Francois Ziegler Dec 9 at 1:58 3 $SU(2)$ is homeomorphic to $S^3$ so you are asking why is $\pi_4(S^3) = \mathbb{Z}/2$? Pontryagin came up with a method to do such computations using bordisms of framed submanifolds; see thm 21 on page 99 of math.rochester.edu/people/faculty/doug/…. – solbap Dec 9 at 2:28 Actually use $\pi_4$, not $\Pi_4$, which is a different construction. Also, $\pi_4(S^2) = \pi_4(S^3)$. – David Roberts Dec 9 at 2:28 4 This question is somewhat terse -- it's not clear whether you want to know a generating class (as in the Hopf map comment), or why this is the only class (a nontrivial calculation), or how to detect whether a map is nullhomotopic (along the lines checking the framing of a preimage of a point, as in solbap's comment). Some clarification of what you'd really like to know would help us understand how to help. – Tyler Lawson Dec 9 at 3:45 1 @Yingfei please add link to original question when cross-posting in order people do not repeat already done work - this is usual practice. – Alexander Chervov Dec 9 at 5:21 show 6 more comments 2 Answers This calculation of $\pi_4(S^3)$ is also obtained in the paper R. Brown and J.-L. Loday, Topology, 26 (1987) 311-334, and also available here. In that paper, $S^3$ is regarded as the double suspension $SS$ of the circle $S^1$, which is itself seen as an Eilenberg-Mac Lane space $K(\mathbb Z,1)$. We obtain in Proposition 4.10 a determination of $\pi_4$ of the double suspension $SS$ of a $K(G,1)$ for any group $G$ as the kernel of a morphism $G \tilde{\wedge} G \to G$ defined by the commutator map, where the group $G \tilde{\wedge} G$ is the quotient of the free group on the set $G \times G$ by a set of relations satisfied by commutators. Hence the result for $\pi_4 SK(G,1)$ is easy to calculate if $G$ is abelian: in fact in that case it is $G \otimes G$, the tensor product of abelian greoups, factored by the relations $g \otimes h + h \otimes g$ for all $g,h \in G$. Part of the intuition behind this is that the suspension $SX$ of a space $X$ is regarded as the union $C^+X \cup C^- X$ of two cones with intersection $X$, and this union is one to which our van Kampen type theorem for squares of spaces can apply. In fact we are dealing with the triad $(SX; C^+X, C^-X)$, which is the union of two triads $(C^+X;C^+X,X)$, $(C^- X; C^-X,X)$, and the union of these two "trivial" triads creates something in dimension $3$. If $\pi_2 X=0$, this gives a complete determination of $\pi_3 SX$, and further work gives a result on the double suspension in the given case. So the intuition is that in homotopy theory, identifications in low dimensions have high dimensional homotopy implications, and to cope with this for gluing purposes we need algebraic structures with structures in a range of dimensions. The hope is that someday such structures will have applications in physics! - I fixed the link to your paper. – Akhil Mathew Dec 10 at 2:40 @Akhil: thanks! – Ronnie Brown Dec 10 at 9:41 You can accept an answer to one of your own questions by clicking the check mark next to it. 
This awards 15 reputation points to the person who answered and 2 reputation points to you. This is really just a handwavy but perhaps more "visual" description of Pontryagin's result as cited by solbap in the comments above. Though I've written a huge block of text, there are some reasonably concrete three-dimensional pictures that you can build up in your head in this case, but it does take quite a bit of practice. First, I assume that you are familiar with Pontryagin's construction relating the homotopy classes of maps to the k-sphere with framed (co-)bordism classes of codimension k submanifolds. Check out Milnor's book Topology from the Differentiable Viewpoint if you're not familiar with this. Because your user profile says that you are interested in condensed matter physics, I'll add that this idea is used in the case of $k=2$ to draw some nice pictures of "homotopies around defects" in this paper of Teo and Kane. Warmup, $\pi_3(S^2)$ As a warmup, let's try to visualize homotopy classes of maps from $S^3$ to $S^2$, i.e. the situation of the Hopf fibration. Pontryagin's construction says that we should be looking at bordism classes of framed codimension 2 submanifolds in $S^3$. 3-2=1, so we should be looking at 1-dimensional submanifolds, i.e. links in $S^3$. Here we have framed links in $S^3$ which can be visualized by drawing each component of the link with another parallel copy that winds around it, much like a ribbon. You should convince yourself that all components in these framed links can be merged together into a single unknot with some integer framing. Thus what matters ultimately is the classification of possible framings. Imagine taking a 2D slice of $S^3$ transverse to a point $p$ of the framed link and placing the point $p$ at the origin of that plane. Then the framing at that point is just a choice of the $x$- and $y$- axes (i.e. a 2-dimensional frame). As we carry this plane along the original unknot, this choice of axes can rotate in that plane and so the classification of framings is naturally an integer. You may check that the inverse image of the North pole of the Hopf fibration is an unknot, and the inverse image of any other point on the sphere is an unknot which is linked once with it. Finally, you should see how you can build up any other homotopy class from "adding" Hopf fibrations together by putting multiple copies of this framed unknot together (possibly with opposite orientations), which gives a visualization of the group structure on the set of homotopy classes. In this way you get a visualization of $\pi_3(S^2)$ by means of some pictures of framed circles. I can't resist here adding a link to this paper of DeTurck et al which gives some beautiful illustrations and description of the homotopy classes of maps from $T^3$ to $S^2$ with this tool. $\pi_4(S^3)$ Now, you are interested in the case of homotopy classes of maps from $S^4$ to $S^3$. In this case you are now looking at framed links in $S^4$. You can still arrange for the link to become a single framed unknot by a sequence of bordisms. However, the framing can no longer be drawn with simply just a single parallel knot. Consider taking a 3-dimensional slice transverse to a point $p$ on the link in $S^4$ and let us place $p$ at the origin of our $R^3$ that we sliced with. In $S^4$, the framing of the link yields a choice of a 3-dimensional frame in this $R^3$ slice. 
And just as the relevant topological invariant of the framing in $S^3$ was how this frame rotates as we travel along the $S^1$ corresponding to our link component, leading to an element of $\pi_1(S^1)$ (the winding number), in $S^4$, we must now track how this 3-d frame rotates as we follow the $S^1$ of the link component. But now we are considering a continuous loop of choices of 3-dimensional orientations, i.e. an element of $\pi_1(SO(3))$, which is well known to be $\mathbb{Z}/2\mathbb{Z}$. With this key ingredient of the 3-dimensional framing, hopefully you can see that $\pi_4(S^3)=\pi_1(SO(3))=\mathbb{Z}/2\mathbb{Z}$. - Thank you! This is a very very nice answer~ – Yingfei Gu Dec 11 at 10:09 Thanks, feel free to ask for clarification; it can be hard to describe the pictures without a chalkboard. – jc Dec 13 at 11:59
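The last step, $\pi_1(SO(3))=\mathbb{Z}/2\mathbb{Z}$, can be made concrete with the double cover $SU(2)\to SO(3)$: lifting a closed loop of rotations to $SU(2)$ ends at $-1$ when the total rotation angle is $2\pi$ and comes back to $+1$ for $4\pi$. Below is a small numerical illustration of that bookkeeping (my own sketch, assuming numpy; not a proof).

```python
import numpy as np

# SU(2) lift of a rotation by angle a about the z-axis:
#   U(a) = exp(-i a sigma_z / 2) = diag(e^{-i a/2}, e^{+i a/2})
def lift(a):
    return np.diag([np.exp(-1j * a / 2), np.exp(1j * a / 2)])

def follow_loop(total_angle, steps=1000):
    """Compose the lifts of many small rotations, i.e. follow the frame
    continuously around the loop, and return the end point in SU(2)."""
    U = np.eye(2, dtype=complex)
    for _ in range(steps):
        U = lift(total_angle / steps) @ U
    return U

print(np.round(follow_loop(2 * np.pi), 6))   # ends at -Identity: the lift does not close up
print(np.round(follow_loop(4 * np.pi), 6))   # ends at +Identity: going around twice closes the lift
```

This $\pm 1$ is the same sign that distinguishes the two possible framing classes in the answer above.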
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 68, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9377805590629578, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/7441/what-are-the-constraints-on-building-a-tower-to-space?answertab=active
# What are the Constraints on Building a Tower to Space?

When I was a boy I used to daydream about building a tower so tall that the top of it would project into near space. There would perhaps be a zero gravity area in the penthouse where my friends and I could bounce around and play space versions of various earth-based games and sports in most excellent zero-g conditions. Much to my continued disappointment, and despite all the technological advances of the last thirty or so years, no one has built such a structure. Can anyone explain the physical limitations/constraints that are preventing someone from realising my fantasy of a 'Space Tower'? UPDATE: This Kickstarter Project seems to be pretty confident ... - 1 you'd have to go a lot higher than LEO to achieve your zero-g penthouse. – TheSheepMan Aug 30 '11 at 11:38 – AdamRedwine Aug 29 '12 at 14:35

## 7 Answers

First off, the limitation is a material that would not collapse under the weight - the Earth's crust is not quite hard enough. Buckling and other instabilities, nope. Generally, forget a tower built on Earth. Not a chance, no such material. Start building from geostationary orbit and extend the "rope" both inside and outside the orbit. The outside may be just heavy counterweights, as the inside will begin to pull towards Earth. Make the orbital part thicker to support the extra weight as you extend the lower part, until it reaches the Earth's surface below. Now the problem is the material. The only material in existence with a sufficient strength-to-weight ratio is buckytubes. These are currently centimeters long at most, extremely expensive, and you'd not only need thousands of kilometers of them... the rope, to sustain its own weight, would have to be about 1 km thick in the thickest place (near the geostationary orbit). Now consider:
• Earth's carbon supply: I don't think all coal mines combined could mine that much carbon
• construction craft fuel. This all would have to be lifted high enough. A LEO rocket takes many times more fuel than its payload weight. A geostationary orbit rocket - much more. The good news is the fuel can be hydrogen+oxygen, which is water, and we have that aplenty. The bad news is you need at least as much energy to separate them as you gain from burning them, so the power consumption for fuel production would exceed the whole world's power production.
• environmental impact of that much steam released into the atmosphere
• account for micrometeorites that can really rain on your parade. And this thing being that big, collisions WILL happen. Also account for space junk.
• account for winds and storms once you reach the atmosphere. Also, the upper atmosphere is pretty hot... not a nice work environment, and nanotubes aren't extremely fire-proof.
• cost and impact on the economy. Coal becomes super-expensive and we look for alternate sources of carbon.
And when you finally build it, calculate how long a lift travelling at some 300 km/h would take to reach the 0-gravity orbit 37,000 km up... EDIT: I can't currently find the article that listed 1 km thickness, but let us try to calculate the parameters of a tower merely strong enough to sustain itself. The nanotube tensile strength is $UTS=6422kg/mm^2$ (1) The density is $\rho=1.4g/cm^3$ The ribbon is said to be 1 m wide. The thickness will vary. For the needed $M_0=20t=20000kg$ capacity it needs $A_1\approx 3.1mm^2$ of cross-section at the bottom. At 1000 mm width that's about 0.0031 mm thick.
Now I'm really not in the mood to solve a differential equation of thickness - mass - tensile strength - gravity, so let me try a discretization, approximating with $h=1km$ long wedges. At 35000 samples that should give us a decent approximation. $$V_n= {A_n+A_{n+1} \over 2}h \\ M_n=\rho V_n = \rho {A_n+A_{n+1} \over 2}h$$ We can't simply assume the weight doesn't vary with altitude. After all, near the orbit it will be zero. It varies with the distance from the center of the Earth. At the surface, $r_0=6378\,km$; $M_{earth}=5.97\times 10^{24}\,kg$; $G =6.673\times 10^{-11}\, {m^3 \over kg\, s^2}$. So, the weight of segment $n$ will be $$Fw_n=G{M_n M_{earth} \over r_n^2} \\ r_n=r_0+n\,[km]$$ And the tension the cross-section $A_{n+1}$ must carry is $$F_{n+1}=F_n + Fw_n \\ F_{n+1} = A_{n+1}\, UTS$$ We seek $A_{35000}$, which will trivially yield the thickness by dividing by 1000 mm. $$A_{n+1} UTS = F_n + Fw_n \\ A_{n+1} UTS = F_n + G{M_n M_{earth} \over r_n^2} \\ A_{n+1} UTS = F_n + G{\rho {A_n+A_{n+1} \over 2} h M_{earth} \over r_n^2} \\ A_{n+1} = {F_n \over UTS} + (A_n+A_{n+1}){ G \rho h M_{earth} \over 2r_n^2 UTS } \\ X := { G \rho h M_{earth} \over 2r_n^2 UTS }\\ A_{n+1} = {F_n \over UTS} + (A_n+A_{n+1})X \\ A_{n+1} = {F_n \over UTS} + X A_n + X A_{n+1} \\ A_{n+1} - X A_{n+1} = {F_n \over UTS} + X A_n \\ (1-X)A_{n+1} = {F_n \over UTS} + X A_n$$ We get our two fundamental equations for numeric computation (with the helper $X$, which I'm really not in the mood to transform into something nicer): $$X = { G \rho h M_{earth} \over 2r_n^2 UTS }\\ A_{n+1} = { F_n/UTS + X A_n \over 1-X } \\ F_{n+1} = A_{n+1}\, UTS$$ Now excuse me, it's 3AM and I'll finish the calculations at a different time. (A numerical sketch that iterates exactly this recursion appears at the end of this thread.) - Great debunking of the space elevator ideas. It's a sad reality that most people who advocate space elevators simply don't understand the simple math behind it. What assumption is the 1 km thickness based off of, btw? Doesn't that depend on the weight of the payload? If you're assuming that it only has to have the ability to lift a small mass, then would a much smaller mass be required? Pragmatically, I wonder if a space pipeline for propellant would be the most likely use for such a thing, since batch lifting would present great difficulty. – AlanSE May 24 '11 at 15:34 In the kinds of SF where they simply assume the beanstalk, the cars pick up speed in a hurry after they leave the atmosphere: there is no need to limit them to a few hundred m/s. Depending on the implied technology there may be a change of cars. The real stinker is that it makes non geo-stationary orbits unsafe. – dmckee♦ May 25 '11 at 3:08 1 @dmckee: On one hand, yes. On the other, there is still friction against the tower, friction of the mechanisms, energy transport losses and so on. Even if it was built as maglev, dynamic tensile strains would create a whole lot of heat. Not impossible to do, but still the travel time would be of the order of days. – SF. May 25 '11 at 7:21 1 – David Cary Jan 7 at 20:53 1 – David Cary Jan 7 at 21:03

Rather than using material, perhaps magnetic fields configured in stages. Imagine a stack of plates separated at a distance on the order of a meter. Magnetic fields from superconducting magnets repel the plates above or below. Sensors and an electronic system dynamically adjust the fields. I wonder if the fields would need to become attractive at some point due to centrifugal forces from the Earth's rotation. -

Towers supported from the bottom are a bit tricky. Buckling limits how tall a column can be. One needs additional lateral stiffness to overcome this, usually by putting up guy wires.
Even so there are going to be real limits, as Anonymous Coward has mentioned above, solids obtain their stiffness from chemical interactions between molecules and atoms, and the strength to weight ratio is limited. There are some plans for some structures up to about a kilometer, but the cost per unit volume of building goes up for tall buildings. We could probably go a lot higher by the use of carbon nanotubes, but we are years away from being about to construct practical guy wires from them. - Re: the original question: If you wanted "zero gravity" at the top of the tower, you'd have to build a tower tall enough to reach the height of geostationary orbit: a point at which the orbital period of an object in freefall matches the time it takes the earth to rotate once. As other commenters have pointed out, that'll occur at a height of roughly 35000 km above ground. Good luck! Re: The claim that "but one may still say that the limitations are of an engineering (and budgetary) character rather than fundamental physics limitations." I disagree. One fundamental physical limitation is the fact that matter is held together by chemical bonds. This limits the ultimate strength of any material (although AFAIK for all macroscopic materials mankind currently produces, the ultimate strength is far shy of what one would get from a "perfect" material). The ultimate strength will limit how tall a tower one can build. This is, for example, the reason why small asteroids can be quite aspherical, but large asteroids (and planets) cannot: an highly aspherical planet would collapse under its own gravitational force and become near-spherical due to the finite strength of the materials it's made of. The ultimate strength limit also means that you can't build the "space elevator" other folks mention using any currently available materials that I know of. - There's an interesting review article on the subject: Review of New Concepts, Ideas and Innovations in Space Towers Mark Krinker, (2010) A lot of new concepts, ideas and innovation in space towers were offered, developed and researched in last years especially after 2000. For example: optimal solid space towers, inflatable space towers (include optimal space tower), circle and centrifugal space towers, kinetic space towers, electrostatic space towers, electromagnetic space towers, and so on. Given review shortly summarizes the[s]e researches and gives a brief description [of] them, note some [of] their main advantages, shortcomings, defects and limitations. http://arxiv.org/abs/1002.2405 The above is sufficiently interesting that I would recommend it as a place to look for interesting problems for your undergraduate classes in mechanics. - The level at which this question is being asked is uncertain. I thought I would mention the idea of the space elevator, which some people take seriously. However, Lubos is correct in saying that the edge of the atmosphere does not mean the end of gravity. A spacecraft orbits the Earth because it is falling towards the Earth. However, it is just moving fast enough so that it keeps missing the Earth which curves away under the spacecraft’s trajectory. The apparent loss of gravity, as seen with space shuttle and ISS astronauts floating around, is due to the fact the astronauts and everything in the spacecraft is falling and moving with the rest of the spacecraft. Remember that Galileo demonstrated that different masses fall with the same acceleration, and so everything in a spacecraft falls with the same acceleration. 
However, the whole thing is moving fast enough to keep missing the Earth, and this acceleration of gravity provides the centripetal force which maintains a circular orbit. There is this “Jack and the Beanstalk” idea of the space elevator. I seriously question whether this will ever be built, but the idea is possible in principle. http://en.wikipedia.org/wiki/Space_elevator The idea has some problems of course. In particular it is tough to stack up mass elements without it falling over. If the gravity force at the center of mass deviates from its foundation the stack fall over. So I think the idea of building the tower from the ground up is probably wrong. The prospect for this lies in building from the top down. There are ideas about manipulating the orbits of asteroids. The Russians want to change the orbit of Apophis asteroid, which will come close to the Earth in 2029. Suppose we get good at doing this, and we manipulate the orbit of an asteroid into geosynchronous orbit around the Earth. A geosynchronous orbits is at a radius of 37,000 km where the orbital period is equal to the rotational period of the Earth. As the Wiki page shows one must then have a counter weight beyond geosynchronous orbital radius. So if one had an asteroid of sufficient mass and with the proper material constitution one could then build the tower downwards from this point. This would be accompanied by building upwards with an amount of mass so the center of mass of the emerging structure remains at geosynchronous orbit. Eventually this would then be constructed into this tower. The gravity gradient on this emerging structure would have to be carefully monitored and the vibrations on this controlled. It would not be at all trivial to do this. - Wasn't it in an Arthur C. Clark science fiction story? – anna v Mar 23 '11 at 13:32 1 To do zero gravity frolicing in the penthouse, the tower had to have that 37 000 km (geostationary) height (I dont know, whether this is measured from center of earth or from sea level, but the difference will not matter with respect to "feasibility". – Georg Mar 23 '11 at 15:06 actually it would have to be well over the geostationary height to balance out (in average) the weight of the cord below geostationary height. – lurscher Mar 23 '11 at 18:31 1 AC Clarke did write a novel about a space elevator about 20 years ago. I don’t remember how they built the thing in the novel. I did try to keep the point that the center of mass of the whole thing has to be at geosynchronous orbit. So you do need a counter weight further out. My suspicion is this is pretty pie in the sky stuff. I suspect it is very unlikely we will ever build this. The further back in history you go you find commitments to building large things, pyramids, cathedrals etc over long periods of time. The modern world is “fast-food,” where we “want it now.” – Lawrence B. Crowell Mar 23 '11 at 18:51 1 The longevity of policies, programs and the preeminence of nations have become more and more time compressed. Egypt was a major power for 1000 years, Rome for 500, the British for 250, and now the US primary position is about to expire in less than 100 years. A mark of progress is time compression --- and impatience. Future superpowers may have tenures measured in decades. Andy Warhol has cursed us all with 15 minutes of fame. – Lawrence B. 
Crowell Mar 23 '11 at 18:52 show 3 more comments First of all, it is an elementary misconception that there would be a "zero gravity" environment in a tower that would only reach the top of the atmosphere. Most of the air molecules exist at a height smaller than 10 kilometers - and above 100 kilometers from the Earth's surface, the air is so diluted that it becomes undetectable. At the height of 10 kilometers - where the atmospheric pressure is almost zero - the gravitational acceleration is just 0.3% weaker than it is on the surface and even at 100 kilometers, it is just 3% weaker. So forget about "lunar games". The gravitational forces over there are pretty much indistinguishable by humans from those we know on the surface. At 100 kilometers, a 75-kilogram person may feel 2 kg lighter but it may be compensated by the suit he needs to avoid suffocation. ;-) The absence of air has nothing to do with the absence of the gravitational force. The air tries to be at low attitudes in order to minimize its potential energy; how much it wants to minimize the energy is given by the molecular mass and the temperature. However, the air density is something totally different than the gravitational acceleration - they're surely not proportional to each other in any sense. The air density is proportional to $\exp(-\Phi m / kT)$ where $\Phi$ is the gravitational potential, $m$ is the molecule's mass, $k$ is Boltzmann's constant, and $T$ is temperature in Kelvin's degrees. However, the gravitational acceleration is $d\Phi / dh$. These two functions depend totally differently on the height $h$. Tall buildings The tallest building in the world is Burj Khalifa in Dubai - it has 828 meters. It's about 10% of the thickness of the atmosphere in the "narrow sense". It's hard to build tall buildings - one must guarantee that they're stable despite the immense weight of the material above each floor and despite the wind and vibrations of the Earth's surface. But there are no "strictly physical" limitations that would prevent one from building a tower that reaches 10 kilometers above the surface. One may say that all such limitations are of engineering character. Tall mountains such as Mount Everest may be viewed as "natural tall buildings" and their height isn't far from the top of the atmosphere (in the narrow sense). The design of very tall buildings would probably have to be a bit hierarchical - with a solid base made out of a heavier material and lighter floors near the top, just like in the case of mountains. One would surely start to face problems to find reasonable materials if he wanted buildings that substantially reduce the gravity on the roof - buildings that are thousands of kilometers tall. For example, one of my kindergarten visions was to build an elevator that could take one to the Moon and that could convert the kinetic energy of the Moon's motion around the Earth. That's a really challenging task for engineers. One will run into problems with conventional materials etc. - but one may still say that the limitations are of an engineering (and budgetary) character rather than fundamental physics limitations. - Thanks. Have reworded my question in light of your answers about the reach of Earth's gravity. – 5arx Mar 23 '11 at 9:56 1 – Martin Beckett Mar 23 '11 at 17:21 @Martin, No, Youngs modulus does not rule that! The relation of force to elastic deformation is irrelevant. 
The ruling factors are maximum pressure/shear/etc when the material starts to flow, rupture, crunch etc, (which is where Hook's law ends, and Youngs modulus is no more existent) – Georg Mar 24 '11 at 10:29
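To actually finish the discretized taper calculation set up in the first answer of this thread (the recursion $A_{n+1}=(F_n/UTS+XA_n)/(1-X)$, $F_{n+1}=A_{n+1}\,UTS$), here is a short sketch. The inputs follow that answer (UTS of $6422\,kg/mm^2$, density $1.4\,g/cm^3$, a 20 t payload, 1 km segments up to 35000 km above the surface); like the answer, it ignores centrifugal relief and the counterweight, so it slightly overestimates the taper. This completion is mine, not the original poster's.

```python
# Completing the discretized taper estimate from the first answer above.
# Simplifications as in that answer: pure gravity (no centrifugal relief),
# no counterweight, 1 km segments.
G    = 6.673e-11            # m^3 / (kg s^2)
M_e  = 5.97e24              # kg
r0   = 6.378e6              # m, Earth's radius
h    = 1.0e3                # m, segment length
rho  = 1400.0               # kg/m^3  (1.4 g/cm^3)
UTS  = 6422.0 * 9.81 * 1e6  # kg/mm^2 converted to Pa (~6.3e10 Pa)

A = 20000.0 * 9.81 / UTS    # m^2, bottom cross-section sized for a 20 t load
F = A * UTS                 # N, tension at the bottom

for n in range(35000):
    r = r0 + n * h
    X = G * rho * h * M_e / (2.0 * r * r * UTS)
    A = (F / UTS + X * A) / (1.0 - X)
    F = A * UTS

print(f"bottom cross-section: {20000.0 * 9.81 / UTS * 1e6:.2f} mm^2")
print(f"cross-section at 35000 km: {A * 1e6:.2f} mm^2 "
      f"(~{A * 1e6 / 1000.0:.4f} mm thick for a 1 m wide ribbon)")
```

For these inputs the cross-section grows only by a factor of roughly three between the ground and geostationary height, which is why space-elevator proposals lean so heavily on nanotube-class materials: with a steel-like strength-to-weight ratio the same exponential taper becomes astronomically large.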
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9486747980117798, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?t=675712
Density of States at the Fermi Energy

The density of states at the Fermi energy is given by D(E_F)=(3/2)n/E_F. I understand the density of states is the number of states per energy per unit volume, accounting for the n/E_F. I don't understand how the 3/2 multiplying factor accounts for the volume.

Dimensionally you are correct. But in this case, unfortunately, you have to perform the detailed calculus steps in order to get that factor. First let us determine the expression for ##n##. In ##\bf k##-space you need to count the total number of occupied states. This can be computed as seen in the steps below $$\begin{eqnarray} n&=&2\int_{{\rm FS}}\frac{d^{3}\mathbf{k}}{(2\pi)^{3}} \\ &=& \frac{2}{(2\pi)^{3}}\int_{0}^{k_{F}}dk\int_{0}^{2 \pi}d\phi\int_{0}^{\pi}d\theta\left(k^{2} \sin(\theta)\right) \\ &=& \frac{2}{(2\pi)^{3}}\left(\int_{0}^{k_{F}}dk\, k^{2}\right) \left(\int_{0}^{2\pi}d\phi\right) \left(\int_{0}^{\pi}d\theta\,\sin(\theta)\right) \\ &=& \frac{2}{2\pi^{2}}\int_{0}^{k_{F}}dk\, k^{2} \\ &=& \frac{k_{F}^{3}}{3\pi^{2}} \end{eqnarray}$$ where ##\int_{{\rm FS}}## is an integral from the origin to the (spherical) Fermi surface (FS). The ##k^{2} \sin(\theta)## in the second step is simply the Jacobian in spherical coordinates. Now, ##n## is the total number of available (and filled) states for ##k\le k_{F}##. The total number of states available up to some arbitrary ##k## is simply $$N(k)=\frac{k^{3}}{3\pi^{2}}$$ The density of states (for the isotropic case) is given by $$\begin{eqnarray} D(E) &=& \frac{dN(E)}{dE}\\ &=& \frac{dN(k)}{dk}\left(\frac{dE}{dk}\right)^{-1} \end{eqnarray}$$ For a parabolic dispersion we have $$E=\frac{\hbar^{2}k^{2}}{2m^{*}}$$ Therefore, at ##k=k_F## we have $$\begin{eqnarray} D(E_{F}) &=& D(E(k_{F}))\\ &=& \frac{m^{*}k_{F}}{\hbar^{2}\pi^{2}}\\ &=& \frac{k_{F}^{3}}{\pi^{2}}\left(\frac{\hbar^{2}k_{F}^{2}}{m^{*}}\right)^{-1}\\ &=& \frac{3}{2}\left(\frac{k_{F}^{3}}{3\pi^{2}}\right) \left(\frac{\hbar^{2}k_{F}^{2}}{2m^{*}}\right)^{-1} \end{eqnarray}$$ From the above expressions you can make the appropriate substitutions $$D(E_{F}) = \frac{3}{2}nE_{F}^{-1}$$
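The algebra in the reply above can be checked symbolically in a few lines. A sketch assuming sympy (symbol names are mine):

```python
import sympy as sp

k, kF, hbar, m = sp.symbols('k k_F hbar m_star', positive=True)

N  = k**3 / (3 * sp.pi**2)        # states per volume up to wavenumber k
Ek = hbar**2 * k**2 / (2 * m)     # parabolic dispersion

# D(E) = (dN/dk) * (dE/dk)^(-1), then evaluate everything at k = k_F
D   = sp.diff(N, k) / sp.diff(Ek, k)
D_F = D.subs(k, kF)

n_  = N.subs(k, kF)               # electron density
E_F = Ek.subs(k, kF)              # Fermi energy

print(sp.simplify(D_F - sp.Rational(3, 2) * n_ / E_F))   # prints 0
```

The printed difference simplifies to zero, confirming $D(E_F)=\tfrac{3}{2}\,n/E_F$; the 3/2 is just the exponent in $n\propto E_F^{3/2}$ appearing when $N(E)$ is differentiated, not a volume factor.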
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8403327465057373, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/29624/how-many-orders-of-infinity-are-there/29630
## How many orders of infinity are there?

Define a growth function to be a monotone increasing function $F: {\bf N} \to {\bf N}$, thus for instance $n \mapsto n^2$, $n \mapsto 2^n$, $n \mapsto 2^{2^n}$ are examples of growth functions. Let's say that one growth function $F$ dominates another $G$ if one has $F(n) \geq G(n)$ for all $n$. (One could instead ask for eventual domination, in which one works with sufficiently large $n$ only, or asymptotic domination, in which one allows a multiplicative constant $C$, but it seems the answers to the questions below are basically the same in both cases, so I'll stick with the simpler formulation.) Let's call a collection ${\mathcal F}$ of growth functions ~~complete~~ cofinal if every growth function is dominated by at least one growth function in ${\mathcal F}$. Cantor's diagonalisation argument tells us that a cofinal set of growth functions cannot be countable. On the other hand, the set of all growth functions has the cardinality of the continuum. So, on the continuum hypothesis, a cofinal set of growth functions must necessarily have the cardinality of the continuum. My first question is: what happens without the continuum hypothesis? Is it possible to have a cofinal set of growth functions of intermediate cardinality? My second question is more vague: is there some simpler way to view the poset of growth functions under domination (or asymptotic domination) that makes it easier to answer questions like this? Ideally I would like to "control" this poset in some sense by some other, better understood object (e.g. the first uncountable ordinal, the nonstandard natural numbers, or the Stone-Cech compactification of the natural numbers). EDIT: notation updated in view of responses. - 8 I'm pretty sure the correct terminology here would be cofinal. en.wikipedia.org/wiki/Cofinal_(mathematics) – Harry Gindi Jun 26 2010 at 17:46 1 Thanks for the correction! – Terry Tao Jun 26 2010 at 17:58 4 Let me mention Scott Aaronson's post about growth functions scottaaronson.com/blog/?p=263 and a somewhat related MO question mathoverflow.net/questions/4347/… – Gil Kalai Jun 26 2010 at 18:26 1 I don't know if this helps you or not, but the posets $({}^\omega\omega,{\leq})$ and $({}^\omega\omega,{\leq^*})$ are very closely related to the compact and $\sigma$-compact subsets of Baire space, respectively. – François G. Dorais♦ Jun 26 2010 at 18:35 @Dorais: indeed the minimal covering number of the irrationals (Baire space) by compact sets. – Henno Brandsma Jun 26 2010 at 19:36

## 7 Answers

For asymptotic domination, commonly denoted `${\leq^*}$` and often called eventual domination, this has been answered by Stephen Hechler, On the existence of certain cofinal subsets of ${}^{\omega }\omega$, MR360266. What you call a complete set is usually called a dominating family. As a poset under eventual domination, a dominating family $\mathcal{F}$ must have the following three properties:
1. $\mathcal{F}$ has no maximal element.
2. Every countable subset of $\mathcal{F}$ has an upper bound in $\mathcal{F}$.
3. $|\mathcal{F}| \leq 2^{\aleph_0}$
Hechler showed that for any abstract poset $(P,{\leq})$ with these three properties, there is a forcing extension where all cardinals and cardinal powers are preserved, and there is a dominating family isomorphic to $(P,{\leq})$.
In particular, one can have a wellordered dominating family whose length is any cardinal $\delta$ with uncountable cofinality. In this case, the restriction $\delta \leq 2^{\aleph_0}$ is inessential since one can always add $\delta$ Cohen reals without affecting conditions (1) and (2). However, for arbitrary posets, condition (2) could be destroyed by adding reals. The total domination order is more complex. One can always get a totally dominating family $\mathcal{F}'$ from a dominating family $\mathcal{F}$ by adding $\max(f,n) \in \mathcal{F}'$ for every $f \in \mathcal{F}$ and $n < \omega$. Since $\mathcal{F}$ is infinite, the resulting family $\mathcal{F}'$ has the same size as $\mathcal{F}$. Howerver, there does not appear to be a simple combinatorial characterization of the possibilities for the posets that arise in this way. - 1 Excellent answer! – Gil Kalai Jun 26 2010 at 18:23 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. François has given an excellent answer to this question. What you call a cofinal collection, a family $\cal F$ such that every function is dominated by a function in $\cal F$, is known as a dominating family. This is different, for example, from the similar concept of an unbounded family, a family $\cal F$ such that no function dominates every function in $\cal F$, since in partial orders as opposed to linear orders the notions of dominating and unbounded are not the same. As there are several inequivalent but similar-sounding notions here, it seems worthwhile to use the established terminology. As Kristal mentions, I mention in this MO answer, which is also a direct answer to this question, that the dominating number d is the size of the smallest dominating family of function, the smallest family of functions such that every function is dominated by something in the family. As you point out, this number is always uncountable and at most the continuum, but as François mentioned, the particular value of d can be exactly controlled by forcing. In particular, it can achieve desired intermediate values, when CH fails. The similar-sounding but actually inequivalent bounding number b, in contrast, is the size of the smallest unbounded family $\cal F$, a family such that no function dominates every function in $\cal F$. Since every dominating family is unbounded, it follows that b $\leq$ d. Remarkably, however, it is consistent that b $\lt$ d, and this is proved again by forcing. There are dozens of other similar cardinal characteristics of the continuum, some of which I mention in this MO answer. For examples, researchers consider the additivity of the Lebesgue-measure, the additivity of the meager ideal, the cofinality of the symmetric group $S_\omega$ (the smallest number of proper subgroups forming a chain whose union is the whole group), the covering number (fewest number of measure zero sets to cover the reals) and variations, and so on. Researchers in this area classify and separate these different cardinals into hierarchies, and some prominent relationships are expressed by Cichon's diagram. It is often particularly desired to control some of the cardinal characteristics by forcing, while leaving others fixed, and some of the most valuable results here are general theorems that make such a conclusion. Andreas Blass, now here on MO, is one of the world experts in this area. 
- @Joel Leave questions about foundations to the experts-this is why. Good response,Joel. Would love to have Blass chime in on this and see what insights he has. – Andrew L Jun 27 2010 at 7:09 Thanks for the vote of confidence, Andrew, although there are several expert answers here, notably François' and Henno's. – Joel David Hamkins Jun 27 2010 at 11:44 Francois Dorais cited the paper of Stephen Hechler that (more than) completely answers the first part of the question. For the second part, concerning other ways to view $\mathfrak d$, two other papers of Hechler are relevant; here are the MathSciNet citations: MR0369078 (51 #5314) Hechler, Stephen H., A dozen small uncountable cardinals. TOPO 72---general topology and its applications (Proc. Second Pittsburgh Internat. Conf., Pittsburgh, Pa., 1972; dedicated to the memory of Johannes H. de Groot), pp. 207--218. Lecture Notes in Math., Vol. 378, Springer, Berlin, 1974. MR0380705 (52 #1602) Hechler, Stephen H., On a ubiquitous cardinal. Proc. Amer. Math. Soc. 52 (1975), 348--352. Four (if I remember correctly) of the 12 cardinals in the first paper turn out to equal $\mathfrak d$, which is also the "ubiquitous cardinal" of the second paper. Let me also mention that, if one just wants to answer the first part of the question, one doesn't need the very detailed information given by Hechler's theorem. In order to get a model of set theory with prescribed values for $\mathfrak d$ and for the cardinality $\mathfrak c$ of the continuum (subject to the necessary restrictions that both have uncountable cofinality and that $\mathfrak d\leq\mathfrak c$), it suffices to start with a model of the generalized continuum hypothesis (e.g., G\"odel's constructible universe), adjoin as many Cohen reals as the cardinal you want to be $\mathfrak d$, and then adjoin enough random reals to bring $\mathfrak c$ up to the desired value. The forcing method introduced by Hechler in the paper that Francois cited has become one of the standard tools in the study of cardinal characteristics of the continuum. For just one example, see MR0780528 (86i:03064) Baumgartner, James E.; Dordal, Peter, Adjoining dominating functions. J. Symbolic Logic 50 (1985), no. 1, 94--101. Finally, let me indulge in a bit of self-promotion. On the set theory page of my web site, http://www.math.lsa.umich.edu/~ablass/set.html , the first two papers are about cardinal characteristics of the continuum. The first is a short (6 pages), general-audience introduction (based on a talk at a conference for Ryll-Nardzewski's 70th birthday), and the second is a long chapter which (contrary to the "to appear" on the web site) has now appeared in the Handbook of Set Theory. - Yes, this is possible, if you define the order to be dominance with finitely many exceptions. So f < g iff the set of n with f(n) > g(n) is finite. What you call a complete system of growth functions is called a dominating subset of $\omega^\omega$ (and a scale if it is well-ordered). See van Douwen's paper "The integers and topology" in the Handbook of Set Theoretic Topology. The minimal cardinality of such a dominating family is called $\mathfrak{d}$ in the set-theoretic literature and it's one of the so-called cardinal invariants of the continuum. What is known is that its cofinality is at least $\mathfrak{b}$ where the latter is the minimal size of an unbounded set in $\omega^\omega$ in the partial order of eventual dominance. 
Also, $\mathfrak{d}$ is equal to the minimal size of a cofinal subset of $\omega^\omega$ in the total dominance order that you defined. So indeed, the problem is the same for both orders, and both have minimal size $\mathfrak{d}$. The eventual dominance is more commonly used though, and that's how I knew it at first. This cardinal can assume almost any value (under said restriction on the cofinality at least) and there has been a lot of study on this and similar cardinal invariants and their interrelations. We can have $\omega_1 = \mathfrak{d} < \mathfrak{c}$, $\omega_1 < \mathfrak{d} < \mathfrak{c}$ and $\omega_1 < \mathfrak{d} = \mathfrak{c}$, in different models of ZFC. - I think that there can be a complete set of growth functions of intermediate cardinality. This is based on an earlier discussion on a related subject here. In particular Joel David Hamkins' answer seems to answer the question in the affirmative. - If instead of looking at all functions you just look at the computable ones, then in some sense you can find a cofinal subset corresponding to large cardinal axioms. The idea is roughly that if you have a large cardinal axiom you can define a computable fast-growing function f as follows. Take all Turing machines of size at most n such that ZFC with the large cardinal axiom can prove in less than n symbols that the Turing machine eventually halts. Then f(n) is the maximum number of steps it takes any of these machines to halt. - 5 Could you provide any reference work that explores this idea further? – Joel David Hamkins Jun 27 2010 at 12:07 Hi: I am not sure this is an answer to Terry's second question. "Is there some simpler way to view the poset of growth functions under domination (or asymptotic domination) that makes it easier to answer questions like this?" One could use Poincaré coordinates(?) to define functions like: $f(n)=n^2$, $g(n)=2^n$, $h(n)=2^{2^n}$, $A(n)=f(n)+g(n)+h(n)$, $f_1(n)=\frac{f(n)}{A(n)}$, $g_1(n)=\frac{g(n)}{A(n)}$, $h_1(n)=\frac{h(n)}{A(n)}$. One can then plot $f_1(n),g_1(n),h_1(n)$ vs. $n$. The faster a function approaches 1, the more dominating it is. -
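As an added illustration (not part of the thread above): the normalization suggested in the last answer is easy to try numerically. The functions $f$, $g$, $h$ below are the ones named in that answer; the choice of Python, the range of $n$, and the printing format are my own assumptions.

```python
# Compare growth functions by the fraction each contributes to their sum,
# as suggested in the answer above: f1 = f/A, g1 = g/A, h1 = h/A with
# A = f + g + h. The dominating function's fraction tends to 1.

def f(n): return n ** 2
def g(n): return 2 ** n
def h(n): return 2 ** (2 ** n)

def normalized(n):
    """Return (f1, g1, h1) evaluated at n."""
    A = f(n) + g(n) + h(n)
    return f(n) / A, g(n) / A, h(n) / A

for n in range(1, 8):
    f1, g1, h1 = normalized(n)
    print(f"n={n}: f1={f1:.3e}  g1={g1:.3e}  h1={h1:.3e}")
# h1 approaches 1 almost immediately, since 2^(2^n) eventually
# dominates both n^2 and 2^n.
```

Of course, this only visualizes domination for a handful of concrete functions; it says nothing about the set-theoretic structure of the whole poset, which is what the cardinal $\mathfrak{d}$ measures.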
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 69, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9309931993484497, "perplexity_flag": "head"}
http://www.cfd-online.com/W/index.php?title=Introduction_to_turbulence/Stationarity_and_homogeneity&diff=8967&oldid=8946
# Introduction to turbulence/Stationarity and homogeneity

### From CFD-Wiki
## Latest revision as of 09:19, 25 February 2008

## Processes statistically stationary in time

Many random processes have the characteristic that their statistical properties do not appear to depend directly on time, even though the random variables themselves are time-dependent. For example, consider the signals shown in Figures 2.2 and 2.5. When the statistical properties of a random process are independent of time, the random process is said to be stationary. For such a process all the moments are time-independent, e.g., $\left\langle \tilde{u} \left( t \right) \right\rangle = U$, etc. In fact, the probability density itself is time-independent, as should be obvious from the fact that the moments are time independent. An alternative way of looking at stationarity is to note that the statistics of the process are independent of the origin in time. It is obvious from the above, for example, that if the statistics of a process are time independent, then $\left\langle u^{n} \left( t \right) \right\rangle = \left\langle u^{n} \left( t + T \right) \right\rangle$, etc., where $T$ is some arbitrary translation of the origin in time.
Less obvious, but equally true, is that the product $\left\langle u \left( t \right) u \left( t' \right) \right\rangle$ depends only on the time difference $t'-t$ and not on $t$ (or $t'$) directly. This consequence of stationarity can be extended to any product moment. For example $\left\langle u \left( t \right) v \left( t' \right) \right\rangle$ can depend only on the time difference $t'-t$. And $\left\langle u \left( t \right) v \left( t' \right) w \left( t'' \right)\right\rangle$ can depend only on the two time differences $t'- t$ and $t'' - t$ (or $t'' - t'$) and not on $t$, $t'$ or $t''$ directly.

## Autocorrelation

One of the most useful statistical moments in the study of stationary random processes (and turbulence, in particular) is the autocorrelation defined as the average of the product of the random variable evaluated at two times, i.e. $\left\langle u \left( t \right) u \left( t' \right)\right\rangle$. Since the process is assumed stationary, this product can depend only on the time difference $\tau = t' - t$. Therefore the autocorrelation can be written as: $C \left( \tau \right) \equiv \left\langle u \left( t \right) u \left( t + \tau \right) \right\rangle$ (1) The importance of the autocorrelation lies in the fact that it indicates the "memory" of the process; that is, the time over which it is correlated with itself. Contrast the two autocorrelations shown in the figures: the autocorrelation of a deterministic sine wave is simply a cosine, as can be easily proven. Note that there is no time beyond which it can be guaranteed to be arbitrarily small since it always "remembers" when it began, and thus always remains correlated with itself. By contrast, a stationary random process like the one illustrated in the figure will eventually lose all correlation and go to zero. In other words it has a "finite memory" and "forgets" how it was. Note that one must be careful to make sure that a correlation really both goes to zero and stays down before drawing conclusions, since even the sine wave was zero at some points. Stationary random processes always have two-time correlation functions which eventually go to zero and stay there. Example 1. Consider the motion of an automobile responding to the movement of the wheels over a rough surface. In the usual case where the road roughness is randomly distributed, the motion of the car will be a weighted history of the road's roughness with the most recent bumps having the most influence and with distant bumps eventually forgotten. On the other hand, if the car is travelling down a railroad track, the periodic crossing of the railroad ties represents a deterministic input and the motion will remain correlated with itself indefinitely, a very bad thing if the tie crossing rate corresponds to a natural resonance of the suspension system of the vehicle. Since a random process can never be more than perfectly correlated, it can never achieve a correlation greater than its value at the origin. Thus $\left| C \left( \tau \right) \right| \leq C\left( 0 \right)$ (2) An important consequence of stationarity is that the autocorrelation is symmetric in the time difference $\tau = t' - t$.
To see this simply shift the origin in time backwards by an amount $\tau$ and note that independence of origin implies: $\left\langle u \left( t \right) u \left( t + \tau \right) \right\rangle = \left\langle u \left( t - \tau \right) u \left( t \right) \right\rangle$ (3) Since the right hand side is simply $C \left( - \tau \right)$, it follows immediately that: $C \left( \tau \right) = C \left( - \tau \right)$ (4)

## Autocorrelation coefficient

It is convenient to define the autocorrelation coefficient as: $\rho \left( \tau \right) \equiv \frac{ C \left( \tau \right)}{ C \left( 0 \right)} = \frac{\left\langle u \left( t \right) u \left( t + \tau \right) \right\rangle}{ \left\langle u^{2} \right\rangle }$ (5) where $\left\langle u^{2} \right\rangle = \left\langle u \left( t \right) u \left( t \right) \right\rangle = C \left( 0 \right) = var \left[ u \right]$ (6) Since the autocorrelation is symmetric, so is its coefficient, i.e., $\rho \left( \tau \right) = \rho \left( - \tau \right)$ (7) It is also obvious from the fact that the autocorrelation is maximal at the origin that the autocorrelation coefficient must also be maximal there. In fact from the definition it follows that $\rho \left( 0 \right) = 1$ (8) and $\rho \left( \tau \right) \leq 1$ (9) for all values of $\tau$.

## Integral scale

One of the most useful measures of the length of time a process is correlated with itself is the integral scale defined by $T_{int} \equiv \int^{\infty}_{0} \rho \left( \tau \right) d \tau$ (10) It is easy to see why this works by looking at Figure 5.2. In effect we have replaced the area under the correlation coefficient by a rectangle of height unity and width $T_{int}$.

## Temporal Taylor microscale

The autocorrelation can be expanded about the origin in a Maclaurin series; i.e., $C \left( \tau \right) = C \left( 0 \right) + \tau \frac{ d C }{ d t }|_{\tau = 0} + \frac{1}{2} \tau^{2} \frac{d^{2} C}{d t^{2} }|_{\tau = 0} + \frac{1}{3!} \tau^{3} \frac{d^{3} C}{d t^{3} }|_{\tau = 0} + \cdots$ (11) But we know the autocorrelation is symmetric in $\tau$, hence the odd terms in $\tau$ must be identically zero (i.e., $dC / dt |_{\tau = 0} = 0$, $d^{3}C / dt^{3} |_{\tau = 0} = 0$, etc.). Therefore the expansion of the autocorrelation near the origin reduces to: $C \left( \tau \right) = C \left( 0 \right) + \frac{1}{2} \tau^{2} \frac{d^{2} C}{d t^{2} }|_{\tau = 0} + \cdots$ (12) Similarly, the autocorrelation coefficient near the origin can be expanded as: $\rho \left( \tau \right) = 1 + \frac{1}{2}\frac{d^{2}\rho}{d t^{2}}|_{\tau = 0} \tau^{2}+ \cdots$ (13) where we have used the fact that $\rho \left( 0 \right) = 1$. If we define $' = d / dt$ we can write this compactly as: $\rho \left( \tau \right) = 1 + \frac{1}{2} \rho '' \left( 0 \right) \tau^{2} + \cdots$ (14) Since $\rho \left( \tau \right)$ has its maximum at the origin, obviously $\rho'' \left( 0 \right)$ must be negative.
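Before continuing, here is a minimal numerical sketch (an addition, not part of the original notes) of how the autocorrelation coefficient and its curvature at the origin can be estimated from a sampled record; the next paragraph turns exactly this curvature into a time scale. The synthetic signal, the filter used to make it smooth, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n = 1e-3, 200_000
# Smooth white noise with a Gaussian kernel so the signal is differentiable
# and its autocorrelation decays to zero (arbitrary modelling choice).
kt = np.arange(-100, 101) * dt
kernel = np.exp(-0.5 * (kt / (20 * dt)) ** 2)
kernel /= kernel.sum()
u = np.convolve(rng.standard_normal(n), kernel, mode="same")
u -= u.mean()

def rho(lag):
    """Sample autocorrelation coefficient at an integer lag."""
    if lag == 0:
        return 1.0
    return np.dot(u[:-lag], u[lag:]) / np.dot(u, u)

# rho is symmetric, so a centered finite difference at the origin gives
# rho''(0) ~ 2*(rho(1) - 1)/dt**2, which should come out negative.
curv = 2.0 * (rho(1) - 1.0) / dt ** 2
print("estimated rho''(0):", curv)
print("implied time scale sqrt(-2/rho''(0)):", np.sqrt(-2.0 / curv))
```

The square root printed on the last line is what the text below calls the Taylor microscale.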
We can use the correlation and its second derivative at the origin to define a special time scale, $\lambda_{\tau}$ (called the Taylor microscale) by: $\lambda^{2}_{\tau} \equiv - \frac{2}{\rho'' \left( 0 \right)}$ (15) Using this in equation 14 yields the expansion for the correlation coefficient near the origin as: $\rho \left( \tau \right) = 1 - \frac{\tau^{2}}{\lambda^{2}_{\tau}} + \cdots$ (16) Thus very near the origin the correlation coefficient (and the autocorrelation as well) simply rolls off parabolically; i.e., $\rho \left( \tau \right) \approx 1 - \frac{\tau^{2}}{\lambda^{2}_{\tau}}$ (17) This parabolic curve is shown in Figure 3 as the osculating (or 'kissing') parabola which approaches zero exactly as the autocorrelation coefficient does. The intercept of this osculating parabola with the $\tau$-axis is the Taylor microscale, $\lambda_{\tau}$. The Taylor microscale is significant for a number of reasons. First, for many random processes (e.g., Gaussian), the Taylor microscale can be proven to be the average distance between zero-crossings of a random variable in time. This is approximately true for turbulence as well. Thus one can quickly estimate the Taylor microscale by simply observing the zero-crossings using an oscilloscope trace. The Taylor microscale also has a special relationship to the mean square time derivative of the signal, $\left\langle \left[ d u / d t \right]^{2} \right\rangle$. This is easiest to derive if we consider two stationary random signals at two different times, say $u = u \left( t \right)$ and $u' = u' \left( t' \right)$. The derivative of the first signal is $d u / d t$ and of the second $d u' / d t'$. Now let's multiply these together and rewrite them as: $\frac{du'}{dt'} \frac{du}{dt} = \frac{d^{2}}{dtdt'} u \left( t \right) u' \left( t' \right)$ (18) where the right-hand side follows from our assumption that $u$ is not a function of $t'$ nor $u'$ a function of $t$. Now if we average and interchange the operations of differentiation and averaging we obtain: $\left\langle \frac{du'}{dt'} \frac{du}{dt} \right\rangle = \frac{d^{2}}{dtdt'} \left\langle u \left( t \right) u' \left( t' \right) \right\rangle$ (19) Here comes the first trick: we simply take $u'$ to be exactly $u$ but evaluated at time $t'$. So $u \left( t \right) u' \left( t' \right)$ simply becomes $u \left( t \right) u \left( t' \right)$ and its average is just the autocorrelation, $C \left( \tau \right)$. Thus we are left with: $\left\langle \frac{du'}{dt'} \frac{du}{dt} \right\rangle = \frac{d^{2}}{dtdt'} C \left( t' - t \right)$ (20) Now we simply need to use the chain rule. We have already defined $\tau = t' - t$. Let's also define $\xi = t' + t$ and transform the derivatives involving $t$ and $t'$ to derivatives involving $\tau$ and $\xi$. The result is: $\frac{d^{2}}{dtdt'} = \frac{d^{2}}{d \xi^{2}} - \frac{d^{2}}{d \tau^{2}}$ (21) So equation 20 becomes $\left\langle \frac{du'}{dt'} \frac{du}{dt} \right\rangle = \frac{d^{2}}{d \xi^{2}}C \left( \tau \right) - \frac{d^{2}}{d \tau^{2}} C \left( \tau \right)$ (22) But since $C$ is a function only of $\tau$, the derivative of it with respect to $\xi$ is identically zero. Thus we are left with: $\left\langle \frac{du'}{dt'} \frac{du}{dt} \right\rangle = - \frac{d^{2}}{d \tau^{2}} C \left( \tau \right)$ (23) And finally we need the second trick.
Let's evaluate both sides at $t = t'$ (or $\tau = 0$) to obtain the mean square derivative as: $\left\langle \left( \frac{du}{dt} \right)^{2} \right\rangle = - \frac{d^{2}}{d \tau^{2}} C \left( \tau \right)|_{ \tau = 0}$ (24) But from our definition of the Taylor microscale and the facts that $C \left( 0 \right) = \left\langle u^{2} \right\rangle$ and $C \left( \tau \right) = \left\langle u^{2} \right\rangle \rho \left( \tau \right)$, this is exactly the same as: $\left\langle \left( \frac{du}{dt} \right)^{2} \right\rangle = 2 \frac{ \left\langle u^{2} \right\rangle}{\lambda^{2}_{\tau}}$ (25) This amazingly simple result is very important in the study of turbulence, especially after we extend it to spatial derivatives.

## Time averages of stationary processes

It is common practice in many scientific disciplines to define a time average by integrating the random variable over a fixed time interval, i.e., $U_{T} \equiv \frac{1}{T} \int^{T_{2}}_{T_{1}} u \left( t \right) dt$ (26) For the stationary random processes we are considering here, we can define $T_{1}$ to be the origin in time and simply write: $U_{T} \equiv \frac{1}{T} \int^{T}_{0} u \left( t \right) dt$ (27) where $T = T_{2} - T_{1}$ is the integration time. Figure 5.4 shows a portion of a stationary random signal over which such an integration might be performed. The time integral of $u \left( t \right)$ over the interval $\left( 0, T \right)$ corresponds to the shaded area under the curve. Now since $u \left( t \right)$ is random and since it forms the upper boundary of the shaded area, it is clear that the time average, $U_{T}$, is a lot like the estimator for the mean based on a finite number of independent realizations, $X_{N}$, which we encountered earlier in section Estimation from a finite number of realizations (see Elements of statistical analysis). It will be shown in the analysis presented below that if the signal is stationary, the time average defined by equation 27 is an unbiased estimator of the true average $U$. Moreover, the estimator converges to $U$ as the time becomes infinite; i.e., for stationary random processes $U = \lim_{T \rightarrow \infty} \frac{1}{T} \int^{T}_{0} u \left( t \right) dt$ (28) Thus the time and ensemble averages are equivalent in the limit as $T \rightarrow \infty$, but only for a stationary random process.

## Bias and variability of time estimators

It is easy to show that the estimator, $U_{T}$, is unbiased by taking its ensemble average; i.e., $\left\langle U_{T} \right\rangle = \left\langle \frac{1}{T} \int^{T}_{0} u \left( t \right) dt \right\rangle = \frac{1}{T} \int^{T}_{0} \left\langle u \left( t \right) \right\rangle dt$ (29) Since the process has been assumed stationary, $\left\langle u \left( t \right) \right\rangle$ is independent of time. It follows that: $\left\langle U_{T} \right\rangle = \frac{1}{T} \left\langle u \left( t \right) \right\rangle T = U$ (30) To see whether the estimate improves as $T$ increases, the variability of $U_{T}$ must be examined, exactly as we did for $X_{N}$ earlier in section Bias and convergence of estimators (see chapter The elements of statistical analysis).
To do this we need the variance of $U_{T}$ given by: $\begin{matrix} var \left[ U_{T} \right] & = & \left\langle \left[ U_{T} - \left\langle U_{T} \right\rangle \right]^{2} \right\rangle = \left\langle \left[ U_{T} - U \right]^{2} \right\rangle \\ & = & \frac{1}{T^{2}} \left\langle \left\{ \int^{T}_{0} \left[ u \left( t \right) - U \right] dt \right\}^{2} \right\rangle \\ & = & \frac{1}{T^{2}} \left\langle \int^{T}_{0} \int^{T}_{0} \left[ u \left( t \right) - U \right] \left[ u \left( t' \right) - U \right] dtdt' \right\rangle \\ & = & \frac{1}{T^{2}} \int^{T}_{0} \int^{T}_{0} \left\langle u'\left( t \right) u'\left( t' \right) \right\rangle dtdt' \\ \end{matrix}$ (31) But since the process is assumed stationary $\left\langle u' \left( t \right) u' \left( t' \right) \right\rangle = C \left( t' - t \right)$, where $C \left( t' - t \right) = \left\langle u^{2} \right\rangle \rho \left( t'-t \right)$ and $\rho \left( t'-t \right)$ is the correlation coefficient. Therefore the integral can be rewritten as: $\begin{matrix} var \left[ U_{T} \right] & = & \frac{1}{T^{2}} \int^{T}_{0} \int^{T}_{0} C \left( t' - t \right) dtdt' \\ & = & \frac{ \left\langle u^{2} \right\rangle }{ T^{2} } \int^{T}_{0} \int^{T}_{0} \rho \left( t' - t \right) dtdt' \\ \end{matrix}$ (33) Now we need to apply some fancy calculus. If new variables $\tau= t'-t$ and $\xi= t'+t$ are defined, the double integral can be transformed to (see Figure 5.5): $var \left[ U_{T} \right] = \frac{var \left[ u \right]}{2 T^{2}} \left[ \int^{T}_{0} d \tau \int^{T-\tau}_{\tau} d \xi \rho \left( \tau \right) + \int^{0}_{-T} d \tau \int^{T+\tau}_{-\tau} d \xi \rho \left( \tau \right) \right]$ (35) where the factor of $1/2$ arises from the Jacobian of the transformation. The integrals over $d \xi$ can be evaluated directly to yield: $var \left[ U_{T} \right] = \frac{var \left[ u \right]}{2 T^{2}} \left\{ \int^{T}_{0} \rho \left( \tau \right) \left[ T - \tau \right] d \tau + \int^{0}_{-T} \rho \left( \tau \right) \left[ T + \tau \right] d \tau \right\}$ (36) By noting that the autocorrelation is symmetric, the second integral can be transformed and added to the first to yield at last the result we seek as: $var \left[ U_{T} \right] = \frac{var \left[ u \right]}{T} \int^{T}_{-T} \rho \left( \tau \right) \left[ 1 - \frac{ \left| \tau \right| }{T} \right] d \tau$ (37) Now if our averaging time, $T$, is chosen so large that $\left| \tau \right| / T << 1$ over the range for which $\rho \left( \tau \right)$ is non-zero, the integral reduces to: $\begin{matrix} var \left[ U_{T} \right] & \approx & \frac{2 var \left[ u \right]}{T} \int^{T}_{0} \rho \left( \tau \right) d \tau \\ & = & \frac{2 T_{int}}{T} var \left[ u \right] \\ \end{matrix}$ (38) where $T_{int}$ is the integral scale defined by equation 10. Thus the variability of our estimator is given by: $\epsilon^{2}_{U_{T}} = \frac{2T_{int}}{T} \frac{var \left[ u \right]}{U^{2}}$ (39) Therefore the estimator does, in fact, converge (in mean square) to the correct result as the averaging time, $T$, increases relative to the integral scale, $T_{int}$. There is a direct relationship between equation 39 and equation 52 in chapter The elements of statistical analysis (section Bias and convergence of estimators) which gave the mean square variability for the ensemble estimate from a finite number of statistically independent realizations, $X_{N}$. Obviously the effective number of independent realizations for the finite time estimator is: $N_{eff} = \frac{T}{2T_{int}}$ (40) so that the two expressions are equivalent.
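The variability result above is easy to check by simulation. The following sketch is an addition (not from the lecture notes): it uses a first-order autoregressive process as a stand-in for a stationary signal, and all parameter choices are arbitrary. For $T \gg T_{int}$ the sample variance of $U_T$ should approach $(2T_{int}/T)\,var[u]$.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, a = 0.01, 0.98            # sample spacing and AR(1) coefficient (assumed)
T_int = -dt / np.log(a)       # integral scale of the exponential correlation
T = 200 * T_int               # averaging time, chosen much larger than T_int
n = int(T / dt)

def time_average():
    """One realization of the finite-time average U_T of a zero-mean AR(1)
    process constructed so that var[u] = 1."""
    noise = np.sqrt(1.0 - a ** 2) * rng.standard_normal(n)
    u = np.empty(n)
    u[0] = rng.standard_normal()
    for k in range(1, n):
        u[k] = a * u[k - 1] + noise[k]
    return u.mean()

estimates = np.array([time_average() for _ in range(400)])
print("measured  var[U_T]  :", estimates.var())
print("predicted 2*T_int/T :", 2.0 * T_int / T)   # var[u] = 1 here
```

The two printed numbers agree to within the sampling scatter of 400 realizations, which is the content of equation 38.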
Thus, in effect, portions of the record separated by two integral scales behave as though they were statistically independent, at least as far as convergence of finite time estimators is concerned. Thus what is required for convergence is, again, many independent pieces of information. This is illustrated in Figure 5.6. That the length of the record should be measured in terms of the integral scale should really be no surprise, since it is a measure of the rate at which a process forgets its past. Example. It is desired to measure the mean velocity in a turbulent flow to within an rms error of 1% (i.e. $\epsilon = 0.01$). The expected fluctuation level of the signal is 25% and the integral scale is estimated as 100 ms. What is the required averaging time? From equation 39 $\begin{matrix} T & = & \frac{2T_{int}}{\epsilon^{2}} \frac{var \left[ u \right]}{U^{2}} \\ & = & 2 \times 0.1 \times (0.25)^{2} / (0.01)^{2} = 125 sec \\ \end{matrix}$ (41) Similar considerations apply to any other finite time estimator and equation 55 from chapter Statistical analysis can be applied directly as long as equation 40 is used for the number of independent samples. It is common experimental practice to not actually carry out an analog integration. Rather the signal is sampled at fixed intervals in time by digital means and the averages are computed as for an ensemble with a finite number of realizations. Regardless of the manner in which the signal is processed, only a finite portion of a stationary time series can be analyzed and the preceding considerations always apply. It is important to note that data sampled more rapidly than once every two integral scales do not contribute to the convergence of the estimator since they cannot be considered independent. If $N$ is the actual number of samples acquired and $\Delta t$ is the time between samples, then the effective number of independent realizations is $N_{eff} = \left\{ \begin{array}{lll} N \Delta t / \left( 2T_{int} \right) & if & \Delta t < 2T_{int} \\ N & if & \Delta t \geq 2T_{int} \\ \end{array} \right.$ (42) It should be clear that if you sample faster than $\Delta t = 2T_{int}$ you are processing unnecessary data which does not help your statistics converge. You may wonder why one would ever take data faster than absolutely necessary, since it simply fills up your computer memory with lots of statistically redundant data. When we talk about measuring spectra you will learn that for spectral measurements it is necessary to sample much faster to avoid spectral aliasing. Many wrongly infer that they must sample at these higher rates even when measuring just moments. Obviously this is not the case if you are not measuring spectra.

## Random fields of space and time

To this point only temporally varying random fields have been discussed. For turbulence, however, random fields can be functions of both space and time.
For example, the temperature $\theta$ could be a random scalar function of time $t$ and position $\stackrel{\rightarrow}{x}$, i.e., $\theta = \theta \left( \stackrel{\rightarrow}{x} , t \right)$ (43) The velocity is another example of a random vector function of position and time, i.e., $\stackrel{\rightarrow}{u} = \stackrel{\rightarrow}{u} \left( \stackrel{\rightarrow}{x},t \right)$ (44) or in tensor notation, $u_{i} = u_{i} \left( \stackrel{\rightarrow}{x},t \right)$ (45) In the general case, the ensemble averages of these quantities are functions of both position and time; i.e., $\left\langle \theta \left( \stackrel{\rightarrow}{x},t \right) \right\rangle \equiv \Theta \left( \stackrel{\rightarrow}{x},t \right)$ (46) $\left\langle u_{i} \left( \stackrel{\rightarrow}{x},t \right) \right\rangle \equiv U_{i} \left( \stackrel{\rightarrow}{x},t \right)$ (47) If only stationary random processes are considered, then the averages do not depend on time and are functions of $\stackrel{\rightarrow}{x}$ only; i.e., $\left\langle \theta \left( \stackrel{\rightarrow}{x},t \right) \right\rangle \equiv \Theta \left( \stackrel{\rightarrow}{x} \right)$ (48) $\left\langle u_{i} \left( \stackrel{\rightarrow}{x},t \right) \right\rangle \equiv U_{i} \left( \stackrel{\rightarrow}{x}\right)$ (49) Now the averages may not be position dependent either. For example, if the averages are independent of the origin in position, then the field is said to be homogeneous. Homogeneity (the noun corresponding to the adjective homogeneous) is exactly analogous to stationarity except that position is now the variable, and not time. It is, of course, possible (at least in concept) to have homogeneous fields which are either stationary or non-stationary. Since position, unlike time, is a vector quantity, it is also possible to have only partial homogeneity. For example, a field can be homogeneous in the $x_{1}-$ and $x_{3}-$ directions, but not in the $x_{2}-$ direction, so that $U_{i}=U_{i}(x_{2})$ only. In fact, it appears to be dynamically impossible to have flows which are homogeneous in all variables and stationary as well, but the concept is useful, nonetheless. Homogeneity will be seen to have powerful consequences for the equations governing the averaged motion, since the spatial derivative of any averaged quantity must be identically zero. Thus even homogeneity in only one direction can considerably simplify the problem. For example, in the Reynolds stress transport equation, the entire turbulence transport is exactly zero if the field is homogeneous.

## Multi-point statistics in homogeneous fields

The concept of homogeneity can also be extended to multi-point statistics. Consider, for example, the correlation between the velocity at one point and that at another as illustrated in Figure 5.7.
If the time dependence is suppressed and the field is assumed statistically homogeneous, this correlation is a function only of the separation of the two points, i.e., $\left\langle u_{i} \left( \stackrel{\rightarrow}{x} , t \right) u_{j} \left( \stackrel{\rightarrow}{x'} , t \right) \right\rangle \equiv B_{i,j} \left( \stackrel{\rightarrow}{r} \right)$ (50) where $\stackrel{\rightarrow}{r}$ is the separation vector defined by $\stackrel{\rightarrow}{r} = \stackrel{\rightarrow}{x'} - \stackrel{\rightarrow}{x}$ (51) or $r_{i} = x'_{i} - x_{i}$ (52) Note that the convention we shall follow for vector quantities is that the first subscript on $B_{i,j}$ is the component of velocity at the first position, $\stackrel{\rightarrow}{x}$ , and the second subscript is the component of velocity at the second, $\stackrel{\rightarrow}{x'}$. For scalar quantities we shall simply put a simbol for the quantity to hold the place. For example, we would write the two-point temperature correlation in a homogeneous field by: $\left\langle \theta \left( \stackrel{\rightarrow}{x},t \right) \theta \left( \stackrel{\rightarrow}{x'},t \right) \right\rangle \equiv B_{\theta , \theta} \left( \stackrel{\rightarrow}{r} \right)$ (53) A mixed vector/scalar correlation like the two-point temperature velocity correlation would be written as: $\left\langle u_{i} \left( \stackrel{\rightarrow}{x} , t \right) \theta \left( \stackrel{\rightarrow}{x'},t \right) \right\rangle \equiv B_{i,\theta } \left( \stackrel{\rightarrow}{r} \right)$ (54) On the other hand, if we meant for the temperature to be evaluated at $\stackrel{\rightarrow}{x}$ and the velocity at $\stackrel{\rightarrow}{x'}$ we would have to write: $\left\langle \theta \left( \stackrel{\rightarrow}{x},t \right) u_{i} \left( \stackrel{\rightarrow}{x'},t \right) \right\rangle \equiv B_{ \theta, i } \left( \stackrel{\rightarrow}{r} \right)$ (55) Now most books don't bother with the subscript notation, and simply give each new correlation a new symbol. At first this seems much simpler; and it is as long as you are only dealing with one or two different correlations. But introduce a few more, then read about a half-dozen pages, and you will find you completely forget what they are or how they were put together. It is usually very important to know exactly what you are talking about, so we will use this comma system to help us remember. It is easy to see that the consideration of vector quantities raises special considerations. For example, the correlation between a scalar function of position at two points is symmetrical in $\stackrel{\rightarrow}{r}$ , i.e., $B_{\theta,\theta} \left( \stackrel{\rightarrow}{r} \right) = B_{\theta,\theta} \left( - \stackrel{\rightarrow}{r} \right)$ (56) This is easy to show from the definition of $B_{\theta,\theta}$ and the fact that the field is homogeneous. 
Simply shift each of the position vectors by the same amount $- \stackrel{\rightarrow}{r}$ as shown in Figure 5.8 to obtain: $\begin{matrix} B_{\theta,\theta}\left( \stackrel{\rightarrow}{r},t \right) & \equiv & \left\langle \theta\left( \stackrel{\rightarrow}{x}, t \right) \theta\left( \stackrel{\rightarrow}{x'}, t \right) \right\rangle \\ & = & \left\langle \theta \left( \stackrel{\rightarrow}{x} - \stackrel{\rightarrow}{r} , t \right) \theta \left( \stackrel{\rightarrow}{x'} - \stackrel{\rightarrow}{r} , t \right) \right\rangle \\ & = & B_{\theta,\theta}\left( - \stackrel{\rightarrow}{r},t \right) \\ \end{matrix}$ (57) since $\stackrel{\rightarrow}{x'} - \stackrel{\rightarrow}{r} = \stackrel{\rightarrow}{x}$; i.e., the points are reversed and the separation vector is pointing the opposite way. Such is not the case, in general, for vector functions of position. For example, see if you can prove to yourself the following: $B_{\theta,i} \left( \stackrel{\rightarrow}{r} \right) = B_{i,\theta} \left( - \stackrel{\rightarrow}{r} \right)$ (58) and $B_{i,j} \left( \stackrel{\rightarrow}{r} \right) = B_{j,i} \left( - \stackrel{\rightarrow}{r} \right)$ (59) Clearly the latter is symmetrical in the variable $\stackrel{\rightarrow}{r}$ only when $i = j$. These properties of the two-point correlation function will be seen to play an important role in determining the interrelations among the different two-point statistical quantities. They will be especially important when we talk about spectral quantities.

## Spatial integral and Taylor microscales

Just as for a stationary random process, correlations between spatially varying, but statistically homogeneous, random quantities ultimately go to zero; i.e., they become uncorrelated as their locations become widely separated. Because position (or relative position) is a vector quantity, however, the correlation may die off at different rates in different directions. Thus direction must be an important part of the definitions of the integral scales and microscales. Consider, for example, the one-dimensional spatial correlation which is obtained by measuring the correlation between the temperature at two points along a line in the x-direction, say, $B^{(1)}_{\theta,\theta} \left( r \right) \equiv \left\langle \theta \left( x_{1} + r , x_{2} , x_{3} , t \right) \theta \left( x_{1} , x_{2} , x_{3} , t \right) \right\rangle$ (60) The superscript "(1)" denotes "the coordinate direction in which the separation occurs". This distinguishes it from the vector separation of $B_{\theta,\theta}$ above. Also, note that the correlation at zero separation is just the variance; i.e., $B^{(1)}_{\theta,\theta} \left( 0 \right) = \left\langle \theta^{2} \right\rangle$ (61) The integral scale in the $x$-direction can be defined as: $L^{(1)}_{\theta} \equiv \frac{1}{ \left\langle \theta^{2} \right\rangle} \int^{\infty}_{0} \left\langle \theta \left( x + r, y,z,t \right) \theta \left( x,y,z,t \right) \right\rangle dr$ (62) It is clear that there are at least two more integral scales which could be defined by considering separations in the y and z directions.
Thus $L^{(2)}_{\theta} \equiv \frac{1}{ \left\langle \theta^{2} \right\rangle} \int^{\infty}_{0} \left\langle \theta \left( x,y + r,z,t \right) \theta \left( x,y,z,t \right) \right\rangle dr$ (63) and $L^{(3)}_{\theta} \equiv \frac{1}{ \left\langle \theta^{2} \right\rangle} \int^{\infty}_{0} \left\langle \theta \left( x,y,z + r,t \right) \theta \left( x,y,z,t \right) \right\rangle dr$ (64) In fact, an integral scale could be defined for any direction simply by choosing the components of the separation vector $\stackrel{\rightarrow}{r}$. This situation is even more complicated when correlations of vector quantities are considered. For example, consider the correlation of the velocity vectors at two points, $B_{i,j} \left( \stackrel{\rightarrow}{r} \right)$. Clearly $B_{i,j} \left( \stackrel{\rightarrow}{r} \right)$ is not a single correlation, but rather nine separate correlations: $B_{1,1} \left( \stackrel{\rightarrow}{r} \right)$ , $B_{1,2} \left( \stackrel{\rightarrow}{r} \right)$ , $B_{1,3} \left( \stackrel{\rightarrow}{r} \right)$ , $B_{2,1} \left( \stackrel{\rightarrow}{r} \right)$ , $B_{2,2} \left( \stackrel{\rightarrow}{r} \right)$ , etc. For each of these an integral scale can be defined once a direction for the separation vector is chosen. For example, the integral scales associated with $B_{1,1}$ for the principal directions are $L^{(1)}_{1,1} \equiv \frac{1}{\left\langle u^{2}_{1} \right\rangle} \int^{\infty}_{0} B_{1,1} \left( r,0,0 \right) dr$ (65) $L^{(2)}_{1,1} \equiv \frac{1}{\left\langle u^{2}_{1} \right\rangle} \int^{\infty}_{0} B_{1,1} \left( 0,r,0 \right) dr$ (66) $L^{(3)}_{1,1} \equiv \frac{1}{\left\langle u^{2}_{1} \right\rangle} \int^{\infty}_{0} B_{1,1} \left( 0,0,r \right) dr$ (67) Similar integral scales can be defined for the other componentsof the correlation tensor. Two of particular importance in the development of the turbulence theory are: $L^{(2)}_{1,1} \equiv \frac{1}{\left\langle u^{2}_{1} \right\rangle} \int^{\infty}_{0} B_{1,1} \left( 0,r,0 \right) dr$ (68) $L^{(1)}_{2,2} \equiv \frac{1}{\left\langle u^{2}_{2} \right\rangle} \int^{\infty}_{0} B_{2,2} \left( r,0,0 \right) dr$ (69) In general, each of these integral scales will be different, unless restrictions beyond simple homogeneity are placed on the process (e.g., like isotropy discussed below). Thus, it is important to specify precisely which integral scale is being referred to; i.e., which components of the vector quantities are being used and in which direction the integration is being performed. Similar considerations apply to the Taylor microscales, regardless of whether they are being determined from the correlations at small separations, or from the mean square fluctuating gradients. 
The two most commonly used Taylor microscales are often referred to as $\lambda_{f}$ and $\lambda_{g}$ and are defined by $\lambda^{2}_{f} \equiv 2 \frac{ \left\langle u^{2}_{1} \right\rangle }{ \left\langle \left[ \partial u_{1} / \partial x_{1} \right]^{2} \right\rangle }$ (70) and $\lambda^{2}_{g} \equiv 2 \frac{ \left\langle u^{2}_{1} \right\rangle }{ \left\langle \left[ \partial u_{1} / \partial x_{2} \right]^{2} \right\rangle }$ (71) The subscripts f and g refer to the autocorrelation coefficients defined by: $f \left( r \right) \equiv \frac{\left\langle u_{1} \left( x_{1} + r,x_{2},x_{3} \right) u_{1} \left( x_{1},x_{2},x_{3} \right) \right\rangle}{ \left\langle u^{2}_{1} \right\rangle } = \frac{B_{1,1} \left( r,0,0 \right)}{ B_{1,1} \left( 0,0,0 \right) }$ (72) and $g \left( r \right) \equiv \frac{\left\langle u_{1} \left( x_{1},x_{2}+r,x_{3} \right) u_{1} \left( x_{1},x_{2},x_{3} \right) \right\rangle}{ \left\langle u^{2}_{1} \right\rangle } = \frac{B_{1,1} \left( 0,r,0 \right)}{ B_{1,1} \left( 0,0,0 \right) }$ (73) It is straightforward to show from the definitions that $\lambda_{f}$ and $\lambda_{g}$ are related to the curvature of the $f$ and $g$ correlation functions at $r=0$. Specifically, $\lambda^{2}_{f}= \frac{2}{d^{2} f / dr^{2} |_{r=0} }$ (74) and $\lambda^{2}_{g}= \frac{2}{d^{2} g / dr^{2} |_{r=0} }$ (75) Since both $f$ and $g$ are symmetrical functions of $r$, $df/dr$ and $dg/dr$ must be zero at $r=0$. It follows immediately that the leading $r$-dependent term in the expansions about the origin of both autocorrelations are of parabolic form; i.e., $f \left( r \right) = 1 - \frac{r^{2}}{\lambda^{2}_{f}} + \cdots$ (76) and $g \left( r \right) = 1 - \frac{r^{2}}{\lambda^{2}_{g}} + \cdots$ (77) This is illustrated in Figure 5.9 which shows that the Taylor microscales are the intersection with the $r$-axis of a parabola fitted to the appropriate correlation function at the origin. Fitting a parabola is a common way to determine the Taylor microscale, but to do so you must make sure you resolve accurately to scales much smaller than it (typically an order of magnitude smaller is required). Otherwise you are simply determining the spatial filtering of your probe or numerical algorithm. ## Credits This text was based on "Lectures in Turbulence for the 21st Century" by Professor William K. George, Professor of Turbulence, Chalmers University of Technology, Gothenburg, Sweden.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 216, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.859460175037384, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/32780/where-is-the-critical-moment-where-the-microcanonical-ensemble-enters-the-justif
# Where is the critical moment where the microcanonical ensemble enters the justification for the equilibrium state?

As explained in many books, for the microscopic justification of the second law of thermodynamics (let's formulate it as: the total entropy takes its maximum among all possible exchanges between two systems), you don't have to enter the realm of the canonical ensemble. Where is the microcanonical phase space density $$\varrho=const. \ \ \ \ \text{if}\ \ \ \ E<H<E+\Delta,$$ used in the computation of the phase space which comes from the composition of two other systems? Let $E=E_1+E_2$. For fixed energies $E_1$ and $E_2$, the new volume is given by $\Gamma(E)=\Gamma(E_1)\Gamma(E_2)$ and at the intermediate point where one considers all the possible energy exchanges, one writes $\Gamma(E)=\sum_{\epsilon}\Gamma(E_1+\epsilon)\Gamma(E_2-\epsilon)$. I don't see how the construction of the composed phase space is computationally influenced by $\varrho$, and it also seems to me that this composed space would be computable without stating $\varrho$ explicitly. It's a volume after all, it should just be the product in any case. Furthermore, is $\varrho$ involved in the derivation that the maximum among the possible composed phase volumes is very sharp? (Is there a general derivation?) - I think one should also consider the number of particle exchanges. – jjcale Jul 24 '12 at 23:42 @jjcale: One certainly can (I don't know if one has to, though). – Nick Kidman Jul 25 '12 at 7:04

## 1 Answer

The "microcanonical ensemble" is just saying each of the $\Gamma(E)$ states is equally likely. When you multiply the volumes at energy $\pm\epsilon$ and add over $\epsilon$, you are using the microcanonical ensemble assumption that all the states are equally likely, to get that the probability is the volume. As for the sharpness, this is from the thermodynamic observation that one of the systems 1 or 2 is very big, so that it has a value of $\partial_U S$, which tells you how the volume changes with energy. This volume changes in a way proportional to the macroscopic size, since the S is extensive, and there are Avogadro's number of particles. - Okay, so in the first paragraph you say that it justifies the ' "geometric" volume equals probability'-statement, right? In the second paragraph, is the "has a volume" sentence complete? What has this value you mention - it would be $\frac{\Gamma'(E)}{\Gamma(E)}$, wouldn't it? – Nick Kidman Jul 24 '12 at 21:49 @NickKidman: I don't find an incomplete sentence, nor the phrase 'has a volume'. I don't know what you mean by the ratio of the volumes. If you calculate in one example, like an ideal gas or a harmonic solid, you will no longer be confused on these things. – Ron Maimon Jul 24 '12 at 22:01 "has a volume" was meant to be "has a value". The ratio is the derivative of $S(E)=\log(\Gamma(E))$. – Nick Kidman Jul 24 '12 at 22:55
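(Added remark, not part of the original thread, and assuming both subsystems are macroscopic.) The sharpness mentioned in the answer can be sketched in the standard textbook way. Writing $S_i=\log\Gamma_i$, each term of $\Gamma(E)=\sum_{\epsilon}\Gamma_1(E_1+\epsilon)\Gamma_2(E_2-\epsilon)$ is $\exp\left[S_1(E_1+\epsilon)+S_2(E_2-\epsilon)\right]$. At the maximizing exchange $\epsilon^*$ the first derivatives balance, $\partial_E S_1=\partial_E S_2$ (equal temperatures), so expanding to second order gives
$$S_1+S_2\approx\left.(S_1+S_2)\right|_{\epsilon^*}-\frac{(\epsilon-\epsilon^*)^2}{2}\left(\left|\partial_E^2 S_1\right|+\left|\partial_E^2 S_2\right|\right),$$
i.e. the summand is a Gaussian in $\epsilon$. Since $S$ is extensive, $|\partial_E^2 S|\sim 1/N$, so the width of the Gaussian is of order $\sqrt{N}$ while the energies themselves are of order $N$, and the relative width of the peak is of order $1/\sqrt{N}$. The constant density $\varrho$ enters only through the statement that the probability of a given energy split is proportional to the corresponding phase-space volume.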
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9411452412605286, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/47095?sort=newest
## Series of squared Fourier coefficients Hi, if the Fourier series development of $g(t)$ (periodic, $C^\infty$) is $$g(t)=\sum_{-\infty}^{+\infty}a_n e^{in\omega t}$$ does the series $$\sum_{-\infty}^{+\infty}\frac{a_n^2}{n^2}$$ converge toward something known, like the average of $g^2$ or something like that? - What do you know about $g(t)$? If it is merely $L^1$, the best you can say is given by the Riemann-Lebesgue lemma (which in particular tells you that the series you wrote down converges, since $a_n$ is better than bounded). Please modify the question to be more precise on what you are asking about. – Willie Wong Nov 23 2010 at 14:16 2 Do you mean $a_n^2$ or $|a_n|^2$? Assume for now $a_0 = 0$. If it were not, then the sum diverges at $n = 0$. In this case, there exists $G(t)$ smooth periodic with $G' = g$. $G$ has the property that its Fourier coefficients, $b_n$, are given by $b_n = a_n / (in\omega)$. So if you are looking at $|a_n|^2$ your expression is just the $L^2$ norm of $G$ (up to some constant factor due to the normalization of the Fourier transform), and if you are looking at $a_n^2$, your expression is some constant factor times $(g*g)(0)$, where the convolution is evaluated over the circle. – Willie Wong Nov 23 2010 at 15:46 1 @Willie, I deleted my answer, since (a) your comment is better, and (b) as we established in the comments to my answer, this is a bit too elementary for MO. In fact, I think maybe we ought to close this one. – Harald Hanche-Olsen Nov 23 2010 at 15:59 ## 1 Answer Assume $a_0=0$, which easily can be arranged by adding a constant to $g$. Then the function $$h(t)=\frac1{i\omega } \sum_{n}\frac{a_n}ne^{in\omega t}$$ is the primitive of $g$. Let $h^* (x)=\overline{h(-x)}$, then the sum you asked for equals the inner product $$\langle h,h^*\rangle.$$ -
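Willie Wong's remark about the $|a_n|^2$ case is easy to check numerically. The sketch below is mine, not part of the thread; the coefficients are made up (with $a_0=0$), and it verifies that $\sum_n |a_n|^2/n^2 = \omega^2\cdot\frac{1}{T}\int_0^T |G(t)|^2\,dt$ for the zero-mean primitive $G$ of $g$.

```python
import numpy as np

# Check: sum |a_n|^2 / n^2 equals omega^2 times the mean square of the
# zero-mean primitive G of g.  The coefficients below are arbitrary, a_0 = 0.

omega = 2 * np.pi                       # period T = 1
a = {1: 0.5 + 0.2j, -1: 0.3, 2: -0.1j, -2: 0.25, 3: 0.05}

t = np.linspace(0.0, 1.0, 20000, endpoint=False)
G = sum(a_n / (1j * n * omega) * np.exp(1j * n * omega * t) for n, a_n in a.items())

lhs = sum(abs(a_n) ** 2 / n ** 2 for n, a_n in a.items())
rhs = omega ** 2 * np.mean(np.abs(G) ** 2)
print(lhs, rhs)        # the two numbers agree up to discretization error
```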
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9506115913391113, "perplexity_flag": "head"}
http://mathoverflow.net/questions/89658/adjacency-matrices-of-graphs-as-parity-check-matrices-of-error-correcting-codes
## Adjacency matrices of graphs as parity check matrices of error correcting codes Consider a bipartite graph and its adjacency matrix, which has the block form $$\begin{pmatrix} 0 & A^t \\ A & 0 \end{pmatrix}.$$ Take the matrix $A$ and consider the null-space $L$ of $A$ in $F_2^N$. Question: Can we say something about $L$ from a graph-theoretic perspective? For example, can we determine the minimum Hamming weight of vectors in $L$? In the error-correcting-codes community the following words are used: the original graph is called the Tanner graph for $A$; the matrix $A$ is called the parity-check matrix. Let $\dim(L)=k$; any linear map $F_2^k \to L\subset F_2^N$ is called an "encoder". - 1 What you have called the incidence matrix is actually the adjacency matrix (as will be clear if you read the article you've linked to). Your approach translates all coding theory questions into graph theory questions. It seems unlikely that much will be gained by the translation. Of course there are many papers studying the case when $A$ is sparse (LDPC codes). – Chris Godsil Feb 27 2012 at 13:35 @Chris Thank you, I corrected it! Shame on me, I am always mixing these matrices. Fields medalist G. Margulis constructed some "good" LDPC codes from Cayley graphs (???) of some groups GL(F,p) - as I heard, but do not really understand. There are lots of papers about "expander" graphs, which is probably related to this question... I am just starting to learn these things, so maybe the question is not really good... – Alexander Chervov Feb 27 2012 at 13:52
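As a concrete (and entirely made-up) illustration of the objects in the question, the Python sketch below computes a basis of the null space $L$ of a small parity-check matrix $A$ over $F_2$ and finds the minimum Hamming weight of $L\setminus\{0\}$ by brute force; for large codes this brute force is of course infeasible, which is one reason the question is interesting.

```python
import itertools
import numpy as np

def gf2_nullspace(A):
    """Return a list of basis vectors of the null space of A over GF(2)."""
    A = A.copy() % 2
    rows, cols = A.shape
    pivot_cols, r = [], 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if A[i, c]), None)
        if pivot is None:
            continue
        A[[r, pivot]] = A[[pivot, r]]          # swap the pivot row into place
        for i in range(rows):
            if i != r and A[i, c]:
                A[i] ^= A[r]                   # eliminate the column over GF(2)
        pivot_cols.append(c)
        r += 1
    basis = []
    for f in (c for c in range(cols) if c not in pivot_cols):   # free columns
        v = np.zeros(cols, dtype=int)
        v[f] = 1
        for row, pc in zip(A, pivot_cols):
            if row[f]:
                v[pc] = 1
        basis.append(v)
    return basis

A = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])             # toy parity-check matrix, N = 6

basis = gf2_nullspace(A)
k = len(basis)
codewords = [np.bitwise_xor.reduce([b for b, bit in zip(basis, coeffs) if bit], axis=0)
             for coeffs in itertools.product([0, 1], repeat=k) if any(coeffs)]
print("dim L =", k, " minimum Hamming weight =", min(int(w.sum()) for w in codewords))
```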
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.929298996925354, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/72251?sort=votes
## Packing moebius bands I know that in the smooth category the following is true: there are at most countably many embedded Moebius bands in Euclidean 3-space. Is this also true in the topological category? - What exactly are you counting? Isotopy classes of single embeddings, or perhaps the number of disjoint embeddings? – Ryan Budney Aug 6 2011 at 17:04 Disjoint embeddings. – Michal Aug 6 2011 at 17:09 4 How do you know this in the smooth category? – Igor Rivin Aug 6 2011 at 20:48 A few years ago I came across a paper by Grushin and Palamodov from 1962 called "On the maximal number of mutually disjoint, pairwise homeomorphic figures which can be packed in 3-space" (Uspekhi Mat. Nauk, 1962, Volume 17, Issue 3(105), Pages 163–168) where the case of Möbius strips is considered. Unfortunately the paper is in Russian and I know of no translations so I wasn't able to read it...and I am also not sure if this answers the OP's question or it's just the case he affirms to know already...It would be great if someone could tell me some details about what that paper says! – godelian Aug 7 2011 at 1:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.937467634677887, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/86966?sort=oldest
Identifying factors of higher order in a determinant Consider an $n\times n$ matrix $A$ whose elements are some polynomials in the indeterminates $x_1, x_2,\ldots,x_m$. To calculate the determinant of such a matrix, one of the usual ways is to treat the determinant as a polynomial in $x_1,\ldots,x_m$ and identify its factors. The usual idea is that if setting $x = y$ makes the determinant vanish, then $x - y$ is one of the factors. What I do not understand, however, is how to identify its order, that is, the exact $k$ such that $(x - y)^k$ divides the determinant. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9398252367973328, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/72432/cyclic-group-with-exactly-3-subgroups-itself-e-and-one-of-order-7-isn
# Cyclic group with exactly 3 subgroups: itself, $\{e\}$, and one of order $7$. Isn't this impossible? Suppose a cyclic group has exactly three subgroups: $G$ itself, $\{e\}$, and a subgroup of order $7$. What is $|G|$? What can you say if $7$ is replaced with $p$ where $p$ is a prime? Well, I see a contradiction: the order should be $7$, but that is only possible if there are only two subgroups. Isn't it impossible to have three subgroups that fit this description? If $G$ is cyclic of order $n$, then $\frac{n}{k} = 7$. But if there are only three subgroups, and one is of order $1$, then $7$ is the only factor of $n$, and $n = 7$. But then there are only two subgroups. Is this like a trick question? edit: never mind. The order is $7^2$, right? - 2 The cyclic group of order $7^2$ has exactly three subgroups. – Mariano Suárez-Alvarez♦ Oct 13 '11 at 22:35 Hint: $\frac{n}{k}=7$; what is your $k$? What can it be? – N. S. Oct 13 '11 at 22:35 Hint: Think about the cyclic group of order $49$. – André Nicolas Oct 13 '11 at 22:36 @MarianoSuárez-Alvarez ha yeah, I realized this as I was editing my question :) – iDontKnowBetter Oct 13 '11 at 22:37 ## 1 Answer Hint: how can a number other than $7$ not have other factors? -
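The resolution can be checked by brute force: the subgroups of a cyclic group of order $n$ correspond to the divisors of $n$, so "exactly three subgroups" means exactly three divisors, i.e. $n = p^2$. A quick sketch of mine (not from the thread):

```python
# Count subgroups of Z/n (one per divisor of n) and list the n with exactly three.

def num_divisors(n):
    return sum(1 for d in range(1, n + 1) if n % d == 0)

print([n for n in range(2, 60) if num_divisors(n) == 3])   # [4, 9, 25, 49]: squares of primes
```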
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9380608201026917, "perplexity_flag": "head"}
http://mathoverflow.net/questions/27375?sort=oldest
## Geometric interpretation of group rings? For a group $G$, is there an interpretation of $\mathbb C[G]$ as functions over some noncommutative space? If so, what does this space "look like"? What are its properties? How are they related to properties of $G$? - ## 2 Answers The noncommutative space defined by $C[G]$ is (by definition) the dual $\widehat{G}$ of G. There are as many ways to make sense of this space as there are theories of noncommutative geometry. (Edit: In particular if G is not finite you have lots of possible meanings for the group ring, depending on what kind of regularity and support conditions you put on G, or equivalently, what class of representations of G you wish to consider, and I will ignore all such issues - which are the main technical part of the subject - below.) One basic principle is that noncommutative geometry is not about algebras up to isomorphism, it's about algebras up to Morita equivalence -- in other words, it's in fact about categories of modules over algebras (the basic invariant of Morita equivalence). You can think of these (depending on context) as vector bundles or sheaves of some kind on the dual. In this case we're looking at the category of complex representations of G, which are sheaves on the dual $\widehat{G}$. You can think of this as a form of the Fourier transform (modules for functions on G with convolution = modules for functions on the dual with multiplication), though obviously at this level of detail it's a complete tautology. Coarser invariants such as K-theory of group algebras, Hochschild homology etc give invariants, eg K-theory and cohomology, of the dual, noncommutative as it may be. There are many conjectures about this noncommutative topology, most famously the Baum-Connes conjecture relating the K-theory of this "space" to that of classifying spaces associated to G. As to what the dual looks like, this is of course highly dependent on the group. For completely arbitrary groups I don't know of anything meaningful to say beyond structural things of the Baum-Connes flavor, so you have to pick a class of groups to study. If G is abelian, the dual is itself a group (the dual group). The formal thing you can say in general is that the dual of G fibers over the dual of the center of G -- this is a form of Schur's lemma, saying irreducible reps live over a particular point of the dual of the center (ie the center acts by evaluation by a character). You might get some more traction by looking at the "Bernstein center" or Hochschild cohomology --- endomorphisms of the identity functor of G-reps. This is a commutative algebra and the dual fibers over its spectrum. In many cases this is a very good approximation to the dual -- ie the "fibers are finite" (this is what happens for say real and p-adic groups). The orbit method of Kirillov says that for a nilpotent or solvable group, the dual looks like the dual space of the Lie algebra, modulo the coadjoint action. So again that's quite nice. Very very roughly the Langlands philosophy says that for reductive groups G (in particular over local or finite fields) the dual of G is related to conjugacy classes in a dual group $G^\vee$. This is if you'd like a way to make meaningful the observation that conjugacy classes and irreps are in bijection for a finite group -- you roughly want to say they're in CANONICAL bijection if the two groups are "dual".
Rather than say it this coarsely, it's better to think in terms of the Harish-Chandra / Gelfand philosophy, which (again whittled down to one coarse snippet) says that the dual of a reductive group (over any field) is a union of "series", ie a union of subspaces each of which looks like the dual of a torus modulo a Weyl group. In other words, you look at all conjugacy classes of tori in G, for each torus you construct its dual (which is a group now!), and mod out by the symmetries inherited by the torus from its embedding in G, and this is the dual of G roughly. (This is also very close to saying semisimple conjugacy classes in the dual group of G, which is where the Langlands interpretation comes from). Anyway this is saying that the dual is a very nice and manageable, even algebraic, object. Kazhdan formulated this philosophy as saying that the dual of a reductive group is an algebraic object --- the reps of the group over a field F are something like the F-points of one fixed variety (or stack) over the algebraic closure. Anyway one can go much further, and that's what the Langlands program does. - Thank you, this answer is awesome! It is full of new ideas (at least new to me...) I can think about :) – Jan Weidner Jun 7 2010 at 20:54 1 A small remark: while the extension of the orbit method from nilpotent to exponential groups (groups s.t. the exponential map is a diffeomorphism) is straightforward, general solvable groups require extra technical assumptions and machinery (polarizability, Pukánszky condition, etc). – Victor Protsak Jun 8 2010 at 0:36 I hope that people with more expertise also answer, and give more information than I can. My memory is that `$\mathbb C[G]$` is something like the functions on `$\{{\rm pt}\}/G$`. More generally, Connes says that when `$G$` acts on a (nice, say locally compact Hausdorff) space `$X$`, then the functions on the noncommutative space `$X/G$` are given by the semidirect product `$\mathcal C_0(X) \rtimes G$`, where by `$\mathcal C_0(X)$` I mean the functions that are smaller than $\epsilon$ outside of a compact set, and the semidirect product is as a vector space (a C-star completion of) the tensor product `$\mathcal C_0(X) \otimes \mathbb C[G]$`, and the algebra structure is such that `$\mathcal C_0(X)$` and `$\mathbb C[G]$` are subalgebras. Connes encourages this way of thinking about bad quotients; the typical example being the `$\mathbb R$` action on a torus given by an irrational line. I'm not entirely sure that I like this answer, however. First of all, when `$G$` happens to be commutative, then `$\operatorname{Spec}(\mathbb C[G])$` is the dual group, which seems to me very different from how I think about `$\{{\rm pt}\}/G$`. And there are, to my mind, better ways to think about bad quotients, namely through the language of groupoids and stacks. But I am not an expert on NCG. - Thanks for the answer. Do you have an explanation of why $\mathcal{C}_0\otimes \mathbb{C}[G]$ should be functions on the quotient $X/G$? – Jan Weidner Jun 7 2010 at 19:50 3 The languages of stacks/groupoids and nc algebras as above are roughly equivalent - you can encode a groupoid up to equivalence in its groupoid algebra up to Morita equivalence. So you can eg think of vector bundles on a stack X/G as modules over the groupoid algebra - functions on X smash with the group algebra of G.
Depending on what you mean by group algebra, classifying space and representation it's either a complete tautology or deep (the Baum-Connes conjecture) that modules for the group algebra = sheaves on dual = sheaves on BG. – David Ben-Zvi Jun 7 2010 at 19:57 2 @Jan: The center of ${\mathcal C}_0 \otimes C[G]$ is $G$-invariant functions on $X$. So there should be a map from noncommutative $X/G$ to commutative $X/G$, where the fiber over a point is the noncommutative point $Spec C[$stabilizer]. – Allen Knutson Jun 8 2010 at 16:15 @DBZ: Oh, OK. I think I have heard something like this before; it's not a story I understand. – Theo Johnson-Freyd Jun 12 2010 at 6:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9440616369247437, "perplexity_flag": "head"}
http://mathoverflow.net/questions/63323?sort=votes
the free loop space fibration is a locally trivial fiber bundle - reference? Let $Q$ be a compact Riemannian manifold. Then $\Lambda Q\rightarrow Q,$ $\gamma\mapsto \gamma(0)$ can be shown to be a locally trivial fiber bundle of Hilbert manifolds. Here, $\Lambda Q$ denotes the space of maps $S^1\rightarrow Q$ of Sobolev class $W^{1,2}.$ Question: Who proved it first? Is there an appropriate reference? I once read it attributed to Klingenberg, but didn't find the proof (nor the statement) in the corresponding reference. I only know a proof due to Abbondandolo/Schwarz, but they claim no originality when asked. - 1 Answer The proof isn't difficult, and doesn't depend on the class of maps, so I think it is one of those results where it is easier to write down a new proof than to trace one back in the literature. If you want a reference to a published article containing a full proof, it is in my paper Constructing smooth manifolds of loop spaces as Corollary 4.8. This most certainly is not the first place that it appears (for one, I had a similar proof for the smooth case in my notes on the differential topology of loop spaces), but when I proved it I did not rely on any other source. Also, the article version deals with a very wide range of classes of maps. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9384851455688477, "perplexity_flag": "head"}
http://www.scottaaronson.com/democritus/lec2.html
PHYS771 Lecture 2: Sets Scott Aaronson Thursday's class started out with a brief presentation by Rahul Jain about atomist ideas in Jainism (circa 500 BC). It seems the Jain (the ancient ones, not Rahul) were barking up more or less the same tree as Democritus, but their ideas (like many of the pre-Socratics') were mixed with generous helpings of mysticism. I mentioned another example of East-West convergence: apparently, several ancient cultures independently came up with the same proof that A = πr^2. It's obvious that the area of a circle should go like the radius^2; the question is why the constant of proportionality (π) should be the same one that relates circumference to diameter. Proof by pizza: Cut a circle of radius r into thin pizza slices, and then "Sicilianize" (i.e. stack the slices into a rectangle of height r and length π r). QED. One thing I forgot to share with you on Tuesday was a quote from Democritus: "I would rather discover a single cause than become king of the Persians." Something to keep in mind when you consider those job offers from Microsoft or Google... Today we're gonna talk about sets. What will these sets contain? Other sets! Like a bunch of cardboard boxes that you open only to find more cardboard boxes, and so on all the way down. You might ask, "how is this relevant to a class on quantum computing?" I can give three answers: 1. When I gave that puzzle on Tuesday (which by the way, we're going to "answer" today), some of you asked what "countable" means. OK, dude. Math is the foundation of all human thought, and set theory -- countable, uncountable, etc. -- that's the foundation of math. So even if this class was about Sanskrit literature, it should still probably start with set theory. 2. I have a hidden agenda: I'm told we have some physicists here, and I intend to browbeat you into thinking like mathematicians. I mean, what you do in the lab is your own business, but now you're in theorem country. 3. There actually is a tenuous connection between quantum computing and set theory, which I'll touch on in the next lecture. To give a sneak preview, the connection is that quantum mechanics applied to finite-dimensional systems (like qubits) seems like an interesting "intermediate" case between a continuous and a discrete theory. That is, it involves quantities (the amplitudes) that vary continuously, but that are not directly observable. In this way, it seems to avoid the "paradoxes" associated with the continuum in a way that other continuous physical theories do not. But what are those paradoxes? Well, welcome to my haunted house horror tour of the continuous and the transfinite... So let's start with the empty set and see how far we get. THE EMPTY SET. Any questions so far? Actually, before we talk about sets, we need a language for talking about sets. The language that Frege, Russell, and others developed is called first-order logic. It includes Boolean connectives (and, or, not), the equals sign, parentheses, variables, predicates, quantifiers ("there exists" and "for all") -- and that's about it. So for example, here are the Peano axioms for the nonnegative integers (where S(x) is the successor function, intuitively S(x)=x+1, and I'm assuming functions have already been defined): • Zero Exists: There exists a z such that for all x, S(x) is not equal to z. • Every Integer Has At Most One Predecessor: If S(x)=S(y) then x=y. The nonnegative integers themselves are called a model for the axioms (though interestingly, they're not the only model).
Writing down these axioms seems like pointless hairsplitting -- and indeed, as someone pointed out in class, there's an obvious chicken-and-egg problem. How can we state axioms that will "put the integers on a more secure foundation," when the very symbols and so on that we're using to write down the axioms presuppose that we already know what the integers are? Well, precisely because of this point, I don't think that axioms and formal logic can be used to "place arithmetic on a more secure foundation" (whatever that would mean). But this stuff is still extremely interesting for at least three reasons: 1. The situation will change once we start talking not about integers, but about different sizes of infinity. There, writing down axioms and working out their consequences is pretty much all we have to go on! 2. Once we've formalized everything, we can then program a computer to reason for us: • Premise 1: For all x, if A(x) is true then B(x) is true. • Premise 2: There exists an x such that A(x) is true. • Conclusion: There exists an x such that B(x) is true. Well, you get the idea. The point is that deriving the conclusion from the premises is purely a syntactic operation -- one that doesn't require any understanding of what the statements mean. 3. Besides having a computer find proofs for us, we can also treat proofs themselves as mathematical objects, which opens the way to metamathematics. Anyway, enough pussyfooting around. Let's see some axioms for set theory. (I'll state the axioms in English; converting them to first-order logic is left as an "exercise for the reader.") • Empty Set: There exists an empty set. • Extensionality: If two sets have the same members then they're equal. • Pairing: For all sets x,y there exists a set {x,y}. • Union: For all sets x, there exists a set equal to the union of all sets in x. • Existence of Infinite Sets: There exists a set x that contains the empty set and that contains y∪{y} for every y∈x. • Power Set: For all sets x there exists a set consisting of the subsets of x. • Replacement (for every function A): For all sets x, there exists a set {A(y) | y∈x}. • Foundation: All nonempty sets x have a member y such that for all z, either z∉x or z∉y. (This is a technical axiom, whose point is to rule out sets like {{{{...}}}}.) These axioms -- called the Zermelo-Fraenkel axioms -- are the foundation for basically all of math. So I thought you should see them at least once in your life. Alright, one of the most basic questions we can ask about a set is, how big is it? What's its size, its cardinality? You might say, just count how many elements it has. But what if there are infinitely many? Are there more integers than even integers? This brings us to Georg Cantor (1845-1918), and the first of his several enormous contributions to human knowledge. He says, two sets have the same cardinality if and only if their elements can be put in one-to-one correspondence. Period. And if, whenever you try to pair off the elements, one set always has elements left over, the set with the elements left over is the bigger set. What possible cardinalities are there? Of course there are finite ones, and then there's the first infinite cardinality, the cardinality of the integers, which Cantor called ℵ0 ("Aleph-Zero"). The rational numbers have the same cardinality ℵ0, a fact that's also expressed by saying that the rational numbers are "countable" (i.e., can be placed in one-to-one correspondence with the integers). What's the proof that the rational numbers are countable? 
You haven't seen it before? Oh, alright. First list all the rational numbers where the sum of the numerator and the denominator is 2. Then list all the rational numbers where the sum of the numerator and the denominator is 3. And so on. It's clear that every rational number will eventually appear in this list. Hence there's only a countable infinity of them. QED. But Cantor's biggest contribution was to show that not every infinity is countable -- so for example, the infinity of real numbers is greater than the infinity of integers. More generally, just as there are infinitely many numbers, there are also infinitely many infinities. You haven't seen the proof of that either? Alright, alright. Let's say you have an infinite set A. We'll show how to produce another infinite set, B, which is even bigger than A. This B will simply be the set of all subsets of A (which is guaranteed to exist by the Zermelo-Fraenkel axioms). How do we know B is bigger than A? Well, suppose we could pair off every element a∈A with an element f(a)∈B, in such a way that no elements of B were left over. Then we could define a new subset S⊆A, consisting of every a that's not contained in f(a). Notice that this S can't have been paired off with any a∈A -- since otherwise, a would be contained in f(a) if and only if it wasn't contained in f(a), contradiction. Therefore B is larger than A, and we've ended up with a bigger infinity than the one we started with. This is certainly one of the four or five greatest proofs in all of math -- again, good to see at least once in your life. Besides cardinal numbers, it's also useful to discuss ordinal numbers. Rather than defining these, it's easier to just illustrate them. We start with the natural numbers: 1, 2, 3, ... Then we say, let's define something that's greater than every natural number: ω What comes after omega? ω+1, ω+2, ... Now, what comes after all of these? 2ω Alright, we get the idea: 3ω, 4ω, ... Alright, we get the idea: ω^2, ω^3, ... Alright, we get the idea: ω^ω, $\omega^{\omega^{\omega}}$, ... We could go on for quite a while! Basically, for any set of ordinal numbers (finite or infinite), we stipulate that there's a first ordinal number that comes after everything in that set. The set of ordinal numbers has the important property of being well-ordered, which means that every subset has a minimum element. Now, here's something interesting. All of the ordinal numbers I've listed have a special property, which is that they have at most countably many predecessors (i.e., at most ℵ0 of them). What if we consider the set of all ordinals with at most countably many predecessors? Well, that set also has a successor, call it α. But does α itself have ℵ0 predecessors? Certainly not, since otherwise α wouldn't be the successor to the set; it would be in the set! The set of predecessors of α has the next possible cardinality, which is called ℵ1. What this sort of argument proves is that the set of cardinalities is itself well-ordered. After the infinity of the integers, there's a "next bigger infinity," and a "next bigger infinity after that," and so on. You never see an infinite decreasing sequence of infinities, as you do with the real numbers. So, starting from ℵ0 (the cardinality of the integers), we've seen two different ways to produce "bigger infinities than infinity." One of those ways yields the cardinality of sets of integers (or equivalently, the cardinality of real numbers), which we denote 2^ℵ0. The other way yields ℵ1. Is 2^ℵ0 equal to ℵ1?
Or to put it another way: is there any infinity of intermediate size between the infinity of the integers and the infinity of the reals? (Note: No sooner had I revealed that there were more reals than integers, than a student actually asked this. He claimed never to have heard of the question before; he thought he was just asking for a technical clarification.) Well, the question of whether there are any "intermediate" infinities between the integers and the reals was David Hilbert's first problem in his famous 1900 address. It stood as one of the great math problems for over half a century, until it was finally "solved" (in a rather disappointing way, as you'll see). Cantor himself believed there were no intermediate infinities, and called this conjecture the Continuum Hypothesis. Cantor was extremely frustrated with himself for not being able to prove it. Besides the Continuum Hypothesis, there's another statement about these infinite sets that no one could prove or disprove from the Zermelo-Fraenkel axioms. This statement is the infamous Axiom of Choice. It says that, if you have a (possibly infinite) set of sets, then it's possible to form a new set by choosing one item from each set. Sound reasonable? Well, if you accept it, you also have to accept that there's a way to cut a solid sphere into a finite number of pieces, and then rearrange those pieces into another solid sphere a thousand times its size. (That's the "Banach-Tarski paradox." Admittedly, the "pieces" are a bit hard to cut out with a knife...) Why does the Axiom of Choice have such dramatic consequences? Basically, because it asserts that certain sets exist, but without giving any rule for forming those sets. As Bertrand Russell put it: "To choose one sock from each of infinitely many pairs of socks requires the Axiom of Choice, but for shoes the Axiom is not needed." (What's the difference?) The Axiom of Choice turns out to be equivalent to the statement that every set can be well-ordered: in other words, the elements of any set can be paired off with the ordinals 1, 2, ..., ω, ω+1, ... 2ω, 3ω, ... If you think (for example) about the set of real numbers, this seems far from obvious. It's easy to see that well-ordering implies the Axiom of Choice: just well-order the whole infinity of socks, then choose the sock from each pair that comes first in the ordering. Do you want to see the other direction: why the Axiom of Choice implies that every set can be well-ordered? Yes? OK! We have a set A that we want to well-order. For every proper subset B⊂A, we'll use the Axiom of Choice to pick an element f(B)∈A\B. Now we can start well-ordering A, as follows: first let s0 = f({}), then let s1 = f({s0}), s2 = f({s0,s1}), and so on. Can this process go on forever? No, it can't. For if it did, then by a process of "transfinite induction," we could stuff arbitrarily large infinite cardinalities into A. And while admittedly A is infinite, it has at most a fixed infinite size! So the process has to stop somewhere. But where? At a proper subset B of A? No, it can't do that either -- since if it did, then we'd just continue the process by adding f(B). So the only place it can stop is A itself. Therefore A can be well-ordered. OK, should we come back to the puzzle from Tuesday? We have a box, [0,1]2. To each real number x∈[0,1], we associate a countable subset S(x)⊂[0,1]. Now, can we choose S in such a way that for every (x,y) pair, either y∈S(x) or x∈S(y)? What do you think? 
I'll give you two answers: that it isn't possible, and that it is possible. Which answer do you want to see first? Alright, we'll start with why it isn't possible. For this I'll assume that the Continuum Hypothesis is false. Then there's some proper subset A⊂[0,1] that has cardinality ℵ1. Let B be the union of S(x) over all x∈A. Then B also has cardinality ℵ1. So, since we assumed that ℵ1 is less than 2^ℵ0, there must be some y∈[0,1] not in B. Now observe that there are ℵ1 real numbers x∈A, but none of them satisfy y∈S(x), and only ℵ0 < ℵ1 of them can satisfy x∈S(y). Now let's see why it is possible. For this I want to assume both the Axiom of Choice and the Continuum Hypothesis. By the Continuum Hypothesis, there are only ℵ1 real numbers in [0,1]. So by the Axiom of Choice, we can well-order those real numbers, and do it in such a way that every number has at most ℵ0 predecessors. Now put y in S(x) if and only if y≤x, where ≤ means with respect to the well-ordering (not the usual ordering on real numbers). Then for every (x,y), clearly either y∈S(x) or x∈S(y). Today's puzzle is about the power of self-esteem and positive thinking. Is there any theorem that you can only prove by assuming as an axiom that the theorem can be proved?
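As an aside on the countability argument earlier in the lecture (listing rationals by the sum of numerator and denominator), here is a small Python sketch of that enumeration; the code is an illustration of mine, not part of the lecture.

```python
from fractions import Fraction

# Enumerate the positive rationals by numerator + denominator, skipping repeats.
def rationals():
    seen = set()
    s = 2
    while True:
        for num in range(1, s):
            q = Fraction(num, s - num)
            if q not in seen:
                seen.add(q)
                yield q
        s += 1

gen = rationals()
print([str(next(gen)) for _ in range(10)])
# ['1', '1/2', '2', '1/3', '3', '1/4', '2/3', '3/2', '4', '1/5']
```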
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 1, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9548562169075012, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/53280/extremal-obstructions-to-gowers-uniformity
## Extremal Obstructions to Gowers Uniformity Recall the definition of the Gowers uniformity norm `$\|f\|_{U^{k}(G)}$`, ```\begin{align} \|f\|_{U^{k}(G)} := \left( \mathbb{E}_{x,h_1,\ldots,h_k \in G} \Delta_{h_1} \ldots \Delta_{h_{k}} f(x) \right)^{2^{-k}} \, \end{align}``` where the operator $\Delta_h$ is a multiplicative analog of a derivative given by \begin{align} \Delta_h f(x) := f(x+h) \overline{f(x)} \,, \end{align} and $G$ is a finite abelian group. I'm specifically interested in the case `$G=\mathbb{Z}_d$` of integers modulo $d$, and $k=3$. Therefore, I'll just use the shorthand notation `$\|f\|_{U^{k}(\mathbb{Z}_d)} = \|f\|_{U^{k}}$`. I'm interested in functions `$f:\mathbb{Z}_d \to \mathbb{C}$` which have some fixed value of $\|f\|_2$, say 1, meaning that ```\begin{align} \|f\|_2^2 = \sum_{h \in \mathbb{Z}_d} f(h) \overline{f(h)} = 1\,. \end{align}``` Then my question is, What are the functions having unit 2-norm which minimize $\|f\|_{U^3}$? I can prove a lower bound of `$\|f\|^8_{U^3} \ge \frac{2}{d^{4} (d+1)}$`, so such functions cannot have arbitrarily small Gowers norm. This bound seems to be tight for all values of $d$ (via numerics) but there is no obvious function which provably saturates the bound for all $d$. From what I can tell, it appears that such obstructions to Gowers uniformity, like the 2-norm constraint above, have been studied before. But I cannot tell if such extremal problems have been studied, or even if they are thought to be tractable. - ## 1 Answer I'm not quite sure where your lower bound comes from, but something close comes from functions such as `\[f(x)=e(x^3/d).\]` This (after rescaling by $d^{-1/2}$ to match your definition) has $L^2$ norm 1, and $U^3$ norm `\[\|f\|_{U^3}^8\leq\frac{2}{d^5}.\]` In general, these 'phase functions' of degree $n$ will give functions with small $U^n$ norm (because 'differentiating' such a phase function $n-1$ times gives a sum over linear phase functions and hence a lot of cancellation), and I suspect the extremal example will be of this sort. - Thanks Thomas! I guess this upper bound only works in prime dimensions, though, right? Unfortunately, these functions don't look anything like the ones which are actual extreme points, so I don't see how to get there from here. The ones which minimize don't ever seem to have a constant absolute value like the phase function you've proposed. It's maddeningly close to the lower bound, though. – Steve Flammia Jan 31 2011 at 2:54 Yes, sorry; I think you do need $d$ prime here. Could you give examples of functions you've found with extreme points? e.g. could they be constructed from such phase functions with sums and/or dilates? – Thomas Bloom Jan 31 2011 at 9:35 1 Since the solutions are numerical, they don't really have a useful form. An analytic solution for d=2 is: $f(0)=\sqrt{3+\sqrt{3}}$, and $f(1) = \sqrt{3-\sqrt{3}} e(1/8)$. I can email you a few other examples for larger d if you want. I haven't checked if they can be formed from sums over polynomial phase functions, because I don't know how to check this when the phase functions are nonlinear. Is there a way to get a "nonlinear Fourier decomposition" of a function? – Steve Flammia Jan 31 2011 at 23:34 I'd be interested in seeing your results; my email is on my webpage on my profile, thanks.
I don't think there's a canonical way to decompose a function into nonlinear phase functions like the Fourier transform, since they no longer form a canonical basis of the dual space. – Thomas Bloom Feb 1 2011 at 9:58
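For small $d$ the quantities in this thread can be computed by brute force. The Python sketch below is mine (not from the discussion); it evaluates $\|f\|_{U^3(\mathbb{Z}_d)}^8$ directly from the definition and compares the questioner's conjectured lower bound $2/(d^4(d+1))$ with the cubic phase $e(x^3/d)/\sqrt{d}$ from the answer, for $d=5$.

```python
import itertools
import numpy as np

def u3_norm_8(f):
    """||f||_{U^3(Z_d)}^8, straight from the definition, for small d."""
    d = len(f)
    total = 0.0 + 0.0j
    for x, h1, h2, h3 in itertools.product(range(d), repeat=4):
        term = 1.0 + 0.0j
        for S in itertools.product([0, 1], repeat=3):
            val = f[(x + S[0] * h1 + S[1] * h2 + S[2] * h3) % d]
            # iterating Delta_h f(x) = f(x+h) conj(f(x)) three times
            # conjugates exactly the factors with an even number of shifts
            term *= np.conj(val) if sum(S) % 2 == 0 else val
        total += term
    return (total / d**4).real          # the Gowers average is real and nonnegative

d = 5
cubic = np.exp(2j * np.pi * np.arange(d)**3 / d) / np.sqrt(d)   # unit 2-norm
rng = np.random.default_rng(0)
f_rand = rng.normal(size=d) + 1j * rng.normal(size=d)
f_rand /= np.linalg.norm(f_rand)

print("conjectured lower bound:", 2 / (d**4 * (d + 1)))
print("cubic phase, U3^8      :", u3_norm_8(cubic))
print("random unit f, U3^8    :", u3_norm_8(f_rand))
```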
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9328173398971558, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/129112/exactness-axiom-of-homology-theory
# Exactness Axiom of Homology Theory Axioms we are using for Homology Theory: 1) Homotopy: if $f$ and $g$ are homotopic, then $h_{n}(f) = h_{n}(g)$ 2) Exactness: each map $f:(X,A)\to (Y,B)$ gives us a commuting ladder of long exact sequences (the top bar of which I have included below in my question) 3) Excision: if $(X,A)$ is a pair and $C\subset A$ with the closure of $C$ contained in the interior of $A$, then the inclusion $e:(X-C,A-C)\to (X,A)$ induces an isomorphism $h_{n}(e):h_{n}(X-C,A-C)\to h_{n}(X,A)$. Exactness axiom: For each $f:(X,A)\to (Y,B)$ there is a commuting ladder of long exact sequences: $\dots \to h_{n}(A,\phi)\to h_{n}(X,\phi)\to h_{n}(X,A)\to h_{n-1}A\to ...$ My question: Based on my notes, I can't find a definition for $h_{n-1}A$, nor the map $h_{n}(X,A)\to h_{n-1}A$ (which is, however, labelled as $\partial_{(X,A)}$). I browsed some online sources and found that it is referred to as a boundary map. But what is its definition? (same question for the space $h_{n-1}A$). Thanks so much in advance! - 2 $h_{n-1}(A)$ means $h_{n-1}(A,\emptyset)$. This boundary map is what ties together $h_n$ with $h_{n-1}$ (otherwise they would be independent homotopy functors... we don't want that). What are the axioms you are using for a homology theory? – Thomas Belulovich Apr 7 '12 at 21:39 We listed 6 axioms, but I think we are only using 3 (I will add them in the question right now.) – Kyle Apr 7 '12 at 21:43 1 As far as the question goes, it doesn't quite make sense - these are axioms, describing what a homology theory is, so defining these maps is something you do for a particular homology theory. Are you asking about what the boundary map is for, say, singular homology theory? – Martin Wanvik Apr 7 '12 at 21:49 2 Also, I suspect that the terms $(X,\phi)$ and $(A,\phi)$ in the long exact sequence you've written down, are supposed to be $(X,\emptyset)$ and $(A,\emptyset)$. – Martin Wanvik Apr 7 '12 at 21:53
By exactness of the bottom row, there is some $b_{k-1} \in C_{k-1}(A)$ such that $i_{k-1}b_{k-1} = \partial_k c$. So let us define a map $H_k(X,A) \to H_{k-1}(A)$ by sending $[\tilde{c}]$ to $[b_{k-1}] \in H_{k-1}(A)$. A similar diagram chase shows that this map is well-defined (i.e. independent of all the choices we made). This is the well-known connecting homomorphism, which in homology we typically denote by $\partial_k: H_k(X,A) \to H_{k-1}(A)$. This construction is usually called the snake lemma. - First of all, thanks very much for this. But I'm really quite lost. What is $C_{k}(\cdot)$ representing? – Kyle Apr 10 '12 at 21:56 $C_k(\cdot)$ are the chains, with boundary map $\partial: C_k(\cdot) \to C_{k-1}(\cdot)$, so that $H_k = \ker \partial_k / \mathrm{im} \partial_{k+1}$. They depend on what homology theory you are considering, but they are always there in the background somewhere. For the singular homology of a space $X$ they are formal linear combinations of maps $\Delta^k \to X$ where $\Delta^k$ is the standard $k$-simplex. For simplicial and cellular homology there are analogous definitions. Whenever you encounter any kind of homology you should think of it as the homology of some underlying chain complex. – Jonathan Apr 12 '12 at 1:51
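To see the chase in a concrete case (a standard textbook example, not taken from this thread, and using simplicial rather than singular chains, so take it as an illustration only): let $X=[0,1]$ and $A=\{v_0,v_1\}$ its endpoints, so that $C_1(X)=\mathbb{Z}\langle e\rangle$, $C_1(A)=0$, and $C_0(X)=C_0(A)=\mathbb{Z}\langle v_0,v_1\rangle$. The relative chain $\tilde c=e$ is a relative cycle, since $\partial_1 e=v_1-v_0$ lies in $C_0(A)$. Following the recipe above: lift $\tilde c$ to $c=e$, compute $\partial_1 c=v_1-v_0$, and recognize it as $i_0(b_0)$ with $b_0=v_1-v_0\in C_0(A)$. Hence $$\partial[\tilde c]=[v_1-v_0]\in H_0(A)\cong\mathbb{Z}^2,$$ the class recording that the two endpoints lie in different components of $A$. This matches the long exact sequence of the pair, where $H_1(X,A)\cong\mathbb{Z}$ is sent isomorphically onto $\ker\big(H_0(A)\to H_0(X)\big)$.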
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 52, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9512436985969543, "perplexity_flag": "head"}
http://physics.aps.org/synopsis-for/print/10.1103/PhysRevLett.103.251802
# Synopsis: How to find a “leptophobic” Z′ boson at the LHC #### Six-Lepton Z′ Resonance at the Large Hadron Collider Vernon Barger, Paul Langacker, and Hye-Sung Lee Published December 16, 2009 The first collisions have just been observed at the Large Hadron Collider (LHC), which is now the world’s highest energy accelerator. Two of the central goals of the LHC are to find the Higgs boson and to look for physics beyond the standard model. Among its primary targets for the latter is finding a new heavy neutral spin-1 particle, called a $Z′$ boson, since such a particle arises in any extension of the standard model to include an additional local phase-rotation, or $U(1)$, symmetry. Searches for a $Z′$ boson at hadron colliders usually look for decays of the $Z′$ into two leptons (e.g., an electron and a positron) because such $Z′$ events stand out from the flood of background events. But what if the $Z′$ boson, unlike the standard model $Z$ boson, is “leptophobic,” meaning it doesn’t couple very strongly to leptons? This would make the $Z′$ very difficult to detect. In a paper published in Physical Review Letters, Vernon Barger at the University of Wisconsin, Paul Langacker at Princeton, and Hye-Sung Lee at the University of California, Riverside, all in the US, propose an interesting new model-independent way to study such leptophobic $Z′$ bosons at the LHC. They show that the $Z′$ can decay, via a Higgs boson, into three $Z$ bosons, each of which can then decay into two leptons. So a $Z′$ boson that does not decay into two leptons could paradoxically be found by this indirect decay into six leptons. There will undoubtedly be surprises at the LHC, and this striking channel could be the discovery mode for new physics, if it takes the form of a leptophobic $Z′$ boson. – Robert Garisto
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 13, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9182866811752319, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/107837/equality-of-measures
# Equality of measures I have two measures $\mu$ and $\nu$ supported on compacts in $\mbox{int } \mathbb{R}^{n}_+$. Are there some sufficiently general classes of such measures for which the validity of $$\int\limits_{\mathbb{R}^n_+} \frac{\mu(dx)}{x_1^{z_1} x_2^{z_2}\cdots x_n^{z_n}} = \int\limits_{\mathbb{R}^n_+} \frac{\nu(dx)}{x_1^{z_1} x_2^{z_2}\cdots x_n^{z_n}}$$ for all $\Re z_1 > R$, $\Re z_2 > R$, ..., $\Re z_n > R$, where $R>1$, implies $\mu = \nu$? - Do you know such a "sufficiently general class" in the case $n=1$? – GEdgar Feb 10 '12 at 15:47 Maybe, a class of finite sums of delta functions and its natural generalisation. More nontrivial classes are given by the Mellin inversion theorem. I don't know: is there a Mellin inversion theorem for measures in $\mathbb{R}^n_+$? Actually, my question is about it. – Nimza Feb 10 '12 at 16:06 ## 1 Answer It's true for any regular complex Borel measures with compact support. Let $K$ be the union of the supports of the two measures. The linear span $V_0$ of the functions $1/(x_1^{z_1} \ldots x_n^{z_n})$ for $\Re z_j > 0$ is dense in $C(K)$ by the Stone-Weierstrass Theorem. Since your functions are of the form $f/(x_1 \ldots x_n)^R$ for $f \in V_0$ and $(x_1 \ldots x_n)^R$ has no zeros on $K$, their linear span is also dense in $C(K)$. So $\mu$ and $\nu$, corresponding to continuous linear functionals on $C(K)$, agree on a dense set and therefore are equal. - Thank you, but the Stone-Weierstrass Theorem requires the subalgebra to contain a nonzero constant. I think that we can get rid of this problem by adding the equality $\mu(\mathbb{R}^n_+) = \nu(\mathbb{R}^n_+)$. And what about finite measures with noncompact support? Can we use some analogues of the Stone-Weierstrass Theorem? – Nimza Feb 10 '12 at 18:45 $V_0$ doesn't contain a constant, but it contains functions that approximate a constant arbitrarily closely, so Stone-Weierstrass still applies (or if you prefer you could change $\Re z_j > 0$ to $\Re z_j \ge 0$). For finite measures $\mu$ and $\nu$ with noncompact support, there is a generalization of Stone-Weierstrass: if $X$ is locally compact, a subalgebra $V$ of $C_0(X)$ (the continuous functions that vanish at $\infty$) is dense in $C_0(X)$ if it separates points and there is no $x \in X$ where all members of $V$ vanish. – Robert Israel Feb 10 '12 at 20:19 Thank you, it's clear. – Nimza Feb 11 '12 at 11:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 36, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9319953322410583, "perplexity_flag": "head"}
http://mathoverflow.net/questions/12531/non-commutative-versions-of-x-g
## Non-commutative versions of X/G ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Let $X$ be a Riemannian manifold and let $G$ be a (at most countable, if that matters) discrete group acting properly and by isometries on $X$. Let $\mathcal{O}$ be the sheaf of analytic functions on $X$. By analogy with what happens for finite groups acting on vector spaces, one is tempted to study a sheaf written $\mathcal{O} \rtimes G$. What is a good reference for an algebraist to learn about the various convergence conditions one might impose to define this sheaf, and their relationship with the quotient X/G? [I'm fine with an answer that works in a different category---e.g. complex analytic spaces, but I want there to be some convergence conditions imposed at some point. The ideal reference is a short survey paper with precise definitions.] - I have deleted my purported answer, after seeing ur response that you need a precise reference. Maybe this will help .. alainconnes.org/docs/shortsurvey.pdf – Anweshi Jan 21 2010 at 15:15 I actually have half a mind to delete this question until I figure out a less ignorant thing to ask. Right now, my central problem (aside from not knowing the literature in this area at all) is that the Wiki page on crossed product algebras doesn't give specific enough references. – GS Jan 21 2010 at 15:30 ## 1 Answer Noncommutative versions of sheaves and holomorphic functions are not very well understood. Better understood are noncommutative versions of measurable, continuous, or smooth functions. I generally work with the continuous functions, i.e. $C^*$-algebras, or various subalgebras that deserve to be called smooth. I'll describe things in the $C^*$-framework. What came to mind immediately for me is the notion of strong Morita equivalence, due to Rieffel. It works like this: suppose you have a locally compact group $G$ acting on a $C^*$- algebra $A$ (think of $A$ as $C(X)$ here). You can form what is called the crossed product algebra, which is a $C^*$-algebra containing $A$ and $G$, and where the action of $G$ on $A$ is implemented via conjugation by $G$; i.e. if $a \in A$ and $g \in G$, then `$g a g^* = \alpha_g(a)$`, where `$\alpha$` is the action. This can be done when $A$ is unital or not, and $G$ can be discrete or not. The resulting algebra, which I would denote $A \times_\alpha G$, is unital if and only if $A$ is unital and $G$ is discrete. Now suppose that $X$ is a compact Hausdorff space with an action of $G$. Then $G$ also acts on $A = C(X)$, and so we can make the crossed product algebra $C(X) \times_\alpha G$. Here's the punchline: when the action of $G$ on $X$ is free and proper, so that the quotient $X/G$ is well-behaved, then the crossed product algebra is strongly Morita equivalent to the algebra $C(X/G)$ of functions on the quotient. When the action is not free and proper, the quotient may be very bad (e.g. the integers acting on the circle by rotation by an irrational angle) and so the algebra $C(X/G)$ may be reduced to nothing more than scalars, and so be useless for obtaining any information about the quotient. In this case, one uses the crossed-product algebra as a sort of substitute for the algebra of functions on the quotient. A reference for this is the paper "Applications of Strong Morita Equivalence to Transformation Group $C^*$-algebras, by Rieffel, which is available on his website. 
Unfortunately it doesn't have the definitions of crossed products (which he calls transformation group algebras), but the wikipedia page is ok, although phrased just for von Neumann algebras. - Thanks for your very helpful response, and especially the reference to Rieffel's paper! – GS Jan 21 2010 at 20:34
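For the example mentioned in the answer of the integers acting on the circle by an irrational rotation, the crossed product has a standard concrete description (sketched here with one common choice of sign conventions): if $\theta$ is irrational and $\mathbb{Z}$ acts on $S^1$ by rotation through the angle $2\pi\theta$, then $C(S^1) \rtimes_\alpha \mathbb{Z}$ is generated by two unitaries $u$ (the coordinate function on $S^1$) and $v$ (implementing the rotation) satisfying
$$v\,u\,v^* = e^{2\pi i \theta}\,u,$$
which is the irrational rotation algebra $A_\theta$. Since every orbit is dense, $C(S^1/\mathbb{Z})$ really does reduce to the scalars, as the answer says, while $A_\theta$ still remembers the dynamics.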
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 37, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9353329539299011, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/182254-lim-x-0-cosec-x-x.html
# Thread: 1. ## lim as x --> 0 of [cosec(x)]^x Lim (cosec x)^x as x -> 0. How to solve this? I know the initial step is to take log on both sides, but I couldn't proceed further. Thanks. The answer to the problem is 1. 2. $x\cdot \ln(\csc(x)) = x\cdot [-\ln(\sin(x))] = \frac{-\ln(\sin(x))}{\frac{1}{x}}$ I'm starting to see it. There is a reason why you studied trigonometry and algebra. Now is the time to REMEMBER! 3. Originally Posted by gaganvj Lim (cosec x)^x as x -> 0. How to solve this? I know the initial step is to take log on both sides, but I couldn't proceed further. Thanks. The answer to the problem is 1. limit as x approaches 0 of (Cosec[x])^x - Wolfram|Alpha Click on Show steps. 4. Originally Posted by gaganvj Lim (cosec x)^x as x -> 0. How to solve this? I know the initial step is to take log on both sides, but I couldn't proceed further. Thanks. The answer to the problem is 1. Because $(\csc x)^{x} = (\sin x)^{-x}= e^{-x\ \ln \sin x}$, what we have to evaluate is... $\lim_{x \rightarrow 0} x\ \ln \sin x$ (1) From the 'infinite product'... $\sin x = x\ \prod_{n=1}^{\infty} (1-\frac{x^{2}}{n^{2}\ \pi^{2}})$ (2) ... we derive... $x\ \ln \sin x = x\ \{\ln x + \sum_{n=1}^{\infty} \ln (1-\frac{x^{2}}{n^{2}\ \pi^{2}})\}$ (3) ... so that... $\lim_{x \rightarrow 0}x\ \ln \sin x =0$ (4) What is interesting is the fact that the limit (4) is the same for $x \rightarrow 0+$ and $x \rightarrow 0-$, so that the function $(\sin x)^{-x}$ is continuous at $x=0$... Kind regards $\chi$ $\sigma$
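For the right-hand limit, L'Hôpital's rule gives the same conclusion; a sketch, independent of the infinite-product argument above:
$$\lim_{x \to 0^+} x\,\ln \sin x = \lim_{x \to 0^+} \frac{\ln \sin x}{1/x} = \lim_{x \to 0^+} \frac{\cot x}{-1/x^{2}} = \lim_{x \to 0^+} \Big(-x\cos x \cdot \frac{x}{\sin x}\Big) = 0,$$
so $(\csc x)^x = e^{-x \ln \sin x} \to e^{0} = 1$.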
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9307676553726196, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/400/a-gentleman-never-chooses-a-basis/405
## “A gentleman never chooses a basis.” ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Around these parts, the aphorism "A gentleman never chooses a basis," has become popular. Is there a gentlemanly way to prove that the natural map from V to V** is surjective if V is finite dimensionsal? As in life, the exact standards for gentlemanliness are a bit vague. Some arguments seem to be implicitly pick basis. I'm hoping there's an argument which is unambigously gentlemanly. - 6 I'm having trouble coming up with a sufficiently patriarchal argument. Does "these parts" refer to the pre-suffrage era? – S. Carnahan♦ Oct 13 2009 at 3:31 3 That's fair. I should have gone for something more gender neutral. Although "gentlemanly/ladylike" is a bit awkward, and something like "classy" doesn't have the same anachronistic feel. Any suggestions? – Richard Dore Oct 13 2009 at 18:50 8 My personal preference is to avoid any reference to gender or class (or indeed membership in any group associated to historical persecution - e.g., we don't say that bases are for Jewish or homosexual people). This may make your question seem less colorful, but I think it is worthwhile to make mathematics more welcoming to people of all kinds. If you're still looking for an obnoxious elitist tone, I suggest replacing "gentleman" with "true mathematician" and "gentlemanly" with "mathematically cultured". – S. Carnahan♦ Oct 14 2009 at 1:55 38 In my mind, "gentleman" refers to politeness rather than social class, but I can see where the problem comes from. Perhaps a good alternative is "my mommy said it's not polite to choose a basis." My mom didn't tell me that, so as a kid, I chose bases left and right; now I regret it. – Anton Geraschenko♦ Oct 14 2009 at 14:33 1 This doesn't constitute a proof, but: Suppose that the result of a certain proof looks obvious in notation A, but deep and mysterious in notation B. This is usually a reason to prefer notation A. In Penrose's abstract index notation, which doesn't require a choice of basis, mapping one-dimensional space V to V* takes element $x_a$ to element $x^a$. If you then continue with V* to V**, you take $x^a$ to (drumroll, plese) $x_a$. If the mapping from V to V** wasn't surjective (and, in fact, an isomorphism) then abstract index notation would be inconsistent. – Ben Crowell Sep 18 at 4:12 show 3 more comments ## 13 Answers Following up on Qiaochu's query, one way of distinguishing a finite-dimensional $V$ from an infinite one is that there exists a space $W$ together with maps $e: W \otimes V \to k$, $f: k \to V \otimes W$ making the usual triangular equations hold. The data $(W, e, f)$ is uniquely determined up to canonical isomorphism, namely $W$ is canonically isomorphic to the dual of $V$; the $e$ is of course the evaluation pairing. (While it is hard to write down an explicit formula for $f: k \to V \otimes V^*$ without referring to a basis, it is nevertheless independent of basis: is the same map no matter which basis you pick, and thus canonical.) By swapping $V$ and $W$ using the symmetry of the tensor, there are maps $V \otimes W \to k$, $k \to W \otimes V$ which exhibit $V$ as the dual of $W$, hence $V$ is canonically isomorphic to the dual of its dual. Just to be a tiny bit more explicit, the inverse to the double dual embedding $V \to V^{**}$ would be given by $$V^{\ast\ast} \to V \otimes V^* \otimes V^{\ast\ast} \to V$$ where the description of the maps uses the data above. - OK, great! 
So you can define finite-dimensionality without mentioning bases (or chains of subspaces). The answer to the question is then easy. But this recasting of the definition of finite-dimensionality is, I think, much the most interesting thing. – Tom Leinster Oct 21 2009 at 22:54 Yes, there a number of ways one might think of characterizing finite-dimensionality (including being isomorphic to its double dual!), Noetherian/Artinian hypotheses, etc. But some of these characterizations don't port so well to modules over other commutative rings. The present characterization is equivalent to being finitely generated and projective, for any commutative ring. – Todd Trimble Oct 22 2009 at 4:15 1 When you say "isomorphic to its double dual" you presumably mean its algebraic double dual. – Andrew Stacey Nov 8 2009 at 21:26 1 Presumably Andrew means that one almost never talks about unadorned infinite-dimensional vector spaces. An analyst naturally thinks of the dual of a finite-dimensional vector space as a special case of the continuous dual of a topological vector space, and in this situation spaces are rarely isomorphic to their double duals. – Qiaochu Yuan Nov 23 2009 at 15:24 2 I guess Andrew also means that, for example, Hilbert spaces <em>are</em> isomorphic to their continuous double duals. – Qiaochu Yuan Nov 23 2009 at 15:26 show 1 more comment ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. At the price of being too categorical for the question, one can follow up Todd's answer as follows. Consider any closed symmetric monoidal category $\mathcal{V}$ with product $\otimes$ and unit object $k$, such as vector spaces over a field $k$. Let $V$ be an object of $\mathcal{V}$ and let $DV = Hom(V,k)$. Just from formal properties of $\mathcal{V}$, there are canonical maps $\iota\colon k\to Hom(V,V)$ and $\nu\colon DV\otimes V\to Hom(V,V)$, which are the usual things for vector spaces. Say that $V$ is dualizable if there is a map $\eta\colon k\to V\otimes DV$ such that $\nu \circ \gamma \circ \eta = \iota$, where $\gamma$ is the commutativity isomorphism. Formal arguments show that $\nu$ is then an isomorphism and if $\epsilon\colon DV\otimes V \to k$ is the evaluation map (there formally), then, with $W=DV$, $\eta$ and $\epsilon$ satisfy the conditions Todd stated for $e$ and $f$. This is general enough that it can't have anything to do with bases. But restricting to vector spaces, we can choose a finite set of elements $f_i\in DV$ and $e_i\in V$ such that $\nu(\sum f_i\otimes e_i) = id$. Then it is formal that ${e_i}$ is a basis for $V$ with dual basis ${f_i}$. This proves that $V$ is finite dimensional, and the converse is clear as in Todd's answer. There is a result in Cartan-Eilenberg called the dual basis theorem that essentially points out that the precisely analogous characterization holds for finitely generated projective modules over a commutative ring $k$, with the same proof. - Yes, this is a nice argument, Peter. – Todd Trimble Sep 17 at 18:52 1 To be pedantic, in the case of f.g. projective modules over a commutative ring, "dual basis theorem" is a slightly unfortunate name, since neither $\{e_i\}$ or $\{f_i\}$ are necessarily bases of $V$ or $DV$. 
– Peter Samuelson Sep 17 at 22:54 Perhaps it would be most appropriate to answer your question with another question: how do you distinguish a finite-dimensional vector space from an infinite-dimensional one without talking about bases? - Every increasing (or decreasing) sequence of subspaces stabilizes in finitely many steps. – Richard Dore Oct 14 2009 at 4:57 1 Let me suggest the following strategy, then: to any chain of subspaces in V there is associated a dual chain in V*. If one can show that strict inclusions are sent to strict inclusions, then V and V* have the same dimension. – Qiaochu Yuan Oct 15 2009 at 18:22 Work in modules over a commutative unital ring R, why not. There is a natural transformation from the identity functor to the double dual functor, Hom(Hom(-,R),R). This is described in the other comments. This is easily verified to be an isomorphism when restricted to R. At this stage we use a basis consisting of the element 1, but that doesn't involve any choices. Both functors commute with finite sums, so the natural transformation is an isomorphism for all finitely generated free modules. Since isomorphisms are closed under retracts we also obtain that our map is an isomorphism for all finitely generated projectives. - But then you have to prove that any vector space is a free module over a field, which sounds a lot like proving the existence of a basis. By the way, why did you make this answer community wiki? – Anton Geraschenko♦ Oct 16 2009 at 16:14 I don't know what community wiki means, I just wanted people to be able to add comments. – Josh Shadlen Oct 17 2009 at 4:24 @Josh: you don't need to make your posts community wiki for people to be able to add comments. Community wiki means (1) you don't earn any reputation from the post, and (2) other people can edit it if they have 100 reputation (if it's not community wiki, they need 2000 reputation). – Anton Geraschenko♦ Oct 22 2009 at 1:10 there is a canonical map $ev:V \to V^{**}$ defined by $ev(v)(\phi) = \phi(v)$. to check that it is an isomorphism in the finite dimensional setting you can just check that it is injective and this is evident from the definition. - 3 How do you know it's surjective though? You have to know they're the same dimension. I don't know how to prove that without getting dirty with a basis. – Richard Dore Oct 13 2009 at 18:46 Hi Qiaochu, I meant of course the canonical map between E and the bidual E**, as is clarified on the next paragraph. I have edited several times my post because of problems with latex syntax and html tags, so I have simplified the source text but some typos remain. Also I can't see why you should not use the word basis since its existence is what defines a finite dimensional vector space. I understand the question more like "we should not fix an arbitrary basis in the process of proving the theorem". You can face a similar problem if you consider stricly increasing sequences of vector spaces, because you shall not fix an arbitrary flag in the process of proving the theorem... Best regards, Eric - Some kind of solution proposal: Let V be a n-dimensional vector space over a field (or a free R-module, where R is a commutative unital ring). There is a morphism V tensor V* to End(V), which sends each v tensor lambda to the endomorphism of V that sends each w to lambda(w)v. It is an epimorphism since it's image are all finite rank endomorphisms, so it's surjective. It is a monomorphism as you can check by calculation. So this is an isomorphism. 
We can calculate the dimensions: dim(V tensor V* ) = dim(End(V)), where dim(V tensor V* ) = dimV * dimV* and dim(End(V))=(dimV)^2. So the result is n * dimV* = n^2 and we get dimV* = n = dimV. Now notice that every short exact sequence in our category splits. That implies for every monomorphism V to W, that W is isomorphic to a direct sum of V and W/V and therefore we have a dimension formula dimV + dim(V/W) = dim W. We get the result that every monomorphism from V to W with dimW=dimV is an isomorphism. Look at the linear map ev : V to V**, which sends v to ev_v : (lambda mapsto lambda(v)), the evaluation-at-v-map. Now we make an induction: for dimV=0, the map ev is trivially an isomorphism. For dimV=n, the kernel of ev is a subspace, so we have V = ker(ev) + W with some complement W and either ker(ev)=V or ker(ev)=0 or the two subspaces have strictly smaller dimension. That would mean, by induction hypothesis, that their evaluation map, which is the restriction of the evaluation map of V, has no kernel and so we get ker(ev)=0. The case ker(ev)=V remains, where we get that V*=0 which contradicts n=dimV=dimV*. Now ev is a monomorphism and dim(V**** )=dim(V* )=dim(V), therefore ev is an isomorphism. One can check easily that this is "functorial", that is: we have a natural transformation from the identity functor to the bidual functor. One could object that I have chosen an arbitrary flag, when I take the complement of the kernel in the induction step... but I guess without that you wouldn't use the "free" property of the modules in question, and for non-free modules there are counter-examples. If I did something wrong, please tell me. - 1 How do you prove that dim(End(V)) = n^2 without choosing a basis? – Qiaochu Yuan Oct 23 2009 at 5:31 Hi all, For Todd Trimble: how do you prove that a vector space V fulfilling your conditon "there exists a space W together with maps e: W \otimes V \to k, f: k \to V \otimes W making triangular equations hold" implies that V is finite dimensional? Thanks and best regards, Eric - 1 Here's one way: the condition says that the functor V \otimes - is left adjoint to the functor W \otimes -. But it's also left adjoint to hom(V, -), hence we have a canonical iso W \otimes - \cong hom(V, -). In particular, hom(V, -) preserves all colimits. Now V is the union, i.e., the filtered colimit of the system of inclusions V_i \to V_j of its finite-dimensional subspaces. So hom(V, V) is the filtered colimit (union) of subspaces hom(V, V_i). In particular, the identity map 1_V must factor through one of the V_i, hence is finite-dimensional. – Todd Trimble Oct 23 2009 at 14:54 But Todd, you're defining a finite dimensional vector space using finite dimensional subspaces? I must have missed something? Rgds, Eric - Defining? No... you asked me to prove that the conditions I gave imply that V is finite-dimensional. So, I did. My proof shows that there exists a finite-dimensional subspace of V that contains V. It uses the fact that any vector space is the union of its finite-dimensional subspaces. – Todd Trimble Oct 23 2009 at 23:37 Over real or complex (or other similar) field, where we know that for a finite-dimensional vector space all reasonable vector-space topologies coincide... V is dense in V** in the weak topology, hence in all topologies, but the (unique) topology is also complete, so V = V** (I think this works and avoids choosing a basis. Of course you would have to prove those other facts also without choosing a basis.) 
- I'm sorry if this should be a comment rather than answer. It is an addendum to my previous answer. I should have pointed out that, still in a general symmetric monoidal category, if $V$ is dualizable, then a formal argument also shows that the canonical map $V \to V^{**}$ (again defined formally) is an isomorphism. Also, in answer to Peter Samuelson, while the name dual basis theorem'' dates from long before my time, it does have some justification. When $\mathcal{V}$ is modules over a commutative ring $k$, if one takes a dualizable $V$ and constructs the free module $F$ on basis ${d_i}$ in 1-1 correspondence with the $e_i$ in my previous post, then $\alpha(v) = \sum f_i(v) d_i$ specifies a monomorphism $\alpha\colon V\to F$ such that $\pi\alpha = id$, where $\pi(d_i) = e_i$. This completes the proof that dualizable implies finitely generated projective, with a relevant basis in plain sight. - This can be added to your previous answer, if you like. – David Roberts Sep 18 at 2:57 Fine with me. I'm not adept at adding things or changing things, as I'm sure you have noticed. Thanks. – Peter May Sep 18 at 3:39 Your question sounds interesting to me because it's one example of my "symmetry hypothesis" I have settled some years ago and that could be phrased that way: If we can prove a theorem that involves some mathematical objects, such that the set of hypothesis is invariant under the action of a symmetry operation acting on a subset of the objects, and the set of conclusions is also symmetric for the same objects under the same symmetry, and the proof does not involve even indirectly the axiom of choice, then there exists a proof of this theorem (assimilated to a kind of graph), such that at any step of the proof any assertion is symmetric under the same above mentioned symmetry operation. Of course the first problem with this conjecture is to settle it with rigorous mathematical words, which is not yet the case, but I found a lot of examples for it, even non trivial: Euclid algorithm: the integers you're looking for are involved in the Bezout theorem which is symmetric between the two relatively prime integers under consideration. However the algorithm is not symmetric. You arbitrarily choose one and divide by the other. However this algorithm can be symmetrized. Fondamental theorem of algebra: if you state it as "any complex polynomials has exactly n roots", then the problem is symmetric among these roots, but how to find them all at once in a symmetric manner. This is far from being trivial but a solution exists, considering the set of all lattices in the plane and computing some winding numbers over each cell (and prooving that only finitely many of them are non zero). Engel Theorem: any Lie algebra made of nilpotent operators is a nilpotent Lie algebra. The standard proof make use of a particular basis that is built so that all operators have an upper triangular representation in this basis... but there may be many other possible basis... Again a solution exists using a lot of quotient spaces. Etc ... In the present example I disagree with Josh, choosing 1 on R is an arbitrary choice of a basis, you could have choosen 2 or -1... However the suggestion of Qiaochu is interesting. I'll think about that... Best regards, Eric Chopin - Hi again, For the proposed problem I suggest the following: Denote by C the canonical map from E to E*. 
The map that sends a basis of E (as a subset of E^dim(E)) to its dual family in (E*)^dim(E) maps a basis of E to a basis of E* (I'm not a specialist in categories, but I would say it looks like a contravariant(?) functor when we take for the morphisms the isomorphisms of E and E*). Therefore dim(E**) = dim(E*) = dim(E). Let x be in E such that f(x) = 0 = C(x)(f) for all f in E*. Then if x is not 0, for any hyperplane H not containing x, the linear form giving the component of x in the decomposition E = H ⊕ Kx always gives 1 when applied to x, giving a contradiction. Therefore x = 0, which means that the canonical map C is injective. By the first isomorphism theorem E/Ker(C) is canonically isomorphic to Im(C), therefore Im(C) is isomorphic to E, and since an isomorphism maps any basis to another basis, dim(Im(C)) = dim(E) = dim(E**) and Im(C) ⊂ E**, therefore Im(C) = E** and we are done. By the way, to come back to my "conjecture", another interesting example is given by the following. Suppose I have a basis e_i of a finite dimensional Euclidean vector space and I want to build from it an orthonormal basis. I can use the Schmidt orthogonalization process, but I must choose an arbitrary ordering of my basis, and changing this ordering changes the orthonormal basis... Is there a way to build such an orthonormal basis so that any permutation of the initial vectors is reflected by the same permutation within the computed orthonormal basis? I found an answer consisting of computing the square root of the Gram matrix of the initial basis. However, I have in mind computing this square root using a Taylor expansion, so the computed basis is obtained only by successive approximations, which is a major drawback of this algorithm compared to Schmidt orthogonalization. Best regards, Eric [edit: changed latex superscripts to html ones] - 2 There is no canonical map from E to E*, since the dual basis doesn't always exist. And the whole point of this exercise was not to use the word "basis." – Qiaochu Yuan Oct 22 2009 at 13:56
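The "square root of the Gram matrix" orthogonalization described in the last answer can be computed directly, with no Taylor expansion needed. A minimal numerical sketch (the matrix A and its values are a made-up example; its columns stand for the initial basis $e_i$ written in some orthonormal reference frame):

```python
import numpy as np
from scipy.linalg import sqrtm, inv

# Columns of A are the given basis vectors e_i (hypothetical example values).
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

G = A.T @ A              # Gram matrix of the basis
B = A @ inv(sqrtm(G))    # symmetric (Loewdin) orthogonalization: B = A G^(-1/2)

print(np.allclose(B.T @ B, np.eye(3)))   # True: the columns of B are orthonormal
```

Permuting the columns of $A$ conjugates $G$ by the same permutation matrix, and hence permutes the columns of $B$ in the same way, which is exactly the equivariance asked for above.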
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 68, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9343429207801819, "perplexity_flag": "middle"}
http://stats.stackexchange.com/questions/10680/how-to-not-use-statistics
# How to NOT use statistics This is sort of an open-ended question but I wanna be clear. Given a sufficient population you might be able to learn something (this is the open part), but whatever you learn about your population, when is it ever applicable to a member of the population? From what I understand of statistics it's never applicable to a single member of a population; however, all too often I find myself in a discussion where the other person goes "I read that 10% of the world population has this disease" and continues to conclude that every tenth person in the room has this disease. I understand that ten people in this room is not a big enough sample for the statistic to be relevant, but apparently a lot don't. Then there's this thing about large enough samples. You only need to probe a large enough population to get reliable statistics. Isn't this, though, proportional to the complexity of the statistic? If I'm measuring something that's very rare, doesn't that mean I need a much bigger sample to be able to determine the relevance of such a statistic? The thing is, I truly question the validity of any newspaper or article when statistics is involved, the way it's used to build confidence. That's a bit of background. Back to the question: in what ways can you NOT or may you NOT use statistics to form an argument? I negated the question because I'd like to find out more about common misconceptions regarding statistics. - 2 This is only a very partial answer, so I won't actually post it as an answer. You ARE correct that complex statistics need larger populations; you're referring to the concept of "degrees of freedom", which is simply the number of independent variables minus one. Also, when doing something like a p-test, your rejection threshold depends on the number of degrees of freedom in addition to the p-value you chose (typically .05). – El'endia Starman May 11 '11 at 17:55 2 – J. M. May 11 '11 at 18:11 1 I think you might benefit from asking this question on stats stackexchange -- I flagged the question so maybe it will be moved over there. – InterestedGuest May 11 '11 at 18:27 I didn't even know we had a forum dedicated to statistical analysis. I'd move the question, if I knew how... – John Leidegren May 11 '11 at 19:00 2 – Peter Taylor May 11 '11 at 21:14 ## 7 Answers To draw conclusions about a group based on the population, the group must be representative of the population and independent. Others have discussed this, so I will not dwell on this piece. One other thing to consider is the non-intuitiveness of probabilities. Let's assume that we have a group of 10 people who are independent and representative of the population (random sample) and that we know that in the population 10% have a particular characteristic. Therefore each of the 10 people has a 10% chance of having the characteristic. The common assumption is that it is fairly certain that at least 1 will have the characteristic. But this is a simple binomial problem: we can calculate the probability that none of the 10 have the characteristic; it is about 35% (converging to 1/e for a bigger group/smaller probability), which is much higher than most people would guess. There is also a 26% chance that 2 or more people have the characteristic. - Unless the people in the room are a random sample of the world's population, any conclusions based on statistics about the world's population are going to be very suspect. One out of every 5 people in the world is Chinese, but none of my five children are... - 1.
To address overapplication of statistics to small samples, I recommend countering with well-known jokes ("I am so excited, my mother is pregnant again and my baby sibling will be Chinese." "Why?" "I have read that every fourth baby is Chinese."). 2. Actually, I recommend jokes to address all kinds of misconceptions in statistics; see http://xkcd.com/552/ for correlation and causation. 3. The problem with newspaper articles is rarely the fact that they treat a rare phenomenon. 4. Simpson's paradox comes to mind as an example showing that statistics can rarely be used without an analysis of the causes. - 2 The variation of the "Chinese baby" joke I've heard had the expectant mother being afraid that her baby might be considered an illegal alien and thus deported... – J. M. May 11 '11 at 18:13 There is an interesting article by Mary Gray on misuse of statistics in court cases and things like that... Gray, Mary W.; Statistics and the Law. Math. Mag. 56 (1983), no. 2, 67–81 - When it comes to logic and common sense, be careful: those two are rare. With certain "discussions" you might recognize something... the point of the argument is the argument. http://www.wired.com/wiredscience/2011/05/the-sad-reason-we-reason/ - That was a very interesting read. – John Leidegren May 12 '11 at 5:56 Statistical analysis or statistical data? I think this example in your question relates to statistical data: "I read that 10% of the world population has this disease". In other words, in this example someone is using numbers to help communicate quantity more effectively than just saying 'many people'. My guess is that the answer to your question is hidden in the motivation of the speaker for why she is using numbers. It could be to communicate some notion better, or it could be to show authority, or it could be to dazzle the listener. The good thing about stating numbers rather than saying 'very big' is that people can refute the number. See Popper's idea on refutation. - Hypothesis: $A$ (Textbook) Result: Do not reject $A$ ($\sigma = c$) Your Statement: $A$ holds with probability $\sigma$! Correct would be: In this case, you know nothing. If you want to "prove" $A$, your hypothesis has to be $\neg A$; reject it with $\sigma$ to get the desired statement. -
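A quick numerical check of the binomial figures quoted in the first answer above (assuming, as there, 10 independent people and a 10% rate; plain Python, standard library only):

```python
p, n = 0.10, 10

p_none = (1 - p) ** n                        # probability nobody has the characteristic
p_exactly_one = n * p * (1 - p) ** (n - 1)
p_two_or_more = 1 - p_none - p_exactly_one

print(round(p_none, 4))         # 0.3487 -> "about 35%"
print(round(p_two_or_more, 4))  # 0.2639 -> "about 26%"
```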
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9587662816047668, "perplexity_flag": "middle"}
http://www.reference.com/browse/bereft+life
Definitions

# Half-life

[haf-lahyf, hahf-] /ˈhæfˌlaɪf, ˈhɑf-/

The half-life of a quantity whose value decreases with time is the interval required for the quantity to decay to half of its initial value. The concept originated in describing how long it takes atoms to undergo radioactive decay, but also applies in a wide variety of other situations. The term "half-life" dates to 1907. The original term was "half-life period", but that was shortened to "half-life" starting in the early 1950s. Half-lives are very often used to describe quantities undergoing exponential decay—for example, radioactive decay. However, a half-life can also be defined for non-exponential decay processes. For a general introduction and description of exponential decay, see the article exponential decay. For a general introduction and description of non-exponential decay, see the article rate law.

The following table shows the reduction of the quantity in terms of the number of half-lives elapsed.

| Number of half-lives elapsed | Fraction remaining | Percentage remaining |
| --- | --- | --- |
| 0 | 1/1 | 100 |
| 1 | 1/2 | 50 |
| 2 | 1/4 | 25 |
| 3 | 1/8 | 12.5 |
| 4 | 1/16 | 6.25 |
| 5 | 1/32 | 3.125 |
| 6 | 1/64 | 1.563 |
| 7 | 1/128 | 0.781 |
| ... | ... | ... |
| $n$ | $1/2^n$ | $100(1/2^n)$ |

## Probabilistic nature of half-life

A half-life often describes the decay of discrete entities, such as radioactive atoms. In that case, it does not work to use the definition "half-life is the time required for exactly half of the entities to decay". For example, if there is just one radioactive atom with a half-life of 1 second, there will not be "half of an atom" left after 1 second. There will be either zero atoms left or one atom left, depending on whether or not the atom happens to decay. Instead, the half-life is defined in terms of probability. It is the time when the expected value of the number of entities that have decayed is equal to half the original number. For example, one can start with a single radioactive atom, wait its half-life, and measure whether or not it decays in that period of time. Perhaps it will and perhaps it will not. But if this experiment is repeated again and again, it will be seen that it decays within the half-life 50% of the time. In some experiments (such as the synthesis of a superheavy element), there is in fact only one radioactive atom produced at a time, with its lifetime individually measured. In this case, statistical analysis is required to infer the half-life. In other cases, a very large number of identical radioactive atoms decay in the time-range measured. In this case, the central limit theorem ensures that the number of atoms that actually decay is essentially equal to the number of atoms that are expected to decay. In other words, with a large enough number of decaying atoms, the probabilistic aspects of the process can be ignored. There are various simple exercises that demonstrate probabilistic decay, for example involving flipping coins or running a computer program.
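One such "computer program" exercise, as a minimal sketch (the atom count, the seed, and the five steps are arbitrary illustrative choices; one time step is taken to be one half-life, i.e. a survival probability of 1/2 per step):

```python
import random

random.seed(0)
atoms = 10_000        # illustrative number of atoms
p_decay = 0.5         # probability of decaying within one half-life

remaining = atoms
for step in range(1, 6):
    remaining = sum(1 for _ in range(remaining) if random.random() > p_decay)
    print(step, remaining)   # roughly atoms / 2**step: ~5000, ~2500, ~1250, ...
```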
## Formulae for half-life in exponential decay

An exponential decay process can be described by any of the following three equivalent formulae:

$N_t = N_0 (1/2)^{t/t_{1/2}}$

$N_t = N_0 e^{-t/\tau}$

$N_t = N_0 e^{-\lambda t}$

where

* $N_0$ is the initial quantity of the thing that will decay (this quantity may be measured in grams, moles, number of atoms, etc.),
* $N_t$ is the quantity that still remains and has not yet decayed after a time $t$,
* $t_{1/2}$ is the half-life of the decaying quantity,
* $\tau$ is a positive number called the mean lifetime of the decaying quantity,
* $\lambda$ is a positive number called the decay constant of the decaying quantity.

The three parameters $t_{1/2}$, $\tau$, and $\lambda$ are all directly related in the following way:

$t_{1/2} = \frac{\ln(2)}{\lambda} = \tau \ln(2)$

where $\ln(2)$ is the natural logarithm of 2 (approximately 0.693). A detailed derivation of the relationship between half-life, decay time, and decay constant follows.

$N_t = N_0 (1/2)^{t/t_{1/2}}$

$N_t = N_0 e^{-t/\tau}$

$N_t = N_0 e^{-\lambda t}$

We want to find a relationship between $t_{1/2}$, $\tau$, and $\lambda$, such that these three equations describe exactly the same exponential decay process. Comparing the equations, we find the following condition:

$(1/2)^{t/t_{1/2}} = e^{-t/\tau} = e^{-\lambda t}$

Next, we'll take the natural logarithm of each of these quantities.

$\ln\left((1/2)^{t/t_{1/2}}\right) = \ln\left(e^{-t/\tau}\right) = \ln\left(e^{-\lambda t}\right)$

Using the properties of logarithms, this simplifies to the following:

$(t/t_{1/2})\ln(1/2) = (-t/\tau)\ln(e) = (-\lambda t)\ln(e)$

Since the natural logarithm of $e$ is 1, we get:

$(t/t_{1/2})\ln(1/2) = -t/\tau = -\lambda t$

Canceling the factor of $t$ and plugging in $\ln(1/2) = -\ln 2$, the eventual result is:

$t_{1/2} = \tau \ln 2 = \frac{\ln 2}{\lambda}.$

By plugging in and manipulating these relationships, we get all of the following equivalent descriptions of exponential decay, in terms of the half-life:

$N_t = N_0 (1/2)^{t/t_{1/2}} = N_0 2^{-t/t_{1/2}} = N_0 e^{-t\ln(2)/t_{1/2}}$

$t_{1/2} = t/\log_2(N_0/N_t) = t/(\log_2(N_0) - \log_2(N_t)) = \left(\log_{2^t}(N_0/N_t)\right)^{-1} = t\ln(2)/\ln(N_0/N_t)$

Regardless of how it's written, we can plug into the formula to get

* $N_t = N_0$ at $t = 0$ (as expected—this is the definition of "initial quantity")
* $N_t = (1/2)N_0$ at $t = t_{1/2}$ (as expected—this is the definition of half-life)
* $N_t$ approaches zero when $t$ approaches infinity (as expected—the longer we wait, the less remains).

### Decay by two or more processes

Some quantities decay by two exponential-decay processes simultaneously.
In this case, the actual half-life $T_{1/2}$ can be related to the half-lives $t_1$ and $t_2$ that the quantity would have if each of the decay processes acted in isolation:

$\frac{1}{T_{1/2}} = \frac{1}{t_1} + \frac{1}{t_2}$

For three or more processes, the analogous formula is:

$\frac{1}{T_{1/2}} = \frac{1}{t_1} + \frac{1}{t_2} + \frac{1}{t_3} + \cdots$

For a proof of these formulae, see Decay by two or more processes.

### Examples

There is a half-life describing any exponential-decay process. For example:

* The current flowing through an RC circuit or RL circuit decays with a half-life of $RC\ln(2)$ or $\ln(2)L/R$, respectively.
* In a first-order chemical reaction, the half-life of the reactant is $\ln(2)/\lambda$, where $\lambda$ is the reaction rate constant.
* In radioactive decay, the half-life is the length of time after which there is a 50% chance that an atom will have undergone nuclear decay. It varies depending on the atom type and isotope, and is usually determined experimentally.

## Half-life in non-exponential decay

Many quantities decay in a way not described by exponential decay—for example, the evaporation of water from a puddle, or (often) the chemical reaction of a molecule. In this case, the half-life is defined the same way as before: the time elapsed before half of the original quantity has decayed. However, unlike in an exponential decay, the half-life depends on the initial quantity, and changes over time as the quantity decays.

As an example, the radioactive decay of carbon-14 is exponential with a half-life of 5730 years. If you have a quantity of carbon-14, half of it (on average) will have decayed after 5730 years, regardless of how big or small the original quantity was. If you wait another 5730 years, one-quarter of the original will remain. On the other hand, the time it will take a puddle to half-evaporate depends on how deep the puddle is. Perhaps a puddle of a certain size will evaporate down to half its original volume in one day. But if you wait a second day, there is no reason to expect that precisely one-quarter of the puddle will remain; in fact, it will probably be much less than that. This is an example where the half-life reduces as time goes on. (In other non-exponential decays, it can increase instead.) For specific, quantitative examples of half-lives in non-exponential decays, see the article Rate equation.

A biological half-life is also a type of half-life associated with a non-exponential decay, namely the decay of the activity of a drug or other substance after it is introduced into the body.
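A short numerical sketch of the formulae above (the 5730-year carbon-14 half-life is the figure quoted in the article; the two-process half-lives $t_1$, $t_2$ at the end are made-up values):

```python
import math

N0, t_half = 1.0, 5730.0            # initial quantity, carbon-14 half-life in years
t = 2 * t_half                      # wait two half-lives

tau = t_half / math.log(2)          # mean lifetime
lam = math.log(2) / t_half          # decay constant

n1 = N0 * 0.5 ** (t / t_half)
n2 = N0 * math.exp(-t / tau)
n3 = N0 * math.exp(-lam * t)
print(n1, n2, n3)                   # all ~0.25: one quarter remains, as in the text

# Combined half-life of two simultaneous decay processes: 1/T = 1/t1 + 1/t2
t1, t2 = 2.0, 3.0                   # illustrative values
print(1.0 / (1.0 / t1 + 1.0 / t2))  # 1.2
```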
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 32, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9165628552436829, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/14518/applications-of-noncommutative-geometry/14526
Applications of Noncommutative Geometry Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) This is related to Anweshi's question about theories of noncommutative geometry. Let's start out by saying that I live, mostly, in a commutative universe. The only noncommutative rings I have much truck with are either supercommutative, almost commutative (filtered, with commutative associated graded), group algebras or matrix algebras, none of which really show many of the true difficulties of noncommutative things. So, here's my (somewhat pithy) question: what's noncommutative geometry good for? To be a bit more precise, I have a vague sense that $C^*$ stuff is supposed to work well in quantum mechanics, but I'm somewhat more interested in more algebraic noncommutative geometry. What sorts of problems does it solve that we can't solve without leaving the commutative world? Why should, say, a complex algebraic geometer learn some noncommutative geometry? - In light of the answers and (probably) your interests, maybe you could add a parenthetical "(algebraic)" in front of "geometry" in the title. – Kevin Lin Feb 8 2010 at 23:23 While algebraic geometry is my primary interest, I'm curious about anything applying it to more traditional geometry. – Charles Siegel Feb 8 2010 at 23:30 Charles, I'm not an expert on NCG but it seems to me that most of the phase spaces in the real world are NONcommutative. You drink after pouring into the cup, marry after dating (or at least the order tells us the culture!), put the car in drive before pressing the gas, and so on. Euclidean distance seems a special case in this regard. – isomorphismes Mar 19 at 7:15 6 Answers Charles, a couple of reasons why a complex algebraic geometer (certainly someone who is interested in moduli spaces of vector bundles, as your profile tells me) might at least keep an open verdict on the stuff NC-algebraic geometers (NCAGers from now on) are trying to do. in recent years ,a lot of progress has been made towards understanding moduli spaces of semi-stable representations of 'formally smooth' algebras (think 'smooth in the NC-world). in particular when it comes to their etale local structure and their rationality. for example, there is this book, by someone. this may not seem terribly relevant to you until you realize that some of the more interesting moduli spaces in algebraic geometry are among those studied. for example, the moduli space of semi-stable rank n bundles of degree 0 over a curve of genus g is the moduli space of representations of a certain dimension vector over a specific formally smooth algebra, as Aidan Schofield showed. he also applied this to rationality results about these spaces. likewise, the moduli space of semi-stable rank n vectorbundles on the projective plane with Chern classes c1=0 and c2=n is birational to that of semi-simple n-dimensional representations of the free algebra in two variables. the corresponding rationality problem has been studied by NCAG-ers (aka 'ringtheorists' at the time) since the early 70ties (work by S.A. Amitsur, Claudio Procesi and Ed Formanek). by their results, we NCAGers, knew that the method of 'proof' by Maruyama of their stable rationality in the mid 80ties, couldn't possibly work. it's rather ironic that the best rationality results on these moduli spaces (of bundles over the projective plane) are not due to AGers but to NCAGers : Procesi for n=2, Formanek for n=3 and 4 and Bessenrodt and some guy for n=5 and 7. 
together with a result by Aidan Schofield these results show that this moduli space is stably rational for all divisors n of 420. further, what a crepant resolution of a quotient singularity is to you, is to NCAGers the moduli space of certain representations of a nice noncommutative algebra over the singularity. likewise, when you AGers mumble 'Deligne-Mumford stack', we NCAGers say 'ah! a noncommutative algebra'. - 4 Dear Lieven, could you tell "someone" that he has written a very interesting book ? – Georges Elencwajg Feb 7 2010 at 21:48 3 Dear Mr Le Bruyn, could you please elaborate on the connection betwteen DM-stacks and non-commutative algebras? – Dima Sustretov Oct 21 2010 at 17:49 I just encountered the paper "Noncommutative Coordinate Rings and Stacks": web.maths.unsw.edu.au/~danielch/paper/stacks.pdf Chan and Ingalls construct a structure sheaf of noncommutative algebras over a stack that meets some restrictive conditions. I'm told that this result has been extended somewhat, but could not find a reference. Their condition seems to be met for $\mathcal{M}_g$, $g\geq 2$. Whether such a relationship exists in general seems to be an open question. – Andrew Dudzik Oct 31 2011 at 17:36 You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. A cool application which I can somehow appreciate is Van den Bergh's proof of dimension $3$ case of the Bondal-Orlov conjecture that two birational smooth Calabi-Yau varieties $X,X'$ have equivalent derived category $D(X) \sim D(X')$. Note that since one can construct the pluricanonical ring from $D(X)$ , this is a generalization of the Batyrev's conjecture that they have the same Hodge numbers. The dimension $3$ case was first proved by Bridgeland, but Van den Bergh's proof uses non-commutative stuff in a very concrete way. Some references can be found in this question I asked. It goes as follows: by some mimimal model program results, in dimension $3$ any birational $X,X'$ are related by a series of flops $X \to Y \leftarrow X^+$. So we only need to prove $D(X) \sim D(X^+)$. Then one builds a special vector bundle (sum of an exceptional collection of line bundles in this case, some perverse stuff!) on X. Pushforward to $Y$, one gets a coherent sheaf $E$. Let $A= End(E)$. The funny thing is that $D(X) \sim D(A)$. Note that $A$ is non-commutative. Now, do the same thing for $X+$ one gets $A+$, say. But it is fairly easy to prove that $D(A) \sim D(A^+)$ directly on $Y$, so we are done. If we carry out this on an Atiyah's flop or Reid's pagoda, one can see actually that $A$ and $A+$ are the same. This indicates that the non-commutative route can simplify things. There seem to be many experts on this site, so surely you will get a lot of much better answers. But this example seems most down to earth for me, and the conjectures come from complex geometry (motivated by physics?) so I hope it helps you as well. EDIT: here is something to complement Lieven's great answer above: given $A$ one can actually construct back $X$ as a moduli space of certain A-representations (see Section 6 of this). One needs $A$ to have finite global dimension so $X$ can be smooth (the fact that $X$ is smooth is proved via the very algebraic intersection theorem). 
In dimension $3$, this example explains Lieven's sentence: further, what a crepant resolution of a quotient singularity is to you, is to NCAGers the moduli space of certain representations of a nice noncommutative algebra over the singularity. - Reference for Atiyah's flop or Reid's pagoda: (en.wikipedia.org/wiki/…) – Hailong Dao Feb 7 2010 at 19:30 As I mentioned in my post on the other question, there are two very fertile applications to Non-commutative Algebraic Geometry in the style of A. Rosenberg; 1. Representation Theory 2. Physics I provided references in my post over there for both of these, but here are some again 1. Rep Theory: First, Second. 2. Physics: First, Second, Third. - 2 I do not want to repeat the comment I put to Feb7's answer, but it applies here, too. – Orbicular Feb 7 2010 at 21:01 Because there are several approaches to noncommutative algebraic geometry. I will just briefly talk about the motivation of Rosenberg's machine(categorical geometry) and Kontsevich-Rosenberg's machine(functor view point). The main motivation for Rosenberg's approach is representation theory. Spectrum of abelian category is an important notion in this approach which provided the suitable language to talk about the irreducible representations. For example, say module over noncommutative algebra. So the irreducible representations of this noncommutative algebra is one to one corresponding to the closed points of this spectrum. If we have a noncommutative algebra A, and we have subalgebra B(not any subalgebra,the choice of B is depended on a theorem proved in this framework). What we always do is constructing representations of A from representations of B(which called induction). Using the language of spectrum of abelian category. This induction process can be viewed as the morphism from spectrum of B-mod to Spectrum of A-mod. Well, why is it good? Rosenberg developed a machine dealing with these things. He proved a theorem(I will not mention it at the moment)which allows one to construct all the irreducible representations of A from irreducible representations of B(which is easy to see) in a very functorial way. This theorem is just like an algorithm. Using this method, we can also find some generic representations of B(which correspondence to the generic points of the spectrum). In fact, he introduced a class of algebra called hyperbolic algebra(some other people called it generalized weyl algebra). Many noncommutative associative algebra in representation theory and mathematical physics are of this kind. For example, n-th weyl algebra, Heisenberg algebra, enveloping algebra of Lie algebra and their quantum analogue. One can used the machine he developed construct the all irreducible representations of these algebras very easily and canonically(I'd like to say, it is somehow like the "automata" way). I'd like to point out what I talked above is how spectral theory of abelian category comes into representation theory. But this is just a start of this game. More interesting machinery in this framework is one can reduced the representation of some associative algebra(say enveloping algebra of Lie algebra or quantum analogue)to representations of hyperbolic algebra(say, weyl algebra and some others). This story is very interesting because this process can be viewed as find affine cover for noncommutative scheme: Let me elaborate a bit: It is well known to all representation theorist, we have Beilinson-Bernstein localization for Lie algebra. 
From the point of view of noncommutative algebraic geometry. category of D-modules on flag variety of Lie algebra is a certain kind of noncommutative projective scheme. We can construct the affine cover(D-module on affine space)for this noncommutative projective scheme which is category of module over Weyl algebra. Now, we have reduced the representations of Lie algebra to the representations of Weyl algebra. Then, using the Gluing machinery of Rosenberg's(application of Barr-Beck's theorem). We can construct irreducible representations of Lie algebra(global)from that of Weyl algebra(local). Additonally,for quantum group, we can apply this framework directly to get representations of quantum group(noncommutative scheme)from reps of algebra of quantum differential operator(analogue of Weyl algebra)(affine cover).There is a theorem which supported the method I mentioned above, called Locality theorem. - I do not think NCG arose as a way to solve problems in algebraic geometry using new methods. The motivation seems to be to broaden your horizons further. The answer of Bischof to the question you have cited, gives many contact points with classical topics in algebraic geometry such as deformation theory, invariant theory, moduli spaces, etc.. Also see this article of Lieven le Bryun, in which he speaks of points that can "talk to each other" via common tangent information. In other words, the "Chinese Remainder Theorem" fails for noncommutative rings and so points can be exceedingly close to each other. This is a more interesting way of looking at more general spaces. I suggest that you look more formally at the "noncommutative torus", which is a very important example in NCG. This space is a quintessential example of the above property of points being very close to each other. Then again, with the more abstract topics in algebraic geometry, n-categories, stacks and all that stuff, these developments could be carried over to noncommutative geometry, and since NCG is at the heart of many developments in physics, it might give wonderful applications to string theory etc., and in a deeper understanding of our physical world. - 3 Please keep in mind that - even though it is stated very often - noncommutative geometry does not give "real" insight to physics. The reason is that they only have toy models, all of which are unphysical (in the sense that they predict things which differ from real world measurements). Furthermore even the toy models are usually extremely complicated, killing most expectations to get a "real" model (which is not toyish). – Orbicular Feb 7 2010 at 21:00 There is a lecture course by M. Kontsevich (ENS, 1998) which has a chapter entitled "algebraic geometry from a noncommutative viewpoint". The notes are available here www.math.uchicago.edu/~mitya/langlands/kontsevich.ps It mentions in particular Bondal-Orlov's theorem that if a variety is Fano or of general type, then one can reconstruct it from its coherent derived category. There is also a section on Mirror symmetry. Proofs are only sketched, but these sketches can be most illuminating (or sometimes not). In general, the "noncommutative geometry" section of http://www.math.uchicago.edu/~mitya/langlands.html can be useful, but some of the references there (Keller, Lefevre) do not specifically deal with applications to algebraic geometry. (And maybe someone else here can elaborate on those that do.) -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 29, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9303898811340332, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/163988/homotopy-versus-isotopy
# Homotopy versus Isotopy First of all, I apologize for the crudeness of my question. Consider the construction of the homotopy groups. We mod out the space of "loops" at a point by the equivalence relation generated by homotopy equivalence, then give the new space a group structure where the operation is "concatenation" of loops. My question: Could we, instead, mod out the space of "loops" (without reference to a specific point) by the equivalence relation generated by isotopy equivalence, then give this space a group structure using some kind of "surgery" on the equivalence classes? - 1 Homotopy equivalence is already an equivalence relation. Are you talking about ambient isotopy here? What do you mean by "some kind of surgery"? – Qiaochu Yuan Jun 28 '12 at 2:10 Yes, ambient isotopy was what I was referencing. By "surgery" I meant something like what follows: let f:[0,1] --> M and g:[0, 1] --> M be two loops on the space M, then remove a disk from f and a disk from g, then form a new "loop" by attaching line segments to the end points in an obvious manner (preserving orientation?). – Mathmonkey Jun 28 '12 at 2:27 Do you want to cut out disks that "bound" the images of $f$ and $g$? This may only be possible for trivial loops, those homotopic to the constant loop. – Olivier Bégassat Jun 28 '12 at 2:30 1 What do you mean by "remove a disk from $f$"? – Qiaochu Yuan Jun 28 '12 at 2:34 1 You cannot remove such a disc from the constant loop. Besides, the line segments you want to attach are not canonical (there are several homotopy classes of them and you have to choose one). – user17786 Jun 28 '12 at 11:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9177159070968628, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/64237?sort=newest
## Limits in functor categories Let $C$ have all limits of diagrams indexed by $J$, and for each $b\in B$ let $E_b:C^B \rightarrow C$ be the evaluation functor that evaluates at $b$. Given a diagram $F: J \rightarrow C^B$, we can calculate its limit by doing it pointwise, i.e. since for each $b\in B$, $E_b \circ F : J \rightarrow C$ has a limit $v_b$, we may construct a functor $V: B \rightarrow C$ that is a limit of $F$ by letting $V(b) = v_b$ on objects $b\in B$ and doing the corresponding thing for arrows. My question is: if $F$ has a limit, does each $E_b\circ F$ have a limit? This is similar to the problem of mono/epimorphisms, answered in http://mathoverflow.net/questions/17953/can-epi-mono-for-natural-transformations-be-checked-pointwise So basically, if some property is pointwise true in the original category $C$ then it's true in any functor category $C^B$, but which properties are true the other way around? - ## 1 Answer What you're asking is whether every limit in a functor category $[B,C]$ is a pointwise limit. The answer is yes if $C$ is complete, but not always otherwise. Kelly gives an example in Basic Concepts of Enriched Category Theory, section 3.3, of a limit in a functor category that is not a pointwise limit. I don't know of any striking examples of always-pointwise properties. One obvious example is that of being an absolute limit or colimit (i.e. one preserved by every functor). The latter are characterized by the fact that a category is Cauchy complete if and only if it has all absolute colimits. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9062705636024475, "perplexity_flag": "head"}
http://mathoverflow.net/questions/29462?sort=oldest
## Galois representation attached to elliptic curves Unfortunately the question I am asking isn't very well-defined, but I will try to make it as precise as possible. Suppose I am given a mod-p representation of $G_Q$ into $GL_2(F_p)$. I want to check for arithmetic invariants so that I can conclude that the representation comes from a modular form but not an elliptic curve. The whole point of this exercise is to understand the difference between the representations coming from elliptic curves and cusp forms in general. I hope I was able to make the question precise. A few things that one can look at are the conductor of an elliptic curve (e.g. if the exponent of 2 in the level of the modular form is too high then it can't come from an elliptic curve) or the Hasse bound for $a_l$ for different primes. But I want to know some non-trivial arithmetic constraints attached to such invariants. Also, if such a representation doesn't come from an elliptic curve then it must come from an abelian variety of $GL_2$ type. Can anything be said about that abelian variety in general? - ## 1 Answer Since your representation $\overline{\rho}$ is defined over $\mathbb F_p$, you can't do things like the Hasse bounds, since the traces $a_{\ell}$ of Frobenius elements at unramified primes are just integers mod $p$, and so don't have a well-defined absolute value. One thing you can do is check the determinant; this should be the mod $p$ cyclotomic character if $\overline{\rho}$ is to come from an elliptic curve. In general (or more precisely, if $p$ is at least 7), that condition is not sufficient (although it is sufficient if $p = 2,3$ or 5); see the various results discussed in this paper of Frank Calegari, for example. In particular, the proof of Theorem 3.3 in that paper should give you a feel for what can happen in the mod $p$ Galois representation attached to weight 2 modular forms that are not defined over $\mathbb Q$, while the proof of Theorem 3.4 should give you a sense of the ramification constraints on a mod $p$ representation imposed by coming from an elliptic curve. - Ok, I should have added that. I am assuming that the Galois representation is coming from a modular form, so the determinant already has cyclotomic character. As for the example for p=7, there is indeed a form of level 29 and weight 2 whose mod 7 Galois representation doesn't come from a modular form. So that got me thinking about what went wrong for that prime. As you have pointed out, the condition is not sufficient for higher primes; that raises the natural question about the arithmetic of these representations. Anyway thanks a lot for your answer. I will look into Calegari's paper – Arijit Jun 25 2010 at 3:55 2 By "coming from a modular form", do you mean "modular form of weight 2 and trivial nebentypus"? In general, the Galois rep. coming from a modular form of weight k and nebentypus epsilon has determinant cyclotomic^(k-1) epsilon (or the inverse of this, depending on conventions). Also, "doesn't come from a modular form" should probably read "doesn't come from an elliptic curve". – Emerton Jun 25 2010 at 4:08 Oh, I am being really very careless. I briefly looked at the paper, but it doesn't say anything about the abelian variety that corresponds to the modular form. I believe that it is not a very appropriate question because my knowledge in this field is really very limited. Thanks again Prof. Emerton for clarifying my doubts. 
– Arijit Jun 25 2010 at 4:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9490084648132324, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/26311/proof-that-mathbbr-setminus-mathbbq-is-not-an-f-sigma-set
# Proof That $\mathbb{R} \setminus \mathbb{Q}$ Is Not an $F_{\sigma}$ Set I am trying to prove that the set of irrational numbers $\mathbb{R} \setminus \mathbb{Q}$ is not an $F_{\sigma}$ set. Here's my attempt: Assume that indeed $\mathbb{R} \setminus \mathbb{Q}$ is an $F_{\sigma}$ set. Then we may write it as a countable union of closed subsets $C_i$: $$\mathbb{R} \setminus \mathbb{Q} = \bigcup_{i=1}^{\infty} \ C_i$$ But $\text{int} ( \mathbb{R} \setminus \mathbb{Q}) = \emptyset$, so in fact each $C_i$ has empty interior as well. But then each $C_i$ is nowhere dense, hence $\mathbb{R} \setminus \mathbb{Q} = \bigcup_{i=1}^{\infty} \ C_i$ is thin. But we know $\mathbb{R} \setminus \mathbb{Q}$ is thick, a contradiction. This seems a bit too simple. I looked this up online, and although I haven't found the solution anywhere, many times there is a hint: Use Baire's Theorem. Have I skipped an important step I should explain further or is Baire's Theorem used implicitly in my proof? Or is my proof wrong? Thanks. EDIT: Thin and thick might not be the most standard terms so: Thin = meager = 1st Baire category - ## 2 Answers Your solution is correct. You could also argue that $\mathbb{R} = \bigcup_{i =1}^{\infty} C_{i} \cup \bigcup_{q \in \mathbb{Q}} \{q\}$, so by Baire one of the $C_{i}$ must have non-empty interior, contradicting the fact that $\mathbb{R} \smallsetminus \mathbb{Q}$ has empty interior. - @milcak: I forgot to mention that you assume Baire implicitly when asserting that $\mathbb{R}\smallsetminus\mathbb{Q}$ is thick – t.b. Mar 12 '11 at 2:59 Suppose $\mathbb{R} \setminus \mathbb{Q}$ is an $F_{\sigma}$ set. Then $\mathbb{Q}$ is a $G_{\delta}$ set which is a contradiction. In other words, suppose $$\mathbb{R} \setminus \mathbb{Q} = \bigcup_{i=1}^{\infty} \ C_i$$ Then by De Morgan's laws $$\mathbb{Q} = \bigcap_{i=1}^{\infty} \ C_i^{c}$$ which is a contradiction since $\mathbb{Q}$ is a countable intersection of open sets and we know that the intersection between the rational is irrationals is $\emptyset$. So $\emptyset$ is a countable intersection of open dense subsets which contradicts Baire's category theorem. - 1 What does this line mean: " we know that the intersection between the rational is irrationals is ∅"? – Soarer Mar 11 '11 at 7:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9606278538703918, "perplexity_flag": "head"}
http://mathoverflow.net/questions/78115/references-about-pseudoeffective-cone/78117
## References about pseudoeffective cone I'm looking for references with explicit computations of the pseudoeffective cone $\overline{\text{Eff}}(X)$ of a projective variety $X$. - ## 1 Answer I'd say Lazarsfeld's book "Positivity in algebraic geometry I,II" is the standard reference these days. In particular, Volume I has a lot of explicit examples. I also recommend Debarre's book 'Higher dimensional algebraic geometry" which is similar in style with a lot of nice examples and explicit computations. If you are familiar with toric geometry, there is (not surprisingly) a simple description of the pseudoeffective cone in terms of the combinatorial data in the fan. See Cox-Little-Schenck's new book 'Toric varieties' for details. This gives a hoard of interesting examples. If you are looking for examples with non-toric varieties, I'd recommend starting with the case where $X$ is a surface. In that case the effective cone coincides with the cone of curves and can be studied using the intersection form. Let me give an example: Example. Let $X$ be the blow-up of $\mathbb{P}^2$ at two points and let $E_1,E_2$ be the exceptional divisors. A basis for $Pic(X)$ is given by $L,E_1,E_2$ where $L$ is the pull-back of a general line in $\mathbb{P}^2$. We show that $\overline{Eff}(X)$ is spanned by $E_1,E_2$ and the strict transform of the line $L_0=L-E_1-E_2$. Let $\tau$ be the cone spanned by these three classes. Since they are all effective we have $\tau\subset \overline{Eff}(X)$. Conversely, let $D$ be any effective divisor with class $aL+bE_1+cE_2$. We will show that $D$ can be written as a sum of elements from $\tau$. We may assume $D$ to be irreducible. If $D$ is not one of $E_1,E_2,L_0$, we then have $D.E_i\ge 0$ and $D.L_0\ge 0$. In particular, $D$ belongs to the dual cone of $\tau$, which is easily computed as $\tau^*=\langle L,L-E_1,L-E_2\rangle_{\ge 0}$. Now $L, L-E_1, L-E_2$ are all effective and can be written as positive linear combinations of $E_1,E_2,L_0$, and hence so can $D$. As a by-product, we have just computed the nef cone, which is $\tau^*$. Of course, this example is in fact toric, but the main point is that this type of argument works for more general surfaces, as long as you have a good description of the surface. For example, the argument above generalizes to show that for a Del Pezzo surface, $\overline{Eff}(X)$ is spanned by the $(-1)$-curves on $X$ (this is shown in Debarre's book, I think). In general, the effective cones of rational surfaces have been studied a lot using their models as blow-ups. For material on the effective cones of surfaces, see for example B. Harbourne, "Global aspects of the geometry of surfaces", and Y. Tschinkel, "Algebraic varieties with many rational points". For K3 surfaces, S. Kovacs has a nice paper on the 'Cone of curves of a K3 surface' (see also this answer). There are also many explicit examples in Artebani-Hausen-Laface's paper On Cox rings of K3-surfaces. I can also recommend Artie Prendergast-Smith's papers at his homepage. In particular, his PhD thesis contains a very explicit example where he computes the effective cone of a rational threefold. In addition to pseudoeffective cones, you might also be interested in seeing explicit computations of Cox rings, which are graded by the monoid of effective divisors (in particular, if you have a description of the Cox ring, you know all about the effective cone). 
Here I can recommend the following papers: • A. Laface, M. Velasco, A survey on Cox rings • I. Arzhantsev, U. Derenthal, J. Hausen, A. Laface, Cox rings • J. Gonzalez, M. Hering, S. Payne, H. Süß, Cox rings and pseudoeffective cones of projectivized toric vector bundles • M. Artebani, A. Laface, Cox rings of surfaces and the anticanonical Iitaka dimension -
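To fill in the arithmetic that the example above leaves implicit (this is just standard intersection theory on the blow-up, not part of the original answer): on $X$ one has $L^2=1$, $E_i^2=-1$ and $L\cdot E_i=E_1\cdot E_2=0$. Writing $D=aL+bE_1+cE_2$, the pairings against the generators of $\tau$ are
$$D\cdot E_1=-b,\qquad D\cdot E_2=-c,\qquad D\cdot L_0=a+b+c,$$
and if these are all non-negative then
$$D=(a+b+c)\,L+(-b)(L-E_1)+(-c)(L-E_2)$$
is a non-negative combination of $L$, $L-E_1$, $L-E_2$; this is the "easily computed" description $\tau^*=\langle L,L-E_1,L-E_2\rangle_{\ge 0}$. The last step of the argument uses the decompositions
$$L=L_0+E_1+E_2,\qquad L-E_1=L_0+E_2,\qquad L-E_2=L_0+E_1,$$
which exhibit the dual generators (and hence $D$ itself) as effective combinations of $E_1$, $E_2$, $L_0$.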
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 34, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9360163807868958, "perplexity_flag": "head"}
http://mathoverflow.net/questions/32269?sort=newest
## Guess a number with at most one wrong answer Consider a game where one player picks an integer between 1 and 1000 and the other has to guess it by asking yes/no questions. If all the answers are correct then it's clear that in the worst case it's enough to ask 10 questions, and 10 is the smallest such number. What if wrong answers are allowed? I'm interested in the case when at most one answer may be wrong. I know a strategy that uses 15 questions in the worst case. Consider a number in the range [1..1000] as 10 bits. First you ask the values of all 10 bits ("Is it true that the $i$-th bit is zero?"). After that you get some number. Ask if this number is the number the first player picked, and if not, you have to find where the wrong answer was given. There are 11 positions. Using a similar argument you can do that in 4 questions. Is it possible to ask fewer than 15 questions in the worst case? - 4 What if the person lies when you go for the number on Question 11? – Bruce Westbury Jul 17 2010 at 8:02 This is a standard ECC (en.wikipedia.org/wiki/Error-correcting_code) problem. – BlueRaja Jul 18 2010 at 7:22 BlueRaja -- no, it's not: see Peter Shor's comment to falagar's answer. – JBL Jul 18 2010 at 14:37 You can see the difference between adaptive and non-adaptive (i.e. ECC's) in shreevatsa's answer below. – Peter Shor Jul 18 2010 at 18:43 ## 5 Answers Yes, there is a way to guess the number asking 14 questions in the worst case. To do it you need a linear code of length 14, dimension 10 and distance at least 3. One such code can be built from the Hamming code (see http://en.wikipedia.org/wiki/Hamming_code). Here is the strategy. Let us denote the bits of the first player's number by $a_i$, $i \in [1..10]$. We start by asking the values of all those bits, that is, we ask the questions "Is it true that the $i$-th bit of your number is zero?" Let us denote the answers to those questions by $b_i$, $i \in [1..10]$. Now we ask 4 additional questions: Is it true that $a_{1} \otimes a_{2} \otimes a_{4} \otimes a_{5} \otimes a_{7} \otimes a_{9}$ is equal to zero? ($\otimes$ is summation modulo $2$.) Is it true that $a_{1} \otimes a_{3} \otimes a_{4} \otimes a_{6} \otimes a_{7} \otimes a_{10}$ is equal to zero? Is it true that $a_{2} \otimes a_{3} \otimes a_{4} \otimes a_{8} \otimes a_{9} \otimes a_{10}$ is equal to zero? Is it true that $a_{5} \otimes a_{6} \otimes a_{7} \otimes a_{8} \otimes a_{9} \otimes a_{10}$ is equal to zero? Let $q_1$, $q_2$, $q_3$ and $q_4$ be the answers to those additional questions. Now the second player calculates $t_{i}$ ($i \in [1..4]$) --- the answers those questions should have according to the bits $b_j$ he previously received from the first player. Now there are 16 ways in which the bits $q_i$ can differ from the $t_i$. Let $d_i = q_i \otimes t_i$ (hence $d_i = 1$ iff $q_i \ne t_i$). 
Let us make a table of all possible errors and the corresponding values of the $d_i$:

position of error -> $(d_1, d_2, d_3, d_4)$
no error -> (0, 0, 0, 0)
error in $b_1$ -> (1, 1, 0, 0)
error in $b_2$ -> (1, 0, 1, 0)
error in $b_3$ -> (0, 1, 1, 0)
error in $b_4$ -> (1, 1, 1, 0)
error in $b_5$ -> (1, 0, 0, 1)
error in $b_6$ -> (0, 1, 0, 1)
error in $b_7$ -> (1, 1, 0, 1)
error in $b_8$ -> (0, 0, 1, 1)
error in $b_9$ -> (1, 0, 1, 1)
error in $b_{10}$ -> (0, 1, 1, 1)
error in $q_1$ -> (1, 0, 0, 0)
error in $q_2$ -> (0, 1, 0, 0)
error in $q_3$ -> (0, 0, 1, 0)
error in $q_4$ -> (0, 0, 0, 1)

All the values of $(d_1, d_2, d_3, d_4)$ are different. Hence we can find where the error was, and hence find all the $a_i$. - 2 +1- This is really clever! – Dylan Wilson Jul 17 2010 at 8:32 5 This is a non-adaptive strategy (meaning the questions don't depend on previous answers). For one lie, I think non-adaptive and adaptive strategies give the same answer, but this is no longer the case for more than one lie. See the survey article by Pelc referenced in another answer. – Peter Shor Jul 17 2010 at 14:47 Some more references. Joel Spencer's web page has several downloadable papers on searching with lies: http://cs.nyu.edu/spencer/papers/papers.html Ivan Niven, "Coding Theory applied to a Problem of Ulam", http://www.jstor.org/pss/2689543 , gives the Hamming code approach. Andrzej Pelc, who solved the original Ulam liar problem with $n=10^6$ and one optional lie, also has a number of papers on extensions of the problem to more lies and to other models of searching with noisy queries: http://w3.uqo.ca/pelc/search.html - The other answers are truly excellent and have settled the intended question. For a bit of fun, however, allow me to mention the following paradoxical solution. Namely, with a certain precise and reasonable understanding of the rules of your game, which I shall presently give, I claim that no additional questions are required for the lie-telling game over the truth-telling game. In particular, in your case 10 questions still suffice! Specifically, to be a bit more definite about what it means to give a wrong answer, I propose that the rules should allow that the second player, at most once during the game, decides that a given round will be a lie-telling round, for which he will privately ponder the correct truthful answer, but then give as his answer precisely the opposite of the correct answer. So if a truthful answer would have been Yes, then on this lie-telling round he says No, and conversely. (In particular, in this version of the game, the wrong answer is not a random answer in any sense, although it could be that the choice of which round is to be a lie-telling round is determined randomly.) On the other rounds, he tells the truth. Secondly, I note that you didn't insist that the questions of player 1 must have a particular form. With these rules for the liar game, I claim that no additional questions over the fully truthful case are required to determine the secret number. The reason is a simple logic trick: if in the fully truthful game, one would want on a round to ask a question $Q$, then in this liar game, one should instead ask the question $P$: • If I were to have asked $Q$ on this round, would you have said Yes? Consider how the second player will react. 
First, if he is on a truth-telling round, then he will give the same answer to this question that he would have given to $Q$. If in contrast he is on a lie-telling round, then he ponders question $P$, and considers that if the first player had asked $Q$ on this round and a truthful answer had been Yes, then he would have said No, since this is a lie-telling round, and so a truthful answer to $P$ is No, but since it is a lie-telling round, he answers Yes to $P$. Similarly, if a truthful answer to $Q$ would have been No, then a lie-telling answer to $Q$ would be Yes, and so a truthful answer to $P$ would be Yes, but since it is a lie-telling round, he answers No. Thus, because of the double-negation effect, the lie-telling answer to $P$ is the same as the truth-telling answer to $Q$. Therefore, the first player can in effect gain exactly the same information from the second player in the liar game that he can in the fully truth-telling game. The same argument shows that, in fact, it doesn't matter how often the second player decides to lie, as long as he lies by stating each time the exact opposite of a truthful answer. Indeed, the second player could randomly decide for each round whether he will lie or tell the truth on that round, but the double-negation trick of question $P$ allows the first player nevertheless to gain exactly the same information, and so no additional questions beyond the truth-telling case are required, even if the second player decides randomly at the beginning of every round whether to lie or tell the truth on that round. Ha! - One could alternatively use the question: Does $Q$ hold if and only if this is a truth-telling round? – Joel David Hamkins Jul 18 2010 at 3:01 Nice trick. :-) The standard formulation of the game avoids this possibility (so that lies do matter) by requiring that questions must be of the form "Does the number lie in set A?" for some $A \subseteq [n]$. – shreevatsa Jul 18 2010 at 4:08 Shreevatsa, in that case, I would make $A_P=\{\,n \mid n\in A_Q\iff\text{this is a truth-telling round}\,\}$. – Joel David Hamkins Jul 18 2010 at 11:35 Yeah, you're right; ignore my previous comment. It has nothing to do with the form of the question; it's rather that the notion of a "lie-telling round" which forces Carole to commit to being a "liar" even internally is too restrictive and self-referential (like the "one who always lies" logic puzzles). To lie here is to give an answer other than the truth, and your restriction on the round effectively takes away that option... I guess the original questioner's statement of "at most one wrong answer" is better after all. :-) – shreevatsa Jul 18 2010 at 19:54 1 Shreevatsa, you could say that player 1 must list the elements of $A$ explicitly, since my description of $A$ is a set that only player 2 can compute, and then your remark would regain its effect. – Joel David Hamkins Jul 19 2010 at 1:21 I wrote an expository paper on this kind of problem, http://www.austms.org.au/Publ/Gazette/2009/May09/TechPaperMeyerson.pdf - Gerry, this discloses the info about yourself! I would be happy to add +1 more for your openness. – Wadim Zudilin Jul 17 2010 at 14:32 7 Wadim, considering the subject of this thread, it is of course possible that in alleging that I wrote the paper, I have given my one wrong answer! – Gerry Myerson Jul 18 2010 at 7:40 BTW, this problem is known as the Ulam(-Renyi) liar problem or Ulam's searching game (or just "playing Twenty Questions with a liar"), and has an extensive literature. 
The following is a survey as of 2002: • Andrzej Pelc, Searching games with errors--fifty years of coping with liars, Theoretical Computer Science, Volume 270 (2002), pp. 71-109 In particular, with 1 lie allowed, to guess a number in {1…n} where n is even, the number of queries needed is the smallest integer q which satisfies n ≤ 2^q/(q+1), which for n=1000 is indeed 14. There are alternative solutions to the one-lie game in more recent papers like this and this. As observed by Peter Shor in a comment above, the general adaptive strategy when multiple lies are allowed does not look like Hamming codes. Edit: Since this has been bumped up, I may as well mention a nice result in the more general setting, proved by Joel Spencer and Peter Winkler in their paper Three Thresholds for a Liar. It is traditional to name the two players Paul and Carole, where Paul (named after Paul Erdős) is the one who asks the questions, and Carole (an anagram of oracle) is the one who answers them. Paul asks $q$ questions in all, and Carole is allowed to lie a fraction $r$ of the time. We will consider three progressively harder (for Paul) versions of what this means. In Version A, Carole is allowed to lie at most $\lfloor ri\rfloor$ times to the first $i$ questions, for all $i$. In Version B, Carole is only required to lie at most $\lfloor rq \rfloor$ times in total — she can choose to exhaust all her lies at the beginning, for instance. In Version C (nonadaptive), Paul must ask all his questions in one batch, and Carole can choose up to $\lfloor rq \rfloor$ ones to lie to. Note that if no lies are allowed ($r = 0$), the number of questions needed is exactly $\lceil \log_2 n\rceil$, and that, intuitively, if $r$ is too large, Paul cannot guess correctly at all. Specifically, they show that: • In version A, Paul wins with $\Theta(\log n)$ questions if $r < 1/2$, but Carole wins if $r \ge 1/2$. • In version B, Paul wins with $\Theta(\log n)$ questions if $r < 1/3$, but Carole wins if $r \ge 1/3$. • In version C, Paul wins with $\Theta(\log n)$ questions if $r < 1/4$, Carole wins if $r > 1/4$, and if $r = 1/4$, Paul wins but needs $\Theta(n)$ questions. -
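For concreteness, here is a small simulation of the 14-question scheme from falagar's answer above (my own sketch, not code from the thread; the function and variable names are mine). It builds the syndrome table from the same four parity checks and undoes at most one lie:

```python
# The four parity questions from the answer, as 1-based bit indices.
CHECKS = [
    (1, 2, 4, 5, 7, 9),
    (1, 3, 4, 6, 7, 10),
    (2, 3, 4, 8, 9, 10),
    (5, 6, 7, 8, 9, 10),
]

def parity(bits, check):
    return sum(bits[i - 1] for i in check) % 2

def answers(secret, lie_at=None):
    """The 14 answers for `secret` in [0, 1023]: 10 bits b_i, then 4 parities q_j.
    If lie_at is given, that one answer is flipped (the single allowed lie)."""
    bits = [(secret >> i) & 1 for i in range(10)]
    ans = bits + [parity(bits, c) for c in CHECKS]
    if lie_at is not None:
        ans[lie_at] ^= 1
    return ans

def decode(ans):
    """Recover the secret from 14 answers containing at most one lie."""
    b, q = ans[:10], ans[10:]
    syndrome = tuple(parity(b, c) ^ q[j] for j, c in enumerate(CHECKS))
    # Syndrome table: which single lie produces which (d1, d2, d3, d4).
    table = {(0, 0, 0, 0): None}                              # no lie
    for pos in range(10):                                     # lie in bit b_{pos+1}
        table[tuple(1 if pos + 1 in c else 0 for c in CHECKS)] = pos
    for j in range(4):                                        # lie in parity q_{j+1}
        table[tuple(1 if k == j else 0 for k in range(4))] = 10 + j
    err = table[syndrome]
    if err is not None and err < 10:                          # a bit answer was the lie
        b[err] ^= 1
    return sum(bit << i for i, bit in enumerate(b))

# Exhaustive check over 1..1000 and every possible single lie (or none).
assert all(decode(answers(s, lie)) == s
           for s in range(1, 1001) for lie in [None] + list(range(14)))
```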
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 81, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9444509744644165, "perplexity_flag": "middle"}
http://crypto.stackexchange.com/questions/6689/dlp-based-crypto-systems-with-multiple-independent-generators/6695
# DLP based crypto systems with multiple independent generators One example of a DLP-based crypto system (or rather a DDH-based crypto system) where the public key parameters include two independent generators of the subgroup is Cramer-Shoup. Since the security proof of this scheme is based on DDH, it seems at least one of the two generators has to be uniformly chosen anew for each public key, and made part of the public key, rather than made part of domain parameters that are shared between multiple key pairs. This seems a bit impractical, assuming you want a crypto system where the public key parameters are shared between multiple key pairs, and the individual public keys contain no more information than necessary. Furthermore, it is a bit discomforting that e.g. Cramer-Shoup does not include a method for verifying the validity of the generator choices, even though implicit verification is entailed by the trust model - you can e.g. test that the prime parameters are indeed prime, but it is harder to test if the generators have been deliberately chosen to facilitate discrete logarithm calculation. Does this matter in the case of Cramer-Shoup? I don't know, and that's not really my primary interest here. It might be noted that the relation between the two generators $g_1, g_2$ is not retained as part of the private key, and that the owner of the private key at no point in the normal operation of the scheme gets to prove that these generators have been correctly generated, but that might be insignificant. Rather, I am asking whether other schemes exist that also make use of more than one generator, and which might include those generators in domain parameters rather than include them in the individual public key. Just a crazy idea: Suppose, for instance, you choose a prime $p$ and a prime $q$ such that $p = 2kq + 1$ and such that the calculation of $\log_2 3 \pmod p$ and $\log_3 2 \pmod p$ is conjectured to be DLP-hard in $Z_p^*$. In that case the generators $g_1 = 2^{2k} \pmod p$ and $g_2 = 3^{2k} \pmod p$ might be deterministically generated (which has its own advantages with respect to assurance of validity) and included as part of the system parameters, provided that the security proof for the crypto system is such that it is sufficient that the relation between the generators $g_1$ and $g_2$ is DLP-hard. Does there exist any public key crypto system for which choosing such public key parameters would make sense, i.e. one that rests on the assumption that $\log_{g_1}g_2$ and $\log_{g_2}g_1$ are DLP-hard, rather than the assumption that the $g_2$ element in $(g_1,g_2)$ is indistinguishable from random? In order for such a crypto system to exist, the following four criteria have to be met: 1. To begin with, the crypto system obviously has to use certain operations, such as modular exponentiation in $Z^*_p$, and have certain parameters, such as multiple generators of a suitable subgroup. 2. Finding $\log_2 3 \bmod p$ and $\log_3 2 \bmod p$ have to be hard problems. If $p-1$ is smooth, for example, they might not be hard problems. There might be other criteria I am not aware of. 3. The crypto system has to be based on a hard problem that is reducible to DLP, such as REPRESENTATION. There might be other hard problems that qualify. 4. 
There have to be specific reasons, either practical reasons or security reasons, for using deterministically generated generators instead of random ones (such as storage/time/bandwidth requirements, or inclusion of the generators in domain parameters). As pointed out in one of the answers below, in some cases such reasons do not exist. That doesn't however entail that such reasons can't exist in other cases. - ## 1 Answer It's not impractical. Choosing a random generator $g$ is easy and can be done efficiently. It's efficient enough that it wouldn't be a problem to generate a random generator when you generate your keypair. I think your premise (your assumption that this is a problem or that it should be discomforting) is not valid. Consequently, I don't see the relevance of the question. Let me elaborate a bit. There are two kinds of values that might be floating around: • Domain parameters are parameters that are shared by many users. For instance, in a discrete-log based cryptosystem, if the modulus $p$ is shared by everyone, then it is a domain parameter. • Individual parameters (to coin a term) are values that are used by a single party, and are generated by the same party who uses/relies upon them. For instance, a user's private key is an individual parameter, since it is generated by that user and only used by that user. These two different kinds of values have different security and verifiability requirements: • Domain parameters are critical and particularly sensitive, since they're generated by one person (e.g., a CA or trusted authority) but used and relied upon by everyone else. If the CA secretly and dishonestly constructs a domain parameter in a way that introduces a backdoor (rather than following the algorithm they were supposed to use), everyone loses. That's very bad. The standard mitigation is to provide everyone a way to verify that the CA generated the domain parameter honestly. In summary, domain parameters must be verifiable. • Individual parameters don't have this problem. There's no reason for Alice to secretly embed a backdoor in her private key, since the only person's security who is harmed is Alice's (at worst others may obtain the ability to eavesdrop on messages encrypted to Alice, but a malicious Alice could always have achieved a similar effect simply by publishing all the messages she receives, and there's no way for others to verify Alice hasn't done that). There's no way for Bob to verify that Alice has kept the messages he sends her secret. Therefore, there's no reason why Bob would need to verify that Alice generated her private key honestly. 
I confess I might not have understood what you really wanted to know, and what problem you are trying to solve; if so, my apologies. - – Henrick Hellström Mar 14 at 20:37 @HenrickHellström, I think you are misinterpreting FIPS. FIPS requires verifiable algorithms for domain parameter generation, i.e., for common parameters, since the interests of the person generating the parameters may differ from the interests of the people using the parameters. That's one thing. But it doesn't require verifiability for single-user parameters. – D.W. Mar 15 at 4:18 For instance, FIPS doesn't require that everyone else should be able to verify I generated my RSA key correctly. Why should a dlog-based cryptosystem be any different? Answer: it's not. There's no need for that kind of verification (and FIPS doesn't require it). – D.W. Mar 15 at 4:19 Was that not clear from my question? "it seems at least one of the two generators has to be uniformly chosen anew for each public key, and made part of the public key, rather than made part of the public key parameters." – Henrick Hellström Mar 15 at 8:35 "There's no reason for Alice to secretly embed a backdoor in her private key, since the only person's security who is harmed is Alice's". That is certainly true as long as you have sufficient confidence in the authenticity of the public key. If you don't, chosen key attacks might be a concern. – Henrick Hellström Mar 15 at 9:21
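As a purely illustrative sketch of the arithmetic in the question's "crazy idea" (my own toy code, not from any answer and not an endorsement of the construction; the parameters are far too small for real use): with $p = 2kq+1$ and $q$ prime, the deterministic candidates are $g_1 = 2^{2k} \bmod p$ and $g_2 = 3^{2k} \bmod p$, and one can at least check that they land in the order-$q$ subgroup.

```python
# Toy sketch of the deterministic-generator idea from the question:
# p = 2*k*q + 1 with q prime; g1 = 2^(2k) mod p, g2 = 3^(2k) mod p.
# Whether log_2(3) mod p is actually hard is a separate question entirely.

def is_prime(n):                 # trial division; fine for toy sizes only
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def deterministic_generators(p, q):
    assert is_prime(p) and is_prime(q) and (p - 1) % q == 0
    cofactor = (p - 1) // q      # equals 2k in the question's notation
    g1 = pow(2, cofactor, p)
    g2 = pow(3, cofactor, p)
    for g in (g1, g2):           # each must have exact order q
        assert g != 1 and pow(g, q, p) == 1
    return g1, g2

print(deterministic_generators(2879, 1439))   # p = 2*q + 1, so k = 1 -> (4, 9)
```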
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9462087154388428, "perplexity_flag": "head"}
http://mathoverflow.net/questions/27729/what-are-normalized-singular-chains-good-for/27752
What are normalized singular chains good for? One of the common definitions of homology uses singular chains, i.e. maps from the simplex into your space. The free abelian group on these can be made into a chain complex and one can take the homology of this. The result is usually called singular homology. However, one can use a smaller chain complex instead, by taking the quotient by the degenerate chains (those that are the image of a degeneracy map $\sigma$). This will give the same result in homology. In some articles, I've seen authors replace the singular chains with the normalized singular chains, often claiming that this is "for technical reasons" (an example is Costello's article on the Gromov-Witten potential associated to a TCFT). What are the important technical differences between these two functors? In which situations is there a preferred one? - 3 Answers Before going to linearity, note that a technical advantage of normalized geometric realization is that it preserves products. That probably has linear consequences. The simplest technical advantage of normalized chains is the Dold-Kan theorem that the normalized chains give an equivalence of categories between simplicial abelian groups and chain complexes in positive degrees. But the tensor structures on the two categories (detailed in Tom's answer) do not match under this equivalence. The normalized functor is both lax monoidal and oplax monoidal. I think the fat chain functor is still oplax monoidal. The technical advantage is that only the normalized functor is lax monoidal. My guess is that this is what Costello wants, in particular the consequence Tom mentions that a simplicial ring becomes a dga. Greg's parenthetical is another multiplicative property that I would expect to demonstrate the failure of the lax monoidal property, but I don't see that it does. I have never seen an example where abnormal chains are technically advantageous; degenerate simplices tend only to get in the way of precise considerations. But if they don't get in the way, thinking about them may be a distraction, as Tom says. - • I don't know Costello's reasons, but for example I find it convenient that the normalized chains on the one-point space are concentrated in degree zero (so the normalized chains functor takes the unit object to the unit object). Generally, when two constructions have all the same good properties one tends to prefer the smaller one... - The question was about chains on a space, presumably singular chains. If what you're up to is, for example, learning or teaching the basic facts about singular homology, like homotopy-invariance and excision, then the switch to normalized chains seems like an unnecessary complication. In that context, you could say that non-normalized chains have the advantage. To expand on Ben's comments: For A and B simplicial abelian groups and NA and NB the associated normalized chain complexes, there is a quasi-isomorphism $NA\otimes NB\rightarrow N(A\otimes B)$, where the tensor product of simplicial groups is defined levelwise $(A\otimes B)_n=A_n\otimes B_n$ and the tensor product of chain complexes is the usual $(C\otimes D)_n=\oplus_p (C_p\otimes D_{n-p})$. 
An associative simplicial ring leads in this way to an associative differential graded ring. Another technical advantage of the normalized complex is its role in proving that homotopy groups of a simplicial abelian group are homology groups (Dold-Thom Theorem). - My first version of the above answer had some nonsense, now edited out. Thanks, Ben, for the private heads-up. – Tom Goodwillie Jun 11 2010 at 3:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9344576001167297, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/45496/algorithm-for-partitioning-n-into-distinct-primes
# Algorithm for partitioning n into distinct primes I am looking for an algorithm that will partition a positive integer into distinct primes. The number of partitions is given by this OEIS sequence: https://oeis.org/A000586 To be more specific, I am only interested in sets of 5 primes that add up to the original number, and I would like to get all possible such partitions for each number. Is anyone aware of an algorithm along these lines? - 1 Can you assume that you have a list of the primes below $n$? If so, an algorithm to do what you want is trivial: you write $f(n,k)$ which lists the partitions of $n$ into a sum of $k$ primes, and for each prime $j<n$ you call $f(n-j, k-1)$. Your base case is $f(n,1)=\{n\}$ if $n$ is prime, and $f(n,1)=\{\}$ otherwise. This will probably scale quite horribly (maybe worse than exponentially), although you will be able to improve the performance significantly by memoization. – Chris Taylor Jun 15 '11 at 10:54 I can use a sieve to get the primes if needed. I was interested in seeing if there are any algorithms out there cleverer than the trivial-but-slow one. – Alexandros Marinos Jun 15 '11 at 10:56 ## 1 Answer The first step is to consider whether 2 is included. Since all other primes are odd, 2 is included if you are trying to find partitions of an even number into an odd number of distinct primes or an odd number into an even number of distinct primes; 2 is excluded otherwise. The next step is to consider the partition of $n$ into $k$ distinct odd primes less than or equal to $m$ (there is no point looking at $m$ greater than the difference between $n$ and the sum of the first $k-1$ odd primes). If $k=1$ then test whether $n$ is prime. If $k>1$ then the largest part is greater than $\frac{n}{k}$ but less than or equal to $m$. So you can choose a set of primes in this range as potential parts; if one of these is $p$ then you next need to find partitions of $n-p$ into $k-1$ distinct odd primes less than or equal to $p-2$, which takes you back to the beginning of this step. There may be stages where the same question is being asked several times, and so it may be more efficient to precalculate solutions. As an illustration, take the partition of 40 into five distinct primes. 40 is even and five is odd so 2 must be a part. Now we want partitions of 38 into four distinct odd primes. The largest part must be more than $\frac{38}{4}$ and less than or equal to $38-3-5-7$, so must be in $\{ 11,13,17,19,23\}$. Consider these in turn. • We want partitions of 27 into three distinct odd primes less than or equal to 9. That is not possible since they must also be greater than $\frac{27}{3}$. • We want partitions of 25 into three distinct odd primes less than or equal to 11. The largest part must also be greater than $\frac{25}{3}$ so the only possibility for the largest part is 11, leaving us to look for partitions of 14 into two distinct odd primes less than or equal to 9, which at the next stage will turn out not to be possible since there are no primes greater than 7 and less than or equal to 9. • We want partitions of 21 into three distinct odd primes less than or equal to 15, though we can reduce this 15 to $21-3-5=13$. 
The largest part must also be greater than $\frac{21}{3}$, so must be in $\{ 11,13\}$, leaving us to look for partitions of 10 into two distinct odd primes less than or equal to 9 (reduce to $10-3=7$) [which at the next step will give us $3+7$], and for partitions of 8 into two distinct odd primes less than or equal to 11 (reduce to $8-3=5$) [which at the next step will give us $3+5$]. So reconstructing back to the start, this gives us the partitions $2+3+7+11+17$ and $2+3+5+13+17$. • We want partitions of 19 into three distinct odd primes less than or equal to 17, though we can reduce this 17 to $19-3-5=11$. The largest part must also be greater than $\frac{19}{3}$, so must be in $\{7,11\}$, leaving us to look for partitions of 12 into two distinct odd primes less than or equal to 5 (not possible), and for partitions of 8 into two distinct odd primes less than or equal to 9 (reduce to $8-3=5$) [we have already found $3+5$]. So reconstructing back to the start, this gives us the partition $2+3+5+11+19$. • We want partitions of 15 into three distinct odd primes less than or equal to 23, though we can reduce this 23 to $15-3-5=7$. The largest part must also be greater than $\frac{15}{3}$, so must be in $\{7\}$, leaving us to look for partitions of 8 into two distinct odd primes less than or equal to 5 [we have already found $3+5$]. So reconstructing back to the start, this gives us the partition $2+3+5+7+23$. So this method does indeed give the four desired partitions of 40. - This is excellent, thank you. Will code it up and see how it goes. – Alexandros Marinos Jun 15 '11 at 11:41
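Following up on that comment, here is a rough sketch of the recursion described in the answer (my own code, written for illustration; it is not the poster's eventual program): pick the largest prime part $p$ with $n/k < p \le \mathrm{cap}$, then partition $n-p$ into $k-1$ distinct primes smaller than $p$. The parity trick with 2 from the answer is skipped; plain distinctness is enforced instead.

```python
def primes_up_to(limit):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def distinct_prime_partitions(n, k, cap=None, primes=None):
    """Yield increasing k-tuples of distinct primes summing to n, each part <= cap."""
    if primes is None:
        primes = primes_up_to(n)
    if cap is None:
        cap = n
    if k == 1:
        if 2 <= n <= cap and n in primes:   # linear membership test; fine for small n
            yield (n,)
        return
    for p in reversed(primes):              # candidate for the largest part
        if p > cap or p >= n:
            continue
        if p * k <= n:                      # largest part must exceed n/k: stop here
            break
        for rest in distinct_prime_partitions(n - p, k - 1, cap=p - 1, primes=primes):
            yield rest + (p,)

print(list(distinct_prime_partitions(40, 5)))
# -> [(2, 3, 5, 7, 23), (2, 3, 5, 11, 19), (2, 3, 5, 13, 17), (2, 3, 7, 11, 17)]
```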
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 49, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9503173232078552, "perplexity_flag": "head"}
http://mathhelpforum.com/algebra/125372-positive-divisors.html
Thread: 1. Positive divisors There are $x$ number of positive divisors of $72$, $y$ number of positive divisors of $900$. The positive divisors of $900$ that are not divisors of $72$ are $z$. Hoping there's a formula and you're not actually supposed to go over all possible numbers... Any help appreciated. 2. Originally Posted by davidman There are $x$ number of positive divisors of $72$, $y$ number of positive divisors of $900$. The positive divisors of $900$ that are not divisors of $72$ are $z$. I find this question a bit confusing. Is this what you mean? Suppose that $t$ is the number of divisors of $72~\&~900$. Then $z=y-t$. Note that $t\le x$. 3. Not quite sure what your $t$ represents. $z$ is the number of divisors of $900$ minus the divisors that overlap with $72$. 4. Originally Posted by davidman Not quite sure what your $t$ represents. $z$ is the number of divisors of $900$ minus the divisors that overlap with $72$. What is the point of this reply? You did not read my reply? It says clearly what $t$ equals. Do you have difficulty translating English? 5. Originally Posted by Plato What is the point of this reply? First, to tell you that I was not quite sure what you meant with what you defined $t$ to be. In other words, $72~\&~900$ is confusing and threw me off, so I do not understand what you mean when you say "the number of divisors of $72~\&~900$". Is it the number of divisors that overlap for the two? Is it the sum of the number of divisors, not taking into regard the case of overlap? Second, to clarify what it was I meant. You did ask me to, didn't you? You did not read my reply? It says clearly what $t$ equals. Do you have difficulty translating English? English is my first language. I just don't have a very wide vocabulary when it comes to mathematics. But sure, I'll look over your post again for hints to what I might not be seeing clearly. 6. What does it mean to say that a number is a divisor of seventy-two and nine hundred? Does it mean that the number divides $72$ and divides $900$? BTW. ‘Overlap’ is not mathematical. 7. Originally Posted by Plato What does it mean to say that a number is a divisor of seventy-two and nine hundred? Does it mean that the number divides $72$ and divides $900$? Yes, that makes sense. If we assume that is what it means, how do you figure out the number of divisors of a certain number (or two numbers for that matter)? Would be great to know how. 8. Any positive integer can be factored as a product of powers of primes. $900=2^2\cdot 3^2\cdot 5^2$; look at the exponents, then add one to each exponent and multiply: $(2+1)(2+1)(2+1)=27$. So there are 27 divisors of 900. $72=2^3\cdot 3^2$, so $(3+1)(2+1)=12$. There are 12 divisors of 72. The greatest common divisor: $\text{GCD}(900,72)=2^2\cdot 3^2$ so $(2+1)(2+1)=9$ common divisors of both 72 and 900. Thus there are $27-9=18$ divisors of 900 that are not divisors of 72. 9. Thank you so much! It was a lot more straightforward than I expected.
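For what it's worth, the same bookkeeping in code (a quick sketch of my own, only restating the formula used in post 8): factor the number, add one to each exponent, multiply, and subtract the count for $\gcd(72,900)$ to get $z$.

```python
from math import gcd

def divisor_count(n):
    """Number of positive divisors: product of (exponent + 1) over the prime factorization."""
    count, d = 1, 2
    while d * d <= n:
        exp = 0
        while n % d == 0:
            n //= d
            exp += 1
        count *= exp + 1
        d += 1
    if n > 1:              # a single prime factor is left over
        count *= 2
    return count

x = divisor_count(72)                     # 12
y = divisor_count(900)                    # 27
z = y - divisor_count(gcd(72, 900))       # 27 - 9 = 18
print(x, y, z)
```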
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 42, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9581632018089294, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-statistics/185513-independent-random-variables-averages-more.html
# Thread: 1. ## independent random variables, averages, and more... Suppose $X_1, X_2, \ldots, X_{100}$ are independent random variables with common mean $\mu$ and variance $\sigma^2$. Let $X$ be their average. What is the probability that $|X - \mu|$ is greater than or equal to 0.25? I can tell that this has something to do with either the weak law of large numbers or the central limit theorem, but something isn't clicking. Thanks! 2. ## Re: independent random variables, averages, and more... Hello, You won't be able to have the exact probability... You can have an upper bound for the probability with Chebyshev's inequality. And you can have an approximation thanks to the CLT. In both cases, it would depend on $\sigma^2$, I guess that's what is missing.
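Filling in the algebra behind that reply (a standard computation, not from the thread itself; the final numbers still depend on the unknown $\sigma^2$): since the average has $\operatorname{Var}(X) = \sigma^2/100$, Chebyshev's inequality gives
$$P(|X - \mu| \ge 0.25) \le \frac{\sigma^2/100}{0.25^2} = 0.16\,\sigma^2,$$
while the CLT approximation is
$$P(|X - \mu| \ge 0.25) \approx 2\left(1 - \Phi\left(\frac{0.25\sqrt{100}}{\sigma}\right)\right) = 2\left(1 - \Phi(2.5/\sigma)\right).$$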
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9550169110298157, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?p=2988397
Physics Forums ## Complex Representation of Free Vibration Hi all, I am struggling with going between various representations of vibrations, in particular the complex form. I am using Rao as my text, btw. So for a free vibration, and keeping it simple with no damping, the equation of motion is $$m\ddot{x} + kx = 0$$ with the general solution being $$x = C_1 e^{i\omega_n t} + C_2 e^{-i\omega_n t}$$ Here is where the confusion starts: I am only supposed to consider the real portion of the solution above and disregard the imaginary part. Using the Euler identity, the solution becomes $$x = (C_1 + C_2)\cos(\omega_n t) + (C_1 - C_2)\,i\sin(\omega_n t)$$ Now, based on the statement above I would disregard the second piece since it's imaginary, but the problem is the book follows up with $$x = C_1'\cos(\omega_n t) + C_2'\sin(\omega_n t)$$ which includes the second piece and now considers it real. From here on I am fine, but I am lost on this jump. Any help would be much appreciated. Thanks Blog Entries: 27 Recognitions: Gold Member Homework Help Science Advisor hi koab1mjr! (have an omega: ω and try using the X² and X₂ icons just above the Reply box ) i'm not familiar with the use of complex numbers in this way, but i think the original C1 and C2 are allowed to be complex, so when you take the real parts you do get a combination of cos and sin (alternatively, you get a phase) Recognitions: Science Advisor What you really have is x = Real part of {(C1+ C2)cos(wnt) + (C1-C2)isin(wnt)} where C1 and C2 are complex. If you unpick that expression, it amounts to x = A cos (wnt) + B sin (wnt) where A and B are real constants. That can be written in a simpler form using complex numbers, namely x = Real part of {C exp(iwnt)} where C is a complex constant. This still represents the complete solution to the differential equation, with two independent arbitrary (real) constants, namely the real and imaginary parts of C. You can then write x' = Real part of {iwC exp(iwnt)} etc. However, the "Real part of" is just assumed almost all the time, except in situations where you need to be explicit about exactly what real part you mean. So you would normally just write x = C exp(iwnt), x' = iwC exp(iwnt), etc. As a general principle, don't try to "discard the imaginary parts" too soon in the math. It is usually simpler to keep all the math in complex variables, and only take the real part at the end to relate the math back to the "real world".
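To see the last reply's point numerically, here is a tiny check (my own sketch, not from the thread; the numbers are arbitrary) that the single complex constant $C = A - iB$ in $x(t) = \operatorname{Re}\{C e^{i\omega_n t}\}$ carries the same two real constants as $x(t) = A\cos(\omega_n t) + B\sin(\omega_n t)$:

```python
# Compare the real two-constant form with the single complex-constant form.
import numpy as np

w = 2 * np.pi * 3.0          # natural frequency (rad/s), arbitrary
A, B = 0.7, -0.4             # real constants fixed by initial conditions
C = A - 1j * B               # complex constant packing both of them

t = np.linspace(0.0, 1.0, 1000)
x_real_form = A * np.cos(w * t) + B * np.sin(w * t)
x_complex_form = np.real(C * np.exp(1j * w * t))

print(np.allclose(x_real_form, x_complex_form))   # True
```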
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8917391896247864, "perplexity_flag": "middle"}
http://electronics.stackexchange.com/questions/16153/frequency-and-phase-response-of-various-filters
# Frequency and phase response of various filters I have been set the task, by my Computing teacher, to write a program to simulate a variety of fixed electronic filters (the circuits are fixed.) The goal is to plot amplitude and phase against frequency. The problem is, a lot of the math involves complex numbers. For example simulating even a simple RC filter requires an understanding of complex impedance - something I don't understand (yet - still in college electronics education.) Are there any "simple" formulas which can compute the response of a filter accurately? Is there a good web resource for how to understand this? A list of formulas for frequency vs phase and amplitude? Ideally, I could extend it to all types RC as well as LC filters and other types like Butterworth, but that might get really complicated. - 5 The complex number formulas are the simple ones. The trigonometric forms are more complex and harder to manipulate! – Kevin Vermeer Jun 29 '11 at 12:44 – Kellenjb Jun 29 '11 at 13:00 2 @Thomas O learning what the complex numbers mean will be a huge benefit to your overall understanding of the relationship of time domain, frequency domain, and phase. If you are being asked to write a program to do this I would assume it is expected for you to understand this material and not just find the easy way out. – Kellenjb Jun 29 '11 at 13:32 What programming language and what library restrictions are there? You could put together a program in python with scipy handling all the complex math and matplotlib to make pretty plots. Shouldn't take more than a couple days and you don't really need to know much of anything about complex numbers, just solve for Vout using s-domain models for the parts and use scipy to do the complex math and spit out a vector for matplotlib to display as the result. – Mark Jun 29 '11 at 20:13 @Mark Python would be great if I can get it to work as I know it inside-out. But the college only has Delphi installed by default, so I'd need to create some kind of Python exe. – Thomas O Jun 29 '11 at 22:04 show 4 more comments ## 4 Answers The normal way to solve these problems is as stevenvh described. LaPlace transforms and complex analysis are used because such problems are easier to solve by humans in that space. However, this was all before there were computers with essentially unlimited and free compute cycles. It is possible to get meaningful answers from purely time domain analysis. It will take the computer way more cycles, but so what? Nowadays what you want to conserve are your time and effort. Of course everything can be solved in the time domain somehow. After all, the real signals exist there. Imaginary components are, well, imaginary. They are useful mathematical aids in understanding and manipulating signals where both the frequency and phase are relevant. Using such mathematical abstractions greatly reduces the computation, but they are not neccessary. Programs like Spice do pretty much everything in the time domain, I think. They look at the voltages and currents at any one time, then use that to decide what the voltages and currents are a little tiny step further in time. The computer does this a few million times in a row and now you've got real signals as a function of time. The tricky part here is to decide how long those computational time steps are. Too long, and the computation will miss details and get the wrong answer. Too short, and intermediate increments will be so small they will be lost when added to the existing values. 
This latter part depends on the number precision used inside the computer. For example, with normal IEEE 32 bit floating point, if increments are 65000 times smaller than the thing they are adding to, the increment is effectively limited to 8 bits resolution. To get around this, the simple answer is to use the highest precision floating point the computer has. This is usually called something like "double precision" in most languages. That may take a little more time in the computer to manipulate the wider numbers, but again that's pretty much irrelevant. If you try this path, you need to model the various resistors, capacitors, and inductors in incremental form inside the computer. Or you can model the whole filter incrementally in some cases. For example, the algorithm for a single pole RC low pass filter is: FILT <-- FILT + FF(NEW - FILT) This is very common inside microcontrollers where it is applied to real world readings with some noise on them, for example. Once you have code that models the filter of interest, you create a higher level that successively sends it sine waves of different frequencies. After waiting to make sure the filter has reached steady state for each frequency, you look at the amplitude and phase of a output cycle relative to the input cycle. To be clear, I actually agree with stevenvh that using complex impedance is the best way to analyze such things. My main intent was to point out that you don't need complex math to solve this problem, or any problem in electronics for that matter, only that it can make it much simpler. I also wanted to point out that in today's world trying to conserve computation is wrong headed. That may have applied 20 years ago, or perhaps even 10 years ago, but not today. Another point is that you need to learn complex math. You've got no business trying to be a electrical engineer, or pretty much any type of engineer, without it. So instead of finding ways to avoid complex analysis, dig in and learn it. This is a important tool every EE must have at his disposal. - 1 @Thomas - Olin is absolutely right: you've got to learn complex math. (Also comes handy if you meet a girl engineer; female engineers can be very complex! ;-)) – stevenvh Jun 29 '11 at 15:14 2 @stevenvh that joke was so bad I just had to laugh. – Kellenjb Jun 29 '11 at 15:29 @Kellenjb - it can't always be caviar! – stevenvh Jun 29 '11 at 15:38 @stevenh At least we're not purely imaginary... – Aphaea Jun 29 '11 at 15:41 1 @stevenvh @Kortuk: Is this really about breaking a 9-paragraph post into titled sections? Seems a little over the top to me. The entire posts fits on my screen at one time. – Olin Lathrop Jun 29 '11 at 20:03 show 7 more comments This is often done in the s-domain, via the Laplace transform. Every engineering formulas compendium has a list of time-domain functions mapped to their s-domain equivalent. (In DSP this will be done in the Z-domain, the Z-transform being the discrete (sampled) form of the continuous Laplace.) Transforms like the the Laplace transform are not uncommon in (engineering) mathematics. You convert (Laplace) a problem to a simpler one (s-domain), solve it there and convert it back (inverse Laplace). A common example are logarithms, which were used by every engineer for hundreds of years until the digital computer. A multiplication becomes a sum if you take the logarithm, and that's easier to solve than a multiplication. Same for division and exponentiation. 
Taking the logarithm requires some effort, so it isn't worthwhile for simple products. But since the discovery of logarithms by Napier in the 17th century "computers" (people who compute) have compiled logarithm tables, and the slide rule was the engineer's computer until electronic calculators arrived. If you have some experience with a slide rule you can do some calculations almost as fast as with a calculator, which would be more difficult without the transform. Disclaimer: no, I'm not that old that I've worked with slide rules. My career in high school started with one of the first programmable calculators, the TI-57. If you're uncomfortable with that, you can also remain in the time-domain. You don't give examples of the circuits you have to analyze, I presume RLC circuits. You find the transfer function just like you would for a linear circuit which just uses resistors, by calculating parallel and series equivalents, using tools like Kirchhoff's laws. Of course you'll have to carry $j\omega$ around all the time, but your final result should be a complex number with a real and an imaginary term. Convert this to polar notation using Euler's identity $e^{j\omega t}=cos(\omega t) + j sin(\omega t)$ and it gives you magnitude and phase. You could use a spreadsheet program like Calc or Excel to plot the function by varying the frequency $\omega$ over the desired spectrum. edit Olin mentions SPICE, and that's of course a great way to get the graphs, but you'll have a hard time convincing your teacher that you did anything yourself, less show that you understand it. - RLC circuits are linear in the technical sense. That's why the Laplace transform works! – Aphaea Jun 29 '11 at 15:15 @Aphaea - Yeah, that's the problem with words like "linear", they can mean anything :-(. What it means here and to me is that resistors have a fixed, and linear!, relationship between voltage and current... except for the non-linear resistors, that is :-/ – stevenvh Jun 29 '11 at 16:52 1 Linear and time invariant, none of my favorite problems are. But as long as they stay deterministic I can live with it. – Kortuk♦ Jun 29 '11 at 17:55 1 @Kortuk: and causal! – Federico Russo Jun 29 '11 at 18:25 1 kortuk has got it with LTI - Linear time invariant - that is that all the values remain the same with respect to time! so a 1 ohm resistor is a 1 ohm resistor, today, tomorrow and next year. Of course long term drift, temperature coefficients and initial tolerance play a part to make it "non linear" but for circuit analysis all that is needed is a monte carlo analysis to work out the extremes of fc, phase respone and stop band (and so on) but I am getting of topic - LTI is a perquisite for performing circuit analysis of passive components – smashtastic Jun 29 '11 at 18:44 show 2 more comments It might help you to think of linear circuits as essentially solving differential equations in hardware. Since you want to be in frequency space anyway for the Bode plots, (amplitude and phase vs. time) it really does make the most sense use the complex impedances to find the transfer function. With practice, you can make a Bode plot by hand immediately from the poles and zeros, with some amount of fiddling at the corner frequencies. MATLAB also has built in functions to generate bode plots, pole-zero maps, etc. from arbitrary transfer functions. However, if you don't want to use complex impedances, you're left with a set of differential equations. 
Which you can solve using an ODE solver, but you'll still need to transform to the frequency domain. So you really might as well get comfortable with Laplace and complex impedance. - The best web resource is Wolfram Alpha. For example, plots can be calculated online, using a search page like this http://www.wolframalpha.com/input/?i=Butterworth+filter - @RS - Alpha is good, but it's like what I said about SPICE: it doesn't help you to convince the teacher that you did actually do any work, let alone understand it. – stevenvh Jun 29 '11 at 16:49
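To make the complex-impedance approach discussed in the answers above concrete, here is a minimal Python sketch (my own illustration — the component values and the use of numpy/matplotlib are assumptions, not part of the question): it sweeps frequency, evaluates a simple RC voltage divider with the capacitor's complex impedance, and reads amplitude and phase straight off the complex transfer function.

```python
# Minimal sketch: amplitude and phase response of a single-pole RC low-pass
# filter, computed from complex impedances (assumed values R = 1 kOhm, C = 100 nF).
import numpy as np
import matplotlib.pyplot as plt

R = 1e3        # ohms (assumed value)
C = 100e-9     # farads (assumed value)

f = np.logspace(1, 6, 500)        # 10 Hz to 1 MHz
w = 2 * np.pi * f
Zc = 1 / (1j * w * C)             # complex impedance of the capacitor
H = Zc / (R + Zc)                 # voltage divider: Vout/Vin

mag_db = 20 * np.log10(np.abs(H))     # amplitude response in dB
phase_deg = np.degrees(np.angle(H))   # phase response in degrees

fig, (ax_mag, ax_ph) = plt.subplots(2, 1, sharex=True)
ax_mag.semilogx(f, mag_db)
ax_mag.set_ylabel("Magnitude (dB)")
ax_ph.semilogx(f, phase_deg)
ax_ph.set_ylabel("Phase (deg)")
ax_ph.set_xlabel("Frequency (Hz)")
plt.show()
```

The same pattern extends to any fixed RLC network: derive Vout/Vin with series/parallel combinations or Kirchhoff's laws, keep the jω terms as Python complex numbers, and the magnitude and phase fall out of abs() and angle() with no trigonometric bookkeeping.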
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9450644254684448, "perplexity_flag": "middle"}
http://mathhelpforum.com/discrete-math/115788-find-last-digit.html
# Thread: 1. ## Find the last digit Find the last digit of 541^(341). Clearly, the last digit would be one, but I don't know how to go about showing that it would be one. Can someone help please? 2. $541^{341} \equiv 1^{341} \equiv 1 \bmod{10}$ i.e. the last digit is going to be 1. If this doesn't make sense to you, prove it by induction. I.e. prove that $541^n$ has ending digit one for all n. 3. By induction: for n = 1, 541^1 = 541, which ends with 1, so the base case holds. Assume that 541^k ends with one, and prove that 541^(k+1) ends with one. Now 541^(k+1) = 541^k * 541; we assumed that 541^k ends with one, and 541 ends with one, and the product of two numbers that end with one is a number that ends with one.
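A quick numerical sanity check of the argument above (a small sketch in Python, not part of the original thread):

```python
# The last digit of 541**341 is 541**341 mod 10; pow(base, exp, mod) computes
# this by modular exponentiation without ever forming the huge integer.
print(pow(541, 341, 10))   # prints 1

# Spot-check the induction claim that 541**n ends in 1 for a range of exponents.
assert all(pow(541, n, 10) == 1 for n in range(1, 100))
```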
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9440197348594666, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/54678/adiabatic-expansion-of-steam-through-a-valve
# Adiabatic expansion of steam through a valve I'm working on a homework problem, and I have a suspicion the textbook is trying to trick me. The question is: "Steam at 20 bar and 300 C is to be continuously expanded to 1 bar. Compute the entropy generated and the work obtained per kilogram of steam if this expansion is done by passing the steam through an adiabatic expansion valve." My energy and entropy balances give: $$\dot{W} = \dot{M}(\hat{H}_2 - \hat{H}_1)$$ $$\dot{S}_{gen} = \dot{M}(\hat{S}_2 - \hat{S}_1)$$ where $$\dot{W} = \dot{W}_s - P\frac{dV}{dt}$$ ... but, there is no $\dot{W}_s$ since it's an expansion valve, right? If it was a turbine (which the next question asks), then there would be shaft work, but unless I'm mistaken, there is no shaft work for an expansion valve, right? - ## 1 Answer Yes. Expansion valve has no work. So you have the final enthalpy (same as initial enthalpy) and pressure. That will give you $S_2$ from the steam tables :). - Thanks! Just what I thought. – Nick Feb 21 at 23:10
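For anyone wanting to put numbers on this instead of reading printed steam tables, here is a minimal sketch (my own addition; it assumes the open-source CoolProp property library is installed, which is not mentioned in the question): the valve is isenthalpic with no shaft work, so state 2 is fixed by $P_2$ and $\hat{H}_2 = \hat{H}_1$.

```python
# Throttling (isenthalpic) expansion of steam: 20 bar, 300 C -> 1 bar.
# Adiabatic valve: no shaft work, so W = 0 and S_gen per kg = s2 - s1.
from CoolProp.CoolProp import PropsSI

P1, T1 = 20e5, 300.0 + 273.15   # Pa, K
P2 = 1e5                        # Pa

h1 = PropsSI('H', 'P', P1, 'T', T1, 'Water')   # J/kg
s1 = PropsSI('S', 'P', P1, 'T', T1, 'Water')   # J/(kg*K)

# Downstream state: same enthalpy, new pressure.
s2 = PropsSI('S', 'P', P2, 'H', h1, 'Water')   # J/(kg*K)

print("work obtained per kg:", 0.0)            # expansion valve does no work
print("entropy generated per kg:", s2 - s1)    # should come out positive
```

A positive $s_2 - s_1$ is also a useful check that the throttling process is irreversible, which is what makes the turbine in the follow-up question the more interesting comparison.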
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.953417956829071, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2007/02/20/group-actions/?like=1&_wpnonce=d19428bcb3
# The Unapologetic Mathematician ## Group actions Okay, now we’ve got all the setup for one big use of group theory. Most mathematical structures come with some notion of symmetries. We can rotate a regular $n$-sided polygon $\frac{1}{n}$ of a turn, or we can flip it over. We can rearrange the elements of a set. We can apply an automorphism of a group. The common thread in all these is a collection of reversible transformations. Performing one transformation and then the other is certainly also a transformation, so we can use this as a composition. The symmetries of a given structure form a group! What if we paint one side of the above $n$-gon black and the other white, so flipping it over definitely changes it. Then we can only rotate. The rotations are the collection of symmetries that preserve the extra structure of which side is which, and they form a subgroup of the group of all symmetries of the $n$-gon. The symmetries of a structure preserving some extra structure form a subgroup! As far as we’re concerned right now, mathematical structures are all built on sets. So the fundamental notion of symmetry is rearranging the elements of a set. Given a set $S$, the set of all bijections from $S$ to itself ${\rm Bij}(S)$ is a group. We’ve actually seen a lot of these before. If $S$ is a finite set with $n$ elements, ${\rm Bij}(S)$ is the symmetric group on $n$ letters. Now, a group action on a set is simply this: a homomorphism from $G$ to ${\rm Bij}(S)$. That is, for each element $g$ in $G$ there is a bijection $p_g$ of $S$ and $p_{gh}(x)=p_g(p_h(x))$. It’s important to note what seems like a switch here. It’s really due to the notation, but can easily seem confusing. What’s written on the right is “do the permutation corresponding to $h$, then the one corresponding to $g$“. So we have to think of the multiplication in $G$ as “first do $h$, then do $g$“. In what follows I’ll often write $p_g(x)$ as $gx$. The homomorphism property then reads $(gh)x=g(hx)$ I’ll throw out a few definitions now, and follow up with examples in later posts. We can slice up $S$ into subsets so that if $x$ and $x'$ are in the same subset, $x'=gx$ for some $g$, and not if they’re not in the same subset. In fact, this is rather like how we sliced up $G$ itself into cosets of $H$. We call these slices the “orbits” of the $G$ action. As an important special case of the principle that fixing additional structure induces subgroups, consider the “extra structure” of one identified point. We’re given an element $x$ of $S$, and want to consider those transformations in $G$ which send $x$ to itself. Verify that this forms a subgroup of $G$. We call it the “isotropy group” or the “stabilizer” of $x$, and write it $G_x$. I’ll leave you to ponder this: if $x$ and $x'$ are in the same $G$-orbit, show that $G_x$ and $G_{x'}$ are isomorphic. ## 18 Comments » 1. You really want to repair that missing closing sub-tag in the end — it changes the layout of EVERYTHING else on the first page. Comment by | February 21, 2007 | Reply 2. Very odd.. I accidentally typed “sup” instead of “sub”, but Safari evidently knew what I meant… Comment by | February 21, 2007 | Reply 3. [...] “Tale of Groupoidification”. He really fleshes out this extension of the notion of group actions, and ties it into the concept of spans. I hope it’s not gushing too much to say that spans [...] Pingback by | April 9, 2007 | Reply 4. [...] The analogue in ring theory for the idea of a group action is that of a module. 
Again we want every element of the ring to behave like a function on a set and [...] Pingback by | April 21, 2007 | Reply 5. [...] Finds. He continues his “Tale of Groupoidification”. It features a great comparison of group actions on sets and those on vector spaces (which we’ll get to soon enough). Even better, he gives a [...] Pingback by | May 29, 2007 | Reply 6. [...] and group actions — categorically The theory of group actions looks really nice when we translate it into the language of categories. That’s what I plan to [...] Pingback by | June 8, 2007 | Reply 7. [...] is still beyond us at this point, but there are others. One nice place groupoids come up is from group actions. In fact, Baez is of the opinion that the groupoid viewpoint is actually more natural than the [...] Pingback by | June 9, 2007 | Reply 8. [...] point we’ll call so that . It’s straightforward from here to show that this gives an action of the vector space (considered as an abelian group) on the affine space [...] Pingback by | July 16, 2008 | Reply 9. [...] a set with some binary operation defined on it, sure, but what does it do? We’ve seen groups acting on sets before, where we interpret a group element as a permutation of an actual collection of elements. [...] Pingback by | October 23, 2008 | Reply 10. [...] here’s the upshot: the general linear group acts on the hom-set by conjugation — basis changes. In fact, this is a representation of the group, [...] Pingback by | October 30, 2008 | Reply 11. [...] group on the vector space of matrices over by conjugation. What we want to consider now are the orbits of this group action. That is, given two matrices and , we will consider them equivalent if there [...] Pingback by | March 6, 2009 | Reply 12. [...] Ising model of ferromagnetism has found some evidence they claim is linked to certain symmetry; an action of the Lie group on the space of states which preserves some interesting property or another. I [...] Pingback by | January 11, 2010 | Reply 13. [...] Weyl Group on Weyl Chambers With our latest lemmas in hand, we’re ready to describe the action of the Weyl group of a root system on the set of its Weyl chambers. Specifically, the action is [...] Pingback by | February 5, 2010 | Reply 14. [...] to existence. Given a vector , we consider its orbit — the collection of all the as runs over all elements of . We have to find a vector in this [...] Pingback by | February 9, 2010 | Reply 15. [...] Ising model of ferromagnetism has found some evidence they claim is linked to certain symmetry; an action of the Lie group on the space of states which preserves some interesting property or another. I [...] Pingback by | August 28, 2010 | Reply 16. [...] detailed subject. And since every finite group can be embedded in a permutation group — its action on itself by left multiplication permutes its own elements — and many natural symmetries come [...] Pingback by | September 7, 2010 | Reply 17. [...] and Representations From the module perspective, we’re led back to the concept of a group action. This is like a -module, but “discrete”. Let’s just write down the axioms for [...] Pingback by | September 16, 2010 | Reply 18. [...] now multiplication on the left by shuffles around the cosets. That is, we have a group action of on the quotient set , and this gives us a permutation representation of [...] 
Pingback by | September 20, 2010 | Reply
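A sketch of the exercise posed at the end of the post, for readers who want the argument spelled out (this is the standard proof, filled in here rather than taken from the comments): suppose $x'=gx$. If $h$ is in $G_x$, then $(ghg^{-1})x'=(ghg^{-1})(gx)=g(h(g^{-1}gx))=g(hx)=gx=x'$, so $ghg^{-1}$ lies in $G_{x'}$, giving $gG_xg^{-1}\subseteq G_{x'}$. Running the same computation with $g^{-1}$ and $x=g^{-1}x'$ gives the reverse inclusion, so $G_{x'}=gG_xg^{-1}$, and conjugation by $g$ is an isomorphism from $G_x$ to $G_{x'}$.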
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 46, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9298000931739807, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/147972/cycle-types-of-s-4
# Cycle types of $S_4$ I understand the possible cycle types of $S_4$ are $(4), (3,1), (2,2), (2,1,1), (1,1,1,1)$, but why are there $6, 8, 3, 6, 1$ of each respectively? Also why do the elements of each cycle type form a conjugacy class, and then why is any normal subgroup a union of some of the conjugacy classes? - – MJD May 22 '12 at 13:52 ## 1 Answer Here’s an answer to the first question. Counting the number of permutations of each cycle type is just a matter of basic combinatorics. Consider a $4$-cycle, for instance. You can start it with anything, so let’s start it with $1$: $(1xyz)$. You now have $3$ choices for $x$, $2,3$, or $4$; no matter which you choose, you’ve $2$ choices for $y$; and then $z$ is already determined, so the total number of possibilities is $3\cdot2=6$. The same reasoning shows more generally that there are $(n-1)!$ ways to form an $n$-cycle from a set of $n$ objects. For the type $(3,1)$ there are $4$ ways to pick which element is fixed, and there are $(3-1)!=2$ ways to form the other $3$ elements into a $3$-cycle, so there are $4\cdot2=8$ permutations of this type. For the type $(2,2)$ there are $3$ ways to decide which of $2,3$, and $4$ gets paired with $1$ in a transposition, and that completely determines the permutation: the other two elements must make up the other transposition. For the type $(2,1,1)$ there are $\binom42=6$ ways to choose the two elements forming the transposition, and that completely determines the permutation. And of course the identity permutation is the only one of type $(1,1,1,1)$, since such a permutation must have four fixed points. Added: Once you know that the members of each cycle type form a conjugacy class, it’s easy to see why a normal subgroup must be a union of conjugacy classes. Suppose that $N$ is a normal subgroup, and let $\pi\in N$. For each $\sigma\in S_4$ we have $\pi^\sigma\in N^\sigma=N$, so the entire conjugacy class of $\pi$ must be a subset of $N$. -
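A brute-force confirmation of those counts (a small sketch, not from the thread): enumerate all $24$ elements of $S_4$ and tally their cycle types.

```python
# Tally the cycle types of S_4; expected counts are
# {(4,): 6, (3, 1): 8, (2, 2): 3, (2, 1, 1): 6, (1, 1, 1, 1): 1}.
from itertools import permutations
from collections import Counter

def cycle_type(perm):
    """Cycle lengths (largest first) of the permutation i -> perm[i]."""
    seen, lengths = set(), []
    for start in range(len(perm)):
        if start in seen:
            continue
        length, i = 0, start
        while i not in seen:
            seen.add(i)
            i = perm[i]
            length += 1
        lengths.append(length)
    return tuple(sorted(lengths, reverse=True))

print(Counter(cycle_type(p) for p in permutations(range(4))))
```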
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 38, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.920134425163269, "perplexity_flag": "head"}
http://mathhelpforum.com/algebra/1774-problem.html
# Thread: 1. ## Problem I need help to solve this problem: A two-digit number is three times as big as the sum of its digits, and the square of that sum of digits is equal to triple the number which is wanted. Find that number. I have represented that two-digit number as $10x+y=3(x+y)$ Multiplying both sides by $(x+y)$ we get that $\frac{10x^2+11xy+y^2}{3}=(x+y)^2$ Dividing the left side by 3 we will get that $\frac{10x^2+11xy+y^2}{9}=(x+y)^2$ which is the number that we want. So I think we must solve the system of equations $10x+y=3(x+y)$ $\frac{10x^2+11xy+y^2}{9}=(x+y)^2$ I have tried to solve this by putting $x=\frac{2y}{7}$ (from the first equation) into the second equation, but the result is x=0 and y=0. Is my way of solution right? Please help me solve it. 2. Originally Posted by DenMac21 I need help to solve this problem: A two-digit number is three times as big as the sum of its digits, and the square of that sum of digits is equal to triple the number which is wanted. Find that number. I have represented that two-digit number as $10x+y=3(x+y)$ Multiplying both sides by $(x+y)$ we get that $\frac{10x^2+11xy+y^2}{3}=(x+y)^2$ Dividing the left side by 3 we will get that $\frac{10x^2+11xy+y^2}{9}=(x+y)^2$ which is the number that we want. So I think we must solve the system of equations $10x+y=3(x+y)$ $\frac{10x^2+11xy+y^2}{9}=(x+y)^2$ I have tried to solve this by putting $x=\frac{2y}{7}$ (from the first equation) into the second equation, but the result is x=0 and y=0. Is my way of solution right? Please help me solve it. You made a mistake on step 3; your result should look like $\frac{10x^2+11xy+y^2}{3}=(x+y)^2$ You accidentally did 2 steps in step 2. You multiplied by $(x+y)$ and divided by 3, then you forgot that you divided by 3 and divided by 3 again. 3. Originally Posted by ThePerfectHacker You made a mistake on step 3; your result should look like $\frac{10x^2+11xy+y^2}{3}=(x+y)^2$ You accidentally did 2 steps in step 2. You multiplied by $(x+y)$ and divided by 3, then you forgot that you divided by 3 and divided by 3 again. No, I divided by 3 in step 3 because the square of the sum of digits is equal to triple the number which is wanted. So I didn't have to divide it by 3, but later it should be divided. The problem is that I can't find values of x and y. I can't solve this system of equations: $10x+y=3(x+y)$ $\frac{10x^2+11xy+y^2}{9}=(x+y)^2$ 4. Originally Posted by DenMac21 No, I divided by 3 in step 3 because the square of the sum of digits is equal to triple the number which is wanted. So I didn't have to divide it by 3, but later it should be divided. The problem is that I can't find values of x and y. I can't solve this system of equations: $10x+y=3(x+y)$ $\frac{10x^2+11xy+y^2}{9}=(x+y)^2$ Since $x \in \{1,2,3,4,5,6,7,8,9\}$ and $y \in \{0,1,2,3,4,5,6,7,8,9\}$, trial and error shows that the first of your equations has solution: $x=2,\ y=7$, but this is not a solution of the second of your equations. RonL 5. Originally Posted by DenMac21 I need help to solve this problem: A two-digit number is three times as big as the sum of its digits, and the square of that sum of digits is equal to triple the number which is wanted. Find that number. Let $Z$ be the number wanted. The sum of the digits of the two-digit number is $9$ as the digits are $2$ and $7$. So the last condition of the problem translates to: $3Z=9^2=81$, so: $Z=27$. RonL 6. Originally Posted by CaptainBlack Let $Z$ be the number wanted.
The sum of the digits of the two-digit number is $9$ as the digits are $2$ and $7$. So the last condition of the problem translates to: $3Z=9^2=81$, so: $Z=27$. RonL That's OK, but we got x=2 and y=7 by guessing numbers. Is there an algebraic way to get those numbers? 7. Originally Posted by DenMac21 That's OK, but we got x=2 and y=7 by guessing numbers. Is there an algebraic way to get those numbers? Not by guessing; exhaustive search over the ranges involved is not guessing. And if you don't like exhaustive search you can try the following: $10x+y=3(x+y) \Rightarrow 7x=2y$, but $7$ and $2$ are prime and so $y$ is a multiple of $7$, and as $7$ is the only single digit non-zero multiple of $7$ we have $y=7$ and the rest follows. ( $y$ has to be non-zero, as if it were zero $x$ would be zero, which is not permitted). RonL 8. Originally Posted by CaptainBlack Not by guessing; exhaustive search over the ranges involved is not guessing. And if you don't like exhaustive search you can try the following: $10x+y=3(x+y) \Rightarrow 7x=2y$, but $7$ and $2$ are prime and so $y$ is a multiple of $7$, and as $7$ is the only single digit non-zero multiple of $7$ we have $y=7$ and the rest follows. ( $y$ has to be non-zero, as if it were zero $x$ would be zero, which is not permitted). RonL Thanks, nice solution.
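For completeness, a tiny brute-force check of the thread's conclusion (my own addition, in Python): test every two-digit number against both conditions of the problem as stated.

```python
# N must be three times its digit sum, and the square of the digit sum
# must be three times N.  Only N = 27 survives.
for n in range(10, 100):
    s = n // 10 + n % 10
    if n == 3 * s and s * s == 3 * n:
        print(n)   # prints 27
```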
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 55, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9694846272468567, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/76491/multiple-choice-question-about-the-probability-of-a-random-answer-to-itself-bein/76494
# Multiple-choice question about the probability of a random answer to itself being correct I found this math "problem" on the internet, and I'm wondering if it has an answer: Question: If you choose an answer to this question at random, what is the probability that you will be correct? a. 25% b. 50% c. 0% d. 25% Does this question have a correct answer? - 23 Yes. This can be answered (correctly or incorrectly) with probability 1. :) – Bill Cook Oct 27 '11 at 22:23 15 The answer to the question in the title is currently: Yes, two. – joriki Oct 27 '11 at 22:45 22 – user18383 Oct 28 '11 at 2:24 8 In the question, I would put the quotes around "math" instead of "problem". It is definitely a problem, but whether it is math is problematic. One issue is whether correctness is computed against the set {a,b,c,d} or {0%, 25%, 50%}. – Ross Millikan Oct 28 '11 at 4:37 6 This should be tagged philosophy. Does anything have an answer? Is there really such a thing as a 'question'? Can we be certain that we really exist? (Answer to previous question: Probably.) – muntoo Oct 28 '11 at 5:01 show 11 more comments ## 10 Answers No, it is not meaningful. 25% is correct iff 50% is correct, and 50% is correct iff 25% is correct, so it can be neither of those two (because if both are correct, the only correct answer could be 75% which is not even an option). But it cannot be 0% either, because then the correct answer would be 25%. So none of the answers are correct, so the answer must be 0%. But then it is 25%. And so forth. It's a multiple-choice variant (with bells and whistles) of the classical liar paradox, which asks whether the statement This statement is false. is true or false. There are various more or less contrived "philosophical" attempts to resolve it, but by far the most common resolution is to deny that the statement means anything in the first place; therefore it is also meaningless to ask for its truth value. Edited much later to add: There's a variant of this puzzle that's very popular on the internet at the moment, in which answer option (c) is 60% rather than 0%. In this variant it is at least internally consistent to claim that all of the answers are wrong, and so the possibility of getting a right one by choosing randomly is 0%. Whether this actually resolves the variant puzzle is more a matter of taste and temperament than an objective mathematical question. It is not in general true for self-referencing questions that simply being internally consistent is enough for an answer to be unambiguously right; otherwise the question Is the correct answer to this question "yes"? would have two different "right" answers, because "yes" and "no" are both internally consistent. In the 60% variant of the puzzle it is happens that the only internally consistent answer is "0%", but even so one might, as a matter of caution, still deny that such reasoning by elimination is valid for self-referential statements at all. If one adopts this stance, one would still consider the 60% variant meaningless. One rationale for taking this strict position would be that we don't want to accept reasoning by elimination on True or false? • The Great Pumpkin exists. • Both of these statements are false. where the only internally consistent resolution is that the first statement is true and the second one is false. However, it appears to be unsound to conclude that the Great Pumpkin exists on the basis simply that the puzzle was posed. 
On the other hand, it is difficult to argue that there is no possible principle that will cordon off the Great Pumpkin example as meaningless while still allowing the 60% variant to be meaningful. In the end, though, these things are more matters of taste and philosophy than they are mathematics. In mathematics we generally prefer to play it safe and completely refuse to work with explicitly self-referential statements. This avoids the risk of paradox, and does not seem to hinder mathematical arguments about the things mathematicians are ordinarily interested in. So whatever one decides to do with the question-about-itself, what one does is not really mathematics. - 7 Liar paradox is also known as the Epimenides paradox after Epimenides of Crete who claimed: "All Cretans are liars". – lewellen Oct 28 '11 at 1:39 2 Bertrand's paradox, the proof of the halting problem, and the proof of Godel's incompleteness theorem all rely on similar self-referencing statements. – BlueRaja - Danny Pflughoeft Oct 28 '11 at 7:29 8 @BlueRaja, are you on first names with Bertrand Russell? Usually, "Bertrand's paradox" is about failure to specify a distribution when saying "chose a random chord in a circle". An interesting distinction for Gödel's argument (and the halting problem, which is closely analogous) is that in Gödel's case the self-referential statement doesn't itself know it's being self-referential. It just happens to be. If we do some trivial rewriting on it (such as swapping two conjuncts somewhere) it still means the same but is not exactly self-referential. – Henning Makholm Oct 28 '11 at 10:13 – Did Oct 28 '11 at 13:27 14 How dare you question the existence of the Great Pumpkin!? – joriki Nov 1 '11 at 15:30 show 4 more comments The question is underspecified since it doesn't say which distribution is used in choosing an answer at random. Any of the answers could be correct: If I choose a. with probability 25% and b. with probability 75%, a and d are correct. If I choose a. with probability 50% and b. with probability 50%, b is correct. If I choose a. with probability 75% and b. with probability 25%, c is correct. From the design of the question, it seems that whoever wrote it had in mind a uniform distribution over all four answers, but forgot to specify that. In that case Henning's answer applies. - 9 The writer of the question was definitely a programmer, then. We expect rand() to give numbers that are 'uniformly distributed'... Unlike some mathematicians. (*Rolls eyes.* :)) – muntoo Oct 28 '11 at 5:08 To offer up another perspective on Henning's answer, the question is essentially an elaboration of this (similar) multiple-choice question: What is the correct answer to this question? 1. Answer (2) 2. Answer (3) 3. Answer (4) 4. Answer (1) Note that there are some fine puzzles built around variants of the 'self-referential test'; for instance, this simple example: Each of the following statements is either true or false. Which of them are true and which are false? 1. All of these sentences are false. 2. Exactly 1 of these sentences is true. 3. Exactly 2 of these sentences are true. 4. Exactly 3 of these sentences are true. 5. Exactly 4 of these sentences are true. - 15 I'm probably missing something subtle, but how can the answer be anything other than "Exactly 1 of these sentences is true" being the only true sentence? 
– fluffy Oct 28 '11 at 0:53 4 @fluffy: that's exactly right (though it's often phrased in inverse - 'exactly 4 of these sentences are false', 'exactly 2 of these sentences are false', etc. - which makes it a little trickier). I said it was a simple example. :-) – Steven Stadnicki Oct 28 '11 at 1:36 If there is one right answer to the question, then you will answer this question statistically 25 percent of the time. If 25 percent is the "right" answer, then you actually have two options. If you have 2 options, then 50 percent is the statistical answer. And if since 50 percent is the only option place to mark down, that means that you will only get this answer right 25 percent of the time because you have a 1 in four chance. It is impossible without a miracle. Plus, if it is impossible then does that leave the option of 0 open because then there are no right answers? That is saying: "If there are no right answers, this is the right answer." What are you really saying there? Nothing. I think maybe you can't find out how many answers there are in the first place. There can't be only one. There can't be only two. There can't be three. There can't be four, and therefore one is the right answer? No. Because then you start back at the beginning. - 11 Please don't leave downvotes without providing a comment to explain (see the FAQ). In the present case, this answer is certainly less focussed than some of the other ones, but in my view not bad enough to merit a downvote. – joriki Oct 28 '11 at 0:47 Well, I must be pretty insane if I start competing with these already heavily upvoted answers from high rep users. But although the following solution may sound a little creative or even frivolous, it could easily be the right one. You could say that this solution is reverse engineered, as follows: The question instructs to only choose one single answer out of four. And assume a uniform distribution, since that is most likely intended, then each answer has a chance of 25% to become chosen. So the correct answer should be: 25%. This computes to answer A being correct, as well as answer D. Could that be? Yes, it can. The question does not reveal how many of the four given answers are correct, but since there is one to be picked, assume that at least one of the four answers is correct. Let's call answer A + answer D the correct answer pair. Now, there are two possible choices (A or D) that result in 50% of the correct answer (A and D). Secondly, there is 50% chance of picking one (A or D) of two (A and D) out of four (A to D). So whether answer A or answer D is chosen, in either case the probability of being correct (50% × 50%) is 25%, which evaluates true. Thus, yes, the question has 2 correct answers. And now I realize that this post is the long version of the by joriki ages ago given comment. ;) Ok, to be clear about what I mean, I believe the question is a special variant of the following trick question: What is the color of the car? 1. Black 2. Blue 3. Gray 4. Metallic As owner of the car I know the correct answer is metallic black. But this would render the question unfair, because it is never possible to give this answer by only selecting one. The difference with the question in question is the equality of both answers to give, which makes it slightly more fair. But since you can select only half of the full solution, the probability is still 25%. 
- 2 If I'm reading your response correctly, you're both giving meaning to the answers (answers A and D are possibly correct because they are both 25%), and ignoring this meaning (by assuming that one of A or D is the "correct" answer, as in on an answer key or in a computer). This doesn't make much sense... – Michael Boratko Oct 28 '11 at 2:28 @process91 Imagine the answers not being radio buttons as in a conventional multiple choice question, but instead being check boxes: A and D must be checked for the full solution, but you can choose only one. – NGLN Oct 28 '11 at 2:32 1 That doesn't seem to be a fair analogy, since "Metallic Black" is not an option, and "Metallic"$\ne$"Black". Based on Joriki's answer above, this doesn't seem to be what he was thinking. – Michael Boratko Oct 28 '11 at 19:13 So, fuzzy logic then? – kinokijuf Jan 23 '12 at 20:30 See problem 2 here for a similar problem that can be solved. SPOILER: Solution here. Don't look if you want to solve it yourself. a) can't be the answer because it says b) is correct, but the statement of b) directly contradicts the statement of a) c) can't be correct, because it means a) and b) are correct but they contradict each other d) can't be correct. If it were, it would imply a) or b) or c) to be correct. The only possibility left is b), since I have already ruled out a) and c). But, b) contradicts d). b) can’t be correct for the same reason, basically. If it were true, since a) and c) can’t be true, this would imply d) is correct. But b) contradicts d). f) can't be correct. If it were, it would imply that e) is also correct, which would contradict the statement of f). This leaves only e) and none of the statements contradict e) so e) must be the correct answer. - So, what's wrong with a similar question that actually has a solution? And, where are the coward downvoters with their explanations? Hiding. – Graphth Oct 29 '11 at 1:39 1 I didn't downvote, but this doesn't seem to be an answer to the question, and is instead discussing some entirely different question! – ShreevatsaR Oct 30 '11 at 15:57 1 Yes, an entirely different, but very similar question. If you care about understanding what is going on with the question that is asked, then wouldn't another example help you understand? Do teachers give one example per subject? And, the question I linked to came to mind as soon as I read this question, seeing as how I was a participant in this math contest. – Graphth Oct 30 '11 at 19:30 I think giving a similar example is actually useful. Thats why I upvoted this answer and also Steven Stadnicki's one. – Michalis Jan 23 '12 at 20:51 Here is an explanation I came across today. - 1 Note that this refers to the variant that doesn't have 0% as one of the answers. This is treated in the second part of Henning's answer . The present question, with 0% as one of the answers, not only leads to an infinite loop when you start out with 25% or 50%, but also does so if you start out at 0%; in contrast to the variant, there's no fixed point / self-consistent answer to this one. – joriki Nov 17 '11 at 12:48 The correct answer is c. 0%. Either the conditions we are given are consistent or they are not. If they are consistent, the way mathematicians solve these kinds of paradoxes is to say that none is provably correct in our formal system, however humans can use higher order reasoning to conclude therefore that c is correct. If the conditions given are not consistent, then 1=2 and True=False, so c. is correct also in this case. Conclusion: Only c. 
is always a correct answer - 9 Hmm -- this seems like an interesting answer at first, but in fact it's inconsistent :-) In the first part you rightly make a distinction between provable and correct, and then in the second part you suddenly drop that distinction and say that c. is correct just because we can prove it in an inconsistent system. – joriki Oct 28 '11 at 21:28 The question starts: "If you choose an answer to this question at random, " However it does not then continue: "what is the probability that the answer chosen will be the probability of choosing that answer?" It instead says: "what is the probability that you will be correct?" And then dosn't define correct. There are 7 possible answers to the question: a, b, c, d, 25%, 50% and 0% However, lets presume that one of the following answers can be chosen: a, b, c or d The probability of choosing each answer: a - 25% b - 25% c - 25% d - 25% 25% - 50% 50% - 25% 0% - 25% Being correct for "what is the probability that you will be correct?" if there is one correct answer (although as covered above the question doesn't define correct or specify how many answers are correct). This produces the same answer as the question "is the answer you choose the probability of choosing that answer?": a - yes b - no c - no d - yes 25% - yes 50% - no 0% - no Now to what I think could be the real question in the question. "what is the probability that the answer chosen will be the probability of choosing that answer?": For me the answer to this must be 'b' (50%). - It is a trick question. let me rephrase the question to keep the misleading elements out If you choose an answer to this question at random, what is the probability that you will be correct? 1. A 2. B 3. C 4. D Provided that one of the four option is correct (assumption), the probability will be 25% that you are correct. Since you are picking up a random answer (don't bother about logic at all), the correct answer to the question "what is the probability that you will be correct" is 25%. Do not bother about the options, which is misleading. Edit (Redoing and Correcting): I noticed there are two choices of 25% which is technically wrong because every choice must be different from the another. Lets redo the same example again. Chances of each selection at random are 1/4 or 25%. That means 25% would select choice A at random, 25% choice B at random and so on. Since we know choice A and D are the correct answer (they are repeated which is wrong), that leads us to 50% correct answer if people make random choices. So what is the probability that you will be correct is 50% :) - 6 -1 Sorry, if you're going to say "keep the thinking element out" and "don't bother about logic at all", and give a wrong answer (or at least change the question so the answer is different), I will have to downvote that. – Graphth Oct 28 '11 at 20:27 To make things complicated, it is like asking two questions in one . Best way is to simplify it. Otherwise it is totally misleading and not worth spending time. It like those optical illusions where a shape neither like cube, or box or ball or any regular shape out there. It is just an illusion. – TomCat Oct 28 '11 at 20:31 7 That's the point. – Graphth Oct 28 '11 at 21:04 6 @TomCat: You claim that "25% is the correct answer" and "the probability that you will be correct is 50%", which are contradictory. That's the whole point of the question... – Zev Chonoles♦ Oct 30 '11 at 20:10 6 @TomCat: Read Henning's and Steven's answers above. 
Your argument is invalid because the question is "what is the probability that you will be correct?" and you are claiming both that this probability equals 25% and equals 50%, which is obviously nonsensical. – Zev Chonoles♦ Oct 31 '11 at 5:41
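For anyone who wants to see the elimination argument mechanized, here is a small sketch (my own addition, not from any of the answers): for each candidate probability value, assume it is the correct answer, compute how often a uniform random choice among a–d would hit an option stating that value, and test for self-consistency.

```python
# The four options and the probabilities they state.
options = {'a': 0.25, 'b': 0.50, 'c': 0.00, 'd': 0.25}

# If v were the correct value, a uniform random pick among the four options
# would be correct with probability (# options stating v) / 4.  Consistency
# requires that this equals v itself.
for v in sorted(set(options.values())):
    hit = sum(1 for stated in options.values() if stated == v) / len(options)
    status = "consistent" if hit == v else "inconsistent"
    print(f"assume correct value {v:.2f}: random choice is right with "
          f"probability {hit:.2f} -> {status}")

# All three candidates (0.00, 0.25, 0.50) come out inconsistent, which is
# the loop described in the answers above.
```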
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9559551477432251, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/364/motivation-for-algebraic-k-theory/511
Motivation for algebraic K-theory? Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) I'm looking for a big-picture treatment of algebraic K-theory and why it's important. I've seen various abstract definitions (Quillen's plus and Q constructions, some spectral constructions like Waldhausen's) and a lot of work devoted to calculation in special cases, e.g., extracting information about K-theory from Hochschild and cyclic homology. As far as I can tell, K-theory is extremely difficult to compute, it yields deep information about a category, and in some cases, this produces highly nontrivial results in arithmetic or manifold topology. I've been unable to piece these results into a coherent picture of why one would think K-theory is the right tool to use, or why someone would want to know that, e.g., K22(Z) has an element of order 691. Explanations and pointers to readable literature would be greatly appreciated. - 2 I'd like to emphasize that I'm not necessarily looking for holistic, coherent answers. Fragmentary brain dumps are also welcome (and may receive up-votes). – S. Carnahan♦ Oct 12 2009 at 20:26 1 Could someone provide some information about the application/manifestation of K theory in arithmetic other than Lichtenbaum conjecture and Veovodsky's work? (these are great problems and provide good motivation for a theory themselves, yet I am just curious. thx.) Or shall I open a new post for this question? – Ying Zhang Aug 13 2010 at 4:27 10 Answers Algebraic K-theory originated in classical materials that connected class groups, unit groups and determinants, Brauer groups, and related things for rings of integers, fields, etc, and includes a lot of local-to-global principles. But that's the original motivation and not the way the work in the field is currently going - from your question it seems like you're asking about a motivation for "higher" algebraic K-theory. From the perspective of homotopy theory, algebraic K-theory has a certain universality. A category with a symmetric monoidal structure has a classifying space, or nerve, that precisely inherits a "coherent" multiplication (an E_oo-space structure, to be exact), and such an object has a naturally associated group completion. This is the K-theory object of the category, and K-theory is in some sense the universal functor that takes a category with a symmetric monoidal structure and turns it into an additive structure. The K-theory of the category of finite sets captures stable homotopy groups of spheres. The K-theory of the category of vector spaces (with appropriately topologized spaces of endomorphisms) captures complex or real topological K-theory. The K-theory of certain categories associated to manifolds yields very sensitive information about differentiable structures. One perspective on rings is that you should study them via their module categories, and algebraic K-theory is a universal thing that does this. The Q-construction and Waldhausen's S.-construction are souped up to include extra structure like universally turning a family of maps into equivalences, or universally splitting certain notions of exact sequence. But these are extra. It's also applicable to dg-rings or structured ring spectra, and is one of the few ways we have to extract arithmetic data out of some of those. And yes, it's very hard to compute, in some sense because it is universal. 
But it generalizes a lot of the phenomena that were useful in extracting arithmetic information from rings in the lower algebraic K-groups and so I think it's generally accepted as the "right" generalization. This is all vague stuff but I hope I can at least make you feel that some of us study it not just because "it's there". - 1 Thanks, that was great! – S. Carnahan♦ Oct 15 2009 at 1:46 1 I wasn't talking specifically about Quillen's higher K-theory functor, but about something homotopy equivalent, which produces a universal functor from symmetric monoidal categories to spectra that converts the symmetric monoidal structure into addition. – Tyler Lawson Nov 18 2009 at 13:30 7 It is universal in a specific sense. See the thesis of Gonçalo Tabuada. – Mariano Suárez-Alvarez Nov 25 2009 at 3:26 5 That's the Barrat-Priddy-Quillen theorem. See Barrat-Priddy, "On the homology of non-connected monoids and their associated groups", and others reference Quillen, "The group completion of a simplicial monoid", Appendix Q in Friedlander-Mazur's "Filtrations on the homology of algebraic varieties". Also see Segal, "Configuration-spaces and iterated loop-spaces" for another take. – Tyler Lawson Mar 16 2010 at 3:40 3 Since there is now a paper on this, I would like to add that one can read about the universal characterization of higher K-theory that Tyler is discussing here: arxiv.org/abs/1001.2282 – Justin Noel Dec 25 2011 at 12:47 show 3 more comments You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. I think a key point is that algebraic K-theory is defined not only for rings, but also for schemes (and other kinds of "generalized spaces" in algebraic geometry). If you believe that generalized (Eilenberg-Steenrod) cohomology theories are useful/interesting in algebraic topology, then it is also reasonable to think that they might be interesting in algebraic geometry, and algebraic K-theory is in some sense the simplest and most widely studied such theory, although yes, computations are very hard. Some other motivation: Algebraic K-theory allows you to talk about characteristic classes of vector bundles on schemes, with values in various cohomology theories, see for example Gillet: K-theory and algebraic geometry. Algebraic K-theory is intimately connected with motivic cohomology and algebraic cycles, see for example Friedlander's ICTP lectures available on his webpage, especially the 5th lecture on Beilinson's vision: http://www.math.northwestern.edu/~eric/lectures/ictp/ One of the major themes in arithmetic geometry is the study of special values of motivic L-functions. These values capture a lot of deep arithmetic invariants of number fields and varieties over number fields, and they seem to be mysteriously related to many other things, for example orders of stable homotopy groups of spheres. There are many results and conjectures about these values, most famously the Clay Millennium Birch-Swinnerton-Dyer conjecture, and in many versions of these conjectures, algebraic K-theory plays a crucial role. 
See for example the survey by Bruno Kahn in the K-theory handbook, also available at his webpage: http://people.math.jussieu.fr/~kahn/preprints/kcag.pdf

There are also many other useful things in the K-theory handbook, such as the lectures by Gillet on K-theory and intersection theory, also available here: http://www.math.uic.edu/~henri/preprints/K-Theory_Chow_Groups-6.pdf

-

Yes, I agree with you. Algebraic K-theory is for studying algebraic geometry, in particular algebraic cycles. – Shizhuo Zhang Nov 18 2009 at 9:11

As a particular application of algebraic K-theory, let me mention the intersection product on regular schemes. Let X be a regular scheme over Spec Z. Then one can use the Quillen spectral sequence and Adams operations on K-theory to produce an intersection product on the Chow groups tensored with Q. To my knowledge, this is the first definition of an intersection theory on a class of schemes larger than those smooth over a Dedekind domain. For details, see Soulé's book Lectures on Arakelov Geometry. Chapter 1 of this book contains a very nice introduction to K-theory with supports and the Adams operations. Besides that, all you need is Quillen's original paper to understand the intersection theory.

-

First, recall the slogan: Small constructions are good for making calculations, but large constructions are good for proving theorems. K-theory is certainly a large construction. In general, K-theory seems to turn up in topology when the following slogan holds: Chain complex good; homology bad. You can often construct exactly the same invariant using K-theory or without, but K-theory makes the extra structure in the chain complex visible (such as Poincare duality), which makes it possible to prove theorems.

As a low-dimensional topologist, the example I have in mind is Ranicki's symmetric signature. More basically, the key observation in Milnor's proof that the Alexander polynomial is palindromic is that if you construct it from the chain complex by using Reidemeister torsion, the Poincare duality becomes evident. Similarly, the Blanchfield pairing (linking pairing on the infinite cyclic cover of a knot complement) can be constructed as the symmetric signature (L-theory), which lets you see Poincare duality.

This seems completely typical. You can squish everything into the middle dimension and get information (Wall's finiteness obstruction; Alexander polynomial; Blanchfield pairing...) or you can use a large K-theoretic construction to get the same information in a way which preserves the chain complex context from which it comes, revealing properties which come from the grading.

-

I suggest looking at the introduction to Waldhausen's original paper on algebraic K-theory (Algebraic K-theory of generalized free products, Part I, Ann. Math., 108 (1978) 135-204). Waldhausen started out as a 3-manifold theorist, and he realized that certain phenomena in the topology of 3-manifolds would be explained if the Whitehead groups of classical knot groups were trivial. So he set out to prove this, and in order to do so he developed a plethora of methods for dealing with K-groups (including his definition involving the S. construction). The basic approach here is that the Whitehead group is the cokernel of the assembly map, which is a map $$H_*(BG; K(Z)) \to K_* (R[G]).$$ Here $G$ is some group (e.g. the fundamental group of a knot complement), $R$ is some ring (e.g. $\mathbb{Z}$), and $R[G]$ is the group ring.
The homology on the left is the homology theory represented by the (non-connective) $K$-theory spectrum. The study of assembly maps for group rings is one area where $K$-theory computations look a bit more organized; one hopes to show that $K$-theory of group rings is actually a homology theory, i.e. that the assembly map is an isomorphism. Of course this homology theory itself is still quite complex, since it involves the $K$-theory of some ring $R$!

-

Here's a reference that gives some of the history of algebraic K-theory. It might have something you're looking for. http://www.math.uiuc.edu/K-theory/0343/khistory.pdf. Also Rosenberg's book "Algebraic K-theory and Its Applications" is good.

-

It's very much a "thing in itself" (quote from my advisor). And indeed it's mostly of interest to people who (1) like to compute and (2) don't mind the fact that there's "no general picture", which admittedly are a minority among mathematicians. In fact, there is (or was) a separate e-archive of K-theory papers! Still, yes, it's a very important and general way to learn about abstract rings.

-

I used to spend a lot of time looking at the K-theory archive (www.math.uiuc.edu/K-theory/) but I have yet to find enlightenment. – S. Carnahan♦ Oct 12 2009 at 19:52

Well, what about the K-theory periodicity? It's a computation, too. – Ilya Nikokoshev Oct 23 2009 at 20:45

Though I agree the things they compute right now, in 2009, are absolutely not fun from my point of view. – Ilya Nikokoshev Oct 23 2009 at 20:46

4 Since my knowledge of K-theory is very limited, I get a bit confused by your comment that K-theory is for people who don't mind the fact that there's no general picture. It seems to me that K-theory creates a link between many different fields, and so it is somehow a tool to see the big picture. So how is it then related to the fact that there may be no general picture at all? – Brian Nov 9 2011 at 14:42

For me, one interesting thing about algebraic K-theory is L-theory (which I wish I understood better). This is in no way going to be coherent, but: Let's say you are interested in classifying manifolds. That's not going to be possible, because any finitely presented group is realizable as the fundamental group of some 4-manifold, and "most" finitely presented groups don't have solvable word problem. OK then, the next best thing is to try to classify manifolds within a fixed homotopy type. Surgery theory is a technique for doing this.

Given a homotopy type, you construct a CW-complex X with that homotopy type, and your first question is whether X is homotopy equivalent to a manifold. Roughly speaking, this is determined by the normal bundle. So for X to be homotopy equivalent to a manifold, you want there to exist a suitably defined bundle map $(f,b)\colon M \to X$, which is normal bordant to a homotopy equivalence. This latter condition (for a bundle map to be normal bordant to a homotopy equivalence) is detected K-theoretically (I don't understand this well enough to attempt to try to explain it).

Note that homological techniques are sufficient for simply connected manifolds (Browder, Kervaire, Milnor, Novikov...), but are not useful for manifolds which are not simply connected. And this is where K-theory enters. The main point seems to be that algebraic K-theory provides the machinery to work with quadratic forms over nonabelian group rings with involution ($\mathbb{Z}[\pi_1(M)]$ in our case). See also Waldhausen's survey, which I don't yet understand.
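(Editorial aside, hedged, not part of the original thread.) For readers meeting these objects for the first time: the Whitehead group that the Waldhausen/assembly-map answer refers to is a piece of low-degree algebraic K-theory of the group ring,

$$\mathrm{Wh}(G) \;=\; K_1(\mathbb{Z}[G])\,\big/\,\{\pm g : g \in G\}.$$

It vanishes for the trivial group, since $K_1(\mathbb{Z}) = \{\pm 1\}$, and the s-cobordism theorem says that a high-dimensional h-cobordism is a product exactly when its Whitehead torsion in $\mathrm{Wh}(\pi_1)$ vanishes. This is one concrete place where K-theoretic invariants of $\mathbb{Z}[\pi_1(M)]$, as in the surgery-theoretic answer above, control manifold topology.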
-

I found Mitchell's survey "On the Lichtenbaum-Quillen conjectures from a stable homotopy-theoretic viewpoint" very motivating. It assumes only a little background and is written to introduce homotopy theorists and algebraic K-theorists to one another.

-

Grothendieck's original motivation for K-theory was to give a natural setting for the intersection theory on algebraic varieties.

-

1 This is Benjamin Antieau's answer, right? – Martin Brandenburg Apr 6 2011 at 22:08

I do not recall whether I noticed his post at the time of posting mine; I recall that I found the others' answers going in other directions. Still, one thing is that something is an application, and another is the original motivation. – Zoran Škoda Apr 7 2011 at 16:02

@Zoran: the date and time of the post is right above the poster's name. Antieau's answer was four months before yours. – Ryan Budney Mar 7 at 16:46
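(Editorial aside, hedged, not part of the original thread.) One classical statement behind the intersection-theoretic answers above: for a smooth variety $X$ over a field, the Chern character gives a ring isomorphism

$$\mathrm{ch}\colon K_0(X)\otimes\mathbb{Q} \;\xrightarrow{\ \sim\ }\; \bigoplus_i \mathrm{CH}^i(X)\otimes\mathbb{Q},$$

and the graded pieces $\mathrm{CH}^i(X)\otimes\mathbb{Q}$ can be recovered inside $K_0(X)\otimes\mathbb{Q}$ as the eigenspaces on which the Adams operations $\psi^k$ act by multiplication by $k^i$. The earlier answer about regular schemes over $\mathrm{Spec}\,\mathbb{Z}$ exploits the fact that this eigenspace description still makes sense in settings where a direct moving-lemma construction of the intersection product is not available.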
http://mathhelpforum.com/advanced-math-topics/209587-help-modular-arithmetic-proof-print.html
# Help with a modular arithmetic proof

• December 11th 2012, 06:32 AM dom139

Help with a modular arithmetic proof

Show that there is no solution of "$x^2$ is congruent to 3 modulo 5".

This is part of a piece of coursework I have for my Mathematical Foundations module, but the lecturer barely covered modular arithmetic and I'm pretty terrible at proofs, so any help would be greatly appreciated, thanks.

• December 11th 2012, 06:38 AM emakarov

Re: Help with a modular arithmetic proof

Every integer x equals 0, 1, 2, 3 or 4 modulo 5, and if $x\equiv k\pmod{5}$, then $x^2\equiv k^2\pmod{5}$. So you only need to check that $k^2\not\equiv3\pmod{5}$ for k = 0, 1, 2, 3, 4.
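(Editorial note, added here for completeness; not part of the original thread.) Carrying out the finite check suggested in the reply:

$$0^2 \equiv 0,\qquad 1^2 \equiv 1,\qquad 2^2 \equiv 4,\qquad 3^2 \equiv 9 \equiv 4,\qquad 4^2 \equiv 16 \equiv 1 \pmod{5},$$

so a square can only be congruent to 0, 1 or 4 modulo 5, and in particular never to 3.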
http://mathforum.org/mathimages/index.php?title=Buffon's_Needle&diff=28606&oldid=12063
# Buffon's Needle

### From Math Images
## Current revision

Buffon's Needle

Field: Geometry

Image Created By: Wolfram MathWorld

Website: [1]

The Buffon's Needle problem is a mathematical method of approximating the value of pi $(\pi = 3.1415...)$ involving repeatedly dropping needles on a sheet of lined paper and observing how often the needle intersects a line.

# Basic Description

The method was first used to approximate π by Georges-Louis Leclerc, the Comte de Buffon, in 1777. Buffon was a mathematician, and he wondered about the probability that a needle would lie across a line between two wooden strips on his floor. To test his question, he apparently threw bread sticks across his shoulder and counted when they crossed a line.

Calculating the probability of an intersection for the Buffon's Needle problem was the first solution to a problem of geometric probability. The solution can be used to design a method for approximating the number π.

Subsequent mathematicians have used this method with needles instead of bread sticks, or with computer simulations. We will show that when the distance between the lines is equal to the length of the needle, an approximation of π can be calculated using the equation

$\pi \approx {2\times \mbox{number of drops} \over \mbox{number of hits}}$

# A More Mathematical Explanation

#### Will the Needle Intersect a Line?

To prove that the Buffon's Needle experiment will give an approximation of π, we can consider which positions of the needle will cause an intersection. Since the needle drops are random, there is no reason why the needle should be more likely to intersect one line than another. As a result, we can simplify our proof by focusing on a particular strip of the paper bounded by two horizontal lines.
The variable θ is the acute angle made by the needle and an imaginary line parallel to the ones on the paper. Since we are considering the case where the distance between the lines equals the needle length l, we might as well take that common distance to be 1 unit. Finally, d is the distance between the center of the needle and the nearest line. Also, there is no reason why the needle is more likely to fall at a certain angle or distance, so we can consider all values of θ and d equally probable.

We can extend line segments from the center and tip of the needle to meet at a right angle. A needle will cut a line if the length of the green arrow, d, is shorter than the length of the leg opposite θ. More precisely, it will intersect when

$d \leq \left( \frac{1}{2} \right) \sin(\theta).$

See case 1, where the needle falls at a relatively small angle with respect to the lines. Because of the small angle, the center of the needle would have to fall very close to one of the horizontal lines in order to intersect it. In case 2, the needle intersects even though the center of the needle is far from both lines because the angle is so large.

#### The Probability of an Intersection

In order to show that the Buffon's experiment gives an approximation for π, we need to show that there is a relationship between the probability of an intersection and the value of π. If we graph the possible values of θ along the X axis and d along the Y, we have the sample space for the trials. In the diagram below, the sample space is contained by the dashed lines. Each point on the graph represents some combination of an angle and a distance that a needle might occupy.

There will be an intersection if $d \leq \left( \frac{1}{2} \right) \sin(\theta)$, which is represented by the blue region. The area under this curve represents all the combinations of distances and angles that will cause the needle to intersect a line. Since each of these combinations is equally likely, the probability is proportional to the area – that's what makes this a geometric probability problem. The area under the blue curve, which is equal to 1/2 in this case, can be found by evaluating the integral

$\int_0^{\frac {\pi}{2}} \frac{1}{2} \sin(\theta)\, d\theta = \left[-\frac{1}{2}\cos(\theta)\right]_0^{\frac{\pi}{2}} = \frac {1}{2}$

Then, the area of the sample space can be found by multiplying the length of the rectangle by the height.

$\frac {1}{2} \times \frac {\pi}{2} = \frac {\pi}{4}$

The probability is equal to the ratio of the two areas in this case because each possible value of θ and d is equally probable. The probability of an intersection is

$P_{hit} = \cfrac{ \frac{1}{2} }{\frac{\pi}{4}} = \frac {2}{\pi} = 0.6366197\ldots, \qquad \mbox{and thus} \qquad \pi = \frac{2}{P_{hit}}.$

Thus, in order to approximate $\pi$, it remains only to approximate $P_{hit}$.

#### Using Random Samples to Approximate π

The original goal of the Buffon's needle method, approximating π, can be achieved by using probability to solve for π. If a large number of trials is conducted, the proportion of times a needle intersects a line will be close to the probability of an intersection. That is, the number of line hits divided by the number of drops will equal approximately the probability of hitting the line.
$P_{hit} \approx \frac {\mbox{number of hits}}{\mbox{number of drops}}$

So,

$\pi = \frac{2}{P_{hit}} \approx \frac {2}{\frac {\mbox{number of hits}}{\mbox{number of drops}}} = \frac {2 \times {\mbox{number of drops}}}{\mbox{number of hits}}$

# Why It's Interesting

#### Monte Carlo Methods

The Buffon's needle problem was the first recorded use of a Monte Carlo method. These methods employ repeated random sampling to approximate a probability, instead of computing the probability directly. Monte Carlo calculations are especially useful when the nature of the problem makes a direct calculation impossible or unfeasible, and they have become more common as the introduction of computers makes randomization and conducting a large number of trials less laborious.

π is an irrational number, which means that its value cannot be expressed exactly as a fraction a/b, where a and b are integers. This also means that the decimal representation of π is non-terminating and non-repeating, and mathematicians have been challenged with trying to determine increasingly accurate approximations. The timeline below shows the improvements in approximating π throughout history. In the past 50 years especially, improvements in computer capability allow more decimal places to be determined. Nonetheless, better methods of approximation are still desired.

A recent study conducted the Buffon's Needle experiment to approximate π using computer software. The researchers administered 30 trials for each number of drops, and averaged their estimates for π. They noted the improvement in accuracy as more trials were conducted.

These results show that the Buffon's Needle method approximation is relatively tedious. Compared to other computation techniques, Buffon's method is impractical because the estimates converge towards π rather slowly. Even when a large number of needles were dropped, this experiment gave a value of π that was inaccurate in the third decimal place.

Regardless of the impracticality of the Buffon's Needle method, the historical significance of the problem as a Monte Carlo method means that it continues to be widely recognized.

#### Generalization of the problem

The Buffon's needle problem has been generalized so that the probability of an intersection can be calculated for a needle of any length and paper with any spacing. For a needle shorter than the distance between the lines, it can be shown by a similar argument to the case where d = 1 and l = 1 that the probability of an intersection is $\frac{2l}{\pi d}$. Note that this expression agrees with the normal case, where l = 1 and d = 1, so these variables disappear and the probability is $\frac {2}{\pi}$.

The generalization of the problem is useful because it allows us to examine the relationship between length of the needle, distance between the lines, and probability of an intersection. The variable for length is in the numerator, so a longer needle will have a greater probability of an intersection. The variable for distance is in the denominator, so greater space between lines will decrease the probability of an intersection.

To see how a longer needle will affect probability, follow this link: http://whistleralley.com/java/buffon_graph.htm
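The approximation procedure above is easy to try on a computer. The sketch below is an editorial addition rather than part of the original article: it is a minimal Python Monte Carlo simulation in which the needle length l and the line spacing d are parameters (assumed to satisfy l ≤ d), so it also illustrates the generalized probability $\frac{2l}{\pi d}$ discussed in this section.

```python
import math
import random

def buffon_pi_estimate(drops, l=1.0, d=1.0, seed=0):
    """Estimate pi by simulating Buffon's needle drops (assumes l <= d)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(drops):
        # Only two random quantities matter per drop: the distance from the
        # needle's center to the nearest line, and the acute angle the
        # needle makes with the lines.
        center = rng.uniform(0.0, d / 2.0)
        theta = rng.uniform(0.0, math.pi / 2.0)
        if center <= (l / 2.0) * math.sin(theta):  # intersection condition
            hits += 1
    if hits == 0:
        return float("inf")
    # P(hit) = 2*l / (pi*d), so pi is approximately 2*l*drops / (d*hits).
    return 2.0 * l * drops / (d * hits)

if __name__ == "__main__":
    for n in (100, 10_000, 1_000_000):
        print(n, buffon_pi_estimate(n))
```

With a million drops this typically lands within a few units of the third decimal place of π, which is consistent with the article's remark that the estimates converge rather slowly.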
#### Needles in Nature

Applications of the Buffon's Needle method are even found in nature. The Centre for Mathematical Biology at the University of Bath found uses of the Buffon's Needle algorithm in a recent study of ant colonies. The researchers found that an ant can estimate the size of an anthill by visiting the hill twice and noting how often it recrosses its first path during the second trip.

Ants generally nest in groups of about 50 or 100, and the size of their nest preference is determined by the size of the colony. When a nest is destroyed, the colony must find a suitable replacement, so they send out scouts to find new potential homes.

In the study, scout ants were provided with "nest cavities of different sizes, shapes, and configurations in order to examine preferences" [2]. From observations that ants prefer nests of a certain size related to the number of ants in the colony, researchers were able to draw the conclusion that scout ants must have a method of measuring areas.

A scout initially begins exploration of a nest by walking around the site to leave tracks. Then, the ant will return later and walk a new path that repeatedly intersects the first tracks. The first track will be laced with a chemical that causes the ant to note each time it crosses the original path. The researchers believe that these scout ants can calculate an estimate for the nest's area using the number of intersections between its two visits.

The ants can measure the size of their hill using a related and fairly intuitive method: If they are constantly intersecting their first path, the area must be small. If they rarely reintersect the first track, the area of the hill must be much larger, so there is plenty of space for a non-intersecting second path.

"In effect, an ant scout applies a variant of Buffon's needle theorem: The estimated area of a flat surface is inversely proportional to the number of intersections between the set of lines randomly scattered across the surface." [7]

This idea can be related back to the generalization of the problem by imagining that the area between the lines is increased by making the parallel lines much further apart. A larger distance between the two lines would mean a much smaller probability of intersection. We can see in case 3 that when the distance between the lines is greater than the length of the needle, even a very large angle won't necessarily cause an intersection.

This natural method of random motion allows the ants to gauge the size of their potential new hill regardless of its shape. Scout ants are even able to assess the area of a hill in complete darkness. The animals show that algorithms can be used to make decisions where an array of restrictions may prevent other methods from being effective.

# Teaching Materials

There are currently no teaching materials for this page.

# References

[1] http://www.maa.org/mathland/mathtrek_5_15_00.html

[2] http://mste.illinois.edu/reese/buffon/bufjava.html

[3] http://www.absoluteastronomy.com/topics/Monte_Carlo_method

[4] The Number Pi. Eymard, Lafon, and Wilson.

[5] Monte Carlo Methods Volume I: Basics. Kalos and Whitlock.

[6] Heart of Mathematics. Burger and Starbird.

[7] http://math.tntech.edu/techreports/TR_2001_4.pdf
http://www.physicsforums.com/showthread.php?t=357671
Physics Forums

## Normal coordinates and frequencies of four masses connected by springs on a circle

1. The problem statement, all variables and given/known data

Given the system of 4 equal masses connected by identical springs, and constrained to move on a circle, find the normal coordinates and frequencies of the masses. I'm not looking for the answer, just a push in the right direction as I'm having trouble starting. If someone could explain how this is related to the motion of springs and masses in a straight line system, it would be much appreciated. Even a vague idea would be helpful, thanks.

2. Relevant equations

3. The attempt at a solution

Quote by jameson2: Given the system of 4 equal masses connected by identical springs, and constrained to move on a circle, find the normal coordinates and frequencies of the masses. I'm not looking for the answer, just a push in the right direction as I'm having trouble starting. If someone could explain how this is related to the motion of springs and masses in a straight line system, it would be much appreciated. Even a vague idea would be helpful, thanks.

It's not really related to motions of the mass-spring system in 1D. You might want to consider spherical-polar coordinates with the origin at the center of the circular loop; that way the masses are given by a single coordinate, $\theta$. I should note that you will actually have 4 $\theta$ coordinates, one for each mass. I hope I didn't introduce any confusion on that.
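(Editorial addition, not part of the thread.) Once the four angular displacements suggested above are taken as coordinates, the equations of motion couple each mass to its two neighbours through a circulant matrix, and the normal frequencies are the square roots of its eigenvalues. The sketch below is only a numerical check of that statement, not the intended pencil-and-paper solution; the mass m, the spring constant k, and the choice to measure each displacement along the arc are illustrative assumptions, not data from the thread.

```python
import numpy as np

m = 1.0  # assumed mass of each particle (illustrative value)
k = 1.0  # assumed spring constant of each spring (illustrative value)

# Arc-length displacements s_i of the 4 masses obey
#   m * s_i'' = k * (s_{i+1} - 2*s_i + s_{i-1})   with indices mod 4,
# so the squared normal frequencies are the eigenvalues of (k/m) * K:
K = np.array([[ 2, -1,  0, -1],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [-1,  0, -1,  2]], dtype=float)

omega_sq = np.linalg.eigvalsh((k / m) * K)
print(np.sqrt(np.clip(omega_sq, 0.0, None)))
# The eigenvalues come out as 0, 2k/m, 2k/m, 4k/m: the zero mode is the
# whole ring rotating together, and the other three are genuine oscillations.
```

In this formulation the system is the same as a linear chain of four masses with periodic boundary conditions, which may be the connection to the straight-line system the original poster was asking about.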
http://mathhelpforum.com/differential-geometry/190422-uniform-convergence-1-a.html
# Thread: uniform convergence on (1,a)

1. ## uniform convergence on (1,a)

The function $f_n=\frac{x^n}{x^n+1}$ converges uniformly to $f(x)=1$ on the interval $(1,a)$.

Proof. Taking x=a, as $|\frac{a^n}{a^n+1}-1|<\epsilon\rightarrow \frac{1}{1+a^n}<\epsilon \rightarrow ln(\frac{1-\epsilon}{\epsilon})<ln(a) n$

So can I take $N={ln(\frac{1-\epsilon}{\epsilon})}$ and then I have that:

$\forall\epsilon>0\ \exists N(={ln(\frac{1-\epsilon}{\epsilon})})\ s.t\ \forall n\geq N \ \forall x\in(1,a)\ \mbox{we have}\ |f_n(x)-f(x)|<\epsilon$

Is this correct or is it wildly off? Uniform convergence is kind of killing me. Thanks for any help.

2. ## Re: uniform convergence on (1,a)

Originally Posted by hmmmm: The function $f_n=\frac{x^n}{x^n+1}$ converges uniformly to $f(x)=1$ on the interval $(1,a)$. Proof. Taking x=a, as $|\frac{a^n}{a^n+1}-1|<\epsilon\rightarrow \frac{1}{1+a^n}<\epsilon \rightarrow ln(\frac{1-\epsilon}{\epsilon})<ln(a) n$ So can I take $N={ln(\frac{1-\epsilon}{\epsilon})}$ and then I have that: $\forall\epsilon>0\ \exists N(={ln(\frac{1-\epsilon}{\epsilon})})\ s.t\ \forall n\geq N \ \forall x\in(1,a)\ \mbox{we have}\ |f_n(x)-f(x)|<\epsilon$ Is this correct or is it wildly off? Uniform convergence is kind of killing me. Thanks for any help.

I don't quite understand what you're doing here; why are you choosing this $a$? Are you secretly plugging this in because it's the max, and you just omitted this? Give us more words, explaining your choices etc., and then we can help you decide if what you did is correct. A quick check: you should be proving that for every $\varepsilon>0$ there exists $N\in\mathbb{N}$ such that $\|f_n(x)-f(x)\|_\infty<\varepsilon$ where $\|\cdot\|_\infty$ is the infinity norm. This 'looks' like what you're doing, but please fill in more holes.

3. ## Re: uniform convergence on (1,a)

The function $f_n=\frac{x^n}{x^n+1}$ converges uniformly to $f(x)=1$ on the interval $(1,a)$.

Proof. Taking x=a, as this will be the max of the $f_n(x)$, so if the max is less than epsilon then $f_n(x)$ will be for all other values of x.

As $|\frac{a^n}{a^n+1}-1|<\epsilon$ and as $\frac{a^n}{a^n+1}-1=\frac{-1}{1+a^n}$ and so:

$|\frac{1}{1+a^n}|<\epsilon \rightarrow ln(\frac{1-\epsilon}{\epsilon})<ln(a) n$

So can I take $N={ln(\frac{1-\epsilon}{\epsilon})}$ and then I have that:

$\forall\epsilon>0\ \exists N(={ln(\frac{1-\epsilon}{\epsilon})})\ s.t\ \forall n\geq N \ \forall x\in(1,a)\ \mbox{we have}\ |f_n(x)-f(x)|<\epsilon$

I think this is a bit better? Sorry about it being a bit of a mess, I am a little confused by this. Thanks very much for the help.
http://mathoverflow.net/questions/53381?sort=newest
## The continuity of Injectivity radius

Dear all, when reading a book of M. Berger, I learned that the injectivity radius Inj(x) on a compact Riemannian manifold depends continuously on the point x. When the manifold is complete and non-compact, Inj may not be continuous. For example, Inj(x) decreases to zero when x moves to the most curved point on a paraboloid. However, it could be infinity at that point.

My question is, can we prove the continuity of Inj on a non-compact manifold under some conditions? (I think that the weakest condition is to assume the finiteness of Inj.)

ps. I must admit that I don't know how to prove the continuity of Inj even on a compact manifold. I think that the argument should involve the stability of ODEs (the geodesic equation and Jacobi equation). If one of you has a reference about this, could you please tell me? Thanks a lot!

-

1 Are you sure about your paraboloid example? Take a look at Proposition 2.1.10 on Page 131 of W. Klingenberg's book Riemannian Geometry – Willie Wong Jan 26 2011 at 19:01

You write: "Inj(x) decreases to zero when x moves to the most curved point on a paraboloid." This is not true; the injectivity radius does not go to zero... – Anton Petrunin Jan 26 2011 at 19:03

In fact, on any compact region of a smooth Riemannian manifold, you have that the injectivity radius is bounded below by a strictly positive number... (see the same reference that I gave above) – Willie Wong Jan 26 2011 at 19:16

Thank you a lot!! I should have asked the question earlier; it had troubled me for one month... – Chih-Wei Chen Jan 27 2011 at 16:19

## 1 Answer

The compactness is irrelevant; i.e. if it is true for compact manifolds then the same is true for complete ones. (The same proof as in the compact case works, but it is easier to do this way.)

If $R<\mathrm{InjRad}_p$ then one can construct a smooth metric on a sphere with an isometric copy of $B_R(p)$ inside.

If there is a sequence of points $x_n\to x$ such that $$\lim\ \mathrm{InjRad}_{x_n}< \mathrm{InjRad}_x,$$ apply the above construction for $R$ slightly smaller than $\mathrm{InjRad}_x$. You get a compact manifold with non-continuous InjRad.

If there is a sequence of points $x_n\to x$ such that $$\lim\ \mathrm{InjRad}_{x_n}> \mathrm{InjRad}_x$$ then apply the above construction for $p=x_n$ for large enough $n$ and $R>\mathrm{InjRad}_x$. That leads to a contradiction again.

-
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9123700261116028, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/21812/can-we-construct-a-function-f-mathbbr-rightarrow-mathbbr-such-that-it-h
# Can we construct a function $f:\mathbb{R} \rightarrow \mathbb{R}$ such that it has intermediate value property and discontinuous everywhere?

Can we construct a function $f:\mathbb{R} \rightarrow \mathbb{R}$ such that it has the intermediate value property and is discontinuous everywhere? I think it is possible because we can consider $$y = \begin{cases} \sin \left( \frac{1}{x} \right), & \text{if } x \neq 0, \\ 0, & \text{if } x=0. \end{cases}$$ This function has the intermediate value property but is discontinuous at $x=0$. Inspired by this example, let $r_n$ denote an enumeration of the rational numbers, and define $$y = \begin{cases} \sum_{n=1}^{\infty} \frac{1}{2^n} \left| \sin \left( \frac{1}{x-r_n} \right) \right|, & \text{if } x \notin \mathbb{Q}, \\ 0, & \mbox{if }x \in \mathbb{Q}. \end{cases}$$ It is easy to see this function is discontinuous if $x$ is not a rational number. But I can't verify its intermediate value property.

-

2 Kindly typeset your post in future. – user17762 Feb 13 '11 at 9:13

## 4 Answers

Sure. The class of functions satisfying the conclusion of the Intermediate Value Theorem is actually vast and well-studied: such functions are called Darboux functions in honor of Jean Gaston Darboux, who showed that any derivative is such a function (the point being that not every derivative is continuous). A standard example of an everywhere discontinuous Darboux function is Conway's base 13 function. (Perhaps it is worth noting that the existence of such functions is not due to Conway: his is just a particularly nice, elementary example. I believe such functions were already well known to Rene Baire, and indeed possibly to Darboux himself.)

-

+1 for the base 13 function. I also thought sin(1/x) was a bit of a cheat, being discontinuous at only one point. – Glen Wheeler Feb 13 '11 at 10:57

Any function that takes every real value on every open interval is an example. You can find other examples of this in the answers to this question on MathOverflow.

-

Please look at the problem $1.3.29$ in Problems in Mathematical Analysis Vol II, Continuity and Differentiation, by Kaczor and Nowak. They have provided solutions also. Anyhow, since one can't view it in Google books, I am TeXing out the solution here.

Question: Recall that every $x \in (0,1)$ can be represented by a binary fraction $0.a_{1}a_{2}a_{3}\cdots$, where $a_{i} \in \{0,1\}$ and $i=1,2, \cdots$. Let $f: (0,1) \to [0,1]$ be defined by $$f(x) = \overline{\lim_{n \to \infty}} \ \frac{1}{n} \sum\limits_{i=1}^{n}a_{i}$$ Prove that $f$ is discontinuous at each $x \in (0,1)$ but nevertheless has the intermediate value property.

Solution. We show that if $I$ is a subinterval of $(0,1)$ with non-empty interior then $f(I)=[0,1]$. To this end, note that such an $I$ contains a sub-interval $\bigl(\frac{k}{2^{n_0}}, \frac{k+1}{2^{n_0}}\bigr)$. So it is enough to show that $$f\biggl(\biggl(\frac{k}{2^{n_0}},\frac{k+1}{2^{n_0}}\biggr)\biggr)= [0,1]$$ Now observe that if $x \in (0,1)$ then either $x=\frac{m}{2^{n_0}}$ with some $m$ and $n_0$, or $x \in \bigl(\frac{j}{2^{n_0}},\frac{j+1}{2^{n_0}}\bigr)$ with some $j=0,1, \cdots, 2^{n_0}-1.$ If $x = \frac{m}{2^{n_0}}$, then $f(x)=1$, and the value of $f$ at the midpoint of $\bigl(\frac{k}{2^{n_0}}, \frac{k+1}{2^{n_0}}\bigr)$ is also $1$. Next, if $x \in \bigl(\frac{j}{2^{n_0}}, \frac{j+1}{2^{n_0}}\bigr)$ then there is $x' \in \bigl(\frac{k}{2^{n_0}}, \frac{k+1}{2^{n_0}}\bigr)$ such that $f(x)=f(x')$.
Indeed, all numbers in $\bigl(\frac{k}{2^{n_0}}, \frac{k+1}{2^{n_0}}\bigr)$ have the same first $n_0$ digits, and we can find $x'$ in this interval for which all the remaining digits are as in the binary expansion of $x$. Since $$\overline{\lim_{n\to\infty}} \ \frac{\sum\limits_{i=1}^{n} a_{i}}{n} = \overline{\lim_{n\to\infty}} \ \frac{\sum\limits_{i=n_{0}+1}^{n} a_{i}}{n}$$ we get $f(x)=f(x')$. Consequently it is enough to show that $f((0,1))=[0,1]$, or in other words, that for each $y \in [0,1]$ there is $x \in (0,1)$ such that $f(x)=y$. It follows from the above that $1$ is attained, for example at $x =\frac{1}{2}$. To show that $0$ is also attained, take $x = 0.a_{1}a_{2}\cdots,$ where $$a_{i}=\begin{cases} 1 & \text{if } i=2^{k}, \ k=1,2, \cdots, \\ 0 & \text{otherwise.}\end{cases}$$ Then $$f(x) = \lim_{k \to \infty} \frac{k}{2^{k}}=0$$ To obtain $y=\frac{p}{q}$, where $p$ and $q$ are co-prime positive integers, take $$x = \underbrace{.00\cdots 0}_{q-p} \: \underbrace{11\cdots 1}_{p} \: \underbrace{00\cdots 0}_{q-p} \cdots,$$ where blocks of $q-p$ zeros alternate with blocks of $p$ ones. Then $f(x) = \lim_{k\to\infty} \frac{kp}{kq}=\frac{p}{q}$. Now our task is to show that every irrational $y \in [0,1]$ is also attained. We know that there is a sequence of rationals $\frac{p_n}{q_n}$, where each pair $p_n$, $q_n$ is co-prime, converging to an irrational $y$. Let $$x = \underbrace{.00 \cdots 0}_{q_{1}-p_{1}} \: \underbrace{11\cdots 1}_{p_{1}} \: \underbrace{00 \cdots 0}_{q_{2}-p_{2}} \cdots$$ Then $f(x) = \lim_{n \to \infty} \frac{p_{1} + p_{2} + \cdots + p_{n}}{q_{1} + q_{2} + \cdots + q_{n}} = \lim_{n \to \infty} \frac{p_{n}}{q_{n}} = y$. Since $\displaystyle\lim_{n \to \infty} q_{n} = +\infty$, the second equality follows from the Stolz theorem.

-

3 To the downvoter: why a downvote? – user9413 Jun 9 '11 at 10:53

Not only do such functions exist, but it turns out there are lots of them. There is actually a theorem, I think by Sierpinski, that any real-valued function is the sum of two Darboux functions...

-

– Martin Sleziak Apr 1 '12 at 8:14
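The solution above rests on the single fact that $f$ maps every nontrivial subinterval onto $[0,1]$. The following short remark is a clarifying aside spelling out why that fact alone already gives both required properties; it is a standard argument, not taken from the answers.

```latex
% If f(I) = [0,1] for every nontrivial interval I in (0,1), then:
%
% (1) Intermediate value property: for x < y and any c between f(x)
%     and f(y), we have c in [0,1] = f((x,y)), so c = f(z) for some
%     z in (x,y).
%
% (2) Discontinuity everywhere: every neighbourhood of a point x
%     contains points where f takes the value 0 and points where it
%     takes the value 1, so
\[
  \operatorname*{osc}_{x} f
  \;=\; \lim_{\delta \to 0^+}
        \Bigl( \sup_{|t-x|<\delta} f(t) - \inf_{|t-x|<\delta} f(t) \Bigr)
  \;=\; 1 \quad \text{for every } x ,
\]
% and f cannot be continuous at any point.
```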
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 58, "mathjax_display_tex": 9, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9501426219940186, "perplexity_flag": "head"}
http://quant.stackexchange.com/tags/diversification/hot
# Tag Info ## Hot answers tagged diversification 12 ### How do I find the most diversified portfolio, or least correlated subset, of stocks? One simple method, based on the principles of mean-variance optimization, is to set the weights proportional to the product of the inverse of the covariance matrix and a vector of standard deviations. This implicitly assumes that the normalized expected return of each stock is equal. If you wish, you can take only the top 5 weights and set the others to ... 9 ### Should the average investor hold commodities as part of a broadly diversified portfolio? Yes Strategic Asset Allocation: Determining the Optimal Portfolio with Ten Asset Classes Strategic Asset Allocation and Commodities The Case for Commodities An Asset Class for All Seasons: The Benefits of a Strategic Allocation to Commodities No Should Investors Include Commodities in Their Portfolios After All? New Evidence My Take Although there ... 7 ### How to simulate correlated assets for illustrating portfolio diversification? Your formula looks like cointegration (between the price time series) rather than correlation (between the returns). To simulate "correlated random walks", i.e., random walks built from correlated innovations, you can just build the desired covariance matrix (for instance, put ones on the diagonal and $\rho$ everywhere else), take multivariate gaussian ... 7 ### How do I find the most diversified portfolio, or least correlated subset, of stocks? The problem of the selecting the best portfolio (according to some risk measure) with a limited number of assets can be formulated as a mixed integer linear or quadratic program and is reviewed in the recent paper "Portfolio selection problems in practice: a comparison between linear and quadratic optimization models". It can be solved for reasonable sizes ... 5 ### How does one analyze diversification if stock prices follow a Cauchy distribution? There is, in fact, a large literature on this subject, but you may have been searching for the wrong terms. This issue is broadly explored within the literature regarding a preference for higher moments. Cauchy distributions, themselves, are very hard to work with because they don't have any well-defined moments. One possible solution to this is to use ... 4 ### How does one analyze diversification if stock prices follow a Cauchy distribution? I think this is a very good and valid question. I will try to give a more general answer here. It is by now a well known fact that much of the classical stuff won't work in the manner it was thought and supposed to work. One of the points that are not clear when it comes to real financial time series is how they are distributed (they are obviously not ... 4 ### age-sensitive correlation measurements in finances You can use the Exponentially Weighted Average directly aswell, finding the covariances and then normalizing back to the correlations: $\sigma_{t+1,jk} = (1-\lambda) \sum_{n=0}^\infty \lambda^{n} r_{j,t-n} r_{k,t-n}$ (this assumes average returns 0 etc etc. More general versions can be derived) 3 ### St Petersburg lottery pricing & short investing horizons What a great question -- it touches on many issues at the core of quantitative finance. This answer might be a lot more than you bargained for, but it's too interesting to pass up. References Mostly, this subject falls somewhere at the intersection of these three highly-interrelated topics: risk-neutral valuation, rational pricing and the fundamental ... 
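As an illustration of the first answer above (weights proportional to the product of the inverse covariance matrix and a vector of standard deviations), here is a small Python/NumPy sketch. It is an added example rather than code from any of the quoted answers; the return matrix is synthetic and the "top 5" trimming is just the option mentioned at the end of that answer.

```python
import numpy as np

# Toy daily-return matrix: rows = days, columns = assets.
# These numbers are made up purely for illustration; substitute
# your own return series in practice.
rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, size=(250, 6))

cov = np.cov(returns, rowvar=False)   # sample covariance matrix
vol = np.sqrt(np.diag(cov))           # per-asset standard deviations

# Weights proportional to inv(cov) @ vol, as in the quoted answer
# (this implicitly assumes equal normalized expected returns).
# Note: these raw weights can come out negative, i.e. short positions;
# the quoted answer does not address that case.
raw = np.linalg.solve(cov, vol)
weights = raw / raw.sum()

# Optionally keep only the largest few weights and renormalize,
# mirroring the "take only the top 5 weights" suggestion.
top = np.argsort(weights)[-5:]
trimmed = np.zeros_like(weights)
trimmed[top] = weights[top]
trimmed /= trimmed.sum()

print(np.round(weights, 3))
print(np.round(trimmed, 3))
```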
3 ### age-sensitive correlation measurements in finances
1) Calculate exponential averages (EMA) for time series A & B. 2) Calculate exponential standard deviations for A & B. My little hack for this is to calculate an EMA of squared returns, then subtract the squared EMA of simple returns, then take the square root of this: sqrt( ema(return^2) - ema(return)^2 ). 3) Apply the same concept to calculating ...

3 ### Is this comment right about subadditivity?
You're right, I hope he meant exactly the opposite, and the formula you provided is indeed part of the definition of a coherent risk measure. In fact, I would say that the risk of the sum is less than or equal to the sum of the individuals, as in some cases you would like your model to accept no diversification effect. As John mentioned in his comment, ...

3 ### Should the average investor hold commodities as part of a broadly diversified portfolio?
Very informative and balanced is: The Strategic and Tactical Value of Commodity Futures by Claude B. Erb, CFA, and Campbell R. Harvey. One well-known scientifically based passive investment fund in Germany (ARERO) draws a ratio of 15% for commodities (60% world stocks and 25% bonds, rebalanced on a yearly basis) as a conclusion out of this - see the live ...

2 ### portfolio diversification tester
Maybe I don't get your question well, but it appears that your goal is to buy securities in order to reduce the correlation between your portfolio constituents. So firstly you need a metric of diversification. Something simple you can use is to calculate the correlation matrix, and the weights of each position. A simple metric would be the sum of ...

1 ### St Petersburg lottery pricing & short investing horizons
This may or may not be helpful, since I don't have anything to point you to that specifically addresses the high skewness of the distribution you mention. However, this sounds like it is probably an idiosyncratic risk, and that certainly has bearing on whether or not it would be priced. In the standard capital asset pricing model, the marginal investor ...

1 ### age-sensitive correlation measurements in finances
Try this: Given some time horizon K, which can be divided into subperiods of length N, calculate a rolling correlation coefficient of length N, then use the EMA to weight the more recent correlation coefficients more heavily (indirectly weighting the recent relationship more, vs the medium-term part). Never came up with this problem in my work so ...

1 ### Should the average investor hold commodities as part of a broadly diversified portfolio?
The controversy surrounding commodity futures flows from Gorton and Rouwenhorst (2004). The authors show an equal-weight portfolio of long positions in commodity futures provides a Sharpe ratio greater than the one earned by holding a cap-weighted portfolio of U.S. stocks (beginning in the 1950's through 2004 or so). In essence, why should holding a ...
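The exponentially weighted recipe in the "age-sensitive correlation" answers above (EMA of products minus product of EMAs, normalized by EMA standard deviations) can be written out in a few lines of Python. This is an added sketch, not code from the answers; the decay parameter and the synthetic return series are assumptions.

```python
import numpy as np

def ema(x, lam=0.94):
    """Exponentially weighted moving average; the latest value gets weight (1 - lam)."""
    out = np.empty_like(x, dtype=float)
    out[0] = x[0]
    for t in range(1, len(x)):
        out[t] = lam * out[t - 1] + (1.0 - lam) * x[t]
    return out

def ewma_corr(r_a, r_b, lam=0.94):
    """Age-sensitive correlation: EWMA covariance over EWMA standard deviations."""
    cov = ema(r_a * r_b, lam) - ema(r_a, lam) * ema(r_b, lam)
    sd_a = np.sqrt(ema(r_a**2, lam) - ema(r_a, lam) ** 2)
    sd_b = np.sqrt(ema(r_b**2, lam) - ema(r_b, lam) ** 2)
    # Early entries are unreliable (little history); only the later
    # values of the series are meaningful estimates.
    return cov / (sd_a * sd_b)

# Synthetic correlated returns, made up for illustration only.
rng = np.random.default_rng(1)
a = rng.normal(0, 0.01, 500)
b = 0.5 * a + rng.normal(0, 0.01, 500)
print(ewma_corr(a, b)[-1])   # latest, recency-weighted correlation estimate
```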
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9259235858917236, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/22478/can-analysis-detect-torsion-in-cohomology
## Can analysis detect torsion in cohomology?

Take, for example, the Klein bottle K. Its De Rham cohomology with coefficients in $\mathbb{R}$ is $\mathbb{R}$ in dimension 1, while its singular homology with coefficients in $\mathbb{Z}$ is $\mathbb{Z} \times \mathbb{Z}_2$ in dimension 1. It is in general true that De Rham cohomology ignores the torsion part of singular cohomology. This is not a big surprise since De Rham cohomology really just gives the dimensions of the spaces of solutions to certain PDEs, but I'm wondering if there is some other way to directly use the differentiable structure of a manifold to recover torsion. I feel like I should know this, but what can I say... Thanks!

-

I very much like this question! – B. Bischof Apr 25 2010 at 3:27

## 3 Answers

You can compute the integer (co)homology groups of a compact manifold from a Morse function $f$ together with a generic Riemannian metric $g$; the metric enters through the (downward) gradient flow equation $$\frac{d}{dt}x(t)+ \mathrm{grad}_g(f) (x(t)) = 0$$ for paths $x(t)$ in the manifold. After choosing further Morse functions and metrics, in a generic way, you can recover the ring structure, Massey products, cohomology operations, Reidemeister torsion, functoriality.

The best-known way to compute the cohomology from a Morse function is to form the Morse cochain complex, generated by the critical points (see e.g. Hutchings's Lecture notes on Morse homology). Poincaré duality is manifest. Another way, due to Harvey and Lawson, is to observe that the de Rham complex $\Omega^{\ast}(M)$ sits inside the complex of currents $D^\ast(M)$, i.e., distribution-valued forms. The closure $\bar{S}_c$ of the stable manifold $S_c$ of a critical point $c$ of $f$ defines a Dirac-delta current $[\bar{S}_c]$. As $c$ varies, these span a $\mathbb{Z}$-subcomplex $S_f^\ast$ of $D^*(M)$ whose cohomology is naturally the singular cohomology of $M$.

The second approach could be seen as a "de Rham theorem over the integers", because over the reals, the inclusions of $S_f\otimes_{\mathbb{Z}} \mathbb{R}$ and $\Omega^{\ast}_M$ into $D^\ast(M)$ are quasi-isomorphisms, and the resulting isomorphism of $H^{\ast}_{dR}(M)$ with $H^\ast(S_f\otimes_{\mathbb{Z}}\mathbb{R})=H^\ast_{sing}(M;\mathbb{R})$ is the de Rham isomorphism.

-

Thanks a ton! Your second approach was exactly the sort of thing I was looking for, though it is a very useful observation that Morse theory also does the trick. – Paul Siegel Apr 26 2010 at 15:16

The Cheeger–Müller theorem (and related results) allows one to get some control over the torsion in the homology using an analysis of the spectrum of the Laplacian. For one application, see the recent paper of Bergeron and Venkatesh (which also contains a reference to the work of Cheeger and Müller).

-

In your example, the orientation cover is the 2-d torus T. It is a double cover of the Klein bottle with $H^1(T;\mathbb{R})=\mathbb{R}^2$, so the answer for this example is definitely yes. However, I am not sure if this example is so good, as the torsion here comes from the orientation cover being connected.

-
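For concreteness, here is the standard cellular chain-complex computation that produces the torsion the question refers to, added as a textbook-style aside:

```latex
% Klein bottle K with one 0-cell, two 1-cells a, b and one 2-cell
% attached along the word a b a b^{-1}, which abelianizes to 2a.
% Cellular chain complex over Z:
\[
  0 \longrightarrow \mathbb{Z}
    \xrightarrow{\ \partial_2 = (2,\,0)\ } \mathbb{Z}^2
    \xrightarrow{\ \partial_1 = 0\ } \mathbb{Z}
    \longrightarrow 0 ,
\]
% so H_1(K;Z) = Z^2 / <2a> = Z (+) Z/2, while tensoring the complex
% with R kills the 2-torsion and leaves H_1(K;R) = R, matching the
% de Rham computation in the question.
```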
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9226282835006714, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?s=aaee1e74bdf721cfa476a18e8cfedd3f&p=3956143
Physics Forums

## Tensor fields and multiplication

Hello! I'm currently reading John Lee's books on different kinds of manifolds and three questions have appeared. In 'Introduction to Smooth Manifolds' Lee writes that a tensor of rank 2 can always be decomposed into a symmetric and an antisymmetric tensor: $A = \mathrm{Sym}(A) + \mathrm{Alt}(A)$. We define a product which looks at the symmetric part of $A \otimes B$ according to $AB = \mathrm{Sym}(A \otimes B)$, while the wedge product describes the antisymmetric part: $A \wedge B = \mathrm{Alt}(A \otimes B)$.

Now first of all, the fact that a tensor of, let's say, rank 3 cannot be decomposed in this way seems quite counter-intuitive to me. How do you think of it? Is there any easy way to picture it?

Secondly: Can we define a product for this last term (that is neither symmetric nor antisymmetric) of our tensors of rank higher than 2? In other words: $A * B = (A \otimes B) - \mathrm{Sym}(A \otimes B) - \mathrm{Alt}(A \otimes B)$?

The last question concerns the total covariant derivative that is defined in the book on Riemannian manifolds. Lee first sets out to claim: 'Although the definition of a linear connection resembles the characterization of (2,1)-tensor fields [...], a linear connection is not a tensor field because it is not linear over $C^\infty(M)$ in Y, but instead satisfies the product rule.' (- 'Riemannian Manifolds: An Introduction to Curvature' by John Lee) Later however he states that the total covariant derivative (the generalization of this linear connection) is a (k+1, l)-tensor field. This seems to be contradictory... or am I mixing something up?

Thanks for all the help! Kind regards, Kontilera

Recognitions: Gold Member Homework Help Science Advisor

Regarding the second question... What Lee is saying is that a connection ∇: $\Gamma(TM)\times \Gamma(TM)\rightarrow \Gamma(TM)$ looks like a (2,1) tensor (compare with Lemma 2.4), but it is not one as it is not $C^{\infty}(M)$-linear in its second argument. Later, he defines the covariant derivative of a tensor, and remarks that if you take a tensor T of type (k,l), and take its covariant derivative ∇T, you get a tensor of type (k+1,l). In particular, if you take a vector field Y (tensor of type (0,1)) and jam it up the second slot of the connection map like so: ∇Y, you get a tensor, because the problem was in the second argument of ∇ and you've now eliminated that problem.

Thanks for the answer! Nobody who could give some response to the idea of the new multiplication? Maybe it's just not so useful, so Lee doesn't mention it...

Recognitions: Gold Member Homework Help Science Advisor

## Tensor fields and multiplication

Well, sure, there is nothing in the world or beyond that prevents you from assigning to the symbols $A * B$ the meaning $A * B = (A \otimes B) - \mathrm{Sym}(A \otimes B) - \mathrm{Alt}(A \otimes B)$; it's the first time I've seen this defined, which I guess is the essence of your question.

To understand why the symmetric and the antisymmetric parts are not everything for tensors of rank k > 2: just notice that there are k! permutations that can send a tensor (such as those made of a product of linearly independent vectors) into k! linearly independent tensors. But the symmetric and antisymmetric parts are only 2 tensors, whose linear combinations form a 2-dimensional subspace that thus cannot give back those k! dimensions.
This 2-dimensional subspace is stable under the group of permutations (preserved by the even ones, and undergoing a reflection by the odd ones). The initial tensor cannot belong to it, because if it did then its images under permutations would belong to it too, which leads to a contradiction as they are linearly independent.

Now there exists a systematic study of the many components of tensors apart from the symmetric and antisymmetric ones: operations on the tensor space, defined by applying symmetrization on some indices and then antisymmetrization on others, can be decomposed into a series of eigenspaces that can be classified. For details you can refer, for example, to the Wikipedia article on "Young tableau" and related articles ("Young symmetrizer" and "representation theory of the symmetric group").
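A concrete dimension count, added here as an illustration of the argument above, makes the same point numerically for rank 3 (the binomial formulas are the standard dimensions of the symmetric and exterior powers):

```latex
% For V of dimension n, the rank-3 tensor space V^{\otimes 3} has
% dimension n^3, while
\[
  \dim \operatorname{Sym}^3 V = \binom{n+2}{3}, \qquad
  \dim \Lambda^3 V = \binom{n}{3} .
\]
% For n = 3 this gives 10 + 1 = 11 < 27 = 3^3, so the symmetric plus
% antisymmetric pieces cannot exhaust V^{\otimes 3}; the missing 16
% dimensions carry the mixed-symmetry components indexed by the
% Young tableaux mentioned above.
```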
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9152716398239136, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/41686/dimensions-of-homology-groups
# Dimensions of Homology Groups

What does the dimension of a homology group tell us? In particular, suppose we form an arbitrary simplicial complex $S(G)$ from a simple graph $G$. Then we compute the homology groups of $S(G)$ and note the ones which are nonzero. What useful information can we glean by computing the dimension of the nonzero homology groups?

Edit. $S(G)$ is usually taken to be the coloring complex.

-

1 Homology is an incredibly rich and vast field, and homology groups can tell you all sorts of things about different spaces. Without more detail about what you're looking for, we can't really give you more information. Since you seem to have a background in combinatorics, I'd say this question is a bit like asking, "Suppose I form a vertex set and edges from a topological space. What useful information can we glean from the resulting graph?" – MartianInvader May 27 '11 at 21:00

Why not try to answer the question yourself? If you want to start scratching the surface, compute the Euler characteristic of $S(G)$. This is readily done and expressible as a certain number that you can extract directly from the graph $G$. Is that number familiar to you? – Ryan Budney May 27 '11 at 21:54

I see it is the alternating sum of the Betti numbers. – PEV May 27 '11 at 22:07
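As a worked footnote to the last two comments, the identity below is the standard Euler–Poincaré formula; it is not specific to the coloring complex and is added only for reference:

```latex
\[
  \chi\bigl(S(G)\bigr)
  \;=\; \sum_{i \ge 0} (-1)^i \,\#\{\text{$i$-dimensional faces of } S(G)\}
  \;=\; \sum_{i \ge 0} (-1)^i \dim H_i\bigl(S(G);\mathbb{Q}\bigr),
\]
% so the alternating sum of Betti numbers can be read off from the
% combinatorics of the complex without computing any homology at all.
```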
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9279300570487976, "perplexity_flag": "head"}
http://micromath.wordpress.com/2008/08/18/fractions-2/
# Mathematics under the Microscope

Atomic objects, structures and concepts of mathematics

Posted by: Alexandre Borovik | August 18, 2008

## Fractions

In response to my call for personal stories about difficulties in studying (early) mathematics, AG sent me the following e-mail:

When I was about 9 years old, I first learned about fractions at school, and understood them quite well, but I had difficulties in understanding the concept of fractions that were bigger than 1, because you see we were taught that fractions are part of something, so I could understand the concept of, for example, $1/3$ (you divide something into 3 equal pieces and you take one), but I couldn't understand what $4/3$ meant (how can you take 4 pieces when there are only 3?). Of course I got it within several days, but I remember that I was baffled at first. I am a boy, the language of my mathematical instruction is Romanian, which is also my mother tongue. Currently I am a student of Computer Science.

I am surprised to see how frequently such memories are related to the subtle play of hidden mathematical structures, like a dance of shadows in a moonlit garden; these shadows can both fascinate and scare an imaginative child.

As a child, I myself was puzzled by expressions like $5/4$; but it appears that my worries were resolved by pedagogical guidance: I was taught to think about fractions as named numbers of a special kind: quarter apples. Fractions like $5/4$ are not the result of dividing 5 apples between 4 people, since this operation of division is not yet defined; they come from making a sufficient number of material objects of a new kind, “quarter apples”, and then counting five “quarter apples”. In effect, we are working in the additive group $\frac{1}{4}\mathbb{Z}$ generated by $\frac{1}{4}$.

What happens next is much more interesting and sophisticated: we have to learn how to add half apples to quarter apples. This is done, of course, by dividing each half into two quarters, which amounts to constructing a homomorphism $\frac{1}{2}\mathbb{Z} \longrightarrow \frac{1}{4}\mathbb{Z}$. Since both $\frac{1}{2}\mathbb{Z}$ and $\frac{1}{4}\mathbb{Z}$ are canonically isomorphic to $\mathbb{Z}$, we, being adults now, can make a shortcut in notation and write this homomorphism simply as $\mathbb{Z} \longrightarrow \mathbb{Z}, \quad z \mapsto 2 \times z.$ In effect, we have a direct system $\mathbb{Z} \stackrel{k\times}{\longrightarrow} \mathbb{Z}, \quad k =2,3,4,\dots$ – or, if you prefer less abstract notation – $\frac{1}{n}\mathbb{Z} \stackrel{{\rm Id}}{\longrightarrow} \frac{1}{kn} \mathbb{Z}.$ Then we do something outrageous: we take its inductive limit. In primary school, of course, taking the inductive limit is called bringing fractions to a common denominator. The result, of course, is the additive group of rational numbers $\mathbb{Q}$. Only then do we define multiplication on $\mathbb{Q}$; I leave a category-theoretical construction of multiplication as an exercise to the reader.

I wish to reiterate my principal thesis: Pedagogically motivated intermediate steps in the introduction of mathematical concepts to children very frequently reflect the presence and behaviour of hidden underlying structures of mathematics.

Please send me more stories like AG's. Please do not forget to mention the age when you had your particular experience, your gender, the language of your mathematical instruction (and whether it was your mother tongue), and the level of mathematical education you have eventually received.
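To make the common-denominator step concrete, here is the primary-school computation written next to the homomorphism it hides, added as an illustration in the spirit of the post:

```latex
% The inclusion (1/2)Z -> (1/4)Z sends 1/2 to 2/4, so adding a half
% apple to a quarter apple happens inside (1/4)Z:
\[
  \frac{1}{2} + \frac{1}{4}
  \;=\; \frac{2}{4} + \frac{1}{4}
  \;=\; \frac{3}{4},
\]
% i.e. both summands are first pushed into the common group (1/4)Z,
% which is exactly the "common denominator" stage of the direct
% system described above.
```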
Posted in Uncategorized

## Responses

1. Typo: The map in the inductive system should be multiplication by 1. The displayed map is an isomorphism. By: anonymous on August 18, 2008 at 11:38 am

2. Thanks — corrected. By: Alexandre Borovik on August 18, 2008 at 12:38 pm
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 14, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9577620625495911, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/141976/selecting-numbers-on-a-number-line-where-distribution-tends-to-cluster-at-one-en?answertab=votes
# Selecting numbers on a number line where distribution tends to cluster at one end.

Let's say I've got a number line from $1$ to $100$. I want to randomly select $20$ integers from the number line. But I want the numbers to tend to come from, say, $1$-$50$, with only a few coming from between $50$-$100$. My initial thought on how to solve the problem is to use an exponential function, say $2^n$. Then, until I've reached $20$ numbers, select a random decimal between $1$ and $\sim 6.65$ for $n$, rounding the result to the nearest integer to use as my next number. I'm wondering if there is a better solution.

-

Maybe you can explain "only a few" and "a better solution" a little more, to get an appropriate answer. BTW, welcome to MathStackExchange. – draks ... May 7 '12 at 18:25

## 2 Answers

There are too many possible answers here because your question is so vague. Here's one possible technique: Select a random number $r$ uniformly between 0 and 1. Calculate $r' = 1+\lfloor100r^2\rfloor$. Then you get $r'$ between 1 and 50 about 70.7% of the time (whenever $r$ is less than ${\sqrt 2 \over 2}\approx 0.707$), and between 1 and 25 half the time. It will be between 1 and $n$ about $10\sqrt n$ percent of the time.

The $r^2$ is the important part here; the $1+\lfloor100\ \cdots\rfloor$ stuff is only to transform the range of the function from real numbers in $[0,1]$ to integers in $[1,100]$. You can bias the thing as much as you like toward low numbers by changing the $r^2$ to $r^3$ or some higher power. If you use $r^n$, then you get $r'$ bigger than 50 only when $r$ is greater than $1\over \sqrt[n]{2}$. In general, you can pick any function $f : [0,1]\to[0,1]$ that has the property that $f(x)\leq x$, and then use $r' = 1+\lfloor100f(r)\rfloor$. By drawing different $f$, you can bias the generator more or less toward smaller numbers.

-

I think you are overthinking this (unless you need some sort of smoothly decaying distribution that puts more probability mass on the smaller elements). If you just need to distinguish the first 50 elements from the last 50, with no within-set distinctions, just hand-specify a discrete distribution without caring about the functional form too much. If you want there to be a probability $p$ that the values come from $\{1,2,\cdots,50\}$ and probability $(1-p)$ that values come from $\{51,\cdots,100\}$, then just make a discrete distribution that is $p/50$ for any value in $\{1,2,\cdots,50\}$ and $(1-p)/50$ for any value in $\{51,\cdots,100\}$. Then $p$ tunes how lopsided the distribution is towards the first set, without making any individual element of that set more likely than any other. By putting $p$ close to 1, you can make the occurrence of elements from $\{51,\cdots,100\}$ as infrequent as you'd like for your purposes.

To implement this in software, you'll need to use a pseudorandom number generator to generate draws from a standard uniform distribution on (0,1). Compute the cumulative sum vector for the discrete distribution, $F(i) = \sum_{k\leq i}P(k)$. Define $F(0) = 0$ and then by definition you'll get $F(100) = 1$. For a given uniform draw $u$, find the index $j$ such that $F(j) \leq u \leq F(j+1)$, and return $j+1$ as the drawn number. Languages such as Python (NumPy), Matlab, and C++ (Boost) provide this kind of user-defined discrete distribution as a built-in function, with built-in sampling functions. But it's often a good exercise to write your own discrete simulator at least once.

-
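Both answers describe procedures that are easy to sketch in code. The following Python example is an added illustration; the exponent, the value of $p$, and the random seed are arbitrary choices, not part of the answers.

```python
import numpy as np

rng = np.random.default_rng(42)

# Technique from the first answer: push a uniform draw through a
# power function to bias toward small integers.  power=2 reproduces
# the r^2 example; larger powers bias more strongly toward 1.
def biased_draw(power=2, n_draws=20):
    r = rng.random(n_draws)
    return 1 + np.floor(100 * r**power).astype(int)

# Technique from the second answer: an explicit two-level discrete
# distribution sampled by inverse CDF.  p is the total probability
# assigned to {1,...,50}; 0.9 here is just an assumed value.
def two_level_draw(p=0.9, n_draws=20):
    pmf = np.concatenate([np.full(50, p / 50), np.full(50, (1 - p) / 50)])
    cdf = np.cumsum(pmf)
    cdf[-1] = 1.0  # guard against floating-point round-off
    u = rng.random(n_draws)
    # searchsorted returns the smallest index j with cdf[j] >= u;
    # adding 1 maps indices 0..99 to the values 1..100.
    return np.searchsorted(cdf, u) + 1

print(biased_draw())
print(two_level_draw())
```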
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 51, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9408743977546692, "perplexity_flag": "head"}
http://en.m.wikibooks.org/wiki/The_Scientific_Method/Control_of_Measurement_Errors
# The Scientific Method/Control of Measurement Errors

## Experimental Design

Perhaps the most important step in controlling experimental error is to design your experiments to produce as little systematic error as possible. In order to do this, it is important to know something about what you are measuring. As an example, suppose that you want to measure the weight of the oxygen produced in the decomposition of hydrogen peroxide: $H_2O_2(aq) \rightarrow H_2O(l) + \frac{1}{2}O_2(g)$

You would need to ask yourself: How would you separate the oxygen from the water and unreacted hydrogen peroxide? How will you prevent the oxygen from leaking? Do you want to measure the weight directly, or by calculating it from other values (such as pressure)?

Get into the habit of asking yourself, "what could go wrong with this experiment?" before you start the experiment. Then, if you can, design it so that the things that could go wrong are as minor as possible, and when performing it be as careful as possible to avoid what is left.

## Calibration and Accuracy

All measurement instruments need to be calibrated in some way in order to ensure that the values that are read are near the true value of the property being measured. Rulers are all compared to a standard when they are made, so that when an inch is marked on the ruler, it is truly an inch. Many instruments lose their calibration, and hence their accuracy, over time. Therefore it is necessary to recalibrate them. Instruments are generally re-calibrated by measuring one or several standards with well-defined properties. For example, a scale might be calibrated by weighing a 5 g weight and adjusting a dial until the reading is 5.000 g. Follow the instrument manual closely for calibration procedures, so that any bias in measurement due to measurement inaccuracy can be mitigated.

## Repeatability and Precision

Measurement instruments will never give you an exact answer. For example, if you are measuring the volume of a liquid in a graduated cylinder, it is necessary for you to estimate which of the hash marks on the instrument is the closest to the true volume (or to interpolate between them based on your eyesight). Most computerized measurement devices, such as many modern scales, take multiple measurements and average them to obtain accurate results, but these also have sensitivity limitations.

Manufacturers often report the precision of their instruments. The repeatability of an instrument is a measure of its precision, which is the similarity of successive measurements of an identical quantity to each other. Reproducibility is essentially the ability to achieve the same measurement value in an experiment with all other conditions the same (or as close to the same as possible). For example, you may measure the weight of an object with the same scale multiple times. If the reading is significantly different every time, it is possible that the instrument needs to be recalibrated or re-stabilized (for example, by cleaning out dust from the receiver, or making sure the setup is right). If it has been properly calibrated and set up and measurements still vary more than the precision claimed by the manufacturer, the instrument may be broken.

## Reproducibility

Another way to control errors in measurement from experiment to experiment is to constantly assess the reproducibility of the measurements.
Reproducibility is measured essentially by performing the same measurement multiple times while varying one part of the experiment. For example, if you are measuring the pH of a buffer as part of a process, you may assess the reproducibility of the buffer preparation by preparing the same sample several times, independently of each other, and measuring the pH of each sample. If the variance in the pH measurements is larger than the measurement accuracy (or repeatability) of the instrument, then it is likely that the preparation of the buffer is to blame for this error. Such tests can be performed on many parts of a larger process in order to pinpoint and remedy the largest control difficulties. Another possible reproducibility test would be measuring the same sample with different pH meters. It is very important to test the compatibility of different measurement instruments before claiming that the results are comparable, and such reproducibility measurements are critical for determining the relationship between two instruments.
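A minimal numerical version of the buffer example above, written as a hedged Python sketch: the pH readings and the 0.02 pH-unit instrument repeatability are invented values used only to show the comparison being described.

```python
import statistics

# pH readings of independently prepared "identical" buffer samples
# (made-up numbers for illustration).
ph_readings = [7.02, 7.05, 6.98, 7.11, 7.04, 6.97]

# Instrument repeatability quoted by the (hypothetical) manufacturer,
# expressed as a standard deviation in pH units.
instrument_sd = 0.02

sample_sd = statistics.stdev(ph_readings)
print(f"spread across preparations: {sample_sd:.3f} pH units")

if sample_sd > instrument_sd:
    # The spread exceeds what the meter alone can explain, so the
    # preparation step is the likely source of the variation.
    print("variation likely comes from the buffer preparation")
else:
    print("variation is within the instrument's repeatability")
```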
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9520799517631531, "perplexity_flag": "middle"}