http://mathhelpforum.com/calculus/168667-how-do-i-find-relative-extrema-points-inflection-function-print.html
# How do I find the relative extrema and points of inflection of this function?

• January 17th 2011, 08:38 PM — onanyc

How do I find the relative extrema and points of inflection of this function?

$\displaystyle y = \frac{x}{x^2+9}$

• January 17th 2011, 08:57 PM — Prove It

Local maxima: the points where $\displaystyle \frac{dy}{dx} = 0$ and $\displaystyle \frac{d^2y}{dx^2} < 0$.

Local minima: the points where $\displaystyle \frac{dy}{dx} = 0$ and $\displaystyle \frac{d^2y}{dx^2} > 0$.

Inflection points may or may not appear where $\displaystyle \frac{d^2y}{dx^2} = 0$. It helps to evaluate these points and compare them to the graph of the function.

• January 18th 2011, 06:35 AM — HallsofIvy

Quote: Originally Posted by Prove It (the post above).

"Inflection points" are defined as points where the second derivative changes sign. Where that happens the second derivative must be 0, but the second derivative may be 0 without changing sign (example, $f(x) = x^4$ at $x = 0$). Determine where the second derivative is 0, then look at its sign on either side of those points. (Surely you knew this basic information, onanyc, so what is your difficulty with this particular problem?)
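The procedure described in the thread can be checked mechanically for the function in the question; a quick sketch using SymPy (my choice of tool, not the thread's):

```python
import sympy as sp

x = sp.symbols('x', real=True)
y = x / (x**2 + 9)              # the function from the question

d1 = sp.diff(y, x)              # = (9 - x^2)/(x^2 + 9)^2 after simplification
d2 = sp.diff(y, x, 2)           # = 2x(x^2 - 27)/(x^2 + 9)^3 after simplification

critical = sp.solve(sp.Eq(d1, 0), x)     # where dy/dx = 0
inflect = sp.solve(sp.Eq(d2, 0), x)      # candidate inflection points

# second-derivative test at each critical point
for c in critical:
    kind = 'local max' if d2.subs(x, c) < 0 else 'local min'
    print(c, kind, 'y =', y.subs(x, c))
```

Running this reports a local minimum of -1/6 at x = -3 and a local maximum of 1/6 at x = 3; checking the sign of the second derivative on either side of each candidate in `inflect` (0 and ±3√3), as HallsofIvy advises, confirms all three are genuine inflection points.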
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.903446614742279, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-algebra/189895-does-uniquely-determine-g.html
# Thread: Does this uniquely determine G?

1. ## Does this uniquely determine G?

(up to isomorphism)

$1\longrightarrow \mathbb{Z}_4 \stackrel{\beta}{\longrightarrow} G \stackrel{\alpha}{\longrightarrow} \mathbb{Z}_2 \longrightarrow 1$

$\mathbb{Z}_2 \stackrel{\gamma}{\longrightarrow} G$

$\gamma \circ \alpha = 1_{\mathbb{Z}_2}$

(if my latex were better i'd try to put it all on one line; this is a right-split short exact sequence.) if so, is there a "smaller" diagram which conveys the same information?

2. ## Re: Does this uniquely determine G?

Originally Posted by Deveno (the post above).

Presumably you meant $\alpha\circ\gamma=\text{id}_{\mathbb{Z}_2}$. If so, we can definitely say that $G\cong \mathbb{Z}_4\rtimes_\varphi\mathbb{Z}_2$ for some homomorphism $\varphi:\mathbb{Z}_2\to\text{Aut}(\mathbb{Z}_4)$. But, since $\text{Aut}(\mathbb{Z}_4)\cong\mathbb{Z}_2$, there is precisely one non-trivial homomorphism. And so, in particular, up to isomorphism the only semidirect product gives $D_4$. Thus, $G\cong D_4$.

3. ## Re: Does this uniquely determine G?

$G \cong \mathbb{Z}_4 \times \mathbb{Z}_2$ satisfies the conditions too. so $G$ is not uniquely determined.

4. ## Re: Does this uniquely determine G?

Originally Posted by NonCommAlg (the post above).

Right, I forgot the trivial homomorphism $\varphi:\mathbb{Z}_2\to\mathbb{Z}_2$.

5. ## Re: Does this uniquely determine G?

yes i did mean the other composition (can i claim it was because i was looking at Herstein last night? no? ok, i admit it, i'm just stupid). so...how would i indicate there is no left-split with a picture, to rule out the trivial semi-direct (that is, direct) product?

6. ## Re: Does this uniquely determine G?

You mean you just want to indicate, diagrammatically, that there is no left split?

7. ## Re: Does this uniquely determine G?

correct. what got me thinking about this is thinking about identifying $D_4$ solely in terms of homomorphisms to and from it (although i admit including the extra information of the short exact sequence is cheating a little) and wondering how well this can be done for an arbitrary group.
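Both candidate groups in the thread can be checked concretely as permutation groups. The sketch below uses SymPy's combinatorics module (my choice of tool, not the thread's) to verify that the sequence right-splits for $D_4$, that the conjugation action is the inversion automorphism there, and that $\mathbb{Z}_4 \times \mathbb{Z}_2$ fits the same data with trivial action, which is NonCommAlg's point.

```python
from sympy.combinatorics import Permutation, PermutationGroup

# D4 acting on the vertices of a square: r = rotation (order 4), s = reflection (order 2)
r = Permutation([1, 2, 3, 0])
s = Permutation([3, 2, 1, 0])
D4 = PermutationGroup([r, s])
Z4 = PermutationGroup([r])            # the normal Z_4, the image of beta

assert D4.order() == 8 and Z4.order() == 4
# gamma: Z_2 -> D4 sending the generator to s satisfies alpha o gamma = id (right split)
assert s.order() == 2 and not Z4.contains(s)
# the action of s on Z_4 is inversion, so this is the non-trivial semidirect product
assert s * r * s == r**-1
assert not D4.is_abelian

# Z_4 x Z_2 also fits the same diagram, but with trivial action
r2 = Permutation([1, 2, 3, 0, 4, 5])  # Z_4 acting on points 0..3
t = Permutation([0, 1, 2, 3, 5, 4])   # Z_2 acting on points 4, 5
G = PermutationGroup([r2, t])
assert G.order() == 8 and G.is_abelian
```

The two groups are distinguished exactly by whether the complement generator commutes with the normal $\mathbb{Z}_4$, which is the data the short exact sequence alone does not pin down.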
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 17, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9305055141448975, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/61761/topological-sort-of-partial-order-into-sorted-sets
## Topological sort of partial order into sorted sets

Given a partial order of elements, one can use topological sorting to produce a sorted list of elements. For example, if we have the partial order A->B and A->C, then the possible topological sort results are [A,B,C] and [A,C,B].

I am interested in producing a sorted list of sets [$S_1, \ldots, S_k$] that satisfy the partial order. (The sets $S_i$ partition the elements.) Here, the requirements are:

1. for $i = 1 \ldots k-1$, $\exists e_1 \in S_i, e_2 \in S_{i+1}$ s.t. $e_1 < e_2$
2. for each set $S_i$, $\nexists e_1, e_2 \in S_i$ such that $e_1 < e_2$ or $e_2 < e_1$

In our example, the only correct sorted list of sets is [{A},{B,C}]. Given a partial order, how many possible sorted lists of sets exist? Is there a name for this kind of sorting? Any pointers are appreciated.

- It is not clear whether the sets S_i form a partition of your original set. If they do, then (I think) you are looking for homomorphic images of your partial order, and that is bounded by the number of partitions of the set. If you allow an element to belong to different S_i, that changes things, but may still be tied to homomorphic images of the original partial order for connected partial orders. Gerhard "Ask Me About System Design" Paseman, 2011.04.14 – Gerhard Paseman Apr 14 2011 at 23:11
- Thanks for the reply. Yes, I meant S_i to form a partition. BTW, I have rewritten condition 1. – Steven Apr 14 2011 at 23:24
- This new version doesn't really sound like it's "sorted": look at the partial order A<B, C<D with no other comparable pairs; then [{A,D},{B,C}] satisfies the conditions even though D occurs in an earlier set than C. – Omar Antolín-Camarena Apr 15 2011 at 23:03

## 2 Answers

EDIT: This answer was for a previous version of the question. There is usually no such list: consider the case where some element is incomparable to everything else.

- Thanks for the answer. I just realized I did not properly formulate the problem and have fixed condition 1. So if there is an element that is incomparable to everything else, then you can place it in any existing set in the sorted list. – Steven Apr 14 2011 at 23:18
- Or consider a pentagon-shaped partial order, with one vertex having outdegree 2 and indegree 0, and a vertex two edges away having indegree 2 and outdegree 0, with the remaining vertices having in- and outdegree 1. That will not admit a partition of the desired form. Gerhard "Ask Me About System Design" Paseman, 2011.04.14 – Gerhard Paseman Apr 14 2011 at 23:21
- Hi Gerhard, I'm not completely understanding the example (you mean diamond shape, not pentagon, right?). So if we have a partial relation A->B, A->C, C->D, B->D, then I can create the sorted list [{A},{B,C},{D}]. Let me know if I got this wrong. – Steven Apr 14 2011 at 23:39
- I think Gerhard truly means pentagon. A->B, B->C, C->D, A->E, and E->D. – Andreas Blass Apr 14 2011 at 23:50
- But in the pentagon with the notation in my previous comment, [{A}, {B,E}, {C}, {D}] seems to satisfy the requirements. – Andreas Blass Apr 14 2011 at 23:54

We can take $S_1$ to be the set of minimal elements, then remove those and proceed inductively. Or go in the other direction, peeling off the maximal elements first. So in any case such lists do exist.
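The second answer's construction (take the minimal elements, strip them, and repeat) is Kahn's topological-sort algorithm run layer by layer. A minimal Python sketch, with the function name and 0-indexed vertex encoding being my own:

```python
from collections import defaultdict

def layered_topsort(n, edges):
    """Partition vertices 0..n-1 of a DAG into antichain layers:
    S_1 = the minimal elements, then strip them and repeat."""
    indeg = [0] * n
    succ = defaultdict(list)
    for a, b in edges:
        succ[a].append(b)
        indeg[b] += 1
    remaining = set(range(n))
    layers = []
    while remaining:
        layer = {v for v in remaining if indeg[v] == 0}
        if not layer:
            raise ValueError("the relation contains a cycle")
        layers.append(layer)
        remaining -= layer
        for v in layer:
            for w in succ[v]:
                indeg[w] -= 1
    return layers

# the question's example, A=0, B=1, C=2 with A->B, A->C
print(layered_topsort(3, [(0, 1), (0, 2)]))   # [{0}, {1, 2}]
# Andreas Blass's pentagon: A=0, B=1, C=2, D=3, E=4
print(layered_topsort(5, [(0, 1), (1, 2), (2, 3), (0, 4), (4, 3)]))
# [{0}, {1, 4}, {2}, {3}], i.e. [{A}, {B,E}, {C}, {D}]
```

Each layer is an antichain (condition 2), and every vertex in layer $S_{i+1}$ has a predecessor removed in layer $S_i$ (condition 1); the pentagon output matches the partition Andreas Blass gives in his comment.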
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8901463747024536, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/31/lay-explanation-of-the-theory-of-relativity
# Lay explanation of the theory of relativity?

What is Einstein's theory of special relativity, in terms a lay person can follow?

- How lay are we talking about here? Do you have some knowledge of Galilean dynamics and Newtonian mechanics? – j.c. Nov 2 '10 at 20:24
- The problem is: after the beta we will have a lot of questions like that, it is unrealistic to answer individually each one of them. The best answer to this one is "Look in Wikipedia, then ask more specific questions". – Cedric H. Nov 2 '10 at 20:55
- From the FAQ: "Your questions should be reasonably scoped. If you can imagine an entire book that answers your question, you're asking too much." – Ben Crowell Aug 5 '11 at 20:39

## 4 Answers

Special relativity derives from two basic ideas:

1. The speed of light (in a vacuum) is always c.
2. The laws of physics are the same in all inertial reference frames (basically, points of view that aren't accelerating, that is, they obey Newton's laws).

With these two points and a little math, various proven conclusions may be derived:

1. Time dilation: When something moves fast relative to something else, time for the faster-moving body slows down. It's not an illusion of time slowing down, it's the real thing: individual atoms that make up the body operate slower, chemical reactions function slower, and biological processes (aging) occur slower. From the perspective of the faster-moving body, its time progresses at the usual pace.
2. Length contraction: Objects moving fast relative to other objects shrink along the direction they're moving.
3. Relativity of simultaneity: There's no absolute notion of simultaneous events: because time is attached to the observer, different people could witness 2 events happening in different order. The exception to this is "casually-related" events, which are events where event A is the cause of event B.
4. Mass-energy: The math goes into describing the mass of bodies at rest and how that mass changes as the bodies move.
As bodies speed up they get "heavier." Nothing can travel faster than light (and nothing with mass can travel AT the speed of light) because any massive body would reach infinite "relative mass" at that speed. You can derive E=mc^2 and fission/fusion from this.

This is a very quick summary of the basic points and principles.

- Good that you separate the principles from the most common "conclusions" as a lot of people mix both. – Cedric H. Nov 2 '10 at 20:55
- I think we need an additional postulate to derive mass-energy equivalence in relativity. Specifically, we could define mass by rest mass and momentum and energy as spacelike and timelike components of the 4-momentum $m\, d\vec{x}/d\tau$, with $\vec{x}$ the position 4-vector and $\tau$ the proper time. I think mass-energy equivalence would then require us to assume conservation of this 4-momentum. – Mark Eichenlaub Nov 4 '10 at 5:18
- Nice. One tiny thing: causally related, not casually. :) – MatrixFrog Nov 10 '10 at 16:59

An important point about relativity is that it is not quite like the physics you learn in an introductory physics class. There, you learn about Newton's laws or Snell's law or Lenz's law, etc. Those are all laws that tell things how to act; they tell mass how to respond to force, or light how to bend, or currents which way to run.

Relativity is different in that it provides a set of meta-laws, or laws that the other laws of physics must obey. It doesn't directly tell things what to do. The classic example is Maxwell's equations. These are laws that tell charged particles and electromagnetic fields how to act. It turns out that these laws obey a certain mathematical criterion, called "Lorentz invariance", that is required by relativity. So Maxwell's equations are good relativistic laws. They obey the meta-laws. On the other hand, Newton's laws (of motion) are not good relativistic laws. They don't obey the meta-laws.
So in relativity, we need a slightly new set of laws to describe how mass responds to force. As for what the meta-laws are, they were outlined by Nick Gotch as "basic ideas" above. Those basic ideas turn out to be equivalent to Lorentz invariance.

- Really appreciate the distinction that they are "meta-laws". Good point. – thunderror Feb 8 '11 at 15:22

Special relativity is based upon the idea of how the same events for different observers are located by each observer using their own rulers and clocks; their own measurement of space and time. It's also based upon the principle that the laws of physics don't change for observers travelling at different velocities relative to one another.

The average lay person already has an intuitive understanding of how they think events are seen to occur by observers travelling at different velocities relative to one another. It comes under the name of Galilean relativity, after Galileo first introduced the idea. For example, a person driving a car will press the clutch at some time t1, and then press the brake pedal a time t2 later at approximately the same location in his space; a zero space interval. On the other hand, someone on the road observing the driver will think these two events occurred with a space interval not equal to zero in his space, but separated by the same time interval t2 - t1.

Special relativity instead proposes that the time interval between events for different observers also changes, in a way similar to that for space intervals. It also means that events being simultaneous with one another is relative, as Einstein emphasised in his 1905 paper.

Einstein's Special Theory of Relativity cannot be of much significance to a lay person because he has no use for such knowledge. However, the common idea that this topic is extremely complicated motivates the layman to learn it. The basic concepts of this theory are actually somewhat simple.
In short, when an object is moving at a certain velocity $u$, a few seemingly unusual phenomena happen to it. If a bar of length $L_0$ moves in the direction of its length, its new length will appear to be $L_0 \sqrt{1 - u^2 / c^2}$ from our viewpoint on the ground, where $c$ is the speed of light. If that formula means nothing to you, simply take note that the length decreases as the velocity increases. Of course, from the bar's viewpoint, it will appear that our length has changed by that factor.

Since time is proportional to distance traveled at a constant velocity, the length contraction can show that the "rate of time" also changes when an object is moving. If a clock is traveling through space close to the speed of light, it will tick significantly slower than it did at rest.

Those are the basics, but phenomena such as mass-energy equivalence and nuclear binding energy can be derived from these concepts and experiments conducted in the early 20th century.

- "Having a use for it" is hardly the only reason, or even the best reason, to learn something about physics. There is a Feynman quote to that effect, I think. – MatrixFrog Nov 10 '10 at 17:01
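The contraction and dilation factors quoted in the last answer are easy to evaluate numerically; a minimal sketch (the function names are mine, the formulas are the ones above):

```python
import math

C = 299_792_458.0  # speed of light in m/s

def gamma(u):
    """Lorentz factor for a speed u < c, in m/s."""
    return 1.0 / math.sqrt(1.0 - (u / C) ** 2)

def contracted_length(L0, u):
    """Length of a bar of rest length L0 moving along its length at speed u."""
    return L0 * math.sqrt(1.0 - (u / C) ** 2)

def dilated_time(t0, u):
    """Elapsed ground-frame time for a proper time t0 on the moving clock."""
    return t0 * gamma(u)

# at 60% of the speed of light, a 1 m bar appears about 0.8 m long
print(contracted_length(1.0, 0.6 * C))   # ~0.8
print(gamma(0.6 * C))                    # ~1.25
```

The 0.6c case makes the arithmetic transparent: $\sqrt{1 - 0.36} = 0.8$, so lengths shrink to 80% and the moving clock runs slow by a factor of 1.25.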
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.949895977973938, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/41589/reading-list-in-topological-qft/41729
# Reading list in topological QFT

I'm interested in learning about topological QFT, including Chern-Simons theory, the Jones polynomial, Donaldson theory and Floer homology: basically the kind of things Witten worked on in the 80s. I'm looking for pedagogical reviews rather than original articles. Though these things sit at the interface of mathematics and physics, I'm interested in them more as a physics student.

I remember someone asking for a suggested reading list for topological QFT on MathOverflow. The suggested papers were almost uniformly rigorous mathematics written by mathematicians. I am not looking for something like that.

- Thanks for adding the link, Qmechanic. – Sudip Paul Oct 25 '12 at 3:16

## 5 Answers

The relation is very deep and has a rich mathematical structure, so (unfortunately) most stuff will be written in a more formal, mathematical way. I can't say anything about Donaldson theory or Floer homology, but I'll mention some resources for Chern-Simons theory and its relation to the Jones polynomial.

There is first of all the original article by Witten, Quantum Field Theory and the Jones Polynomial. A related article is this one (paywall) by Elitzur, Moore, Schwimmer and Seiberg. A very nice book is Kauffman's Knots and Physics. Also the book by Baez and Muniain has two introductory chapters on Chern-Simons theory and its relation to link invariants.

There are also some physical applications of Chern-Simons theory. For instance, it appears as an effective (long-wavelength) theory of the fractional quantum Hall effect. Link invariants, such as the Jones polynomial, can be related to a generalized form of exchange statistics. See this review article: arXiv:0707.1889. See also the book by Lerda for more on this idea of generalized statistics.

- Thank you Olaf. I know that this stuff has very deep mathematics. However it is possible to present it without the full rigor of the professional mathematician. Anything at or below the level of Witten's papers is fine with me. In the mathematics papers that I looked into, TQFT was defined using cobordisms and n-categories. There was nothing left to recognize it as a QFT. – Sudip Paul Oct 25 '12 at 3:14

Another interesting application is that Chern-Simons theory in 3d is equivalent to general relativity in 3 space-time dimensions. GR in 3 dimensions is quantisable, and thus a nice playground for quantum gravity. http://ncatlab.org/nlab/show/Chern-Simons+gravity has a nice reading list about that topic in the references. Maybe a good start is "Edward Witten, (2+1)-Dimensional Gravity as an Exactly Soluble System, Nucl. Phys. B311 (1988) 46", but if you prefer more pedagogical material I think "Bastian Wemmenhove, Quantisation of 2+1 dimensional Gravity as a Chern-Simons theory, thesis (2002)" and "Jorge Zanelli, Lecture notes on Chern-Simons (super-)gravities" are very readable. Also pedagogical to me seems "Ivancevic, Ivancevic, Undergraduate Lecture Notes in Topological Quantum Field Theory", http://arxiv.org/abs/0810.0344 (already linked to at your link).

I personally find the book by Nash, Differential Topology and Quantum Field Theory, very readable.

- +1 This is a really nice book and I will also recommend this. But the book assumes the reader has a very strong background in math, much stronger than most physicists have. So I doubt the OP will find it so useful as such. – Heidar Oct 25 '12 at 22:57
- Hi Heidar, I used to be a mathematics PhD student before I transferred to physics. So I don't think that mathematical background would be an insurmountable obstacle to me. – Sudip Paul Oct 26 '12 at 7:15
- @SudipPaul Oh I see, then you should definitely take a look at Nash's book. It's full of very nice algebraic/differential topology, algebraic geometry, K-theory and so on. – Heidar Oct 27 '12 at 16:41

Olaf has already given most of the references I would recommend.
But in the case of Chern-Simons theories and knot theory, there are two (plus one) other very nice references. These are all written by physicists for physicists, so no modular functors, cobordisms and so on.

1) Marcos Marino, Chern-Simons Theory and Topological Strings (arXiv:hep-th/0406005v4). Section II has a very good review of Chern-Simons theory and its relation to knot invariants (and rational CFTs).

2) Michio Kaku, Strings, Conformal Fields, and M-Theory. Don't be too scared by the title. Chapter 8 contains a very readable review of Chern-Simons theories and knot invariants. It introduces everything starting from a simple and intuitive starting point. For example, it shows how the abelian $U(1)$ Chern-Simons theory leads to the Gauss linking numbers by direct integration, and why one has to regularize with framing due to problems with self-linking. Chapter 12 is more generally about topological field theories; it discusses cohomological field theories, Floer theory, relations to Morse theory and so on. You might find this chapter a little more challenging than chapter 8.

3) Birmingham et al., Topological Field Theory. This is a long, and a little old, review of many different topological field theories. It also contains a little bit about Chern-Simons theory, but not as much as the other two above, as I remember.

I know many other good references, but they are more advanced. This is an advanced topic, so most papers and books will naturally assume a certain background.

There's a third TQFT that Witten studied in the 80s that's worth spending time with: Gromov-Witten theory, which is concerned with topological variations on the nonlinear sigma model and string theory. The starting point is Witten, Topological Sigma Models. The most recent nice exposition I know of is Hori et al.'s Mirror Symmetry. Also worth a visit is Witten, Chern-Simons Gauge Theory as a String Theory, which shows the spacetime physics of a special case of these perturbative string theories is described by a Chern-Simons theory. Marino's book is good here, too.
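The fact cited from Kaku's chapter 8, that the abelian Chern-Simons theory reproduces the Gauss linking number, can be seen numerically. The sketch below is my own discretization (not from any of the cited books) of the Gauss double integral $Lk = \frac{1}{4\pi}\oint\oint \frac{(\mathbf{r}_1-\mathbf{r}_2)\cdot(d\mathbf{r}_1\times d\mathbf{r}_2)}{|\mathbf{r}_1-\mathbf{r}_2|^3}$, evaluated for two circles forming a Hopf link:

```python
import numpy as np

def gauss_linking(c1, c2):
    """Discretized Gauss linking integral for two closed curves,
    each given as an (N, 3) array of points along the curve."""
    d1 = np.roll(c1, -1, axis=0) - c1        # segment vectors of curve 1
    d2 = np.roll(c2, -1, axis=0) - c2        # segment vectors of curve 2
    r = c1[:, None, :] - c2[None, :, :]      # pairwise separation vectors
    cross = np.cross(d1[:, None, :], d2[None, :, :])
    num = np.einsum('ijk,ijk->ij', r, cross)
    den = np.linalg.norm(r, axis=2) ** 3
    return (num / den).sum() / (4 * np.pi)

t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
# two unit circles forming a Hopf link
c1 = np.stack([np.cos(t), np.sin(t), 0 * t], axis=1)
c2 = np.stack([1 + np.cos(t), 0 * t, np.sin(t)], axis=1)
print(round(abs(gauss_linking(c1, c2))))   # 1

# a far-away, unlinked circle for comparison
c3 = np.stack([4 + np.cos(t), np.sin(t), 0 * t], axis=1)
print(abs(gauss_linking(c1, c3)))          # 0.0
```

The integral returns (up to orientation sign) the integer linking number, which is what the abelian Wilson-loop expectation value computes; the framing subtlety Kaku discusses arises only for the self-linking terms, which this two-curve integral avoids.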
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9379387497901917, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/3164/how-is-a-spherical-electromagnetic-wave-emitted-from-an-antenna-described-in-ter/3167
# How is a spherical electromagnetic wave emitted from an antenna described in terms of photons?

When an antenna transmits radio waves, isn't it true that the electromagnetic pulse is radiated away from the accelerating electron as a spherical wave in all directions simultaneously, and if so, how can the associated photon be "everywhere" on this rapidly expanding sphere?

- Suggest renaming the title of the question to be more "informative". – Kostya Jan 17 '11 at 16:14

## 4 Answers

First of all, note that any realistic antenna emits a gigantic amount of photons at the radio frequencies. The energy of a single photon is $E=hf$ where $h=6.626\times 10^{-34}$ Js, which is tiny, so if you have frequencies of order "just a few Hertz", the energy of one photon will be a tiny fraction of one Joule. Antennas consume much more energy than that. So in reality, you emit trillions of photons that fly in all the directions more or less uniformly (well, vertical dipole antennas emit mostly in the horizontal directions, etc.). The number of photons is so huge that it makes no sense to talk about individual photons: classical electromagnetism described by Maxwell's equations is a totally satisfactory approximation for all practical purposes (and even many impractical ones).

But if you designed a similar experiment where you would only emit one photon, there would be one photon going away and its position, i.e. direction, would be undetermined. A photon is a particle that always respects the laws of quantum mechanics, including the uncertainty principle. If the frequency and angular momentum of the photon is known, its position - the direction in which it propagates - is completely unknown. The photon is described by a probabilistic wave whose dependence on the space is pretty much the same as the dependence of $E+iB$ of a classical electromagnetic wave that you obtain if you emit lots of photons in the same state.
The probability that the photon will be found at a point is proportional to the energy density $(E^2+B^2)/2$ of the classical electromagnetic wave that you emit by the same antenna if the number of photons is very large. But for a single photon, you can't predict in what direction it will go. That's a basic feature of quantum mechanics that the evolution is not deterministic and outcomes of experiments can only be predicted probabilistically. If you know that you have only emitted one photon, the distant detectors will only detect one photon in one particular direction - but you can't be sure in which direction it will be. Again, for a dipole antenna, the nearly-horizontal directions will be preferred - the ratios of probabilities in different directions will follow the energy density of the corresponding classical electromagnetic wave. - Ok Lubos, so I think if I understand you correctly the classical laws of electromagnetism are actually describing the "collective" behavior of what in practice can be considered an infinite number of photons being "spit out" in all directions "simultaneously" or in a more restricted set of directions depending on the configuration of the antenna. Whereas quantum mechanics is used to describe the case of a single photon. So obviously I was confusing the case of "infinite" number of photons with the case of a single photon. Many thanks for your help! – BuckyBadger Jan 17 '11 at 22:43 For some reason, my instinct is that a spherical electromagnetic wave cannot be emitted by an antenna. Instead, they can only be emitted by a charge. I guess that's cause I always think of an antenna as an object that has no net charge. – Carl Brannen Jan 19 '11 at 2:42 @BuckyBadger , I hope it is clear to you that the classical wave distributions are a limiting case of the quantum mechanical description. 
Quantum mechanical solutions are always there, except it is not practical or reasonable to use its formalism when the limiting classical formalism is more than adequate. – anna v Feb 4 '11 at 15:28

- "If the frequency and angular momentum of the photon is known, its position - direction in which it propagates - is completely unknown". Surely Lubos wrote this in haste. The relevant uncertainties for this problem are the initial position of the "photon", presumed to be completely known, and its final momentum, presumed to be completely unknown. In fact it's a little more subtle: even for a fixed initial position, we can specify one axis along which the momentum is known to be zero; but that's as much as I can say in 600 characters or less. – Marty Green Aug 6 '11 at 23:57

What follows is not a proper answer to your question, but the statement of a fact worth knowing. There is no way to create a (perfectly) spherical wave: "A spherically symmetric vacuum solution to Maxwell's equations is always static." (Pappas, Am. J. Phys., 52 (255), 1984.) Also: H.F. Mathis, "A short proof that an isotropic antenna is impossible", Proc. IRE, 59 (979), 1951. It's an amusing application of Brouwer's "hairy ball" theorem.

One of the lessons quantum mechanics supposedly teaches us is that we should be cautious about asking questions that cannot be answered experimentally. Not that we should never do it, just that we should be careful. I think this question and the subsequent answers are a good example of this principle being disregarded.

How can a "photon" be everywhere at once on an expanding spherical surface? First of all, I would dismiss those people who make a major point of the issue of spherical symmetry. Everyone knows that an e-m wave is not spherically symmetrical. It is so obvious that those who deal with the subject will use the term "spherical" to describe the next best thing, the familiar donut shape of a dipole radiator.
S-wave or p-wave, the question stands: how can the photon be everywhere at once? Second, I disagree with those who say the question is wrong because an antenna emits billions of photons. There are in fact antennas which regularly emit light in quantities similar to one photon's worth of energy; these antennas are called "atoms" and they are everywhere. The question stands: how can a photon emitted by an atom be everywhere at once on a spherical surface?

In fact, this is very close to the original form of the EPR paradox. When Einstein posed the question in 1935, no one at first seriously considered that it might be tested experimentally. The EPR paradox went through a number of transformations before it dawned on people that it might be made testable. Among these transformations we can list Bohm, who recast it in the form of two electrons in the spin singlet state; and Feynman, who analyzed the two-photon decay of positronium. Neither of these models were, then or now, amenable to experimental testing. After Bell's analysis in 1964, people were motivated afresh to look for experimental manifestations, and found something workable in parametric down-conversion. But that's another story.

The basic problem with the question as posed here is: how would you measure it? The theory tells us that the photon spreads out as a "spherical" wave. But Copenhagen, in one form or another, tells us that the photon is detected at a single point. How do we know? Many, notably Feynman, would say that the click in a photomultiplier tube tells us when a photon has been detected. But the detailed physics of a detector event can be interpreted in different ways; all we can say with relative certainty is that the probability of a detector going off is proportional to the square of the incident field. And this is entirely consistent with the photon's energy being spread over a spherical surface.
It is very difficult to establish that a click in the photodetector is necessarily associated with the absorption of one full photon's worth of energy. Some will undoubtedly say it is obvious that when a photomultiplier tube clicks, it must have absorbed a photon. To those people, I would ask: what experiment can you propose to demonstrate that a photomultiplier tube will never click when exposed to less than one photon's worth of energy? Others will object that once a detector clicks, a second detector will never go off at the same time; this shows that the whole photon "collapsed" into the first detector. But experimentally, this hypothesis is notoriously difficult to demonstrate. The reason is simply that we still do not have a working pea-shooter for photons that reliably produces one photon at a time. To answer the original question, I would say that the wave from an antenna, even an "atomic" antenna, spreads out "spherically"; and that there is no experiment which can conclusively show that the emitted "photon" ever appears concentrated at a single point. -

Whether or not photons should be taken into account when calculating antenna parameters is irrelevant to the natural wish to get a picture of what is going on with the photons when an antenna is transmitting or receiving. Unfortunately, experiments involving photons and antennas are presumably difficult. Nevertheless, if we can look for gravity waves, we should be able to figure out how to observe low-energy photons coming out of a wire. -

1 You obviously lack basics on particle-wave duality. – Georg Aug 3 '11 at 16:10

I most certainly do not lack the basics. I am talking about the physical photon picture behind transmission and reception by an antenna. That is an area where a quantum description is seldom discussed, mainly because classical electrodynamics is the appropriate calculational framework to use for getting answers in such low-frequency radiation.
But I am looking for a quantum picture, not a specific tool for making calculations. – Ralph Dratman Aug 4 '11 at 5:41
http://mathoverflow.net/questions/tagged/eigenvector
## Tagged Questions 1answer 131 views ### Coercive Symmetric Bilinear form on a Hilbert space I need to show one of the two following equivalent results. If true, it must be a simple proof but I do not seem to be able to make it work. Thank you in advance. 1) Consider a co … 1answer 156 views ### Coercive Symmetric Bilinear form on a Hilbert space I need to show the two following results. If true, it must be a simple proof but I do not seem to be able to make it work. Thank you in advance. Consider a continuous symmetric bi … 0answers 70 views ### Relation between the eigenspace of a covariance matrix and eigenspace of correlation matrix I was discussing applying Principal Component Analysis to a covariance matrix versus applying PCA to the corresponding correlation matrix with a colleague. This led me to think abou … 0answers 31 views ### is there any fast algorithm for tree graph eigendecomposition? Is there any fast algorithm that can perform the eigendecomposition of the Laplacian matrix of a tree graph? Thank you! 0answers 50 views ### Eigenvectors of contraction times projection Suppose $A$ is a real $n\times n$ matrix with real eigenvalues: $$1=\lambda_1>|\lambda_2|\ge \ldots\ge |\lambda_n|>0.$$ Suppose $B$ is an involution, for simplicity let us assum … 1answer 123 views ### Sum of commuting semisimple operators Let $V$ be a finite dimensional vector space over a field $K$. An operator $T:V\to V$ is called semi-simple if every $T$-invariant subspace of $V$ has a complement (for algebraicall … 2answers 437 views ### Singular Value Decomposition of Noisy Matrices I am an engineer who makes measurements of a variable over a grid of, say, $m\times n$. Since these are actual measurements, the true values are always corrupted by noise, and what … 2answers 289 views ### Eigenvalues of a Symmetric Positive Semi-Definite (PSD) matrix after rank one update I have a Symmetric Positive Semi-Definite matrix $A$ whose eigenvalues and eigenvectors I know.
Let $v$ and $u$ be random column vectors. I want to know if it is possible to ha … 3answers 438 views ### Eigenvectors and eigenvalues of nonsymmetric Tridiagonal matrix Hi, the question is the following: We have one matrix \begin{pmatrix} -\beta & \Delta & 0 & 0 &\cdots & 0 & 0 & 0 \newline \beta & -(\beta+\Delta) … 0answers 158 views ### Comparing eigenvalues of two matrices [closed] Suppose we have $A=\left(\begin{array}{cccc} 1 & 1 & 1 & 0\\ 1 & 3 & 0 & 0\\ 1 & 0 & 2 & 1\\ 0 & 0 & 1 & 3 \end{array}\right)$ and … 1answer 196 views ### Eigenvectors of asymmetric graphs Let $G$ be an asymmetric connected graph. Then is it always the case that at least one of the eigenvectors of its adjacency matrix $A$ consists entirely of distinct entries? Thank … 4answers 2k views ### Eigenvectors and eigenvalues of Tridiagonal matrix Hi, is it possible to analytically evaluate the eigenvectors and the eigenvalues of a tridiagonal matrix of the form: \mathcal{T}^{a}_n(p,q) = \begin{pmatrix} 0 & q &a … 2answers 408 views ### Perturbation theory for the generalized eigenvalue problem Is there a standard reference for the perturbation theory of the generalized eigenvalue problem? More specifically, I would like to get a systematic expansion for the problem … 0answers 190 views ### Common eigenvector I have little experience with functional analysis beyond an undergraduate basic course, and I'm dealing with the following problem: let $V$ be an infinite-dimensional locally conv … 1answer 4k views ### Difference between Principal Component Analysis(PCA) and Singular Value Decomposition(SVD)? I am confused between PCA and SVD. The wikipedia page for PCA has this line. "PCA can be done by eigenvalue decomposition of a data covariance matrix or singular value decompositi …
http://mathoverflow.net/questions/42594?sort=newest
## Concavity of $\det^{1/n}$ over $HPD_n$.

One of my beloved theorems in matrix analysis is the fact that the map $H\mapsto (\det H)^{1/n}$, defined over the convex cone $HPD_n$ of Hermitian positive definite matrices, is concave. This is consistent with the map being homogeneous of degree one, and hence linear along rays.

• it has important applications in many branches of mathematics,

• it has many elegant proofs. I know at least three completely different ones.

I am interested in both aspects. Which is your preferred proof of the concavity? Is it useful in your own speciality? In order to avoid influencing the answers, I decided not to give any example. But those who have visited my page may know my taste. -

2 Community wiki, seeing as there is no single "best answer"? – Yemon Choi Oct 18 2010 at 8:28

I think that, as far as elementary solutions are concerned, it's hard to beat the proof in ex.219. – Gjergji Zaimi Oct 18 2010 at 10:42

Yes that's a great proof. It also shows that it is a special case of Brunn-Minkowski, although I do not know if this counts as an elementary proof... – Piero D'Ancona Oct 18 2010 at 10:55

## 3 Answers

Here is an interesting calculus proof. Let $f:A\mapsto(\det A)^{1/n}$, defined over $SPD_n$. Differentiating twice, we find the Hessian $${\rm D}^2f_A(X,X)=\frac1{n^2}f(A)\left(({\rm Tr} M)^2-n{\rm Tr}(M^2)\right),$$ where $M=A^{-1}X$. This matrix, being the product of two symmetric matrices with one of them positive definite, is diagonalisable with real eigenvalues $m_1,\ldots,m_n$. The parenthesis above is now $$\left(\sum_jm_j\right)^2-n\sum_jm_j^2,$$ a non-positive quantity, according to Cauchy-Schwarz. We infer that ${\rm D}^2f_A\le0$ and that $f$ is concave. -
An easy reduction shows that one can suppose that one of the matrices is the identity and the other diagonal: the inequality then reduces to the convexity of $f(x)=\ln(1+e^x)$. -

The concavity of $(\det A)^{1/n}$ for a positive definite symmetric matrix $A$, as well as its generalization known as the Brunn-Minkowski inequality, are absolutely fundamental and critical to differential and integral geometry, as well as to geometric analysis (here, I mean functional inequalities like the Sobolev and Poincaré inequalities). It is used, for example, in the proof of isoperimetric inequalities and of what is known as the Bishop-Gromov inequality on a Riemannian manifold. The first proof I learned is simply differentiating $(\det A(t))^{1/n}$ twice, where $A(t) = A_0 + A_1 t$. -
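Since none of the proofs above is numerical, here is a quick sanity check of the theorem in the real symmetric case: midpoint concavity of $A\mapsto(\det A)^{1/n}$ over random SPD matrices. This is only an illustrative sketch, not part of the thread, and it assumes NumPy is available.

```python
import numpy as np

def f(a):
    # det^(1/n) for an n x n positive definite matrix
    return np.linalg.det(a) ** (1.0 / a.shape[0])

rng = np.random.default_rng(0)
n = 4
for _ in range(1000):
    # M @ M.T is positive semidefinite; adding I makes it positive definite
    m = rng.standard_normal((n, n))
    a = m @ m.T + np.eye(n)
    m = rng.standard_normal((n, n))
    b = m @ m.T + np.eye(n)
    # midpoint concavity: f((A+B)/2) >= (f(A) + f(B))/2
    assert f((a + b) / 2) >= (f(a) + f(b)) / 2 - 1e-9
```

For a continuous function, midpoint concavity is equivalent to concavity, so a violation here would disprove the theorem; passing the check of course proves nothing, it is only a sanity test.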
http://mathoverflow.net/questions/80986?sort=votes
## Question on geometric measure theory

I want to know whether the following is well-known or not: Let $X$ be a metric space with Hausdorff dimension $\alpha$. Then for any $\beta < \alpha$, $X$ contains a closed subset whose Hausdorff dimension is $\beta$. -

The empty set works, as does a 1-point set. – Igor Rivin Nov 15 2011 at 14:45

(unless the space itself has one point, in which case only the empty set works). – Igor Rivin Nov 15 2011 at 14:46

(unless you consider the dimension of the empty set to be undefined, in which case a one-point set is a counterexample to the claim). – Igor Rivin Nov 15 2011 at 14:46

6 @Igor: I don't see how what you say implies, for instance, the existence of a subset of $[0,1]$ of Hausdorff dimension say 1/2. – Pietro Majer Nov 15 2011 at 14:58

2 A counterexample when $\alpha=\infty$. Let $X$ be uncountable with the discrete metric. Then a subset has either dimension $\infty$ (if uncountable) or $0$ (if countable). The only place finiteness of $\alpha$ is used in my answer is to get $X$ separable. – Gerald Edgar Nov 15 2011 at 22:47

## 1 Answer

Let's do the case of a complete metric space. Let $X$ be a complete metric space with Hausdorff dimension $\alpha < \infty$. Then of course $X$ is separable, as well. We use a result of Howroyd [2] (following Marstrand [1], who did the real line). Let $0 < \beta < \alpha$. Then $H^\beta(X) = \infty$, the $\beta$-dimensional Hausdorff measure. By Howroyd's theorem ($H^\beta$ is semifinite), there is a Borel subset $A \subset X$ with $0 < H^\beta(A) < \infty$. Then since a finite Borel measure is regular, there is a Cantor set $B \subseteq A$ with $0 < H^\beta(B) < \infty$, so of course $B$ has Hausdorff dimension $\beta$.

1. J. M. Marstrand, "The dimension of Cartesian product sets." Proc. Cambridge Philos. Soc. 50 (1954) 198--202

2. J.
Howroyd, "On dimension and the existence of sets of finite positive Hausdorff measure." Proc. London Math. Soc. 70 (1995) 581--604 -

Thank you very much. I do not know where you use the finiteness assumption of $\alpha$ in your argument. The finiteness seems to be needed only for $\beta$. If it is possible, would you please tell me which statement you used in Howroyd's paper? – Ema Nov 15 2011 at 18:03

1 Howroyd's first two pages can be viewed here: mendeley.com/research/… (see the statement of Corollary 7). – Gerald Edgar Nov 15 2011 at 23:00

1 I also note that Howroyd attributes the real & Euclidean cases to Besicovitch and Davies, not to Marstrand. – Gerald Edgar Nov 15 2011 at 23:06

Thank you very much for all the information! – Ema Nov 18 2011 at 16:47
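For the model case $X=[0,1]$ the conclusion can be made completely explicit: the two-branch self-similar Cantor set with contraction ratio $r\in(0,1/2)$ is a closed set of Hausdorff dimension $\ln 2/\ln(1/r)$, so choosing $r=2^{-1/\beta}$ produces a closed subset of any prescribed dimension $\beta\in(0,1)$. A small sketch of this bookkeeping (the function names are mine, not from the thread):

```python
import math

def ratio_for_dimension(beta):
    """Contraction ratio r solving 2 * r**beta = 1, i.e. r = 2**(-1/beta),
    so the two-branch self-similar Cantor set with ratio r has dimension beta."""
    assert 0 < beta < 1
    return 2.0 ** (-1.0 / beta)

def dimension_for_ratio(r):
    """Similarity (= Hausdorff) dimension ln 2 / ln(1/r) of that set."""
    assert 0 < r < 0.5
    return math.log(2) / math.log(1 / r)

# the classical middle-thirds Cantor set: dimension ln 2 / ln 3 gives r = 1/3
assert abs(ratio_for_dimension(math.log(2) / math.log(3)) - 1 / 3) < 1e-12
# round trip: every beta in (0, 1) is realized
for beta in (0.1, 0.5, 0.9):
    assert abs(dimension_for_ratio(ratio_for_dimension(beta)) - beta) < 1e-12
```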
http://math.stackexchange.com/questions/214533/variation-of-monty-hall-game-homework-need-checking
# Variation of Monty Hall game. Homework. Need checking.

Assume that you are playing the following variation of the Monty Hall game: A valuable prize is randomly placed behind one of the four doors, numbered from 1 to 4, and nothing behind the remaining three doors. You are allowed to select a door. After your selection, the game show host will open one of the remaining doors, but he will always open a door without the prize. Moreover, he always opens the lowest-numbered door he can: the door with the smallest number if no prize is behind the remaining doors, or the prize-free door with the smaller number if the prize is behind one of them. After opening a door, the host will offer you the possibility to switch to one of the two doors. We will also assume that your aim is to win the prize and you know all the above information (such as that the host always opens the prize-free door with the smallest number).

Consider the following strategies:

(I) Initially you select door 4 and you stick with it.

(II) Initially you select door 4 and you will always switch if you are offered the opportunity to switch. You'll choose the door to switch to at random from the available possibilities.

(III) Initially you select door 4. You will switch to door 1 if the host opens door 2; you will stick with your initial choice of door 4 if the host opens door 1.

Choose one.

My solution

There are 4 possibilities for the arrangement of the prize, namely X X X P / X X P X / X P X X / P X X X.

Consider Strategy (I). The chance of winning is a flat $1/4$.

Consider Strategy (II). The host will open the lowest-numbered prize-free door. For my cases, I end up with O X X P / O X P X / O P X X / P O X X. The chance of winning if I switch randomly is $1/4\cdot 1/2+1/4\cdot 1/2+1/4\cdot 1/2+0=0.375$.

Consider Strategy (III). The cases are O X X P / O X P X / O P X X / P O X X. The probability is $1/4+1/4=1/2$, because there are only 2 situations in which this strategy can win!

Am I right to say that strategy (III) yields me the highest chance of winning?
- 2 Why would you switch to a random door if he opens door 2 and door 1 is up for grabs? – Jean-Sébastien Oct 16 '12 at 0:05

## 1 Answer

I agree with your calculation. For a check of your value for (II), you could think this way: if the prize is behind door 4 (probability $\frac 14$) you lose. Otherwise, you know it is neither behind door 4 nor behind the opened door, so you now have a $\frac 12$ chance to win. Overall, this gives $\frac 34 \cdot \frac 12=\frac 38$.

I give you strategy (IV): Initially select door 4. If the host opens 2, change to 1. If the host opens 1, change to a random door. What chance do you have now? Jean-Sébastien has given a good intuitive reason why strategy (III) has to beat strategy (II). Can you do the same for this one? -

Awesome perspective!!!!!!!! – A New Guy Oct 16 '12 at 2:30
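The three strategies, together with the host's lowest-numbered-door rule, are easy to check by simulation. A sketch (the function names are mine, not from the thread); the estimates should land near the exact values $1/4$, $3/8$ and $1/2$:

```python
import random

def host_opens(prize, pick=4):
    # the host opens the lowest-numbered door that hides no prize
    # and is not the player's pick
    return min(d for d in (1, 2, 3, 4) if d != pick and d != prize)

def play(strategy, trials=200_000, seed=1):
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        prize = rng.randint(1, 4)
        pick = 4
        opened = host_opens(prize, pick)
        if strategy == "stick":            # strategy (I)
            final = pick
        elif strategy == "random-switch":  # strategy (II)
            final = rng.choice([d for d in (1, 2, 3, 4)
                                if d not in (pick, opened)])
        else:                              # strategy (III)
            final = 1 if opened == 2 else pick
        wins += (final == prize)
    return wins / trials

for s in ("stick", "random-switch", "conditional"):
    print(s, play(s))
```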
http://math.stackexchange.com/questions/tagged/teaching+general-topology
# Tagged Questions 4answers 156 views ### Motivation for the importance of topology Starting from tomorrow, I will be tutoring some undergraduate students following a course in general topology. I am looking for examples motivating the importance of topology in mathematics which can ... 1answer 179 views ### Compact and Locally Compact Spaces I would like to consult with anyone who is reading this post on how to explain the distinction between compact spaces and locally compact spaces to students who have just completed a topology course ... 8answers 2k views ### Why do introductory real analysis courses teach bottom up? A big part of introductory real analysis courses is getting intuition for the $\epsilon-\delta$ proofs. For example, these types of proofs come up a lot when studying differentiation, continuity, and ... 1answer 96 views ### weakly locally one-to-one? Is there any standard name for this concept that is weaker than local one-to-one-ness? In some open neighborhood of $x_0$ there is no point $x\ne x_0$ such that $f(x)=f(x_0)$. Or, if you like: In ... 5answers 1k views ### Quotient geometries known in popular culture, such as "flat torus = Asteroids video game" In answering a question I mentioned the Asteroids video game as an example -- at one time, the canonical example -- of a locally flat geometry that is globally different from the Euclidean plane. It ...
http://mathhelpforum.com/advanced-algebra/49185-solved-determinant-3x3-matrix.html
# Thread: 1. ## [SOLVED] Determinant of a 3x3 matrix

I've done the problem but I just want to know if I'm on the right track, because I've solved many others this way. In other words, is my answer correct?

Determine for which values of $c$ the following matrix is invertible: $\left[ \begin{array}{ccc} 1 & c & -1 \\ c & 1 & 1 \\ 0 & 1 & c\end{array} \right]$.

From memory I think that this holds: A matrix $A$ is invertible if and only if its determinant is different from $0$. So I calculated the determinant of the matrix above to be $-1-c^3$, from which I concluded that the matrix is invertible $\forall c \in \mathbb{C}$ such that $c\neq -1$.

2. You correctly calculated the determinant and your statement about the matrix's invertibility is true.

3. Originally Posted by icemanfan: You correctly calculated the determinant and your statement about the matrix's invertibility is true.

Yippee!!!
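As an independent check (a pure-Python sketch; the helper names are mine, not from the thread), one can expand the determinant by cofactors and compare with $-1-c^3$. One caveat to the thread's conclusion: over $\mathbb{C}$ the equation $c^3=-1$ has three roots, $c=-1$ and $c=\tfrac12\pm\tfrac{\sqrt3}{2}i$, so the matrix fails to be invertible at all three of them; $c=-1$ is only the unique real one.

```python
import cmath

def det3(m):
    # cofactor expansion of a 3x3 determinant along the first row
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def matrix(c):
    return [[1, c, -1],
            [c, 1,  1],
            [0, 1,  c]]

# the determinant agrees with -1 - c^3 for assorted real and complex c
for c in (0, 1, -1, 2.5, complex(0.5, 3 ** 0.5 / 2)):
    assert abs(det3(matrix(c)) - (-1 - c ** 3)) < 1e-9

# the singular values of c are exactly the three cube roots of -1
for k in range(3):
    root = cmath.exp(1j * cmath.pi * (2 * k + 1) / 3)
    assert abs(det3(matrix(root))) < 1e-9
```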
http://math.stackexchange.com/questions/152039/question-related-to-some-class-of-nowhere-dense-sets
# Question related to some class of nowhere dense sets

Let $\Omega\subset (0,1)$ be a nowhere dense set which has no lower and upper bound in $(0,1)$ and for which $\Omega^{d}\cap(0,1)=\Omega$ (here $\Omega^{d}$ denotes the set of all limit points of $\Omega$; let also $\Omega^{\pm d}$ denote the set of all two-sided limit points of $\Omega$). Is it then true that the set $(0,1)\backslash \Omega^{\pm d}$ is a union of disjoint closed intervals $[a,b]$, where $a$ and $b$ are left-sided and right-sided limit points of $\Omega$, respectively? I think it is true but have no idea how to proceed with the proof. Thank you for any replies. -

I'm not sure it matters that $\Omega$ is nowhere dense. It seems more important that (a) every point in $\Omega$ is some kind of limit point, and (b) $(0, 1) \setminus \Omega$ is open, so it is a union of disjoint open intervals. What can you say about the end points of those intervals? – Niels Diepeveen Jun 1 '12 at 1:24

I would say the end points of these intervals are just left-sided and right-sided limit points of $\Omega$. Am I right? If that is true, it is the end of the proof. But how do I show it? – John Jun 1 '12 at 15:41

You are right. If for some $x \in \Omega$ there is an interval $(x, y)$ that does not intersect $\Omega$ then by definition $x$ is not a right-sided or two-sided limit point, so by (a) it must be a left-sided limit point. – Niels Diepeveen Jun 2 '12 at 1:10

Now I see your point. Thank you for your tips. They were very helpful. – John Jun 2 '12 at 18:35
http://crypto.stackexchange.com/questions/2705/standard-symbol-notation-for-x-knows-y-or-the-inverse/2725
# Standard symbol / notation for "x knows y", or the inverse

What's the standard way to express "$x$ knows about $y$", or "$x$ has no knowledge of $y$", in cryptographic notation?

Example (PRNG predictor): $\exists f : P(f(G(k)|_{0..n}) = G(k)|_{n+1}) \geq 0.5 + \epsilon$, for non-negligible $\epsilon$, where $f$ has no knowledge of $k$. -

## 2 Answers

The notation is "", i.e., the empty string. $\;\;$ Since $k$ is not an input of $f$, $f$ has no knowledge of $k$. -

So you're saying there's no such notation? – Polynomial May 25 '12 at 9:44

4 @Polynomial He's saying that in your example such a notation is not necessary, because it's implicit in the parameters of $f$. A function by definition only knows what it sees in its parameters. – CodesInChaos May 25 '12 at 9:56

But your example is problematic, since you did not restrict the cost of calculating $f$. If you make no such restriction, you need entropy exceeding the output size, i.e. a true RNG, and not a PRNG. – CodesInChaos May 25 '12 at 9:57

@CodesInChaos I was just using it as an example. If I were really writing that in a document, I'd put something like "where $f(x)$ runs in polynomial time or better". – Polynomial May 25 '12 at 10:10

If you really wanted a symbol for that, I suppose you could borrow a notation from probability theory and write $f \perp k$ for "$f$ is independent of $k$". But that's definitely not standard usage, so you're going to have to define it explicitly. -
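To make the point about parameters concrete: the predictor's ignorance of $k$ is expressed purely by $k$ not appearing among the arguments of $f$. A toy illustration (my own construction, not from the answers): for an LCG with power-of-two modulus and odd multiplier and increment, the low bit of the state simply alternates, so a predictor that sees only the output prefix $G(k)|_{0..n}$, never the seed $k$, attains accuracy $1$, far above $0.5+\epsilon$.

```python
def lcg_low_bits(seed, n, a=1103515245, c=12345, m=2 ** 31):
    """Low bit of each successive LCG state. With m a power of two and
    a, c both odd, this bit alternates 0,1,0,1,... deterministically."""
    bits, x = [], seed
    for _ in range(n):
        x = (a * x + c) % m
        bits.append(x & 1)
    return bits

def predictor(prefix):
    # f sees only the observed prefix G(k)|_{0..n}; the seed k is not an input
    return 1 - prefix[-1]

for seed in (0, 1, 123456789):
    bits = lcg_low_bits(seed, 64)
    hits = sum(predictor(bits[:i + 1]) == bits[i + 1]
               for i in range(len(bits) - 1))
    assert hits == len(bits) - 1  # every next bit predicted correctly
```

This PRNG fails the next-bit test even though the adversary never knows $k$ — exactly the situation the quantified statement describes.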
http://math.stackexchange.com/questions/tagged/limit
# Tagged Questions Questions on the evaluation of limits. 2answers 33 views ### Proving if $f(x)$ is differentiable at $x = x_0$ then $f(x)$ is continuous at $x = x_0$. Please, see if I made some mistake in the proof below. I mention some theorems in the proof: The condition to $f(x)$ be continuous at $x=x_0$ is $\lim_{x\to x_0} f(x)=f(x_0)$. If $f(x)$ is ... 5answers 96 views ### Limit as $x$ approaches $1$ from the right of $\frac{1}{\ln x}-\frac{1}{x-1}$ $$\lim_{x\rightarrow 1^+}\;\frac{1}{\ln x}-\frac{1}{x-1}$$ So I would just like to know how to begin to solve this limit, or what topic does this problem fall under so that I can search for ... 2answers 109 views ### High school contest question Some work on it reveals the possibility of using gamma function. Is there any easy way to compute it? $$\lim_{n\to\infty}\left(\frac{1}{n!} \int_0^e \log^n x \ dx\right)^n$$ 1answer 30 views ### Question about limits with variable on exponent So I have to find the following limit $$\lim_{n\to\infty}\left(1+\frac{2}{n}\right)^{1/n}.$$I said that this is ... 5answers 88 views ### A limit on binomial coefficients Let $$x_n=\frac{1}{n^2}\sum_{k=0}^n \ln\left(n\atop k\right).$$ Find the limit of $x_n$. What I can do is just use Stolz formula. But I could not proceed. 1answer 19 views ### $\lim_{y \to \infty}\int_{R}f(x-t)\frac{t}{t^2 +y^2}dt=0?$ for $f\in L^{p}$, $p \in [1,\infty)$ For $f\in L^{p}$, $p \in [1,\infty)$ we want to prove: $$\lim_{y \to \infty}\int_{R}f(x-t)\frac{t}{t^2 +y^2}dt=0$$ I'm not sure whether we can exchange the limit and the integral, cuz I cannot find ... 3answers 62 views ### Limit. $\lim_{x \to \infty}{\sin{\sqrt{x+1}}-\sin{\sqrt{x}}}.$ I want to compute $$\lim_{x \to \infty}{\sin{\sqrt{x+1}}-\sin{\sqrt{x}}}.$$ Is it OK how I want to do? ... 2answers 60 views ### Limit: $\lim_{x \to 0}{[1+\ln{(1+x)}+\ln(1+2x)+\ldots+\ln(1+nx)]}^{1/x}.$ How can I find the following limit? 
$$\lim_{x \to 0}{[1+\ln{(1+x)}+\ln(1+2x)+\ldots+\ln(1+nx)]}^{1/x}.$$ It's a limit of type $\displaystyle 1^{\infty}$ and if I note with $\displaystyle\ldots$

- **Proof f(x) is continuous given $x$ rational and irrational** (2 answers, 53 views): How can I resolve the task below: Given $f(x)= \begin{cases} x, &x\in \mathbb{Q}\\ 1-x, &x\notin \mathbb{Q}\text{ (irrational)} \end{cases}$, $0 \leq x \leq 1$. Show $f(x)$ is …
- **Limit of $\lim_{x \to 0}\left(x\cdot \sin\left(\dfrac{1}{x}\right)\right)$: is it $0$ or $1$?** (8 answers, 130 views): WolframAlpha says $\lim_{x \to 0} x\cdot \sin\left(\dfrac{1}{x}\right)=0$ but I've found it $1$ as below: $\lim_{x \to 0} \left(x\cdot \sin\left(\dfrac{1}{x}\right)\right) = \lim_{x \to 0}$ …
- **How to evaluate $\lim_{x \to \infty}\left(\sqrt{x+\sqrt{x}}-\sqrt{x-\sqrt{x}}\right)$?** (4 answers, 84 views)
- **Why do these trig functions "overpower" each other?** (1 answer, 59 views): For example, $\sin(x)\cos(x)$ can be written as $\sin(2x)/2$; the limit as $x$ approaches $0$ of $\sin(x)\cos(x)$ is $0$, and the limit as $x$ approaches $\pi/2$ is $0$. I don't see a reason why sine …
- **limit of an exponential function** (1 answer, 45 views): I was trying to understand how we can approximate exp. One example is $\exp(t) = \sum_{i=0}^\infty t^i/i!$; however, why is the next true: $\lim_{x\to \infty}\exp\left(\frac{t^2}{2!}\right.$ …
- **What does this mean: there exists an integer N such that $n\ge N$?** (3 answers, 84 views): I'm reading Rudin's Principles of Mathematical Analysis, and I keep tripping over this phrase. Usually the phrase implies some equation with $n$ being the index (subscript) of a point. My …
- **Decreasing from the horizontal asymptote** (1 answer, 17 views): The function $f(x) = x^2/(x^2 - x -2)$ has the following graph. It has a horizontal asymptote $y=1$. For $x$ less than $-4$, the function is decreasing and its graph is under the asymptote. How is …
- **Finding the limit of a particular function** (3 answers, 30 views): Cannot understand the following limit in a past paper: $\dfrac{1}{n(e^{1/n}-1)}\rightarrow 1$
- **How to prove that $1/n!$ is less than $1/n^2$?** (7 answers, 124 views): I want to prove $\sum_{n=0}^{\infty} \frac{1}{n!}$ is a converging series. So I want to compare it with $\sum_{n=0}^{\infty} \frac{1}{n^2}$ and do the direct comparison test. How to prove …
- **How to find the limit for the quotient of the least number $K_n$ such that the partial sum of the harmonic series $\geq n$** (2 answers, 49 views): Let $S_n=1+1/2+\cdots+1/n$. Denote by $K_n$ the least subscript $k$ such that $S_k\geq n$. Find the limit $\lim_{n\to\infty}\frac{K_{n+1}}{K_n}$?
- **Evaluating a limit with variable in the exponent** (3 answers, 46 views): For $\lim_{x \to \infty} \left(1- \frac{2}{x}\right)^{x/2}$ I have to use L'Hospital's rule, right? So I get $\lim_{x \to \infty}\frac{x}{2} \log\left(1- \frac{2}{x}\right)$ and …
- **How to place a limit that is inside the integral, outside** (1 answer, 49 views): I did this: $\int_{1}^t x^{-1}dx=\int_{1}^t\lim_{n\rightarrow -1}{x^n}dx =\lim_{n\rightarrow -1}\int_{1}^t{x^n}dx$, just to have a way to approximate $\ln t$. $\ln{t}=\lim_{h\rightarrow\ldots}$ …
- **Find the limit $\lim_{n\to\infty}\sqrt{1+\sqrt{2+\cdots+\sqrt{n}}}$ [duplicate]** (0 answers, 58 views): Remark: there are $n$ nested square roots.
- **How to find the limit of this function** (5 answers, 42 views): We have the function $\frac{\sqrt{n^4 + 100}}{4n}$. I think the best method is dividing by $n$, but I have no idea what that yields, mainly because of the square root.
- **What is $\lim_{n \rightarrow \infty}\sum_{k = 1}^n\frac{k}{n^2}$?** (4 answers, 117 views): We have $\dfrac{1+2+3+\cdots+n}{n^2}$. What is the limit of this as $n \rightarrow \infty$? My idea: $\dfrac{1+2+3+\cdots+n}{n^2} = \dfrac{1}{n^2} + \dfrac{2}{n^2} + \cdots +$ …
- **Find the limit as n approaches infinity** (1 answer, 29 views): We have the following function: $U_n = \sin \frac{1}{3} n \pi$. What is the limit of this function as $n$ approaches infinity? I first tried to use my calculator as help; for $n$ I chose some …
- **Sequence of the ratio of two successive terms of a sequence** (3 answers, 34 views): If $(a_n)_{n\in \mathbb N}$ is a strictly decreasing sequence of real numbers converging to $0$ and such that $0<a_n<1$ for all $n$, does the limit $\lim_{n}\frac{a_{n+1}}{a_n}$ exist? …
- **How to calculate $\lim_{x\to 1}\int_{0}^{1}\frac{dy}{\sqrt{1-y^{2}}}\frac{y^{3/2}}{\sqrt{x - y}}$ when $x>1$?** (2 answers, 42 views): Numerically, it looks like $\lim_{x\to 1}\int_{0}^{1}\frac{dy}{\sqrt{1-y^{2}}}\frac{y^{3/2}}{\sqrt{x - y}} = \frac{1}{\sqrt{2}}\log(1 - x) + \text{cte}$, but I have not been able to …
- **limit of convex increasing function** (2 answers, 31 views): If $f$ is strictly increasing and strictly convex (or $f'>0$ and $f''>0$), then $\lim_{x\rightarrow\infty}{f(x)}=\infty$. Is this statement true? If so, how can I prove it?
- **Which is the better approximation to $e$?** (2 answers, 92 views): Let $a_n = (1+1/n)^n$ and $b_n = (1+1/n)^{n+1}$. Both $a_n \to e$ and $b_n \to e$, and $a_n < e < b_n$. A better approximation to $e$ is known to be $c_n = (1+1/n)^{n+1/2} = \sqrt{a_n b_n}$ …
- **multivariable limit question** (2 answers, 43 views): Is this an acceptable solution? $\lim_{(x,y)\rightarrow(0,0)}\frac{\sin(2(x^2+y^2))}{x^2+y^2}$, with $t=x^2+y^2$, so $t\rightarrow0$. Now I change the limit to …
- **Limit as N goes to Infinity** (3 answers, 52 views): Consider this limit: $\lim_{n\rightarrow\infty} \left( 1+\frac{1}{n} \right)^{n^2} = x$. I thought the way to solve this for $x$ was to reduce it using the fact that as $n \rightarrow \infty$, …
- **Without calculating the limit directly, show that it is equal to zero** (2 answers, 110 views): $\lim_{n\rightarrow\infty}\left(\frac{n+1}{n}\right)^{n^2}\frac{1}{3^n}=0$. I am not really sure what it means by "without calculating limit" and I don't really have ideas how to do it.
- **How to calculate $\lim_{x\to 1^+} \log (x)^{\log(x)}$?** (3 answers, 106 views): I know that it's $1$, but why? How can I calculate this? Thank you very very much =)
- **What is this limit called? Is it a different kind of derivative?** (1 answer, 94 views): (First, I should note this is not something I can look up in a textbook, because I'm learning partial derivatives, like most maths, as a hobby. If something below is wrong, blame the …
- **Ignoring exponential terms in asymptotic matching of a two-point boundary value ODE** (1 answer, 14 views): I'm not sure how much background I need to give to set up this question, but in my lecture notes I have that $e^{-\eta / \epsilon^{1-\alpha}}$ can be ignored where $\epsilon \ll 1$ and $0$ …
- **Find the limit of $2+\left(-\frac{2}{e}\right)^n$, as $n\to\infty$, if it exists** (2 answers, 42 views): I'm absolutely unsure how to approach this. I've considered changing it to $-2=\left(-\frac{2}{e}\right)^n$ and then using the properties of logarithms, but $\ln(-2)$ is undefined, as is …
- **Is it ever proper to say that the limit of a function equals infinity?** (2 answers, 36 views): If I calculate a limit and get the value $\infty$, what is the proper way to communicate this? Can I say that $\lim_{n\to\infty}a_n=\infty$ and therefore the sequence $\{a_n\}$ diverges, or do I …
- **Limit as n approaches infinity involving roots** (3 answers, 27 views): $\lim_{n\to\infty}\frac{n}{1+2\sqrt{n}}$. Given my understanding of how to solve these problems, I need to take the highest power of $n$ in the denominator and then divide both the numerator and …
- **accumulation point of a recursive sequence** (1 answer, 51 views): Given is a sequence with $a_0=1$, $a_1=1$, $a_{n+2}=\frac{1+a_{n+1}}{a_n}$. I now have to show what the accumulation points are. I guess that the sequence is jumping from number to number like …
- **Find $\lim\limits_{n \rightarrow \infty}\dfrac{\sin 1+2\sin \frac{1}{2}+\cdots+n\sin \frac{1}{n}}{n}$** (4 answers, 568 views): This is a recent exam question, which I couldn't figure out in the exam. My guess is it doesn't …
- **Show that a sequence approaches a fixed point of a function** (1 answer, 39 views): Let $f(x)$ be a differentiable function on $\Bbb R$ with $\left|\,f'(x)\right| \leq r < 1$, where $r$ is constant. Then consider the sequence $\{x_n\}$ such that $x_1 = 0$, $x_{n+1} =$ …
- **What is the most elementary proof that $\lim_{n \to \infty} (1+1/n)^n$ exists?** (1 answer, 138 views): Here is my candidate for the most elementary proof that $\lim_{n \to \infty}(1+1/n)^n$ exists. I would be interested in seeing others. Added after some comments: I prove here by very …
- **please, find the limit of $x$** (1 answer, 38 views): Find the limit of $x$ when $x=u^{-\frac{1}{\alpha}}$, where ${0}\le u\le{1}$ and $\alpha>0$. I have found the limit of $x$ is also ${0}\le x\le{1}$; when $u=0$ then $x=0$ since …
- **How to calculate $\lim_{n\to\infty}(1+1/n^2)(1+2/n^2)\cdots(1+n/n^2)$?** (1 answer, 46 views): Mathematica gives the answer $\sqrt{e}$. However, I do not know how to do it.
- **Solving the equation $\frac{e^x}{x}=\int_n^{n+1}f(t)\,dt$** (1 answer, 61 views): Suppose the equation $\frac{e^x}{x}=\int_n^{n+1}f(t)\,dt$ with $f(t)=\frac{e^t}{t}$ and $n\in \mathbb{N} \setminus \{0\}$. How to prove that the equation above has a unique solution $U_n$ …
- **Limit with variable: non-defined expression** (2 answers, 18 views): I have a given limit that depends on a variable $a$: $\lim_{x \rightarrow \infty} \frac{e^{ax}}{1 - ax}$. I understand the cases $a < 0 \implies \lim = 0$ and $a > 0 \implies$ …
- **Finding $\lim\limits_{n \rightarrow \infty}\left(\int_0^1(f(x))^n\,\mathrm dx\right)^\frac{1}{n}$ for continuous $f:[0,1]\to[0,\infty)$ [duplicate]** (2 answers, 88 views): Find the limit if $f:[0,1]\rightarrow(0,\infty)$ is a continuous function. My attempt: say $f(x)$ has a max value $M$ …
- **Why isn't this limit equal to $0$?** (5 answers, 197 views): $f(2)=4$, $g(2)=9$, $f'(2)=g'(2)$. $\displaystyle \lim_{x \to 2} \frac{ \sqrt{f(x)}-2} { \sqrt{g(x)}-2}$. Why isn't this limit equal to $0$? Since $f$ and $g$ are differentiable at $x=2$, that …
- **Finding a distributional limit** (1 answer, 29 views): How to find $\lim_{\varepsilon\rightarrow 0+}f_{\varepsilon}$ in $D'(\mathbb R)$, if $f_\varepsilon$ is defined as $f_\varepsilon(x)=\frac{1}{\varepsilon^3}$ for …
- **How to prove something on limit points…** (2 answers, 41 views): $X = \mathbb R^n$. How to prove that an interior point of a subset of $X$ is also a limit point? I don't know where to start... by definition, an interior point is one for which there exists an $\varepsilon$-neighborhood …
- **Are all limits solvable without L'Hospital's Rule or series expansion?** (1 answer, 32 views): Is it always possible to find the limit of a function without using L'Hospital's Rule or a series expansion? For example, $\lim_{x\to0}\frac{\tan x-x}{x^3}$, $\lim_{x\to0}\frac{\sin x-x}{x^3}$ …
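The two limits in that last question can at least be checked numerically before reaching for L'Hospital or a series; a quick sketch (the reference values $1/3$ and $-1/6$ come from the Taylor expansions $\tan x = x + x^3/3 + \cdots$ and $\sin x = x - x^3/6 + \cdots$):

```python
import math

def ratio(f, x):
    """Evaluate (f(x) - x) / x**3, the expressions in the question above."""
    return (f(x) - x) / x**3

# For small x, (tan x - x)/x^3 -> 1/3 and (sin x - x)/x^3 -> -1/6.
x = 0.01
assert abs(ratio(math.tan, x) - 1/3) < 1e-3
assert abs(ratio(math.sin, x) + 1/6) < 1e-3
```

Note that pushing `x` much smaller than about `1e-5` makes the floating-point cancellation in `f(x) - x` dominate, so the numeric check only suggests the answer rather than proving it.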
http://www.sagemath.org/doc/reference/combinat/sage/combinat/rigged_configurations/bij_abstract_class.html
# Abstract classes for the rigged configuration bijections

This file contains two sets of classes: one for the bijection from KR tableaux to rigged configurations, and one for the reverse bijection. We do this for two reasons. One is that we can store a state of the bijection locally, so we do not have to constantly pass it around between functions. The other is that it makes the code easier to read in the *_element.py files.

These classes are not meant to be used by the user and are only supposed to be used internally to perform the bijections between TensorProductOfKirillovReshetikhinTableaux and RiggedConfigurations.

AUTHORS:

• Travis Scrimshaw (2011-04-15): Initial version

class sage.combinat.rigged_configurations.bij_abstract_class.KRTToRCBijectionAbstract(krt)

Root abstract class for the bijection from KR tableaux to rigged configurations. This class holds the state of the bijection and generates the next state. This class should never be created directly.

next_state(val)

Build the next state in the bijection.

INPUT:

• val – the value we are adding
• tableau_height – the height of the tableau

TESTS:

```sage: KRT = TensorProductOfKirillovReshetikhinTableaux(['A', 4, 1], [[2,1]])
sage: from sage.combinat.rigged_configurations.bij_type_A import KRTToRCBijectionTypeA
sage: bijection = KRTToRCBijectionTypeA(KRT(pathlist=[[4,3]]))
sage: bijection.cur_path.insert(0, [])
sage: bijection.cur_dims.insert(0, [0, 1])
sage: bijection.cur_path[0].insert(0, [3])
sage: bijection.next_state(3)
sage: bijection.ret_rig_con
-1[ ]-1
-1[ ]-1
(/)
(/)
```

class sage.combinat.rigged_configurations.bij_abstract_class.RCToKRTBijectionAbstract(RC_element)

Root abstract class for the bijection from rigged configurations to crystal paths. This class holds the state of the bijection and generates the next state. This class should never be created directly.

next_state(height)

Build the next state in the bijection.
TESTS:

```sage: RC = RiggedConfigurations(['A', 4, 1], [[2, 1]])
sage: from sage.combinat.rigged_configurations.bij_type_A import RCToKRTBijectionTypeA
sage: bijection = RCToKRTBijectionTypeA(RC(partition_list=[[1],[1],[1],[1]]))
sage: bijection.next_state(0)
5
sage: bijection.cur_partitions
[(/) , (/) , (/) , (/) ]
```
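The design rationale stated above (holding the bijection state on an object and advancing it one step at a time, rather than threading state through standalone functions) can be sketched independently of Sage. This is a hypothetical toy class for illustration only, not the actual Sage API:

```python
class StatefulBijection:
    """Toy illustration of the pattern used by the bijection classes:
    keep the partially built image as object state and let next_state()
    advance it one input value at a time."""

    def __init__(self, values):
        self.remaining = list(values)
        self.image = []  # the partially built output

    def next_state(self, val):
        # Stand-in for the real combinatorial step: record the value
        # together with its position in the partial image.
        self.image.append((val, len(self.image)))

    def run(self):
        while self.remaining:
            self.next_state(self.remaining.pop(0))
        return self.image

b = StatefulBijection([4, 3])
assert b.run() == [(4, 0), (3, 1)]
```

The payoff of this design, as the docs note, is that helper methods like `next_state` need no state arguments at all; everything they read and write lives on `self`.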
http://physics.stackexchange.com/questions/35972/observation-of-violation-of-the-uncertainty-principle?answertab=votes
# Observation of violation of the uncertainty principle?

I stumbled upon this piece of news on the BBC's website http://www.bbc.co.uk/news/science-environment-19489385, discussing this paper http://prl.aps.org/abstract/PRL/v109/i10/e100404, which reports the "Violation of Heisenberg's Measurement-Disturbance Relationship by Weak Measurements".

• What does this mean? Is the uncertainty principle wrong!?
• What are weak measurements?
• All in all, what do we learn from this experiment?

I don't have access to the journal, so it would be great if someone who saw and read the paper gave us some answers.

- 4 – Qmechanic♦ Sep 9 '12 at 7:32
- Thanks Qmechanic! – stupidity Sep 9 '12 at 23:18

## 3 Answers

1. No, the uncertainty principle isn't wrong. The PRL paper doesn't suggest that the original uncertainty principle relating uncertainties of position and momentum fails. It "only" questions a modified interpretation of the principle that says that the momentum is disturbed at least by $\hbar / (2 \Delta x)$ for a given precision of the position measurement $\Delta x$. Even this statement is highly loaded due to some (deliberately?) misleading terminology, as the following paragraphs clarify.

2. In 1988, Aharonov, Albert, and Vaidman (AAV) designed a clever technique to measure the expectation value of an observable in a state as the average of "weak values" obtained in some contrived time-dependent measurement procedures. The individual measurements disturb the state of the particle less than exact measurements would, but it's still enough to obtain the exact expectation value. However, what's problematic is whether the individual terms, the "weak values", should be interpreted as "generalized values", i.e., as properties of the measured system.
Stephen Parrott gave the clearest explanations that this ain't the case: the individual weak values are just auxiliary values that say something about the combination (measured system, measuring apparatus, details of the measurement algorithm), so they can't be interpreted as properties of the measured system only, and as a consequence, Heisenberg's principles of any form don't have to apply to these quantities.

3. From this experiment, which generalizes the AAV "weak measurements" to photons in a simple way, we learn that the "weak values" indeed fail to possess some basic properties of actual values. AAV already showed in their very pioneering paper that the weak value of $j_z$ may be 100 even for a spin-1/2 system (this claim was the very title of their paper), which is impossible for the genuine (eigen)values. This experiment shows that if "values" are replaced by the (totally different) phrase "weak values" in a version of Heisenberg's principle, the principle doesn't hold. That shouldn't be surprising for any well-informed person.

Of course, there is no evidence that there's anything wrong with quantum mechanics, or with Heisenberg's original uncertainty principle, which may be rigorously proven. The paper only tries to question a more informal claim by Heisenberg involving "necessary disturbance" caused by a measurement. But whether it actually succeeds in casting doubt on this statement of Heisenberg depends on whether or not you are willing to classify "weak values" as "sorta values". I think one definitely shouldn't, so the paper only brings chaos and deep misconceptions to the readers of the mainstream media, who are told that quantum mechanics is in doubt. It's surely not. See http://motls.blogspot.com/2012/09/pseudoscience-hiding-behind-weak.html?m=1 for some extra discussion and some formulae.

- 1 Thanks Dr Motl for the answers and the link to your blog post.
– stupidity Sep 9 '12 at 23:19

I will argue that the experiment presented in the paper [1,2] actually supports quantum mechanics. This may not be quite explicit in the paper, but there is also nothing in it against the standard view of quantum mechanics.

Heisenberg originally stated his principle in terms of a measurement-disturbance relationship (MDR); this is how he understood it at the time. The uncertainty principle that was proven theoretically, either in the context of wave mechanics or from the non-commutativity of the operators, is correct, and its correctness is acknowledged by the paper. This is called Heisenberg's uncertainty principle (HUP), and it is very different from MDR. The paper refers to previous theoretical works which disprove MDR, and presents experimental evidence purported to confirm the violation of MDR.

Why do I claim that the violation of MDR supports quantum mechanics? Because if MDR were correct, it would be enough to explain quantum uncertainty. Recall that even Heisenberg originally thought that the uncertainty is due to the disturbance caused by measurement. If the states behaved as they do merely because of measurement disturbance, then we could consider them classical, and derive Born's probability rule the way we calculate probabilities in statistical mechanics. But we know this is not true. Quantum states exhibit properties which can't be explained by classical mechanisms; among these, HUP plays an important role, together with entanglement.

The service done by this paper is that it shows that the wrong version of the uncertainty principle can be violated. The authors seem to me to support the HUP:

These two readings of the uncertainty principle are typically taught side-by-side, although only the modern one [HUP] is given rigorous proof.

and

Our work conclusively shows that, although correct for uncertainties in states [HUP], the form of Heisenberg's precision limit is incorrect if naively applied to measurement [MDR].
Violation of Heisenberg's Measurement-Disturbance Relationship by Weak Measurements

P.S. My answer contradicts the interpretation given in the BBC article "Heisenberg uncertainty principle stressed in new test". The BBC article is misleading, because it confuses MDR with HUP.

It may come as a surprise to past and present adherents of Heisenberg's Uncertainty Principle (HUP), but recent mathematical progress means we can also look at uncertainty from a theoretical point of view. Quantum theory depends on HUP and is incomplete, as Einstein thought in the 1930s. See the book Self-field theory, a new mathematical description of physics, by A.H.J. Fleming, published by Pan-Stanford Press 2012; analytic solutions for the motions of the electron and the proton inside the hydrogen atom have been found, obviating the need for the numerical and probabilistic quantum theory. The basis of this new formulation includes the magnetic currents of particles and not just the electric fields as in quantum theory. In this formulation, the photon is composite and hydrogenic-like. It is well known that the inequality relationship of HUP applies to any quantum system in general. The equations for the orbital and cyclotron motions of each electron in self-field theory (SFT) are given as two equality equations. Apart from the 'greater than' relationship compared with the exact relationship, the 3 equations are identical. Whereas there is one inexact relationship in HUP, there are two equality relationships in SFT. SFT thus completes the Bohr theory, which did not include any magnetic effect on the electron. In the light of this mathematics, HUP can be seen as a theoretical error; in practice it appears as a numerical error in any computer calculations.
Let me add that HUP will always be a good engineering approximation, able to be used across domains from photon to universe, in the same way that Newton's law of gravitation is still used today by those involved in gravitational research. Let me further add that the magnetic moments involved in this new mathematics (SFT) at the terrestrial domain may be able to give us much more quantitative information about the way tectonic plates, earthquakes and tsunamis develop over time. But there are other benefits, like 'clean' chemistry, waiting to be investigated. Tony Fleming
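For reference, the "rigorous proof" of the modern (state) uncertainty principle that the first two answers point to is the standard Robertson inequality; a textbook sketch in LaTeX (not taken from the paper under discussion):

```latex
% Robertson's relation: for self-adjoint A, B and a state \psi, write
% \hat A = A - \langle A \rangle and \hat B = B - \langle B \rangle.  Then
\sigma_A^2 \sigma_B^2
  = \langle \hat A^2 \rangle \langle \hat B^2 \rangle
  \ge \bigl| \langle \hat A \hat B \rangle \bigr|^2          % Cauchy--Schwarz
  \ge \bigl( \operatorname{Im} \langle \hat A \hat B \rangle \bigr)^2
  = \tfrac{1}{4} \bigl| \langle [A, B] \rangle \bigr|^2 ,
% so \sigma_A \sigma_B \ge \tfrac{1}{2} |\langle [A, B] \rangle|.  With
% A = x, B = p and [x, p] = i\hbar this gives \sigma_x \sigma_p \ge \hbar/2,
% a statement about state preparation, independent of any measurement model.
```

This derivation never mentions a measurement apparatus, which is exactly why the experiment's claims about measurement disturbance (MDR) leave it untouched.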
http://en.m.wikibooks.org/wiki/Basic_Algebra/Working_with_Numbers/Dividing_Rational_Numbers
# Basic Algebra/Working with Numbers/Dividing Rational Numbers

## Vocabulary

## Lesson

Dividing rational numbers covers a general class of problems: any quotient whose dividend and divisor are both rational numbers (with a nonzero divisor) is itself a rational number. If you already understand rational numbers, this unit will be straightforward; if not, review what a rational number is before reading the explanation that follows.

Dividing rational numbers, sometimes worded "quotients of rational expressions", is simply dividing one rational number by another. As the example problems show, this is easy: to divide by a fraction, flip the divisor (replace it with its reciprocal) and multiply instead. You are still dividing, but you have switched your means of doing so. A full justification of why this works is beyond the scope of this lesson, but it works every time, and the same method works in more complicated problems that have unknown variables. So if you have 7 over 5 divided by 3 over 4, you simply flip the 3 over 4 and multiply the fractions instead of dividing. This method will be used again and again in math, so know it well.

## Example Problems

Example 1

$\frac{2}{7} \div \frac{14}{16}$

$\frac{2}{7} \times \frac{16}{14}$ (Change the division to multiplication and flip the fraction on the right.)

$\frac{1}{7} \times \frac{16}{7}$ (Reduce fractions with any common factors on top and on bottom.)
$\frac{1 \times 16}{7 \times 7}$ (Multiply the tops and bottoms together.)

$\frac{16}{49}$ (Simplify.)

$\frac{14}{2} \div \frac{14}{2} = \frac{14 \times 2}{2 \times 14} = \frac{28}{28} = 1$

$\frac{31}{2} = 15.5$

$\frac{77}{9} = 8.\overline{5}$

$\frac{12}{6} = 2$

$\frac{36}{8} = 4.5$

$\frac{55}{3} = 18.\overline{3}$

## Practice Games

## Practice Problems

$\frac{59}{7} =$ ($8.\overline{428571}$)

$\frac{46}{5} =$ (9.2)

$\frac{97}{4} =$ (24.25)

$\frac{73}{4} =$ (18.25)

Last modified on 19 October 2012, at 08:59
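The flip-and-multiply rule from the lesson is easy to check with Python's built-in `fractions` module; a quick sketch using the numbers from Example 1:

```python
from fractions import Fraction

# Dividing by a fraction gives the same result as multiplying by its reciprocal.
a = Fraction(2, 7)
b = Fraction(14, 16)

quotient = a / b                 # direct division
flipped = a * Fraction(16, 14)   # flip the divisor and multiply

print(quotient)  # 16/49 -- Fraction reduces common factors automatically
assert quotient == flipped == Fraction(16, 49)
```

`Fraction` keeps results exact, so it also shows which practice answers terminate (like 46/5 = 9.2) and which repeat (like 59/7).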
http://mathoverflow.net/questions/50472?sort=newest
## Sums of arctangents

$$\arctan(x) = \arctan(1) + \arctan\left(\frac{x-1}{2}\right) - \arctan\left(\frac{(x-1)^2}{4} \right) + \arctan\left(\frac{(x-1)^3}{8}\right) - \cdots$$

Is this known?

- 2 What is the rule behind the -+ signs? – darij grinberg Dec 27 2010 at 10:35
- 1 Michael, is this from Ramanujan's letter to Hardy (i.e., you)? At least the style is borrowed... There is no obvious way to reconstruct a correct formula from yours: it's false when the RHS is understood as $\arctan(1)+\sum_{n=1}^\infty(-1)^{n-1}\arctan((x-1)/2)^n$. – Wadim Zudilin Dec 27 2010 at 11:28
- 3 You can reexpand the Taylor series of the arctan function. I am not sure what the pattern is. Up to tenth order, I get $$\arctan(x)=\arctan(1)+\arctan((x-1)/2)-\arctan((x-1)^2/4)+\arctan((x-1)^3/8)$$ $$-\arctan((x-1)^5/32)+\arctan((x-1)^6/64)-\arctan((x-1)^7/128)$$ $$+\arctan((x-1)^9/256)-\arctan(3(x-1)^{10}/1024).$$ So up to this point, we get the coefficient sequence $$1, -1, 1, 0, -1, 1, -1, 0, 2, -3.$$ – Michael Renardy Dec 27 2010 at 11:51
- 1 oeis.org/A123221 This sequence begins with the absolute values of the sequence of coefficients given above. – Michael Hardy Dec 27 2010 at 19:25
- 2 No, the OEIS stuff does not pan out. The modulus of the next coefficient is 3, not 5. – Michael Renardy Dec 27 2010 at 21:21

## 2 Answers

I've voted up Pietro Majer's incomplete answer and Michael Renardy's incomplete answer in the "comments" section. Here's my own incomplete answer.

Here's how I got this series: start with the identity $$\arctan a - \arctan b = \arctan \frac{a-b}{1+ab}.$$ From this we get $$\arctan x = \arctan 1 + \arctan\frac{x-1}{1+x}.$$ Substituting 1 for $x$ everywhere in the last expression except the power of $x-1$, we get the 1st-degree term.
So we need to replace the last term above by the 1st-degree term plus another arctangent by using the basic identity above, and we get $$\arctan\frac{x-1}{1+x} = \arctan\frac{x-1}{2} + \arctan\frac{-(x-1)^2}{2(1+x) +(x-1)^2}.$$ Then again substitute 1 for $x$ everywhere in the last term except in the power of $(x-1)$ in the numerator, to get the 2nd-degree term, and then write the last term above as the sum of the 2nd-degree term and another arctangent of a yet more complicated rational function. And so on.

Does the sequence of arctangents of rational functions go to 0? In some sense? I don't know, nor do I know the general pattern. I actually tried this first with $x-2$ instead of $x-1$; then I decided that $x-1$ already has enough initial unclarity. I don't even know whether in some reasonable sense the process goes on forever.

- 2 Modern mathematicians seem very much accustomed to power series but not to trigonometric identities. In the 18th century, Euler began with the identity $$\sin\left(\sum_k \theta_k\right) = \sum_{\text{odd }n \ge 1} (-1)^n \sum_{|A|=n} \prod_{i\in A} \sin\theta_i \prod_{i\not\in A} \cos\theta_i$$ and derived the power series for sine from it by saying $$\theta = \frac{\theta}{n} + \cdots + \frac{\theta}{n}$$ where $n$ is an infinitely large integer, then applying the identity above, then saying that since $n$ is infinitely large, $\sin(\theta/n) = \theta/n$ and $\cos(\theta/n) = 1$. – Michael Hardy Dec 27 2010 at 19:47
- I suppose I should add a qualification. The identity would be $$\arctan a - \arctan b = \text{one of the values of }\arctan\frac{a-b}{1+ab}.$$ – Michael Hardy Dec 27 2010 at 21:38
- Michael: nice identity, but the sums are over $n\ge0$ (odd or even) and $|A|=2n+1$. – Didier Piau Dec 28 2010 at 11:12
- Typo.
Let's try again: $$\sum_{\text{odd }n \ge 1} (-1)^{(n-1)/2} \cdots\cdots.$$ – Michael Hardy Dec 28 2010 at 17:59

So the full identity is $$\sin\left( \sum_k \theta_k \right) = \sum_{\text{odd }n\ge 1} (-1)^{(n-1)/2} \sum_{|A|=n} \prod_{i\in A} \sin\theta_i \prod_{i\not\in A} \cos\theta_i.$$ – Michael Hardy Dec 28 2010 at 18:01

I have no references for this particular series, but here are some hints to get a closed formula for the coefficients listed above by Michael Renardy. If we let $u:=\frac {1-x} 2$, an expansion $$\arctan(1-2u)=\arctan(1) + \sum_{k=1}^\infty \frac {c_k} k \arctan(u^k)$$ can be obtained by term-wise integration over $[0,u]$ of a (somewhat more common) expansion into rational fractions $$\frac 2 {1 + (1-2u)^2}= \sum_{k=1}^\infty \ c_k \frac {u^{k-1}} {1+u^{2k} }\ ,$$ (such expansions have a role in number theory, and are related to Dirichlet series). Here the coefficients may be identified by expanding formally the geometric series $(1+ u^{2k} )^{-1}$ and rearranging into a series of powers of $u$, to be compared with the power series of the LHS. One finds an equality with an arithmetic convolution, which when inverted gives the $c_k$'s. The exponential growth of the $c_k$ gives a positive radius of convergence (I guess $1/\sqrt 2$), which in particular allows the term-wise integration. Note that $\frac 2 {1 + (1-2u)^2}= \mathrm{Im} { \frac 2 {1-2u+i} }$, which simplifies things a bit.
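The subtraction identity $\arctan a - \arctan b = \arctan\frac{a-b}{1+ab}$ that drives the stepwise construction above is easy to sanity-check numerically; a quick sketch (the sample values are arbitrary, chosen so that $1+ab>0$, which keeps everything on the principal branch):

```python
import math

def arctan_diff_identity(a, b):
    """Return both sides of arctan(a) - arctan(b) = arctan((a-b)/(1+ab))."""
    lhs = math.atan(a) - math.atan(b)
    rhs = math.atan((a - b) / (1 + a * b))
    return lhs, rhs

# Spot-check a few points on the principal branch (1 + ab > 0).
for a, b in [(0.5, 0.2), (1.0, 0.75), (2.0, 0.3), (0.9, -0.4)]:
    lhs, rhs = arctan_diff_identity(a, b)
    assert abs(lhs - rhs) < 1e-12, (a, b, lhs, rhs)

# First step of the construction: arctan(x) = arctan(1) + arctan((x-1)/(1+x)).
x = 1.7
assert abs(math.atan(x) - (math.atan(1) + math.atan((x - 1) / (1 + x)))) < 1e-12
```

When $1+ab \le 0$ the right-hand side picks up a multiple of $\pi$, which is exactly the "one of the values of" qualification Michael Hardy adds in the comments.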
http://mathoverflow.net/revisions/39653/list
One way to approach this question quantitatively is suggested by probability. One can put various measures on the space of all simplicial complexes on $n$ vertices.

One perhaps fairly natural measure is to take a random graph and then take the clique complex. This doesn't give us all complexes on $n$ vertices, but every complex is homeomorphic to the clique complex of some graph, so we are covering everything up to homeomorphism as $n \to \infty$.

The main point of my paper Topology of random clique complexes is that almost all simplicial complexes arising this way are fairly simple topologically. In particular, it is shown that for a typical $d$-dimensional clique complex, the homology groups $H_k$ all vanish when $k > \lfloor d/2 \rfloor$ and when $k < d/4$, and that almost all of whatever homology remains is concentrated in the middle dimension $k=\lfloor d/2 \rfloor$.

It is currently an open problem to decide whether the homology is vanishing (or merely small) between $k=d/4$ and $k=d/2$. If one could establish this, then one would be well on the way to showing that almost all flag complexes are homotopy equivalent to a wedge of spheres; indeed, the last thing to do would be to rule out torsion in middle homology with integer coefficients. I don't have a good feel for whether either of these things is even true, but I do believe that this paper gives good anecdotal evidence that most flag complexes are somewhat simple topologically, and is a step in the direction of answering Forman's question.

(This particular measure seems especially natural from the point of view of combinatorics, since so many simplicial complexes arise as order complexes of posets, hence are automatically flag complexes.)
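To get a feel for the measure in question, here is a small pure-Python sketch (my own illustration, not from the paper) that samples an Erdős–Rényi random graph and enumerates the faces of its clique complex, i.e. all cliques of the graph, then computes the face counts and Euler characteristic:

```python
import random
from itertools import combinations

def random_graph(n, p, seed=0):
    """Erdos-Renyi G(n, p): each edge present independently with probability p."""
    rng = random.Random(seed)
    return {frozenset(e) for e in combinations(range(n), 2) if rng.random() < p}

def clique_complex(n, edges):
    """All cliques of the graph = all faces of its clique (flag) complex."""
    faces = [frozenset([v]) for v in range(n)]
    frontier = list(faces)
    while frontier:
        new = []
        for f in frontier:
            # Extend each clique only by larger vertices, so each clique
            # is generated exactly once.
            for v in range(max(f) + 1, n):
                if all(frozenset((u, v)) in edges for u in f):
                    new.append(f | {v})
        faces.extend(new)
        frontier = new
    return faces

edges = random_graph(12, 0.5, seed=1)
faces = clique_complex(12, edges)

# f_k = number of k-dimensional faces (cliques of size k + 1).
dims = {}
for f in faces:
    dims[len(f) - 1] = dims.get(len(f) - 1, 0) + 1
euler = sum((-1) ** k * c for k, c in sorted(dims.items()))
print(dims, euler)
```

Computing actual homology (to see the concentration in middle dimension) needs boundary matrices over a field; the sketch above only exposes the combinatorics of the model.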
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 20, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9567499160766602, "perplexity_flag": "head"}
http://mathoverflow.net/questions/72147?sort=newest
## Is this polynomial irreducible?

I'm just curious about the polynomial $\det (x_k^iy_k^j)_{0\leq i\leq d-1, 0\leq j\leq e-1, 1\leq k\leq de}$ (the determinant of a $de\times de$ matrix; the $x_k$, $y_k$ are all independent variables). Is anything known about its factorization into irreducible polynomials? The question is based only on my own interest.

- Uhm, a matrix with three indices? – Federico Poloni Aug 5 2011 at 7:11
- I think rows are indexed by pairs (i,j) and columns by k – Fedor Petrov Aug 5 2011 at 7:16
- Please change either question or title. This polynomial is not irreducible for sure, it is divisible by x_1. – Fedor Petrov Aug 5 2011 at 7:22
- @Fedor: Done! Thanks! – zroslav Aug 5 2011 at 12:34

## 1 Answer

Geometrically, your question is the following. Let $v_{d-1}:\mathbb{P}^1 \to \mathbb{P}^{d-1}$ and $v_{e-1}:\mathbb{P}^1 \to \mathbb{P}^{e-1}$ be the Veronese embeddings. Let $s_{d-1,e-1}:\mathbb{P}^{d-1}\times \mathbb{P}^{e-1} \to \mathbb{P}^{de-1}$ be the Segre embedding. Consider the composition $c$, i.e., $s_{d-1,e-1}\circ (v_{d-1},v_{e-1}):\mathbb{P}^1 \times \mathbb{P}^1 \to \mathbb{P}^{de-1}$. The composition is "linearly nondegenerate", i.e., for a general choice of $de$ points $(x_k,y_k)$ of the domain $\mathbb{P}^1\times \mathbb{P}^1$, the image points in $\mathbb{P}^{de-1}$ span the target. And you are now considering the divisor $\Delta$ in $(\mathbb{P}^1\times \mathbb{P}^1)^{de}$ where the image points are linearly degenerate. You want to know if this divisor is reducible. I claim $\Delta$ is irreducible.

Choose some index $k=1,\dots,de$ and consider the projection $\pi_k:(\mathbb{P}^1\times \mathbb{P}^1)^{de} \to (\mathbb{P}^1\times \mathbb{P}^1)^{de-1}$ which forgets the $k^{\text{th}}$ component. Consider the restriction $\pi_k:\Delta \to (\mathbb{P}^1 \times \mathbb{P}^1)^{de-1}$.
Consider a generic point in the target $(\mathbb{P}^1\times \mathbb{P}^1)^{de-1}$. This parameterizes $de-1$ points in $\mathbb{P}^1\times \mathbb{P}^1$ whose images in $\mathbb{P}^{de-1}$ span a generic hyperplane $H$. The intersection of $H$ with $c(\mathbb{P}^1\times \mathbb{P}^1)$ is a generic curve $\Gamma$ of bidegree $(d-1,e-1)$ in $\mathbb{P}^1\times \mathbb{P}^1$, which is irreducible. For the $k^{\text{th}}$ point of $\mathbb{P}^1\times \mathbb{P}^1$, the total collection of $de$ points is linearly independent in $\mathbb{P}^{de-1}$ unless the $k^{\text{th}}$ point maps into $H$, i.e., unless the $k^{\text{th}}$ point is in the irreducible curve $\Gamma$.

What this proves is that there is a unique irreducible component of $\Delta$ which dominates $(\mathbb{P}^1\times \mathbb{P}^1)^{de-1}$. And for any component which does not dominate, the fiber dimension over its image must be precisely $2$ and the image must be a divisor, i.e., for some codimension $1$ subset of $(\mathbb{P}^1\times \mathbb{P}^1)^{de-1}$, every choice of $k^{\text{th}}$ point makes the total collection of $de$ points linearly degenerate. Since $c(\mathbb{P}^1\times \mathbb{P}^1)$ spans $\mathbb{P}^{de-1}$, the image is contained in no hyperplane. So the only possibility is that the collection of $de-1$ points is itself linearly dependent. But the locus where $de-1$ points in $\mathbb{P}^{de-1}$ are linearly dependent is expected to have codimension $2$, not codimension $1$. Of course the "expected codimension" may be wrong, but since this situation is so homogeneous, I bet it is easy to prove the expected codimension equals the actual codimension. I will think about it a bit more and post soon. Edited.
Okay, by the above, we have only to prove that for every effective Cartier divisor $D$ in $(\mathbb{P}^1\times \mathbb{P}^1)^{de}$ (in particular, for every irreducible component $D$ of $\Delta$), there exists an index $k=1,\dots,de$ such that for the projection $\pi_k:(\mathbb{P}^1\times \mathbb{P}^1)^{de} \to (\mathbb{P}^1\times \mathbb{P}^1)^{de-1}$, the restriction $\pi_k:D \to (\mathbb{P}^1\times \mathbb{P}^1)^{de-1}$ is dominant, i.e., the intersection of $D$ with a general fiber $F_k$ is nonzero.

Now the Picard group of $(\mathbb{P}^1\times \mathbb{P}^1)^{de}$ is $\mathbb{Z}^{2de}$, i.e., every invertible sheaf is of the form $\text{pr}_1^*\mathcal{O}(a_1,b_1)\otimes \dots \otimes \text{pr}_{de}^*\mathcal{O}(a_{de},b_{de})$ for some choice of integers $a_l,b_l$. If the divisor $D$ is effective, then its corresponding invertible sheaf has every $a_l,b_l$ nonnegative. And the fiber $F_k$ is isomorphic to $\mathbb{P}^1\times \mathbb{P}^1$. The restriction of the invertible sheaf to $F_k$ is isomorphic to $\mathcal{O}(a_k,b_k)$. In particular, the intersection of $D$ with $F_k$ can be empty only if $a_k$ and $b_k$ equal $0$. But if this holds for every choice of $k$, then the invertible sheaf is just $\mathcal{O}$, which forces $D$ to be empty.
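The "linearly nondegenerate" claim can be sanity-checked numerically (my own sketch, not part of the answer): for distinct points the matrix $(x_k^i y_k^j)$ is generically nonsingular, while repeating a point forces the determinant to vanish.

```python
from fractions import Fraction

def det(m):
    """Exact determinant via Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in m]
    n, sign = len(m), 1
    result = Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if m[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            m[c], m[piv] = m[piv], m[c]
            sign = -sign
        result *= m[c][c]
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for k in range(c, n):
                m[r][k] -= f * m[c][k]
    return sign * result

def moment_matrix(points, d, e):
    """Row k lists x_k^i * y_k^j for 0 <= i < d, 0 <= j < e."""
    return [[x ** i * y ** j for i in range(d) for j in range(e)]
            for (x, y) in points]

# Four distinct points with d = e = 2: the 4x4 matrix is nonsingular,
# i.e. the image points under the Segre-Veronese map span P^3.
generic = det(moment_matrix([(0, 0), (1, 0), (0, 1), (1, 1)], 2, 2))

# Repeating a point makes the collection linearly degenerate: det = 0.
degenerate = det(moment_matrix([(0, 0), (1, 0), (0, 1), (0, 1)], 2, 2))
print(generic, degenerate)
```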
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 79, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9058181047439575, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/tagged/black-holes+string-theory
# Tagged Questions

### Where and how is the entropy of a black hole stored?
Where and how is the entropy of a black hole stored? Is it around the horizon? Most of the entanglement entropy across the event horizon lies within Planck distances of it and are short lived. Is ...

### paper about black branes and implications to 4d black holes
This paper makes a case for piezoelectric response (electric dipole moment under mechanical oscillations) of black branes. This paper does not make an implication of their results for 4D black holes ...

### what does holographic principle from string theory say about the possibilities of wormhole travel?
Is travel through stable macroscopic wormholes between remote points of spacetime going to be possible in a definitive theory of gravity, be it string theory or something beyond it? Physicists level ...

### Black hole entropy
Bekenstein and Hawking derived the expression for black hole entropy as, $$S_{BH}={c^3 A\over 4 G \hbar}.$$ We know from the hindsight that entropy has statistical interpretation. It is a measure ...

### What are cosmological “firewalls”?
Reading the funny title of this talk, Black Holes and Firewalls, just made me LOL because I have no idea what it is about but a lively imagination :-P (Sorry Raphael Bousso but the title is just too ...

### Information loss in a black hole
How does the Holographic Principle help to establish the fact that all the information is not lost in a black hole?

### $2+1$ dimensional physics theory of our universe?
Is there any physics theory that depicts our universe as $2+1$ dimensional? I heard that black holes seem to suggest that the world might be $2+1$ dimensional, so I am curious whether such theory ...

### random matrix ensembles from BMN model
My friends working on Thermalization of Black Holes explained solutions to their matrix-valued differential equations (from numerical implementation of the Berenstein-Maldacena-Nastase matrix model) ...

### Could strings be geons?
Is it possible that string theory strings are geons? This may be an overly speculative or naive question, but is there an obvious reason why not? Both strings and geons seem to have roughly the same ...

### Global symmetry in string theory
It is often stated that in quantum gravity only charges coupled to gauge fields can be conserved. This is because of the no hair theorem. If a charge is coupled to a gauge field then when it falls ...

### Understanding Black hole information paradox?
I am not a physicist, I am a enthusiast trying to understand thinking behind "Holographic Principle" by Leonard Susskind. Recently I saw program on DS Through the wormhole - The riddle of the black ...

### Singularities and quantized space time
Discrete space time quanta would solve the problems of infinite densities for singularities in General Relativity and Quantum Gravity by imposing a non zero limit on the minimum radius of black holes. ...

### Charges and Topology
I must apologize, I was a little bit excited when I began understanding some of this, I can not say I can compete with professionals, and words are still difficult concepts. In (1) S.H. et al. it is ...

### On black holes, Hawking radiation and gravitational atoms
Over the past hour or so I've been following one of my standard physics-based, wanders-through-the-internet. Specifically, I began by reviewing some details of dark energy theory but soon found myself ...

### Black Hole Singularity and String Theory
This question arises in a somewhat naive form because I am largely unfamiliar with String Theory. I do know that it incorporates higher space dimensions where I shall take the overall dimensionality ...

### Has the black hole information loss paradox been settled?
This question was triggered by a comment of Peter Shor's (he is a skeptic, it seems.) I thought that the holographic principle and AdS/CFT dealt with that, and was enough for Hawking to give John ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9157661199569702, "perplexity_flag": "middle"}
http://sciencehouse.wordpress.com/2009/07/03/why-consciousness/
# Scientific Clearing House

Carson C. Chow

## Why consciousness?

If we believe in materialism then we must accept that the function of our brain is the collective action of its biological parts. This doesn't imply reductionism (i.e. that just knowing the parts is enough), but it does imply that there is nothing beyond the laws of physics, chemistry and biology required for the operation of the brain. Given that the brain is just some really big dynamical system, why then, from a computational and evolutionary point of view, is there consciousness? I was criticized correctly in a previous post for not defining consciousness before talking about it. I will argue below that the definition of consciousness is intimately tied to its purpose, but for now it suffices to work with the definition that consciousness is the sense of self awareness that I personally have. I'm pretty sure you have it too but of course I can't prove it. For what I will discuss, it won't matter if consciousness is an illusion or not. I will focus on why, in a purely materialistic world (say a computer simulation), a being in that world composed entirely of interacting components (e.g. bits) would have a sense of self awareness and spectate the world around it. At this purely mechanistic level, there is no free will. We are all Skinneresque creatures of stimulus and response. Thus, each of us could be represented by a function or table that maps the current state of the brain and its sensory inputs into a new state and a set of responses. Now, this function is going to be really, really big, since the dimension of the brain state could be something like $10^{20}$ or more (the state of all neuron and synaptic variables), the sensory input space is even bigger (all possible things we could experience from the outside world), and the space of possible responses (everything we can say or do) is just as unlimited. So, how can you possibly program or even store such a massive lookup table?
How can you make the brain program run in real time? This is a computational tractability question. This is where I see how consciousness could be useful. Consciousness may be an algorithmic trick to speed up the brain program. For example, in wiring up the brain function or table (where I've purposely mixed metaphors), the various autonomic, sensory, motor and executive functions must be connected to each other (if you want to be concrete, let's say that there must be logical connectives between the states of these different components). You could program this in multiple ways. You could design the system so that you first account for pairwise interactions between the components (i.e. if eyes see X and ears hear Y then do Z), then consider three-component interactions (i.e. if eyes see R, ears hear S, nose smells T, then do U), then four, then five, and so forth. You can imagine that this brute-force way would be a very inefficient way to code things. An alternative is to have all the components connect to a common bulletin board. The board gets updated each time a component posts to it. If other components need to act, they simply check what's on the board and act on that information. The nice thing about this approach is that if new components get added, they just need to tap into the bulletin board. In the brute-force way, interactions with all the other components would have to be wired in separately. The tricky part is how the board gets updated. This is what I think consciousness is. It is the super duper compressed summary of the current state of the brain and inputs. This is the running dialog in my head. That is why we can watch a movie and give a review in 140 characters – something that computers are nowhere close to being able to do right now. I think evolution tapped into this trick for nervous systems pretty early on, so perhaps all life forms have consciousness in the sense of an ongoing compressed summary of their current state.
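The bulletin-board design described above is essentially the classic blackboard architecture from software engineering. A toy sketch (my own illustration, not from the post): components publish to a shared board, and other components react to the combined state without being wired to each sensor pairwise.

```python
class Blackboard:
    """Shared board: components post summaries; subscribers read them."""
    def __init__(self):
        self.state = {}
        self.subscribers = []

    def post(self, key, value):
        self.state[key] = value
        for react in self.subscribers:
            react(dict(self.state))

board = Blackboard()
log = []

# A "motor" component reacts to the combined summary on the board,
# not to each sensor separately.
def motor(state):
    if state.get("eyes") == "predator" and state.get("ears") == "growl":
        log.append("flee")

board.subscribers.append(motor)
board.post("eyes", "predator")   # nothing happens: ears not heard yet
board.post("ears", "growl")      # combined summary now triggers the action
print(log)
```

Adding a new component here means appending one subscriber, not rewiring every pairwise interaction, which is the point of the paragraph above.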
Differences in animals would then be quantitative – bigger and faster. It is possible that there are bifurcations or phase transitions in the operation as a function of size and speed, so that quantitative increases lead to abrupt qualitative differences in performance. So an insect's consciousness is qualitatively different from ours. I also think that artificial minds that can emulate human-like tasks in real time may require this design. It might be that, as a purely computational tractability issue, artificial intelligence and artificial consciousness cannot be separated.

Acknowledgments: Many of these ideas were inspired by an article in IEEE Spectrum by Christof Koch and Giulio Tononi.

This entry was posted on July 3, 2009 at 11:23 and is filed under Neuroscience, Philosophy.

### 8 Responses to "Why consciousness?"

1. Romain Says:
July 8, 2009 at 07:47
Hi, I may have missed your point, but how do you link the two definitions of consciousness you gave:
- the sense of self awareness that I personally have
- the super duper compressed summary of the current state of the brain and inputs
R.

2. Carson Chow Says:
July 8, 2009 at 08:36
Well, obviously this is total speculation, but I think that this sense of self awareness is a consequence of the existence of a compressed summary. How this works exactly, I don't know. -cc

3. Carson Chow Says:
July 8, 2009 at 08:53
Hi Romain, Let me expand some more on this issue. I was trying to suggest that the describable part of "self-awareness" is the compressed summary of our current state. For example, the running dialog in my head is a manifestation of the summary that is a low fidelity recapitulation of auditory and vocal processes. The internal "visual display" I have is in a form that is convenient for my motor system to use in setting gaze direction and planning movement.
My emotional states, like joy, agitation, fear, etc., provide a summary for future actions. So, all aspects of my self-awareness seem to be framed in terms that are convenient for various neural systems to use. This probably doesn't explain what you really want to know, but I think it gives some sense of why consciousness would be useful from a computational standpoint. best, cc

4. sjf Says:
August 13, 2009 at 09:30
"At this purely mechanistic level, there is no free will." IMHO the problem for materialism isn't free will, which I can believe is an "illusion," but rather qualia, which I can't.

5. Carson Chow Says:
August 13, 2009 at 12:01
So are you arguing against materialism?

6. Eric Says:
August 2, 2010 at 12:00
Carson, I like what you have to say, but I'm not sure it really gets at the "Why" of consciousness. It seems like you are saying consciousness is necessary to solve a computational problem. It seems that puts the computational problem ahead of the evolutionary problem of why bother to evolve this computationally complex brain in the first place. The answer, IMHO, is that consciousness is there in order to allow us more behavioral flexibility. What I mean by this is that non-conscious (or less conscious) creatures might function more like: if X and Y, then do Z. Organisms with consciousness are able to say: if X and Y, then do Z or A or B – this is what our computationally and biologically expensive brains "buy us" in evolutionary terms. Thoughts?

7. Carson Chow Says:
August 2, 2010 at 12:22
Hi Eric, The question of why anything evolves is difficult if not impossible to answer. If consciousness gives a computational advantage, and that advantage is useful for increasing fitness, then it will fix in a population if some chance event gets it started. I think it would be an advantage for creatures that predate and actively avoid predation.
I think you are correct that consciousness does give more flexibility, but I think that even before you get to flexibility, it allows you to address a combinatorial explosion of possibilities. So an animal couldn't even do a logical task with more than a few states without it.

8. Scientific Clearing House Says:
September 25, 2010 at 20:50
[...] in cognitive science right now. Many of my views on consciousness, which I partly summarized here, have been strongly influenced by his [...]
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 1, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9282931089401245, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/99270/a-simple-question-about-the-difference-between-directional-derivatives-and-deriv
# A simple question about the difference between directional derivatives and derivatives

As in the title, let $\mathbf{u}=u\hat{i}+v\hat{j}$. Why is $$D_ug(0,0) = \lim_{h\rightarrow0}\frac{g(hu,hv)-g(0,0)}{h}$$ while $$Dg(0,0)= \lim_{(h,k)\rightarrow (0,0)}\frac{g(h,k)-L(0,0)}{\sqrt{h^2+k^2}}?$$ That is, why don't we need to use the linearization in the case of directional derivatives?

- Some notes on formatting: To type a vector in bold you can use \mathbf{ }. For the unit vectors you can use \hat{ }. Also use \lim for limits. – BenjaLim Jan 15 '12 at 14:13

## 1 Answer

I will assume that $L(0,0) = g(0,0)$; otherwise, the error in the second definition will be elsewhere, but I will need the definition of $L$ to pinpoint it. The problem is that the second definition is false (in that it does not match the usual definition of the derivative). For instance, if we take $g(h,k) = h$, we get: $$Dg (0,0) = \lim_{(h,k) \to (0,0)} \frac{h}{\sqrt{h^2+k^2}},$$ but this limit does not exist: if $h=0$ and $k$ goes to $0$, then the limit is $0$, but if $k=0$ and $h$ goes to $0$ then this limit is $\pm 1$. So, with this definition, even this function $g$ would not be differentiable! The definition of the derivative $Dg (0,0)$ of $g$ in $(0,0)$ is that it is (if it exists) a linear operator from $\mathbb{R}^2$ to $\mathbb{R}$ such that: $$\lim_{(h,k) \to (0,0)} \frac{g(h,k) - g(0,0) - Dg (0,0) (h,k)}{\sqrt{h^2+k^2}} = 0,$$ or equivalently: $$g(h,k) =_{(0,0)} g(0,0) + Dg (0,0) (h,k) + o (\|(h,k)\|).$$ If the derivative of $g$ exists with this definition, then $D_{\mathbf{u}} g (0,0) = Dg (0,0) \mathbf{u}$ [exercise], so that this definition of the derivative and the definition of the directional derivative are coherent. Note, however, that the converse is not true: the fact that the directional derivatives exist does not imply that a ("full") derivative exists.
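The path dependence in the counterexample $g(h,k)=h$ is easy to see numerically (a quick sketch, not part of the original answer): the quotient $h/\sqrt{h^2+k^2}$ settles to a different value along each line of approach to the origin.

```python
import math

def ratio(h, k):
    """The quotient from the flawed definition, for g(h, k) = h."""
    return h / math.hypot(h, k)

# Approach (0, 0) along the h-axis: the quotient stays at 1.
along_h = [ratio(10 ** -n, 0.0) for n in range(1, 6)]

# Approach along the k-axis: the quotient stays at 0.
along_k = [ratio(0.0, 10 ** -n) for n in range(1, 6)]

# Approach along the diagonal h = k: the quotient stays at 1/sqrt(2).
diag = [ratio(10 ** -n, 10 ** -n) for n in range(1, 6)]

print(along_h, along_k, diag)
```

Since the values depend on the direction of approach, the two-variable limit does not exist, which is exactly the answer's point.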
- I think it is a clear answer, but I still have one question: why would people compute $D_ug$ from $$D_ug(0,0) = \lim_{h\rightarrow0}\frac{g(hu,hv)-g(0,0)}{h}$$ rather than from $$\lim_{(h,k) \to (0,0)} \frac{g(h,k) - g(0,0) - Dg (0,0) \mathbf{u}}{\sqrt{h^2+k^2}} = 0,$$ since, as we all know, $$\lim_{(h,k) \to (0,0)} \frac{g(h,k) - g(0,0) - Dg (0,0) \mathbf{u}}{\sqrt{h^2+k^2}}$$ is not always equal to $0$ in general? – Mathematics Jan 15 '12 at 14:42
- Sorry, I fear I do not understand this question. – D. Thomine Jan 15 '12 at 15:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 8, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9168813228607178, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2007/04/23/homomorphisms-of-modules/?like=1&source=post_flair&_wpnonce=a832ba55c5
# The Unapologetic Mathematician

## Homomorphisms of modules

Again, once we have a new structure it's important to understand what sort of functions connect different instances of the structure. For modules we have module homomorphisms. By now this should be pretty straightforward. A left $R$-module $M$ is an abelian group equipped with an action of $R$. A homomorphism of $R$-modules is a homomorphism of abelian groups $f:M\rightarrow N$ that also preserves the action of $R$. That is, it's a linear function satisfying $f(r\cdot m)=r\cdot f(m)$ for all $r\in R$ and $m\in M$. We call such a function $R$-linear. In terms of the tensor product picture, any linear function $f:M\rightarrow N$ gives us a linear function $1_R\otimes f:R\otimes M\rightarrow R\otimes N$. The condition of $R$-linearity is that the square built from these maps commute: the horizontal arrows are the actions of $R$ on $M$ and $N$, respectively, and the vertical arrows are $1_R\otimes f$ and $f$. Now if we pick two left $R$-modules $M$ and $N$ we have a bunch of different $R$-module homomorphisms from $M$ to $N$. We'll call the set of them $\hom_{R{\rm -mod}}(M,N)$, sometimes shortened to $\hom_R(M,N)$, or even just $\hom(M,N)$. The important thing about this set is that it inherits the structure of an abelian group from the one on $N$. There's always a homomorphism $0:M\rightarrow N$ sending every element of $M$ to the zero element of $N$. Also, given homomorphisms $f$ and $g$ we can define $\left[f+g\right](m)=f(m)+g(m)$. It's straightforward to check that this preserves the action of $R$ as long as $f$ and $g$ both do. Finally, there's a homomorphism defined by $\left[-f\right](m)=-f(m)$. We can easily see that these operations on $\hom(M,N)$ satisfy the axioms of an abelian group. In fact it gets even better. Not only are the homomorphism sets abelian groups, but composition is bilinear! Let's consider four homomorphisms between three modules $f_1:A\rightarrow B$, $f_2:A\rightarrow B$, $g_1:B\rightarrow C$, and $g_2:B\rightarrow C$.
Then we build up the composition $(g_1+g_2)\circ(f_1+f_2)$. Let's see what it does to an element $a\in A$. $\left[(g_1+g_2)\circ(f_1+f_2)\right](a)=\left[g_1+g_2\right](\left[f_1+f_2\right](a))=$ $\left[g_1+g_2\right](f_1(a)+f_2(a))=g_1(f_1(a)+f_2(a))+g_2(f_1(a)+f_2(a))=$ $g_1(f_1(a))+g_1(f_2(a))+g_2(f_1(a))+g_2(f_2(a))=$ $\left[g_1\circ f_1+g_1\circ f_2+g_2\circ f_1+g_2\circ f_2\right](a)$ This gives us a linear function $\hom(A,B)\otimes\hom(B,C)\rightarrow\hom(A,C)$ for every three modules $A$, $B$, and $C$. What happens if we pick all three modules to be the same one? Each homomorphism set is $\hom_R(M,M)$, which we'll call ${\rm End}_R(M)$. Then we get a linear function ${\rm End}_R(M)\otimes{\rm End}_R(M)\rightarrow{\rm End}_R(M)$. This is a ring structure! We call it the ring of $R$-endomorphisms of $M$. If the ring $R$ is the ring of integers and $A$ is an abelian group, then ${\rm End}_{\mathbb{Z}}(A)$ is just the endomorphism ring ${\rm End}(A)$ we considered earlier. This is an example of how the theory of modules naturally extends the theory of abelian groups.

Posted by John Armstrong | Ring theory

## 1 Comment »

1. [...] and direct sum We now have three ways of putting modules together: the abelian group of left -module homomorphisms, the tensor product of a right -module and a left -module , and the direct sum of two left [...] Pingback by | May 1, 2007 | Reply
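For concreteness, homomorphisms of finitely generated free $\mathbb{Z}$-modules are just integer matrices, with composition as matrix multiplication, so the bilinearity worked out above can be checked directly (an illustrative sketch, not part of the original post):

```python
def mat_add(A, B):
    """Pointwise sum of homomorphisms: [f + g](m) = f(m) + g(m)."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_mul(A, B):
    """Composition of Z-module homomorphisms A∘B as a matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Homomorphisms f1, f2 : Z^2 -> Z^2 and g1, g2 : Z^2 -> Z^2 as matrices.
f1, f2 = [[1, 2], [0, 1]], [[0, 1], [1, 0]]
g1, g2 = [[2, 0], [1, 1]], [[1, 1], [0, 3]]

# (g1 + g2) ∘ (f1 + f2) expands into the four cross terms.
lhs = mat_mul(mat_add(g1, g2), mat_add(f1, f2))
rhs = mat_add(mat_add(mat_mul(g1, f1), mat_mul(g1, f2)),
              mat_add(mat_mul(g2, f1), mat_mul(g2, f2)))
print(lhs == rhs)
```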
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 59, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8670573234558105, "perplexity_flag": "head"}
http://crypto.stackexchange.com/questions/412/verify-product-without-revealing-multipliers/465
# Verify product without revealing multipliers

Situation: Several participants contribute encrypted random numbers. These numbers will be used to generate a community-agreed random value (by simple multiplication).

Question: Is there any way to detect identical product values without revealing the multipliers? Maybe it is possible to use some homomorphic encryption?

What I originally want to do: The task is to eliminate identical random values (results) generated in different rounds. There should be some history log, and we should regenerate the random result if we have already generated an equal one.

- I suggest you use addition or XOR instead of multiply. If one of the random numbers turns out to be zero then multiply is not a good thing to do. – rossum Aug 16 '11 at 11:06
- @rossum - Thanks. Definitely, you're right. – Andrei Petrenko Aug 16 '11 at 11:07
- This question is poorly posed. It doesn't explain what you're really trying to achieve. It assumes a particular approach (based upon multiplying submitted values). That approach happens to be flawed. You should tell us what you're really trying to achieve, without making assumptions about what the solution will look like, and we'll tell you the best approach. Also, the question doesn't explain why you want to check for duplicate submissions (I suspect I know why -- I suspect you're trying to defeat a particular attack -- but stopping that attack isn't enough; the approach is still insecure). – D.W. Aug 19 '11 at 5:35

## 3 Answers

You are asking the wrong question. You shouldn't do it the way you described (by multiplying or adding random numbers submitted by the participants). This problem has been well-studied, and there are solutions to it. Your question assumes a particular approach to the problem, but that approach turns out to be flawed. Your approach is vulnerable to numerous security problems. For instance, if you multiply, any one participant can force the final result to be 0 by just submitting 0 as their contribution.
As another example, if you add, any one participant can force the final result to be anything they want, by waiting for everyone else to submit their contributions, looking at their contributions, and then choosing their own contribution so they all sum to the desired result. Instead, if you want to jointly generate a community random value that none of you can influence, here is what you should do:

1. Each party picks a random value, and publicly commits to it. In detail: the party $P_i$ should pick a 128-bit random value $r_i$ and broadcast $y_i = \operatorname{Hash}(r_i, P_i)$.
2. After everyone has received everyone else's commitments ($y_i$ values), then each party should open their commitment. In detail: once party $P_i$ has received all $n-1$ commitments, he/she broadcasts $r_i$. Everyone checks that each published $r_i$ value is consistent with the earlier commitment $y_i$. If anyone detects any inconsistency, or if anyone doesn't finish the protocol, you have to call the whole thing off and punish whoever didn't follow the instructions.
3. Finally, compute $R = \operatorname{Hash}(r_1, r_2, \ldots, r_n)$. The value $R$ is the random value that everyone has jointly generated.

The security property we get is that no one party, or no coalition of a subset of the parties, can influence the final random number $R$, except by refusing to finish the protocol. (You'll have to have a separate out-of-band way to punish parties who don't follow instructions or don't finish the protocol. The scheme above describes how to detect such parties; it will be up to you to adequately disincentivize such behavior.
If a malicious party $P_j$ is willing to possibly decline to finish the protocol, they can exert modest influence over the final random number $R$, simply by waiting to be last to reveal their random number $r_j$, checking what $R$ would be if they finished the protocol, and determining whether it is more favorable to them to finish the protocol and end with random number $R$ or to refuse to finish the protocol and leave everyone with no random number.) - What I'll describe works with any homomorphic scheme, whether multiplicative (Elgamal) or additive (Paillier; maybe exponential Elgamal or BGN depending), but I'll describe it with multiplicative. I assume what you mean is something like this: you have, say, five people. They all generate a random value $r_i$, and post the encryption of it: $c_i=\mathsf{Enc}(r_i)$. If you multiply all the $c_i$'s together, you get an encryption of all the $r_i$'s multiplied together, and assuming one party is honest (chooses a truly random $r_i$ and does not reveal it), the result is random. Note that if you submit $r_i = 0$, this will not be a valid ciphertext in Elgamal (so parties should also check that each $c_i$ is in $\mathbb{G}_q$). I am not clear on your question: is it, how can I tell if two people submit the same $r_i$? If so, there is an expensive (quadratic) way of telling. Take two $c_i$ values to test, say $c_j$ and $c_k$. Divide (i.e., invert and multiply) them. This will give you: $c_d=\mathsf{Enc}(r_j/r_k)=\mathsf{Enc}(d)$. If they are the same, $d$ will be $1$ and $c_d=\mathsf{Enc}(1)$. If they are different, $d$ is not $1$ (and is equal to their quotient). You could decrypt $c_d$ and see if it is $1$ or not; however, if it isn't $1$, this will leak some information about $r_j$ and $r_k$: namely their quotient. So the trick is to have your five people all generate another random value, $b_i$, and exponentiate $c_d$ by it: ${c_d}^{b_i}=\mathsf{Enc}(d^{b_i})$.
If $d$ is one, exponentiating it by a random value will still result in $1$. If it is not $1$, exponentiating it by an honestly chosen random value will (overwhelmingly) result in a random value that is not $1$ and that leaks no information about the original values. Have each of your five people do this independently and then have the holder of the decryption key decrypt the result (ideally the key would be distributed among the 5 people). Three remarks:

1. This is expensive. For $n$ people and thus $n$ values of $r_i$, you have to do $n^2$ comparisons, and each comparison involves the $n$ people doing a modular exponentiation (plus the decryption cost).
2. This is assuming the parties are honest but curious (they will follow the protocol but are happy to learn anything they can about the values). You can use some basic zero-knowledge proofs to enforce that everyone behaves honestly (for example, that they actually apply an exponent $b_i$ to the ciphertext instead of making a brand new ciphertext and claiming it is the result of their exponentiation).
3. It shouldn't matter if two people submit the same $r_i$ value. Maybe you are concerned with people submitting the same values if you repeat this protocol to generate a couple of values? If so, you can do the same tests between a person's submissions in each round.

Edit: the method of exponential blinding I described works to test the equality of any two plaintexts (encrypted under the same public key). Each time the protocol is executed, you can take the result and run the test against each of the previously generated values. This requires fewer tests than comparing individual contributions to each other. - If the five people could all jointly generate a random value by some means, we wouldn't need this protocol in the first place -- we'd just use that means to jointly generate the random value. – D.W.
Aug 19 '11 at 5:33 1 Agreed, but the question does say "without revealing multipliers", which I took to mean that each party's contribution remains private. I'm not sure why that is needed; I just took the requirement at face value and worked within its additional constraint. If it were only the case of generating a fair random value, then the coin flipping protocol in your answer suffices and is more efficient. – PulpSpy Aug 19 '11 at 13:51 @PulpSpy "If they are the same, $d$ will be $1$", that implies a deterministic scheme with no randomness. Is that the case with the participants when choosing different keys? – curious Apr 11 at 9:08 @PulpSpy I mean that if you can observe whether or not two ciphertexts came from the same plaintext encrypted with ElGamal, then this leaks some info and the scheme cannot be treated as semantically secure. Or am I wrong? – curious Apr 11 at 9:32 Homework? Try using ElGamal encryption. I.e., given an ElGamal ciphertext that encrypts an integer $m$ and your own factor $f$, try finding a way to generate another ElGamal ciphertext that encrypts $f·m$. One difficulty is doing this knowing only the public key. Another difficulty is to ensure that an attacker who knows the encryption of $m$ and the encryption of $f·m$ is not able to find $f$. Finally, you also need to compare ciphertexts for equality. This requires access to the private key, which you should share among the participants. -
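The commit-then-reveal protocol recommended in the first answer is small enough to sketch in code. This is a local simulation for illustration only: SHA-256 stands in for the unspecified Hash, and the three-party setup and party names are invented.

```python
# Sketch of the commit-then-reveal protocol from the first answer,
# simulated locally.  SHA-256 stands in for the unspecified Hash;
# the party names and 3-party setup are illustrative assumptions.
import hashlib
import secrets

def commitment(r: bytes, party_id: str) -> str:
    """y_i = Hash(r_i, P_i): binds a party to its contribution r_i."""
    return hashlib.sha256(r + party_id.encode()).hexdigest()

parties = ["P1", "P2", "P3"]

# Step 1: each party picks a 128-bit random r_i and broadcasts y_i.
r = {p: secrets.token_bytes(16) for p in parties}
y = {p: commitment(r[p], p) for p in parties}

# Step 2: once all commitments are in, each party reveals r_i, and
# everyone checks each reveal against the earlier commitment y_i.
for p in parties:
    assert commitment(r[p], p) == y[p], f"{p} broke the protocol"

# Step 3: the joint random value R = Hash(r_1, r_2, ..., r_n).
R = hashlib.sha256(b"".join(r[p] for p in parties)).hexdigest()
print(R)
```

Because every $y_i$ is broadcast before any $r_i$ is revealed, no party can choose its contribution as a function of the others'; the residual weakness (aborting after seeing the other reveals) is exactly the one discussed at the end of that answer.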
http://mathoverflow.net/questions/27749/what-are-some-correct-results-discovered-with-incorrect-or-no-proofs/27753
## What are some correct results discovered with incorrect (or no) proofs? Many famous results were discovered through non-rigorous proofs, with correct proofs being found only later and with greater difficulty. One that is well known is Euler's 1737 proof that $1+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{4^2}+\cdots =\frac{\pi^2}{6}$ in which he pretends that the power series for $\frac{\sin\sqrt{x}}{\sqrt{x}}$ is an infinite polynomial and factorizes it from knowledge of its roots. Another example, of a different type, is the Jordan curve theorem. In this case, the theorem seems obvious, and Jordan gets credit for realizing that it requires proof. However, the proof was harder than he thought, and the first rigorous proof was found some decades later than Jordan's attempt. Many of the basic theorems of topology are like this. Then of course there is Ramanujan, who is in a class of his own when it comes to discovering theorems without proving them. I'd be interested to see other examples, and in your thoughts on what the examples reveal about the connection between discovery and proof. Clarification. When I posed the question I was hoping for some explanations for the gap between discovery and proof to emerge, without any hinting from me. Since this hasn't happened much yet, let me suggest some possible explanations that I had in mind: Physical intuition. This lies behind results such as the Jordan curve theorem, the Riemann mapping theorem, and Fourier analysis. Lack of foundations. This accounts for the late arrival of rigor in calculus, topology, and (?) algebraic geometry. Complexity. Hard results cannot be proved correctly the first time, only via a series of partially correct, or incomplete, proofs. Example: Fermat's last theorem. I hope this gives a better idea of what I was looking for. Feel free to edit your answers if you have anything to add.
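For reference, the Euler heuristic described in the question fits in two displayed lines; the infinite-product step is the unjustified one, since $\frac{\sin\sqrt{x}}{\sqrt{x}}$ vanishes exactly at $x = n^2\pi^2$:

```latex
\frac{\sin\sqrt{x}}{\sqrt{x}}
  = 1 - \frac{x}{3!} + \frac{x^2}{5!} - \cdots
  = \prod_{n=1}^{\infty}\left(1 - \frac{x}{n^2\pi^2}\right),
\qquad\text{and matching coefficients of } x:\quad
\frac{1}{3!} = \sum_{n=1}^{\infty}\frac{1}{n^2\pi^2}
\;\Longrightarrow\;
\sum_{n=1}^{\infty}\frac{1}{n^2} = \frac{\pi^2}{6}.
```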
- 5 I was thinking also of stuff like Witten. – Steve Huntsman Jun 11 2010 at 1:01 10 In Tom Hales's account of Jordan's proof, he states that there is essentially no problem with Jordan's original proof, and that claims to the contrary are themselves wrong or based on misunderstandings. As far as I can tell, he is correct, and there is no reason to impugn Jordan's original proof. (See "Jordan's proof of the Jordan curve theorem" at math.pitt.edu/~thales/papers ) – Emerton Jun 11 2010 at 2:49 1 @Emerton. I stand corrected. Maybe Jordan's proof should be in the same category as Heegner's: thought to be incorrect, but essentially correct when properly understood. – John Stillwell Jun 11 2010 at 3:09 10 A further remark: I think that it is important to distinguish between polishing an argument, or perhaps interpreting it in terms of contemporary language and formalism, which will almost always be required when reading arguments (especially subtle ones) from 100 or more years ago, and genuinely incomplete arguments. As an example of the latter, one can think of Riemann's arguments with the Dirichlet principle, where this result was simply taken as an axiom. Additional work was genuinely required to validate the Dirichlet principle, and thus complete Riemann's arguments. – Emerton Jun 11 2010 at 5:59 3 I would argue that (although it came after the drive for rigor had already started thanks to Cantor, Weierstrass, et al.) the dawn of modern statistical and quantum physics had a great deal to do with the consolidation of rigor throughout mathematics. Indeed, ergodic theory and functional analysis owe a great deal to these disciplines, and neither could have existed in the time of (say) Euler because the approach to mathematics was different. – Steve Huntsman Jun 11 2010 at 12:32 ## 35 Answers Heegner's proof in 1952 that there is no tenth imaginary quadratic field of class number one is an interesting example.
It was thought to be incorrect because of some gaps. Stark gave a correct proof in 1967 and explained how it was essentially the same as Heegner's proof. In 1969 Stark formally filled in the gap in Heegner's proof. Heegner "died before anyone really understood what he had done" (Goldfeld). This information comes from http://en.wikipedia.org/wiki/Stark-Heegner_theorem. - In 1905, Lebesgue gave a "proof" of the following theorem: If $f:\mathbb{R}^2\to\mathbb{R}$ is a Baire function such that for every $x$, there is a unique $y$ such that $f(x,y)=0$, then the thus implicitly defined function is Baire. He made use of the "trivial fact" that the projection of a Borel set is a Borel set. This turns out to be wrong, but the result is still true. Souslin spotted the mistake, and named continuous images of Borel sets analytic sets. So a mistake of Lebesgue led to the rich theory of analytic sets. Lebesgue seemingly enjoyed this fact and mentioned it in the foreword to a book of Souslin's teacher Lusin. - 1 This is an interesting category of incorrect proof: where the mistake is actually fruitful. I'd like to see more of these! – John Stillwell Jun 11 2010 at 5:57 2 Try mathoverflow.net/questions/879/… . – Qiaochu Yuan Jun 11 2010 at 8:36 This identity is still not proven: $$\sum_{n=0}^\infty \left(\frac{1}{(7n+1)^2}+\frac{1}{(7n+2)^2}-\frac{1}{(7n+3)^2}+\frac{1}{(7n+4)^2}-\frac{1}{(7n+5)^2}-\frac{1}{(7n+6)^2}\right)=\frac{24}{7\sqrt{7}}\int_{\pi/3}^{\pi/2} \log \left| \frac{\tan t + \sqrt{7} }{\tan t - \sqrt{7} } \right| dt$$ It arose from physical applications. - 1 Very interesting. Any references? – Andrey Rekalo Nov 5 2010 at 8:14 6 From here: crd.lbl.gov/~dhbailey/dhbpapers/math-future.pdf It has been verified up to 20,000 digits.
– Anixx Nov 5 2010 at 8:23 21 For what it's worth, the left hand side is the value of the Dirichlet L-series for the nontrivial character with conductor $7$ at $s = 2$. – Franz Lemmermeyer Nov 5 2010 at 14:03 1 The left hand side is also the Legendre symbol $\displaystyle \biggl(\frac{n}{7}\biggr)$ – Chandrasekhar Jul 7 at 7:36 The Nielsen realization problem. Let $S$ be a compact oriented topological surface and let $\text{Mod}(S)$ be its mapping class group, i.e., the group of orientation-preserving diffeomorphisms of $S$ modulo isotopy. There is a natural surjection $\text{Diff}^+(S) \rightarrow \text{Mod}(S)$. The Nielsen realization problem was the conjecture (due to Jacob Nielsen) that every finite subgroup of $\text{Mod}(S)$ can be lifted to a finite subgroup of $\text{Diff}^+(S)$ (and thus is a subgroup of the group of automorphisms of a Riemann surface). Nielsen proved this for finite cyclic subgroups (this is very nontrivial!), and a number of other people slowly chipped away at other classes of finite subgroups. In 1959, Kravetz published a paper which purported to prove that Teichmuller space is negatively curved. A "center of mass" argument would then establish that every finite subgroup of $\text{Mod}(S)$ fixes a point in Teichmuller space, and it then follows easily that the finite subgroup can be lifted to $\text{Diff}^+(S)$. This was an important result, and Kravetz's paper was frequently quoted. However, in 1971 Linch pointed out in his thesis that Kravetz's paper had an error! In fact, in his 1974 thesis Howie Masur proved that Teichmuller space is not negatively curved (in a pretty strong sense). Finally, in 1980 Steve Kerckhoff proved that Teichmuller space, while not negatively curved, does satisfy a subtle negative-curvature-like property which gave the desired result.
Almost a poster child for Lamport's thesis that much of the mathematical literature is shot through with errors in proofs. – Todd Trimble Aug 27 2011 at 18:20 The "Yamabe problem": Every compact Riemannian manifold admits a conformally-related metric with constant scalar curvature. Yamabe thought he had proved this in 1960, but his proof had--I'm not making this up--a sign error. The error was discovered by Neil Trudinger in 1968, after Yamabe's death. As I understand it, Trudinger was working on a similar nonlinear elliptic PDE problem (with a critical Sobolev exponent) and got stuck, so he looked at Yamabe's paper to see how Yamabe had dealt with the same issue. Turned out he hadn't. Trudinger was able to give a partial solution to the problem; later Aubin expanded it to cover more cases, and finally in 1984 Rick Schoen was able to prove the cases that Aubin had left open (with a small gap in the higher-dimensional case that was repaired by Schoen and Yau in 1988). The proof surprisingly used the positive mass theorem from general relativity. Yamabe's original paper never attracted much attention until the error was found. But because of the subtlety of the methods required to fill in the gap, it has become a model for applications of nonlinear elliptic PDE to geometry, especially to conformally invariant problems and other problems with critical regularity. - 1 Interesting! I had heard of this theorem before, but not of the error, or that it used the positive mass theorem. Does this mean there's some non-physics geometrical insight behind the positive mass theorem? I had always thought of this as being true as a consequence of the dominant energy condition, which is a very "physical" condition to require. – jeremy Jun 12 2010 at 2:26 2 Also, I wouldn't say that "Schoen was able to prove the whole theorem".
Aubin proved it for all dimensions $\geq 6$ when $M$ is not locally conformally flat, and Schoen proved it for dimensions 3, 4, and 5 and all locally conformally flat manifolds. In fact, it is a curious fact that Schoen's proof doesn't work in the cases where Aubin's worked. (Dimension 1 has no curvature, and dimension 2 follows from the uniformization theorem.) – Willie Wong Jul 8 2010 at 21:39 3 A very readable account of the history of the Yamabe problem is available ams.org/journals/bull/1987-17-01/… – Willie Wong Jul 8 2010 at 21:45 There are at least two Hilbert problems that were considered to be solved, but the proofs turned out to be incomplete, as pointed out by Yulii Ilyashenko.

1. In 1923 Dulac published a 140+ page memoir purporting to show that a polynomial vector field on the plane has only finitely many limit cycles, the second part of the 16th Hilbert problem. The memoir was difficult to read, but the claim was generally accepted until 1981, when Ilyashenko found a serious gap. Full proofs were obtained independently by Écalle and Ilyashenko around 1991. Read the full story.
2. Existence of linear differential equations having a prescribed monodromy group was the subject of the 21st Hilbert problem, also known as the Riemann-Hilbert problem. From the Wikipedia article: Josip Plemelj published a solution in 1908. This work was for a long time accepted as a definitive solution; there was work of G. D. Birkhoff in 1913 also, but the whole area, including work of Ludwig Schlesinger on isomonodromic deformations that would much later be revived in connection with soliton theory, went out of fashion. Plemelj produced a 1964 monograph Problems in the Sense of Riemann and Klein (Pure and Applied Mathematics, no. 16, Interscience Publishers, New York) summing up his work. A few years later the Soviet mathematician Yuliy S. Il'yashenko and others started raising doubts about Plemelj's work.
In fact, Plemelj correctly proves that any monodromy group can be realised by a regular linear system which is Fuchsian at all but one of the singular points. Plemelj's claim that the system can be made Fuchsian at the last point as well is wrong. (Il'yashenko has shown that if one of the monodromy operators is diagonalizable, then Plemelj's claim is true.) Indeed, in 1989 the Soviet mathematician Andrey A. Bolibrukh (1950–2003) found a counterexample to Plemelj's statement. This is commonly viewed as providing a counterexample to the precise question Hilbert had in mind; Bolibrukh showed that for a given pole configuration, certain monodromy groups can be realised by regular, but not by Fuchsian, systems. - 2 I was just starting to wonder about wrong statements with wrong proofs that everyone believed for a long time. Nice examples! – Paul Siegel Jun 11 2010 at 12:18 Riemann's flawed proof of the Riemann mapping theorem, which crucially relied on Dirichlet's principle. The theorem was stated by Bernhard Riemann in 1851 in his PhD thesis. Lars Ahlfors wrote once, concerning the original formulation of the theorem, that it was “ultimately formulated in terms which would defy any attempt of proof, even with modern methods”. Riemann's flawed proof depended on the Dirichlet principle (whose name was coined by Riemann himself), which was considered sound at the time. However, Karl Weierstraß found that this principle was not universally valid. Later, David Hilbert was able to prove that, to a large extent, the Dirichlet principle is valid under the hypotheses that Riemann was working with. The first proof of the theorem is due to Constantin Carathéodory, who published it in 1912. His proof used Riemann surfaces, and it was simplified by Paul Koebe two years later in a way which did not require them.
– John Stillwell Jun 11 2010 at 0:47 4 The earliest known correct proof of the Riemann mapping theorem appears in a paper of William Osgood in 1900. It is in volume 1 of the Transactions of the AMS. – Mohan Ramachandran Jun 11 2010 at 16:32 The fundamental theorem of algebra was given incomplete proofs by d'Alembert, Euler, Lagrange, Laplace, and Gauss: http://en.wikipedia.org/wiki/Fundamental_theorem_of_algebra - Dehn's lemma was given an incorrect proof by Dehn in 1910; only in 1956 was a true proof found. - 15 I suggest renaming it "Dehn's lemon". – Victor Protsak Jun 11 2010 at 7:50 7 The proof was found by C. Papakyriakopoulos, right? It seems only fair to mention his name! – Pete L. Clark Jun 22 2010 at 14:26 7 That's right, and I should have mentioned him. People with names longer than Nakayama's probably have a hard time getting things named after them. – paul Monsky Jun 22 2010 at 22:30 When Stephen Smale was a graduate student, he thought he had a proof of the Poincaré Conjecture as follows: Take a compact simply-connected 3-manifold $M$ and remove the interiors of two disjoint 3-balls to get, say, $M_1$ having as boundary two copies of $S^2$. It is easy to show that $M_1$ has a nonsingular vector field entering along one $S^2$ and exiting along the other. Clearly, by the simply-connectedness of $M$, each orbit entering on one boundary component must exit on the other one. Thus $M_1$ must be $S^2 \times [0,1]$ and hence, by replacing the removed 3-balls, $M$ must have been $S^3$. QED. I'm not sure who first pointed out the error, but undoubtedly understanding examples like this helped him appreciate the subtlety of the problem and ultimately prove the Generalized Poincaré Conjecture for dimensions $\geq 5$.
In fact there exist orientable 1-foliations (which result from $C^1$ nonsingular vector fields as the solutions to the corresponding ODE) on even $S^2 \times [0,1]$ that are entering on one boundary component and exiting on the other, without every trajectory that enters on one boundary component exiting on the other one. – Daniel Asimov Jun 12 2010 at 0:30 2 (cont'd) This can be achieved by starting with the canonical flow on $S^2 \times [0,1]$ (i.e., the one parallel to $[0,1]$) and introducing a "plug" -- a copy of $S^1 \times [0,1] \times [0,1]$ -- on which the flow is altered. See, for instance, Plugging Flows by Percell and Wilson. For those with access, at < jstor.org/stable/pdfplus/1997824.pdf >. – Daniel Asimov Jun 12 2010 at 0:34 6 I heard Smale tell a version of this story at the Clay conference in Paris a couple of months ago. He got interested in the Poincaré conjecture and spent a night coming up with a simple proof. The next morning he went to his advisor and explained the details, and all the time his advisor just sat there silent and nodded from time to time. Smale left the meeting a little frustrated that his proof hadn't been met with more interest, until he realized later that day that he had never used the hypothesis of simple connectedness. But yeah, he did say that this helped him in the proof for $n \geq 5$. – Gunnar Magnusson Jul 9 2010 at 9:47 I submit for your consideration Euclid's fifth postulate. Given the amount of effort taken by people to prove it from his four other postulates, I consider it canny or lucky that Euclid chose to keep it as an axiom. Of course, it took unusual thinking and some discovery to realize that Euclid's fifth was indeed independent of the other postulates. - The four-color theorem. - 3 Is this long list of two word answers OK? This almost feels like some sort of spam. – Adrián Barquero Jun 11 2010 at 2:39 17 It's a big list, proper form is to separate them.
– Steve Huntsman Jun 11 2010 at 3:06 3 This is an excellent answer. In the 19th century Heawood proved the 5-color theorem and gave a false proof of the 4-color theorem. But his ideas in the proof of the 5-color theorem were the basic starting point for all further progress. – paul Monsky Jun 11 2010 at 10:42 6 Indeed Heawood proved the 5-color theorem. But I'm not aware that he gave an incorrect proof of the 4-color theorem. What he is known for doing is finding a flaw in an 1879 supposed proof, by Kempe, that had stood for 11 years. Perhaps at least as impressive, he determined the "Heawood number" -- an upper bound for the chromatic number -- for every compact surface, and conjectured it was the actual chromatic number. This number turned out to be the actual chromatic number of every compact surface except the Klein bottle, as shown by Ringel & Youngs (except for the sphere) in 1968. – Daniel Asimov Jun 12 2010 at 2:12 According to Weierstrass, Riemann knew about the existence of continuous nowhere differentiable functions. (Weierstrass' celebrated example was published in 1872, some 6 years after Riemann's death.) In his lectures, Riemann allegedly suggested the example $$f(x)=\sum\limits_{k=1}^{\infty}\frac{\sin k^2x}{k^2}$$ as early as 1861. He gave no proof and just mentioned that it could have been obtained from the theory of elliptic functions (see the historical note "Riemann’s example of a continuous “nondifferentiable” function continued" by S.L. Segal). Hardy proved in 1916 that $f$ has no finite derivative at any $x=\pi\xi$ where $\xi$ is irrational, but left the general case open. It was only in 1970 that J. Gerver finally proved that the Riemann function is in fact differentiable when $$x=\pi\frac{2m+1}{2n+1},\qquad m,n\in\mathbb Z,$$ and $f'(x)=-1/2$ at these points ("The Differentiability of the Riemann Function at Certain Rational Multiples of π").
- I suppose we can cite here Fermat's Last Theorem as a prime example, although I'm not really sure about the connection between discovery and proof here. - 1 You beat me to it, Adrian. Andrew Wiles's original proof of the Taniyama-Shimura conjecture - the major result in modular form theory that has Fermat's famous theorem as a corollary - had a massive gap in it when it was first presented in oral lectures in 1993. With the help of his student Richard Taylor, the final, correct proof was published in 1995. All this is of course common knowledge - what I think a lot of mathematicians sort of forget in all this is the enormous path of discovery leading to Wiles' result. In many ways, it was the culmination of nearly a hundred years of progress. – Andrew L Jun 11 2010 at 1:23 1 I think Adrian is referring to Fermat's discovery. – Qiaochu Yuan Jun 11 2010 at 1:52 2 @Andrew: Quibble: Wiles did not prove (nor did he think or claim he did) the full T-S conjecture; he only established the 'semi-stable case', which was sufficient (from Ribet's work) to establish FLT. The proof of the full conjecture came later, taking off from the work of Wiles and Taylor. – Arturo Magidin Jun 11 2010 at 2:47 8 Dear Andrew L, I think that to say there was a "massive gap" is not quite correct. There was a gap (and one could say that a miss is as good, or in this case, as bad, as a mile), but it was filled within a year or so, in his joint work with Taylor, and the fundamental structure, as well as many of the details, of the argument remained unchanged. In any event, this is in no sense an instance of the situation John is envisaging in his question. – Emerton Jun 11 2010 at 2:53 4 Adrián, let's not forget that Fermat also claimed many facts that aren't true. Fermat primes come to mind first. – Victor Protsak Jun 11 2010 at 7:48 Euler "proved" that $\sum \mu(n)/n = 0$ by observing that $\sum \mu(n) n^{-s} = 1/\zeta(s)$ and setting $s = 1$.
Actually, the result $\sum \mu(n)/n = 0$ was later proved by von Mangoldt, and shown to be equivalent to the prime number theorem by Landau. - There are two famous examples from enumerative algebraic geometry. The Schubert calculus was used by Schubert to solve many elaborate enumerative problems, but it was only fairly recently that these results were verified according to modern standards of mathematical rigor. Also, string theory predicted some enumerative results that the mathematicians were only later able to verify. More generally, modern theoretical physics has produced enormous numbers of mathematical results whose derivations are non-rigorous. Some of these have been rigorously verified, but some remain open problems. - How about mirror symmetry of Calabi-Yaus? This started from the observation by physicists that string theory on certain pairs of Calabi-Yaus gave identical theories. This has led to a lot of work by physicists and mathematicians to understand what's going on, leading to things like the SYZ conjecture, homological mirror symmetry, etc. More specifically, physicists' theories treat spacetime $M$ as something that locally looks like $M=\mathbb{R}^4\times X$ in such a way that $X$ is "small", in the sense that (roughly) operators (which represent observables) "looking at things" below a certain energy scale can't directly see the dynamics associated with $X$. Associated with $M$ is a special kind of quantum field theory called a superconformal field theory (SCFT), which requires that $X$ be a Calabi-Yau 3-fold. Various topological invariants of $X$ can tell us about how the SCFT behaves. But it was discovered that the associated SCFTs don't uniquely determine $X$. It turns out there are pairs of Calabi-Yau 3-folds $(X,\hat{X})$ (called mirrors) that give the same SCFT.
From the SCFT point of view, these two mirror manifolds are related by an automorphism of the SCFT, which does not correspond to an automorphism of the Calabi-Yau manifold, but instead gives a mirror manifold in a way that switches cohomology groups around. It can also be thought of as switching complex structures with symplectic ones somehow. From the rigorous point of view, though, not much of this is well-defined. It relies on the machinery of QFTs, which no one has been able to come close to defining axiomatically, as well as string theory, which relies on a lot of machinery that has the same kinds of problems. Out of this came a number of more mathematically precise conjectures, such as the SYZ conjecture, which explains this in terms of special Lagrangian manifolds and fibrations of the mirror manifolds into it. This also started ideas of homological mirror symmetry, which tries to describe this in terms of homology and derived categories. - The classification of finite simple groups was announced in 1983, when Geoff Mason was still working on the quasithin case. I've heard somewhere that he lost his motivation then and never finished his 600+ page manuscript. The gap was closed 20 years later by Michael Aschbacher and Steve Smith. - 1 Man, those finite group theorists of the 80's were hard-core, with all of those several-hundred-page papers of closely reasoned mathematics! – Todd Trimble Apr 4 2011 at 15:13 2 @Jonathan: In 1983 all 26 sporadic groups were known and their existence and uniqueness proven (the "Atlas of Finite Groups" was published in 1985). You could only complain that some proofs were still computer-assisted. – Someone Apr 5 2011 at 7:24 Another classic example is the Littlewood-Richardson rule for decomposing products of Schur polynomials. It was discovered and proved in some special cases in 1934 by Littlewood and Richardson.
In 1938 Richardson published a purported proof which had some gaps; however, apparently he managed to write so obscurely that the result was accepted at least until the '50s. The first complete proofs were found in the '70s by Schützenberger and Thomas. This is definitely an example in which the trouble arose from the difficulty of the result, which involves some pretty thorny combinatorics. In his paper "The representation theory of the symmetric groups", Gordon James said the following: "Unfortunately the Littlewood–Richardson rule is much harder to prove than was at first suspected. The author was once told that the Littlewood–Richardson rule helped to get men on the moon but was not proved until after they got there." Remark: The above chronology is taken from Wikipedia. I learned the Littlewood-Richardson rule from modern accounts, but I have to admit that I've never tried to go back and read the early papers on the subject. - Grunwald's Theorem on the existence of extensions satisfying local data was well known and widely used; Whaples even gave a second proof of this result before Wang found a counterexample and closed the gap. A similar mistake occurred when Shafarevich proved that solvable groups are Galois groups over the rationals - the case of 2-groups was "problematic". On a more fundamental level, Kummer's proofs of unique factorization into prime ideal numbers had gaps because he did not know about the concept of integral closure. This gap was noticed and closed only by Dedekind. - Just to complement Gerhard Paseman's answer. The story of how Girolamo Saccheri in the early 1700's "almost" discovered hyperbolic geometry is quite amusing. He actually died thinking he had proved the fifth postulate, but his argument was weak: "the hypothesis of the acute angle is absolutely false; because it is repugnant to the nature of straight lines".
The sentence refers to his construction of a quadrilateral with two sides of equal length perpendicular to a given one. The acute angles are the ones opposite the right ones. But Wikipedia explains this too... In this example an ideological bias prevented the discovery of beautiful mathematics... I wonder if this still happens nowadays; probably yes. - In 1983 or 84, Frey announced that he could prove that the Taniyama-Weil conjecture implies Fermat's last theorem. The proof was flawed, but this announcement had spectacular consequences: $\bullet$ Serre pulled out an unpublished conjecture of his and strengthened it so that Taniyama-Weil + $\varepsilon$ would imply FLT, $\bullet$ Ribet proved enough of $\varepsilon$ to ensure that TW would imply FLT, $\bullet$ Wiles realized that FLT would now be proved, since TW could no longer be ignored, and decided that it had to be by him (in doing so, he completely changed the way people thought about the field, and this has led to impressive results including the proof of TW and of the Sato-Tate conjecture), $\bullet$ Shimura decided that he wanted his name attached to the conjecture, and Lang made a campaign to remove Weil's... - The Kronecker-Weber theorem needed 3 proofs spread over 30 years before being completely proved (it states that all abelian extensions of ${\mathbb Q}$ can be found inside cyclotomic fields). It led to class field theory. - In a sense, the entire field of ergodic theory was born from Boltzmann's incomplete proof of the H-theorem. - I guess the historically first example is the Theorem of Pythagoras, already known to the Babylonians but probably not discovered by a "proof" satisfying modern standards. - Looman (1923) proved that, for a continuous function on an open subset of the complex plane, the existence of partial derivatives satisfying the Cauchy-Riemann equations is a sufficient condition for the function to be analytic. His proof had a gap that was fixed by Menchoff (1936), and we now have the Looman-Menchoff theorem. - According to M.
Meo, Cauchy's proof of Cauchy's theorem (the existence of elements of order $p$ in every finite group whose order is divisible by a given prime $p$) is wrong. Cauchy works with subgroups of $S_n$, and his proof depends on the construction of what we now call a Sylow subgroup of $S_n$. This subgroup is obtained as a semidirect product, which Cauchy seems to say is actually a direct product (which would be abelian). I am not completely sure whether Cauchy was really wrong, or whether he knew what was going on and simply lacked the appropriate language. In any case, it would be an example of Lack of foundations. - The Alternating Sign Matrix Conjecture in combinatorics was discovered (by researchers in the National Security Agency, so we don't know the motivation) in the late 1970s, but not proved for nearly 20 years. There is a wonderful book about it: Proofs and Confirmations, by David Bressoud. - According to Atiyah (Responses to: A. Jaffe and F. Quinn, "Theoretical mathematics: toward a cultural synthesis of mathematics and theoretical physics", Bull. Amer. Math. Soc. (N.S.) 29 (1993), no. 1, 1--13; MR1202292 (94h:00007)), Hodge's proofs of what is now called Hodge theory (representation of de Rham cohomology classes by harmonic forms) were incorrect, because Hodge was not an analyst, though the theory was correct. - 2 There is a more elementary, yet mathematically important and challenging "renormalization": the procedure by which Feigenbaum universality is proved (and its variants). I suggest that you expand your answer. – Victor Protsak Jun 11 2010 at 2:45 1 I agree that an expansion of this answer would be illuminating, but I would not go so far as to pile on negative votes. – jc Jun 11 2010 at 12:11
http://nrich.maths.org/9691
### Fence It If you have only 40 metres of fencing available, what is the maximum area of land you can fence off? # Perimeter Possibilities ##### Stage: 3 Challenge Level: Watch the video below. How many other possible perimeters can you find, for a rectangle with an area of $24$cm$^2$? Now watch the video to see what Alison and Charlie did next. Here are some questions you might like to consider: • What other odd number perimeters can you make, if the area is $24$cm$^2$? • What is the smallest perimeter you can make, if the area is $24$cm$^2$? What about the largest perimeter? • Which perimeters in between is it possible to make? More generally... • Is it possible to make a rectangle with a fractional perimeter but a whole number area? • Is it possible to make a rectangle with a whole number perimeter but a fractional area? Take a look at Can They Be Equal? to explore rectangles where the area is numerically equal to the perimeter.
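For the whole-number cases, the possibilities are easy to enumerate with a short script. The sketch below (Python, an editorial addition rather than part of the NRICH page) lists the integer-sided rectangles of area $24$cm$^2$ with their perimeters, and shows one fractional pair that produces an odd perimeter:

```python
# Whole-number rectangles with area 24 cm^2 and their perimeters.
AREA = 24

perimeters = {}
for w in range(1, AREA + 1):
    if AREA % w == 0:
        h = AREA // w
        if w <= h:  # count each rectangle once
            perimeters[(w, h)] = 2 * (w + h)

for (w, h), p in sorted(perimeters.items()):
    print(f"{w} x {h}: perimeter {p} cm")

# Fractional sides give further perimeters, including odd ones:
w = 1.5
h = AREA / w
print(f"{w} x {h}: perimeter {2 * (w + h)} cm")  # 35.0 cm, an odd perimeter
```

The smallest possible perimeter comes from the square of side $\sqrt{24}$, giving $4\sqrt{24} \approx 19.6$cm, while making the rectangle very long and thin makes the perimeter as large as you like.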
http://unapologetic.wordpress.com/2009/01/29/extracting-the-determinant-from-the-characteristic-polynomial/?like=1&source=post_flair&_wpnonce=3fdfac1636
# The Unapologetic Mathematician ## Extracting the Determinant from the Characteristic Polynomial This one’s a pretty easy entry. If we know the characteristic polynomial of an endomorphism $T$ on a vector space $V$ of finite dimension $d$, then we can get its determinant from the constant term. First let’s look at the formula for the characteristic polynomial in terms of the matrix entries of $T$: $\displaystyle\sum\limits_{\pi\in S_d}\mathrm{sgn}(\pi)\prod\limits_{k=1}^d(\lambda\delta_k^{\pi(k)}-t_k^{\pi(k)})$ Now we’re interested in $\det(T)$, which is exactly what we calculate to determine if the kernel of $T$ is nontrivial. But the kernel of $T$ is the eigenspace corresponding to eigenvalue ${0}$, so this should have something to do with the characteristic polynomial at $\lambda=0$. So let’s see what happens. $\displaystyle\sum\limits_{\pi\in S_d}\mathrm{sgn}(\pi)\prod\limits_{k=1}^d(-t_k^{\pi(k)})=\sum\limits_{\pi\in S_d}\mathrm{sgn}(\pi)(-1)^d\prod\limits_{k=1}^dt_k^{\pi(k)}=(-1)^d\sum\limits_{\pi\in S_d}\mathrm{sgn}(\pi)\prod\limits_{k=1}^dt_k^{\pi(k)}$ This is just $(-1)^d$ times our formula for the determinant of $T$. But of course we know the dimension ahead of time, so we know whether to flip the sign or not. So just take the characteristic polynomial, evaluate it at zero, and flip the sign if necessary to get the determinant. There’s one thing to note here, even though it doesn’t really tell us anything new. We’ve said that $T$ is noninvertible if and only if its determinant is zero. Now we know that this will happen if and only if the constant term of the characteristic polynomial is zero. In this case, the polynomial must have a root at $\lambda=0$, which means that the ${0}$-eigenspace of $T$ is nontrivial. But this just says that the kernel of $T$ is nontrivial. Thus (as we already know) a linear transformation is noninvertible if and only if its kernel is nontrivial. Posted by John Armstrong | Algebra, Linear Algebra ## 1 Comment » 1.
[...] of any choice of basis. The leading coefficient is always , so that’s not very interesting. The constant term is the determinant, which we’d known from other considerations before. There’s one more coefficient [...] Pingback by | January 30, 2009 | Reply
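The relation in the post, that the determinant is $(-1)^d$ times the characteristic polynomial evaluated at zero, can be checked numerically. This sketch is an editorial addition, not part of the original post; it uses the same Leibniz-formula convention $\det(\lambda I - T)$ as the text, on a small matrix chosen arbitrarily:

```python
from itertools import permutations
from math import prod

def sign(p):
    """Parity of a permutation given as a sequence of 0..n-1."""
    s, p = 1, list(p)
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            s = -s
    return s

def det(M):
    """Leibniz-formula determinant, matching the sum over S_d in the post."""
    n = len(M)
    return sum(sign(p) * prod(M[k][p[k]] for k in range(n))
               for p in permutations(range(n)))

T = [[2, 1, 0],
     [0, 3, 4],
     [5, 0, 1]]
d = 3

# Characteristic polynomial det(lambda*I - T) evaluated at lambda = 0 is det(-T).
p0 = det([[-x for x in row] for row in T])

print(det(T))          # 26
print((-1) ** d * p0)  # 26: flipping the sign recovers the determinant
```

Since $d = 3$ is odd here, the constant term comes out as $-\det(T)$, and the sign flip recovers the determinant, exactly as the post describes.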
http://mathhelpforum.com/discrete-math/73551-solved-difficult-binomial-theorem-question-involving-greatest-integer-function.html
# Thread: 1. ## [SOLVED] Difficult binomial theorem question (involving greatest integer function)? If (3√3 + 5)^9 = R and [R] denotes the greatest integer less than or equal to R, then: A) [R] is divisible by 10 B) [R^2] is divisible by 512 C) [R] is divisible by 15 D) [R] is an even number More than one option may be correct. 2. ## Binomial Hello again fardeen_gen As with your previous posting, I'm not sure how you're supposed to tackle this question. I have used a calculator, and found [R] to be 1191041440. So A and D are true. The others are not. Grandad 3. Originally Posted by fardeen_gen If (3√3 + 5)^9 = R and [R] denotes the greatest integer less than or equal to R, then: A) [R] is divisible by 10 B) [R^2] is divisible by 512 C) [R] is divisible by 15 D) [R] is an even number More than one option may be correct. Here is another approach: $(3 \sqrt{3} + 5)^9 = \sum_{i=0}^9 \binom{9}{i} 3^{3(9-i)/2} 5^i$ $(3 \sqrt{3} - 5)^9 = \sum_{i=0}^9 (-1)^i \binom{9}{i} 3^{3(9-i)/2} 5^i$ Subtracting, the even-numbered terms cancel, yielding $(3 \sqrt{3} + 5)^9 - (3 \sqrt{3} - 5)^9 = 2 \sum_{i=0}^4 \binom{9}{2i+1} 3^{3(4-i)} 5^{2i+1}$ So $(3 \sqrt{3} + 5)^9 = (3 \sqrt{3} - 5)^9 + 2 \sum_{i=0}^4 \binom{9}{2i+1} 3^{3(4-i)} 5^{2i+1}$ $[(3 \sqrt{3} + 5)^9] = 2 \sum_{i=0}^4 \binom{9}{2i+1} 3^{3(4-i)} 5^{2i+1}$ because $0 < 3 \sqrt{3} - 5 < 1$ and the summation is an integer. 4. ## Binomial Originally Posted by awkward Here is another approach: $(3 \sqrt{3} + 5)^9 = \sum_{i=0}^9 \binom{9}{i} 3^{3(9-i)/2} 5^i$ $(3 \sqrt{3} - 5)^9 = \sum_{i=0}^9 (-1)^i \binom{9}{i} 3^{3(9-i)/2} 5^i$ Subtracting, the even-numbered terms cancel, yielding $(3 \sqrt{3} + 5)^9 - (3 \sqrt{3} - 5)^9 = 2 \sum_{i=0}^4 \binom{9}{2i+1} 3^{3(4-i)} 5^{2i+1}$ So $(3 \sqrt{3} + 5)^9 = (3 \sqrt{3} - 5)^9 + 2 \sum_{i=0}^4 \binom{9}{2i+1} 3^{3(4-i)} 5^{2i+1}$ $[(3 \sqrt{3} + 5)^9] = 2 \sum_{i=0}^4 \binom{9}{2i+1} 3^{3(4-i)} 5^{2i+1}$ because $0 < 3 \sqrt{3} - 5 < 1$ and the summation is an integer.
That is pretty neat! It's clear, then, that $[R]$ is a multiple of 10. And it's not a multiple of 3, since all the terms are except the last. Therefore $[R]$ is not a multiple of 15. Any idea how this might determine the truth of statement B? (Other than by evaluating $[R]$, of course.) Grandad PS I have just looked again at statement B, and noticed that it is $R^2$, not just $R$. Since $512 = 2^9$, this will be true if $[R]$ is a multiple of 32. And it is. So B is true after all!
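awkward's identity reduces everything to exact integer arithmetic, so the statements can be checked directly. A quick sketch of this check (an editorial addition, not from the thread):

```python
from math import comb

# awkward's identity: [R] = 2 * sum_{i=0}^{4} C(9, 2i+1) * 3^(3(4-i)) * 5^(2i+1)
R_floor = 2 * sum(comb(9, 2 * i + 1) * 3 ** (3 * (4 - i)) * 5 ** (2 * i + 1)
                  for i in range(5))

print(R_floor)            # 1191041440, matching Grandad's calculator value
print(R_floor % 10 == 0)  # A: True
print(R_floor % 15 == 0)  # C: False -- [R] is not divisible by 3
print(R_floor % 2 == 0)   # D: True
print(R_floor % 32 == 0)  # True: the divisibility by 32 behind Grandad's argument for B
```

This reproduces Grandad's value 1191041440 and confirms A and D; C fails because only the last term of the sum escapes divisibility by 3.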
http://mathoverflow.net/questions/42127?sort=newest
## generalization of Bezout’s Theorem? I learned Bezout's Theorem in class, stated for plane curves (if irreducible, sum of intersection multiplicities equals product of degrees). What is the proper general statement, for projective varieties of degree n? I think it is something like: If finite, the sum of multiplicities equals the product of degrees.. else the (dimension? degree? sums over irreducible components?) of the intersection is less than or equal to the difference in degrees. Help is appreciated! - The intersection does not have to be finite to have a nice statement. You may want to try "Algebraic Geometry, a first course" by Harris, which strikes a good balance between giving a somewhat general result and keeping it accessible. (Fulton is the go-to reference for the full general case, but as mentioned already, it takes a whole book, so there is no short answer in that case.) – Thierry Zell Nov 1 2010 at 12:33 ## 2 Answers Dear unknown, the most straightforward generalization of Bézout's theorem might be the following. Consider $\mathbb P^n$, projective space over the field $k$, and $n$ hypersurfaces $H_1,...,H_n$ in general position in the sense that their intersection is a finite set. Then, calling $h_i$ their local equations, Bézout says $$\sum_i \dim_k \mathcal O_{\mathbb P^n,P_i}/(h_1,...,h_n) =\prod_i \deg (H_i)$$ The dimension on the left hand side is, of course, to be interpreted as the multiplicity with which to count the point $P_i$, seen as a fat point, i.e. a zero-dimensional non-reduced scheme. A related, more abstract point of view is the description of the Chow ring of $\mathbb P^n$ as $CH^\ast (\mathbb P^n)=\mathbb Z[x]/(x^{n+1})$ (where $x$ is the class of a hyperplane in $\mathbb P^n$). From this point of view we have the following version of Bézout.
Consider $r$ cycles $\alpha_1,...,\alpha_r$ on $\mathbb P^n$ with $\alpha_i \in CH^{d_i}(\mathbb P^n)$ and $d_1+...+d_r \leq n$. Then $$\deg \prod {\alpha_i} =\prod \deg (\alpha_i)$$ the product of the $\alpha_i$'s on the left being calculated in the Chow ring and the degree $\deg (\alpha)$ of a cycle $\alpha \in CH^d (\mathbb P^n)$ being the integer $t$ such that $\alpha =t \cdot x^d \in CH^d(\mathbb P^n)=\mathbb Z \cdot x^d$. This is only the tip of the iceberg: a definitive answer would require a book. Fortunately that book exists and has been written, to our eternal gratitude, by Fulton: Intersection theory, volume 2 of Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. Springer-Verlag, Berlin, second edition, 1998. - In that wonderful book of Fulton, the more general result is at the bottom of page 223, where it says that for r pure dimensional schemes in P^n whose codimensions add to at most n, the product of their degrees is at least as great as the sum of the degrees of the irreducible components of their intersection. Thus for instance, if three quadric hypersurfaces in P^4 have a common curve of degree 8, their intersection has no further components. - A very relevant remark and example, Roy. Your statement really describes the heart of Fulton's profound but rather technical Theorem 12.3, and is formulated in a strikingly simple way (not explicitly spelled out in Fulton), which might have escaped a reader just browsing through the book – Georges Elencwajg Nov 2 2010 at 7:51 A small remark: this also holds with no restriction on the sum of codimensions (as Fulton mentions at the bottom of the same page). Just imbed everything into a larger projective space, take cones, and intersect with the complementary subspace. – jacob Sep 13 at 6:07
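As a concrete instance of the plane-curve case (an editorial example, not from the thread): the conics $y = x^2$ and $y^2 = x$ each have degree 2, so Bézout predicts $2 \cdot 2 = 4$ intersection points over $\mathbb C$, and one checks by homogenizing that this pair has no common points at infinity. Substituting one equation into the other gives $x^4 - x = x(x-1)(x^2+x+1) = 0$, and the four roots yield the four points:

```python
from math import sqrt

# Two conics: C1: y = x^2 and C2: y^2 = x, each of degree 2.
# Bezout predicts deg(C1) * deg(C2) = 4 intersection points over C.
# Substituting y = x^2 into y^2 = x gives x^4 - x = x(x - 1)(x^2 + x + 1) = 0.
roots_x = [0, 1, complex(-0.5, sqrt(3) / 2), complex(-0.5, -sqrt(3) / 2)]
points = [(x, x ** 2) for x in roots_x]  # each point lies on C1 by construction

for x, y in points:
    assert abs(y ** 2 - x) < 1e-12  # each point also lies on C2

print(len(points))  # 4 = 2 * 2
```

All four points here are simple (multiplicity one), so the count of points matches the product of degrees exactly.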
http://math.stackexchange.com/questions/45089/what-is-the-length-of-a-sine-wave-from-0-to-2-pi/45092
# What is the length of a sine wave from $0$ to $2\pi$? What is the length of a sine wave from $0$ to $2\pi$? Physically I would plot $$y=\sin(x),\quad 0\le x\le {2\pi}$$ and measure line length. I think part of the answer is to integrate this: $$\int_0^{2\pi} \sqrt{ 1 + (\sin(x))^2} \ \rm{dx}$$ Any ideas? - Does the dx math-e-magically escape the sqrt? – philcolbourn Jun 13 '11 at 12:04 Thanks everyone. – philcolbourn Jun 17 '11 at 12:07 ## 4 Answers I'm nowhere near a computer with elliptic integrals handy, so I'll give the explicit evaluation of $$\int_0^{2 \pi} \sqrt{1+\cos^2 x}\,\mathrm dx$$ Note that an entire sine wave can be cut up into four congruent arcs; we can thus consider instead the integral $$4\int_0^{\pi/2} \sqrt{1+\cos^2 x}\,\mathrm dx$$ (alternatively, one can split the integral into four "chunks" and find that those four chunks can be made identical; I'll leave that manipulation to somebody else.) Now, after some Pythagorean manipulation: $$4\int_0^{\pi/2} \sqrt{1+\cos^2 x}\,\mathrm dx=4\int_0^{\pi/2} \sqrt{2-\sin^2 x}\,\mathrm dx$$ and then a bit of algebraic massage: $$4\sqrt{2}\int_0^{\pi/2} \sqrt{1-\frac12\sin^2 x}\,\mathrm dx$$ we then recognize the complete elliptic integral of the second kind $E(m)$ $$E(m):=\int_0^{\pi/2}\sqrt{1-m\sin^2u}\mathrm du$$ (where $m$ is a parameter): $$4\sqrt{2}E\left(\frac12\right)$$ As Robert notes in a comment, different computing environments have different argument conventions for elliptic integrals; Maple for instance uses the modulus $k$ (thus, $E(k)$) instead of the parameter $m$ as input (as used by Mathematica and MATLAB), but these conventions are easy to translate to and from: $m=k^2$. So, using the modulus, the answer is then $4\sqrt{2}E\left(\frac1{\sqrt 2}\right)$. 
Now to address the equivalence, noted by Henry, between a negative parameter and a parameter in the interval $(0,1)$: there is what's called the "imaginary modulus transformation"; the DLMF link gives the transformation for the incomplete case, but I'll explicitly do the complete case here for reference, since it's not too gnarly (all you have to remember are the symmetries of the trigonometric functions): Letting $E(-1)=\int_0^{\pi/2}\sqrt{1+\sin^2 u}\,\mathrm du$, we then go this way: $$\int_0^{\pi/2}\sqrt{1+\sin^2 u}\,\mathrm du=\int_{-\pi/2}^0\sqrt{1+\sin^2 u}\,\mathrm du$$ $$=\int_0^{\pi/2}\sqrt{1+\sin^2\left(u-\frac{\pi}{2}\right)}\,\mathrm du=\int_0^{\pi/2} \sqrt{1+\cos^2 u}\,\mathrm du$$ from which I've shown what you're supposed to do earlier. Computationally, the complete elliptic integral of the second kind isn't too difficult to evaluate, thanks to the arithmetic-geometric mean. Usually, this method is used for computing the complete elliptic integral of the first kind, but the iteration is easily hijacked to compute the integral of the second kind as well. Here's some C code for computing $E(m)$: ````#include <math.h>

double ellipec(double m)
{
    double f, pi2, s, v, w;
    if (m == 1.0) return 1.0;
    pi2 = 2.0 * atan(1.0);              /* pi/2 */
    v = 0.5 * (1.0 + sqrt(1.0 - m));
    w = 0.25 * m / v;
    s = v * v;
    f = 1.0;
    do {
        v = 0.5 * (v + sqrt((v - w) * (v + w)));
        w = 0.25 * w * w / v;
        f *= 2.0;
        s -= f * w * w;
    } while (fabs(v) + fabs(w) != fabs(v));  /* iterate until w is negligible */
    return pi2 * s / v;
}```` (make sure either your compiler does not (aggressively) optimize out the `while (fabs(v) + fabs(w) != fabs(v))` portion, or you'll have to use a termination criterion of the form `fabs(w) < tinynumber`.) Finally, "I am also puzzled: a circle's circumference is $2\pi r$ and yet an ellipse's is an infinite series - why?"
My belief is that we are actually very lucky that the arclength function for a circle is remarkably simple compared to most other curves, the symmetry of the circle (and thus also the symmetry properties of the trigonometric functions that can parametrize it) being one factor. The reduction in symmetry in going from a circle to an ellipse means that you will have to compensate for those "perturbations", and that's where the series comes in... - This is not yet my official return; I decided to answer this question in the short time I have access to a computer today. – J. M. Jul 5 '11 at 7:36 1 Well J. M., your contributions to the site are very much appreciated, so even if it is only to answer a question every now and then when you have some free time and computer access, it is really a great thing. Good luck with whatever you're doing =) – Adrián Barquero Jul 5 '11 at 7:45 Thanks. I am out of my depth here and I don't really understand what E(x) is. It is interesting that a seemingly simple question can have a complex and I gather a difficult answer to calculate. I am also puzzled: a circle's circumference is 2.pi.r and yet an ellipse's is an infinite series - why? – philcolbourn Jul 7 '11 at 12:01 @phil: On the contrary, the complete elliptic integral is in fact not very hard to evaluate numerically. Let me update this answer a bit... – J. M. Jul 14 '11 at 9:02 The arc length of the graph of a function $f$ between $x=a$ and $x=b$ is given by $\int_{a}^{b} \sqrt { 1 + [f'(x)]^2 }\, dx$. So, if you're considering $f(x)=\sin(x)$ then the correct integral is $\int_{0}^{2\pi} \sqrt { 1 + [\cos(x)]^2 }\, dx$. Unfortunately, this integral cannot be expressed in elementary terms. This is quite common for arc-length integrals. However, the definite integral might be expressible in elementary terms; Wolfram Alpha says it cannot. - 2 That's interesting, as the two definite integrals are clearly the same over this interval.
Following up my comment to Chandru's answer, Wolfram Alpha gives you $4 \sqrt{2} E(\tfrac{1}{2}) \approx 7.6404$, which is the circumference of an ellipse. – Henry Jun 13 '11 at 12:03 It is given by $$I = \int_{0}^{2 \pi} \sqrt{ 1 + (\cos{x})^{2}} \ \rm{dx}$$ and I think this is an elliptic integral of the second kind. (That's what Wolfram says.) - 2 – Henry Jun 13 '11 at 11:42 @Henry: Thanks. My simple 8-straight-line approximation yielded 7.58... – philcolbourn Jun 13 '11 at 12:11 The first word of this answer isn't quite right, because the value of the integral in the question is correct. – Jonas Meyer Jun 13 '11 at 18:47 @Jonas: for the arc length it has to be the derivative of $\sin{x}$, that is $\cos{x}$, so I have added $\cos{x}$ – user9413 Jun 13 '11 at 18:50 Maple, which uses a different convention for the elliptic integrals, gives the answer as $4 \sqrt{2} {\rm EllipticE}(\sqrt{2}/2)$. The circumference of an ellipse with semi-major axis $a$ and eccentricity $e$ would be, in this notation, $4 a {\rm EllipticE}(e)$. – Robert Israel Jun 13 '11 at 19:28 Responding to Henry, June 6, 2011: this equivalence emerges from a simple experiment given by Hugo Steinhaus in 'Mathematical Snapshots'. Take a roll of something (I use paper towelling) and saw through it obliquely, thus producing elliptic sections. Unroll it and you have a sine curve. (Tom Apostol and Mamikon Mnatsakanian suggest you rest a paint roller at an angle in the paint tray. Then paint!) Paul Stephenson May 8 '13 at 21.00 -
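The value $4\sqrt{2}\,E(\tfrac12) \approx 7.6404$ quoted in the comments is easy to confirm by direct numerical quadrature, with no elliptic-integral machinery at all. A sketch of this check (an editorial addition) using composite Simpson's rule:

```python
from math import cos, pi, sqrt

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

# Arc length of one full sine wave: integral of sqrt(1 + cos^2 x) over [0, 2*pi].
arc = simpson(lambda x: sqrt(1 + cos(x) ** 2), 0.0, 2 * pi)
print(round(arc, 4))  # 7.6404
```

The integrand is smooth and periodic, so even a modest number of panels reproduces the quoted value to four decimal places.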
http://physics.stackexchange.com/questions/52144/relationship-between-mars-and-earth-rotation
# Relationship between Mars and Earth rotation Is it by pure random chance that Mars and the Earth have nearly the same day duration (a Mars day is barely 40 minutes longer, which is just a 3% difference), or is there some causal relationship between the two? - ## 1 Answer The length of the day on Earth has been changing ever since it was formed due to the influence of the Moon, so the current near equality is an accident of timing. Go back or forward a few billion years and the day lengths would be more different. - 1 is there some way to know the duration of a Mars day a few billion years ago? – lurscher Jan 25 at 15:42 A good question and one that I deliberately avoided! :-) As far as I know there are no observations to tell how the length of the day on Mars has changed. However the main reason the day length changes on Earth is the tidal force from the Moon, and the moons of Mars are too small to raise any significant tides. Tidal forces from the Sun will have some effect, but these are a lot lower on Mars, partly because they vary as $r^{-3}$ and partly because Mars has no oceans and much less energy is dissipated in tidal movements on land. So ... – John Rennie Jan 25 at 16:21 ... although we don't know if the day length has changed on Mars, it seems likely any change will be much less than the change on Earth. – John Rennie Jan 25 at 16:22
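The $r^{-3}$ scaling John Rennie mentions can be made concrete with rough numbers. This is an editorial illustration with approximate textbook values, not part of the answer; it uses the standard $\sim 2GM/r^3$ scaling of tidal acceleration per unit of planet radius:

```python
# Rough comparison of tidal accelerations, using the ~ 2*G*M/r^3 scaling.
# All values approximate; this is an illustration, not from the answer.
G = 6.674e-11           # m^3 kg^-1 s^-2
M_sun = 1.989e30        # kg
M_moon = 7.342e22       # kg
r_earth_sun = 1.496e11  # m
r_mars_sun = 2.279e11   # m
r_earth_moon = 3.844e8  # m

def tidal(M, r):
    return 2 * G * M / r ** 3

sun_on_earth = tidal(M_sun, r_earth_sun)
sun_on_mars = tidal(M_sun, r_mars_sun)
moon_on_earth = tidal(M_moon, r_earth_moon)

print(f"solar tide, Mars/Earth: {sun_on_mars / sun_on_earth:.2f}")       # ~0.28
print(f"lunar/solar tide on Earth: {moon_on_earth / sun_on_earth:.2f}")  # ~2.2
```

This puts the solar tide on Mars at roughly 28% of Earth's, and shows why the Moon (around twice the solar tide) dominates Earth's tidal evolution while Mars lacks any comparable tide-raiser.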
http://mathforum.org/mathimages/index.php/Talk:Prime_spiral_(Ulam_spiral)
# Talk:Prime spiral (Ulam spiral) ### From Math Images Iris 6/28 I made all the changes in response to Prof. Maurer's, Abram's, and Anna's comments. I am ready for more comments. Smaurer1 11:03, 12 June 2010 (UTC) I've gone through the latest version now. The demonstration and movie showing that the 2nd differences are 8 is lovely. If you think about it, this demonstration shows that not only are diagonals quadratic in the Ulam Spiral, but so are horizontal and vertical lines -- in each case if you start far enough from the origin and keep going away from the origin. Basically, you have to go far enough out on the line that finding the first difference of any two consecutive terms, by counting off along the spiral, takes you once around a square. Think about it: you have a proof looking for a theorem, that is, a correct technique that merely needs the right restrictions to be applicable. Throughout you say "line" when you mean "line segment". To a mathematician a line is infinite. Actually, all your results are correct for half-lines - infinite in one direction. Your claim is not actually all that interesting for line segments: every finite sequence of numbers can be fit by a polynomial. Also, as we discussed, it appears that the line y=x (through the origin) is the only line on which the Ulam numbers are polynomial in both directions with the same one polynomial. Now that you've added the movie showing that $\Delta^2 a_n = 8$, and explained how to get a quadratic from that, the first part of your example with 5, 19, 41, 71, 109 is redundant; we already know it will be quadratic and don't need the parabola. In fact, I think this whole example of determining the specific polynomial 2 ways (I am not sure I would call it deriving the polynomial) really belongs in the difference table helper page, not here. It's not really central to the Ulam Spiral discussion, even though it is very nice. I have a number of detailed comments that we can discuss in person.
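The $\Delta^2 a_n = 8$ claim, together with the example sequence 5, 19, 41, 71, 109 mentioned above, can be checked with a small difference-table script (an editorial sketch, not part of the page):

```python
# Difference table for the diagonal values 5, 19, 41, 71, 109 from the example.
seq = [5, 19, 41, 71, 109]

first = [b - a for a, b in zip(seq, seq[1:])]        # [14, 22, 30, 38]
second = [b - a for a, b in zip(first, first[1:])]   # [8, 8, 8]
print(first, second)

# Constant second differences mean the values are quadratic; Newton's
# forward-difference formula gives a_n = 5 + 14n + 8*n(n-1)/2 = 4n^2 + 10n + 5.
def a(n):
    return seq[0] + first[0] * n + second[0] * n * (n - 1) // 2

print([a(n) for n in range(5)])  # reproduces [5, 19, 41, 71, 109]
```

The constant second difference of 8 is exactly what forces the diagonal values to be quadratic, and the Newton formula recovers the specific polynomial the page's example derives.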
I've also made a few individual edits in the Newton Interpolation subsection.

## Xingda's Comments

• All images should be labeled and explained correctly. For example, the part that explains the hockey stick does not match the picture given by Pascal.
• Certain explanations can be grasped more easily by putting a picture beside them. For example, the first part of the MME can be accompanied by a picture.
• Label the sequence in the Sierpinski Triangle and explain the picture exactly as it is drawn.
• Explain why the hockey stick's head has to point in certain directions but not others.
• Move the first three patterns in "other patterns and properties" into the basic description. They are very simple and should not come as the last part of the MME.
• Make some of the obvious connections more obvious. For example, put the relationship of $a$ and $b$ at the very beginning of the MME, next to the conclusion of the Binomial Coefficients, so that the connection is obvious and your whole argument comes full circle.

## Individuals' statuses

Abram, 6/9: I notice that the page has been revised since our meeting on 6/8, but I haven't yet had a chance to review these revisions. My comments dated 6/8 are from a discussion on 6/8, but were not posted on the discussion page until the evening of 6/9. Clearly, some of them have already been addressed.

## Active Comments

• Great brief intro to the spiral. The context in which it was invented and the description of how we don't know formulas for prime numbers really help make the page seem relevant.
• Good use of examples in the quadratic polynomials section.
• Sentences like "It might seem strange that..." do a good job guiding the reader through the content. Abram, 6/9

Layout of the diagonals section

Also, in this section, there's a lot of text in pretty big paragraphs. Can you break it up into smaller chunks? (Anna 6/25)

Iris: I tried breaking them down.
(6/28) It looks much better now. (7/14)

Use of subheadings and/or at least one picture to go with each idea might also help. For instance, sub-headings on "Definition of half-lines" and "Examples and equations for half-lines", or having Image 3 and Image 5 break up the text instead of being on the side. I don't really know, but it's something you could play with. You may want to look for other places in the page where images that are on the side could look more appealing and be physically closer to the accompanying text if they were in-line as well. (Abram, 7/8)

Iris (7/12): I fixed this.

There's a huge amount of white space around Image 3 now. Is there any good way to deal with this? (Abram, 7/14)

I'd be okay with the whitespace if you centered the image. (Anna 7/14)

Iris (7/15): I fixed this.

You could center Image 8, or not. You could also put Images 16 and 17 under the text instead of to the side, or not. (Abram, 7/16)

Iris (7/19): I tried putting Images 16 and 17 under the text, but it doesn't work properly. I centered Image 8.

Image 8 still doesn't look centered on my browser, but that can wait for another person at another time. (Abram, 7/19)

A couple mathematical details

• I have no idea what the last sentence in the Archimedean Spiral definition means. (Abram, 7/14)

Removing that last sentence took care of the problem. This page still doesn't identify exactly how you space out the dots on the spiral, but you could add that as a future suggestion or something. (Abram, 7/16)

Iris (7/18): I put up in future suggestions that we need an Archimedean spiral helper page.

My guess is that an Archimedean Spiral helper page wouldn't actually address this placement of dots, because dots are not an inherent part of the Archimedean Spiral -- they're just added for the sake of the Sacks Spiral. But again, this can wait for another person at another time.
(Abram, 7/19)

(Abram, 7/8)

## Archived Comments

A couple mathematical details

• Diagonal lines either have a slope of +/- 1 or they move upward or downward at a 45 degree angle. But they don't have a slope of 45 degrees.
• The beginning of the "Prime numbers in lines" section includes the sentence "The nonprime numbers that appear on prime-concentrated columns are all multiples of prime numbers." Of course that's true -- all numbers are multiples of prime numbers! I'm guessing there is something else you are trying to say here.
• Add mouse-over definitions or external links for Gilbreath's conjecture and Goldbach's conjecture. (Abram, 7/8)

Iris: I fixed all of these. (7/8)

The Goldbach's conjecture mouse-over wasn't working on my browser. Otherwise, looks good. (Abram, 7/14)

Fixed (as described above). (Abram, 7/16)

• In the triangular number section, the first sentence should read, "The nth triangular number, T_n, is given by the formula", otherwise you technically never define the symbol T_n (even though it's really easy to tell what you mean by it). (Abram, 7/14)

Iris (7/14): I dealt with this problem.

Looks good. (Abram, 7/16)

Edits for the sake of explanation

Looks good. (Abram, 7/14)

• The sentence "Indeed, as Image 4 illustrates, regardless of the exact number of the blue boxes, there are 8 more boxes in the outer ring than in the inner ring" is confusing. What is "the" outer ring and "the" inner ring? Seems like maybe it should say something like, "There are 8 more boxes in any given ring than there are in the ring that is one layer inwards from it" or something. (Abram, 7/14)

Iris (7/15): I fixed this.

Looks good. (Abram, 7/16)

We could always do more

Iris, this page is looking quite good. We could always do more. For instance, there are a few slightly confusing sentences here and there. Also, we haven't subjected the "prime numbers in lines" section to nearly the same scrutiny that we have with the rest of the page.
It's a recent addition, and it's a really nice section, but there are certainly ways to clean it up. However, you have already done a lot of really good work for this page, and you should feel free to say you feel done with it. (Abram, 7/14)

I actually feel like the "prime numbers in a line" section is very straightforward, and I can't come up with anything that would really help the section. So, like Abram, I think you can leave it as is. (Anna 7/14)

Other numbers and patterns section

• The triangular number section is really clear and concise. Is there any reason why they form that pattern? You don't have to answer that question, but it might be interesting.

Iris (7/8): Professor Maurer and I have been trying to find an explanation, but we couldn't until now.

I'm happy to let this go for the time being. Anna? (Abram, 7/14)

I am too. I was just curious! (Anna 7/14)

Reframe the description of quadratic polynomials along diagonals

All entries along a diagonal can be described with quadratics that have a leading coefficient of 4, not just the prime entries. Rephrase this section to make that more clear. (Abram, 6/8)

I rephrased the section so that any diagonal can be described through quadratics. Prof. Maurer and I talked, and it turns out that there are some exceptions, so I'll have to fix that point. (Iris, 6/11)

Nice job with the math content. See the separate discussion thread about making the description of "rings" and "diagonal half-lines" clearer. (Abram, 6/28)

Also, it would be good to include a proof of this fact, rather than simply an example. A proof that this is true along the main diagonals (that go through entry number 1) won't be too ugly. We haven't yet figured out if doing a proof along other diagonals is or is not nasty. (Abram, 6/8)

-- I provided the proof with the small animation. (Iris, 6/11)

Can you write an explanation that walks through each step of the animation? (Anna 6/25)

Iris: I added an explanation. (6/28)

Nice job with this.
There are a couple of wording details that can be dealt with in a final pass through. The only substantive problem right now is that if the reader starts looking at the animation at the wrong moment, they won't see the "innermost light blue boxes" that you refer to. If we can't add a "play" button to the animation, you might want to tell the reader to wait until they can see three rings of blue boxes, and then watch the animation all the way through. (Abram, 6/28)

Iris: I added a sentence. (6/29)

Looks good. (Abram, 7/8)

Clarify the description of "rings" and "diagonal half-lines"

It seems like we've decided to superimpose the blue spiral from Image 1 onto Image 3 so that we can point out examples of how the red lines give you numbers that are in the same "ring". We also decided to change things so that either we come up with a new term for diagonal half-line or in some other way rephrase this, because a) many things that one would intuitively think of as a diagonal half-line are not diagonal half-lines according to this definition (e.g. the red lines in Image 3), and b) many things that one would not think of as diagonal at all satisfy this definition (e.g. many horizontal and vertical lines). It's not that there's anything mathematically *wrong* with this. It's just a really confusing choice of terminology. (Abram, 6/28)

Iris (6/30): I clarified the terms involving half-lines. I superimposed the blue spiral, and I added a couple sentences to describe what we meant by ring, but I'm not sure if this is clear enough.

Really nice clarification of half-lines. It might be nice for the definition of a ring to avoid use of variables (e.g. you could describe them as "concentric rings centered around the 1 at the center of the spiral" and give a couple examples of pairs of numbers that are on the same ring). I don't know if my suggestion was actually correct, and maybe this is impossible. It would just be nice. (Abram, 7/7)

Iris (7/8): I fixed a little bit.
Really nicely done. (Abram, 7/14)

Elaborate on the Euler section

Does this section generalize in any way to work with numbers other than 41? Does the fact that there are 40 consecutive prime diagonal entries give a hint of some larger pattern, or is it just an isolated curiosity? In general, make the significance of this material (and whether or not it is even seen as significant) a little clearer. (Abram, 6/8)

As a small thing in this section, make sure to point to Image 8 when you are talking about it in the text. (Anna 6/25)

Iris: I made this change (to both Abram's and Anna's comments). (6/28)

Aha, I think I figured out what was confusing me. Look at all the material in the Euler section that starts off hidden (all the text after "To learn more about how to find Euler's polynomial, click show more"). First, instead of describing this section as a way of "finding" Euler's polynomial, would it be accurate to describe this as a way of deriving why the Ulam spiral that starts at 41 generates the same outputs as Euler's polynomial?

Second, you might want to point out that because the "central" diagonal line follows the rule about never staying in the same ring in both directions, you can actually plug in negative numbers to generate the numbers that are "downhill" from the center. As it stands right now, the x = -19 through 20 seem to come out of nowhere.

Third, when you point out that plugging x = -19 to x = 20 into 4x^2 - 2x + 41 generates prime numbers, this is actually a little bit confusing. The reason is that you are in the middle of showing how you get *to* the conclusion that this polynomial is in some sense equivalent to the Euler polynomial (once you do an appropriate transformation), but the fact that plugging x = -19 to x = 20 into this polynomial gives prime numbers actually comes *from* the fact that these polynomials are essentially equivalent. It's not that you said anything wrong.
It's just that pointing out an implication of statement X while you are in the middle of proving statement X is a bit disorienting. There are a few more small changes that could help clarify this section, but this will make a big difference. (Abram, 7/1)

I really like the way this section reads now. You've done a great job of fleshing it out and explaining your pictures better. (Anna 7/7)

Agreed. Nicely done. (Abram, 7/8)

Add a "Why It's Interesting" section

We have discussed that this spiral is not something mathematicians have studied seriously, but on the other hand that the patterns you describe are not just people's minds trying to find patterns, which indicates the spiral could be significant. Having a section about this could be interesting. (Abram, 6/8)

I have added a "Why it's interesting" section. I actually addressed the problem of people's minds trying to find patterns in the more mathematical section where I compare the Ulam spiral for prime numbers and random numbers. I'm not sure whether I should move this section. (Iris, 6/11)

I think moving this section about how the patterns aren't random to the Why It's Interesting section is a really good idea. (Abram, 6/28)

Iris: I moved this section. (6/29)

Nicely done. (Abram, 7/8)

Nit Picky Details (Also known as "doing more" -- see Abram's comments below)

Each of these comments should take you very little time to address. (Anna 7/14)

• In the "Why it's interesting" section, you refer to Images 12 and 13 being side by side. In my browser, one appears on top of the other.

Iris (7/15): fixed this.

If you wanted to, you could replace every reference throughout the page to "the image below" with a reference to an image number (it happens somewhere else in the page too), but don't worry about it if you don't want to. (Abram, 7/16)

Iris (7/19): I decided to leave it as it is.

OK. (Abram, 7/19)

• Something's not right with the first two images.
When my window's taking up just part of my screen, the pictures smoosh the text around and make it very difficult to read.

Iris (7/15): I fixed this for now, but maybe I'll use Xingda's table format later on.

Looks good to me. (Abram, 7/16)

• The sentences before and after Image 3 both start with "for instance". Can you rephrase one or both so it's not so repetitive?

Iris (7/15): fixed this.

Looks good to me. (Abram, 7/16)

• There's a stray mark in this sentence: "First, we found out in the previous section that Eq. (1) is the polynomial for the diagonal that goes through the center 1 and has a slope of °+1".

Iris (7/15): fixed this.

Looks good to me. (Abram, 7/16)

• Your image numbering skips two sections -- can you go back through and number those, and update the numbers in the last section?

Iris (7/15): fixed this.

Looks good to me. (Abram, 7/16)

• The "Goldbach's conjecture" bubble isn't working on my computer. I don't know what's up with that.

Iris (7/15): fixed this.

Looks good to me. (Abram, 7/16)

• In the prime numbers in lines section, you've got some layout kinks. You might want to center the big images here. The animation/picture in my browser ends up cutting off the equation numbering in a really weird way -- the number jumps down to after the word "or" but above the next equation. Other than that, this section is quite clear and well written. (Anna 7/7)

Iris (7/8): I rearranged the images.

Looks good to me. Anna? (Abram, 7/14)

I agree. (Anna 7/14)

Edits for the sake of explanation

• The first sentence of the basic description can be deleted, if the mouse-over definition of the prime numbers is moved to the next time those words are used.

Great. The mouse-over definition of prime numbers is still placed over the *second* use of the term. This is a bigger problem in the difference tables section, where the link to difference tables is placed in the second use of the term, more than a paragraph after the first use of the term.
(Abram, 7/14)

Iris (7/15): I moved the mouse-over definition of prime numbers to the very top, and I added a mouse-over definition to difference tables the first time I used it.

The mouse-over definition might be a little bit clearer if you said, "The difference table lists the terms of a sequence in one row, and the differences between consecutive terms in the next row...", but you can leave your definition if you'd like. (Abram, 7/16)

Iris (7/19): I fixed this.

Looks good. (Abram, 7/19)
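(A computational footnote to the Euler-section thread above: the claim that substituting x = -19 through x = 20 into 4x^2 - 2x + 41 yields primes is easy to check mechanically. A minimal sketch — the trial-division test is just for illustration:)

```python
def is_prime(n):
    """Trial division -- plenty fast for values this small."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# The 40 values discussed in the Euler section of the talk above.
values = [4 * x * x - 2 * x + 41 for x in range(-19, 21)]
print(len(values), all(is_prime(v) for v in values))  # 40 True
print(min(values), max(values))  # 41 1601
```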
http://physics.stackexchange.com/questions/tagged/cp-violation?sort=unanswered&pagesize=15
# Tagged Questions

cp-violation is a property of a system or theory where the combined charge conjugation (replacing particles with their anti-particles) and parity (reversing the sign of every vector, equivalent to reflecting the system through the origin) operations yield non-equivalent results. Only the weak interaction is known ...

### How to determine predicted CP violation for a given SUSY point? (1 answer, 88 views)

I'm currently studying the spectra of some supersymmetric models, and would like to know whether the parameter points I'm looking at are ruled out due to excessive CP violation. I am using SPheno, ...

### What is the status of Witten's and Vafa's argument that the QCD vacuum energy is a minimum for zero $\theta$ angle? (0 answers, 110 views)

The argument, which I reproduce here from Ramond's `Journeys BSM', is originally by Witten and Vafa in (Phys. Rev. Lett. 53, 535 (1984)). The argument is that for $\theta = 0$ ...

### CP-symmetry and Ward identities and finite temperature (0 answers, 38 views)

I have a few questions about Ward identities which I summarize here. For each I am very grateful for answers and references to literature. Wikipedia states about Ward identities: The ...

### CP-violation in weak and strong sectors (0 answers, 60 views)

There is a possible CP-violating term in the strong sector of the standard model proportional to $\theta_\text{QCD}$. In the absence of this term, the strong interactions are CP-invariant. In the ...

### What are the prerequisites to study CP violation? (0 answers, 99 views)

If one would like to study CP violation, what would be the prerequisites for it? For example, until now I have not studied quantum field theory and have done very little classical field theory, but ...
http://mathoverflow.net/questions/116719/localization-of-a-pure-injective-module-is-pure-injective/117547
## Localization of a pure-injective module is pure-injective?

Hi, is there some work on localization of pure-injective modules? Is a localization of a pure-injective module pure-injective? By localization I mean the standard localization defined for any multiplicatively closed subset S of the ring R. I'm interested in this question for modules over commutative (noetherian) rings. Thank you,

- What is your setting? What do you mean by "localization"? This could be an interesting question; it would be nice to have some more details (do you refer to a left or right Ore localization? are you thinking of modules over a commutative ring? ...). – Simone Virili Dec 18 at 18:33

## 1 Answer

I believe the answer in your setting is yes: localizations of pure injective modules are pure injective. I don't seem to use Noetherian below, but I am using the fact that $R$ is commutative all over the place. I don't know what happens without this hypothesis.

Recall that $M$ is pure injective if for every pure submodule $P$ of $M$, maps $P\to N$ extend to maps $M\to N$. Recall also that submodules of $S^{-1}M$ take the form $S^{-1}P$ for $P$ a submodule of $M$.

Here is a sketch of the proof, with details to be filled in below. Suppose $S^{-1}P$ is a pure submodule of $S^{-1}M$ and let $f:S^{-1}P \to N$. Then we have $f\circ \phi: P \to S^{-1}P \to N$ and this will extend to some $g:M\to N$ because $P$ is a pure submodule (see below) and $M$ is pure injective. So we have the commutative diagram below, where $f \circ \phi$ takes $S$ to units, so $g$ takes $S$ to units:

$$\begin{array}{ccc} P & \to & M \\ \downarrow & & \downarrow \\ S^{-1}P & \to & N \end{array}$$

By the universal property of localization, there exists a unique $S^{-1}g: S^{-1}M \to N$. By construction, this map extends $f$ and will fit into the commutative diagram above on the right hand side.
This proves $S^{-1}M$ is pure injective.

DETAILS

The formal way to phrase the universal property of localization for modules (i.e. to understand what is meant by "taking $S$ to units") requires a shift from thinking of $S$ as a subset of $R$ into thinking of $S$ as a collection of endomorphisms of $M$. For each $s\in S$ let $\mu_s:M\to M$ be multiplication by $s$. The universal property then says that if $f:M\to N$ takes every map $\mu_s$ to an isomorphism then there exists a unique map $g:S^{-1}M \to N$. It is a fun exercise to prove this is equivalent to the universal property as taught in a standard first-year algebra course, using the fact that $S^{-1}M \cong M\otimes_R S^{-1}R$ and the universal properties for tensor products and for localizations of rings.

Secondly, we have to justify the statement that $P$ is pure in $M$ above. Actually, I don't know if $S^{-1}P$ pure in $S^{-1}M$ implies $P$ pure in $M$, but I know it implies it $S$-locally (i.e. for any $S$-local module $X$, $f\otimes 1_X: P\otimes X \to M\otimes X$ is injective). The reason is simple: a module $X$ is $S$-local if $X\cong S^{-1}R\otimes Y$ for some $Y$. Since $P\otimes X \cong P\otimes (S^{-1}R \otimes Y) \cong S^{-1}P\otimes Y$, we see that the map $f\otimes 1_X$ can be written as $S^{-1}f \otimes 1_Y$, which is injective because $S^{-1}P$ is pure in $S^{-1}M$.

The $S$-local version of purity is enough because we only need to know that maps $P\to N$ taking $S$ to units extend to maps $M\to N$. So basically, I'm avoiding the issue by moving to the $S$-local category. If someone can prove $P$ is pure in $M$ without this shift I'd like to hear it.

- By the way, there might be a different proof which doesn't use universal properties and homological algebra. The simpler proof would be to use the fact that $M$ is pure injective iff $M$ is algebraically compact. Then you just need to understand how localization affects systems of equations. See en.wikipedia.org/wiki/Pure_injective_module.
I prefer homological algebra myself, and mainly wrote this answer to try to brush up on it and have some fun. For my tastes, the approach via equations wouldn't have been as much fun. – David White Dec 29 at 18:40

- I think that your definition of pure injective is not correct. Your conditions are equivalent to pure injectivity for $N$, not for $M$. – Fernando Muro May 5 at 7:08

- Hi Fernando. I can't remember anything about this problem, since it was so long ago. I see my links above were to wikipedia, so maybe there was some ambiguity there. A better source for info on pure injectives appears to be eprints.ma.man.ac.uk/1148/01/covered/…, and it confirms that you're right. Anyway, this hasn't been a very popular post, and I don't think the OP ever came back so I'm not going to bother trying to tweak this answer to prove $S^{-1}N$ is pure injective. I think the same methods and types of considerations should work. – David White May 5 at 14:14
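(An aside spelling out the universal property invoked in the answer above, in the $\mu_s$ formulation. This is a sketch: the explicit formula for $S^{-1}g$ on fractions is my own spelling-out, not part of the original answer.)

```latex
% If g : M -> N takes every multiplication map \mu_s (s in S) to an
% isomorphism, then g factors uniquely through \phi : M -> S^{-1}M:
\[
\begin{array}{ccc}
M & \xrightarrow{\;g\;} & N \\
{\scriptstyle\phi}\downarrow & \nearrow{\scriptstyle\exists!\,S^{-1}g} & \\
S^{-1}M & &
\end{array}
\qquad
(S^{-1}g)\!\left(\tfrac{m}{s}\right) = \bigl(\mu_s^{N}\bigr)^{-1}\bigl(g(m)\bigr),
\]
% where \mu_s^{N} denotes multiplication by s on N, which is invertible
% by the hypothesis that g takes S to units.
```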
http://polymathprojects.org/2011/02/05/polymath6-improving-the-bounds-for-roths-theorem/?like=1&source=post_flair&_wpnonce=104f7cb3e5
# The polymath blog

## February 5, 2011

### Polymath6: improving the bounds for Roth’s theorem

Filed under: polymath proposals — gowers @ 12:03 pm

For the time being this is an almost empty post, the main purpose of which is to provide a space for mathematical comments connected with the project of assessing whether it is possible to use the recent ideas of Sanders and of Bateman and Katz to break the $1/\log N$ barrier in Roth’s theorem. (In a few hours’ time I plan to write a brief explanation of what one of the main difficulties seems to be.)

Added later. Tom Sanders made the following remarks as a comment. It seems to me to make more sense to have them as a post, since they are a good starting point for a discussion. So I have taken the liberty of upgrading the comment. Thus, the remainder of this post is written by Tom.

This will hopefully be an informal post on one aspect of what we might need to do to translate the Bateman-Katz work into the $\mathbb{Z}/N\mathbb{Z}$ setting. One of the first steps in the Bateman-Katz argument is to note that if $A \subset \mathbb{F}_3^n$ is a cap-set (meaning it is free of three-term progressions) of density $\alpha$ then we can assume that there are no large Fourier coefficients, meaning $\sup_{0_{\widehat{G}}\neq\gamma \in \widehat{\mathbb{F}_3^n}}{|\widehat{1_A}(\gamma)|} \leq C\alpha/n$. They use this to develop structural information about the large spectrum, $\rm{Spec}_{\Omega(\alpha)}(1_A)$, which consequently has size between $\Omega(C^{-3}n^3)$ and $O(\alpha^{-3})$. This structural information is then carefully analysed in the `beef’ of the paper.

To make the assumption that there are no large Fourier coefficients they proceed via the usual Meshulam argument: if there is a large coefficient then we get a density increment of the form $\alpha \mapsto \alpha(1+\Omega(C/n))$ on a subspace of co-dimension $1$, and this can be iterated until they have all been removed.
In the $\mathbb{Z}/N\mathbb{Z}$ setting this has to be `Bourgainised’. We proceed relative to Bohr sets rather than subspaces. The problem with Bohr sets is that they do not function exactly as subspaces and, in particular, they do not have a nice additive closure property. A good model to think of is the unit cube $Q$ in $\mathbb{R}^d$. The sumset $Q+Q$ is not roughly equal to $Q$; it is about $2^d$ times as large as $Q$. However, if we take some small dilate $Q'$ of $Q$, say the cube of side length $\delta$, then we do have that $Q+Q' \approx Q$ since $\mu_G(Q+Q') = \mu_G(Q)(1+O(d\delta))$. This provides a sort of approximate additive closure, and the fact that it can be usefully extended to Bohr sets and used for Roth’s theorem was noticed by Bourgain in his paper `On triples in arithmetic progression‘.

In our situation, if $B$ is a Bohr set of dimension $d$, and $A \subset B$ has relative density $\alpha$, then we shall try to remove all characters $\gamma$ such that $|(1_A - \alpha1_B)^\wedge(\gamma)| =\Omega(C\alpha/\log N)$. Given such a character we produce a new Bohr set $B'$, defined to be the intersection of $B$ (dilated by a factor of $\alpha^{O(1)}$) and the $\alpha^{O(1)}$-approximate level set of $\gamma$ (meaning the set of $x$ such that $|\gamma(x)-1| \leq \alpha^{O(1)}$), with $\rm{width}(B') \geq \alpha^{O(1)}\rm{width}(B) \textrm{ and } \rm{dim} B' \leq \rm{dim} B + 1$, and $A$ has density $\alpha(1+\Omega(C/\log N))$ on a translate of $B'$.

After running this for at most $O(C^{-1}\log N)$ iterations we end up with a Bohr set $B$ such that $\rm{width}(B) \geq \alpha^{O(C^{-1}\log N)} \textrm{ and } \rm{dim} B = O(C^{-1}\log N)$. However, the only lower bound we have on the size of a Bohr set $B$ in a general Abelian group is $\mu_G(B) \geq \rm{width}(B)^{\rm{dim} B}$, which means we have to take $C=\Omega(\sqrt{(\log\alpha^{-1})(\log N)})$ or else our Bohr sets will become too small.
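To see numerically why the crude bound $\mu_G(B) \geq \rm{width}(B)^{\rm{dim} B}$ forces $C$ to be this large, one can track the bound through the iteration. A back-of-envelope sketch with every implicit $O(1)$ constant set to a placeholder value (the function name and sample numbers are illustrative, not from the post):

```python
import math

def neg_log_density(log_N, log_alpha_inv, C):
    """-log of the lower bound width(B)^dim(B) after the iteration,
    taking width >= alpha^steps and dim = steps, where steps = log(N)/C
    (all implicit constants set to 1)."""
    steps = log_N / C
    log_width_inv = steps * log_alpha_inv   # -log width(B)
    return steps * log_width_inv            # -log of width^dim

log_N = math.log(10 ** 30)      # a "large" N, for illustration
log_alpha_inv = math.log(100)   # density alpha = 1/100

# C of order sqrt((log 1/alpha)(log N)) keeps the Bohr set bigger than ~1/N:
C_big = 2 * math.sqrt(log_alpha_inv * log_N)
print(neg_log_density(log_N, log_alpha_inv, C_big) <= log_N)   # True

# ...whereas a constant C (as one may take in F_3^n) drives it far below 1/N:
print(neg_log_density(log_N, log_alpha_inv, 1.0) > log_N)      # True
```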
Of course, in $\mathbb{F}_3^n$ the width plays (essentially) no role in determining the size of the Bohr set: we have $\mu_G(B) \geq 3^{-\rm{dim} B}$ and we can take $C=O(1)$ as desired for the Bateman-Katz analysis.

Having seen this weakness of `Bourgainisation’ one naturally wants to look for arguments which somehow involve iterating a smaller number of times: if we had been able to take many Fourier coefficients together each time we passed to a new Bohr set, we would not have had to iterate, and therefore narrow, the Bohr set so many times. In fact Heath-Brown and Szemeredi, in their papers both titled `Integer sets containing no arithmetic progressions‘, provided such arguments.

The key idea of the Heath-Brown-Szemeredi approach in the Bohr set context is to intersect the dilate of the Bohr set $B$ with the $\alpha^{O(1)}$-approximate level sets of all the characters in the large spectrum $\rm{Spec}_{\Omega(C/\log N)}(1_A)$. This set has size at most $O(C^{-2}\alpha^{-1}\log^2 N)$ by Parseval’s theorem, and so we get a Bohr set $B'$ with $\rm{width}(B') \geq \alpha^{O(1)}\rm{width}(B) \textrm{ and } \rm{dim} B' \leq \rm{dim} B + O(C^{-2}\alpha^{-1}\log^2 N)$.

However, in this case we end up with a much bigger density increment. Indeed, $\widehat{\beta'}(\gamma) \approx 1$ for all $\gamma \in \rm{Spec}_{\Omega(C/\log N)}(1_A)$, from which we essentially get that $\sum_{\gamma}{|\widehat{1_A}(\gamma)|^2|\widehat{\beta'}(\gamma)|^2} \geq \alpha^2(1+\Omega(1))\mu_G(B)$. This translates to a density increment of $\alpha \mapsto \alpha(1+\Omega(1))$, and such an increment can only be iterated $O(\log \alpha^{-1})$ times — that is to say, not very many times.

Unfortunately, even when combined with Chang’s theorem this does not give an improvement over Bourgain’s original argument, and it wasn’t until 2008 that Bourgain produced a new argument improving our understanding in `Roth’s theorem on progressions revisited‘.
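The Parseval bound on the size of the large spectrum used above is easy to see numerically. A sketch in $\mathbb{Z}/N\mathbb{Z}$ with normalized Fourier coefficients (the specific $N$, density, and threshold here are arbitrary illustrative choices, not from the post):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10007                        # modulus, chosen arbitrarily
A = rng.random(N) < 0.1          # indicator vector of a random set, density ~0.1
alpha = A.mean()

# Normalized Fourier coefficients: hat(1_A)(r) = (1/N) sum_x 1_A(x) e(-xr/N).
coeffs = np.fft.fft(A.astype(float)) / N

# Parseval: the total Fourier mass equals the density alpha.
print(np.isclose((np.abs(coeffs) ** 2).sum(), alpha))  # True

# Hence Spec_theta = {r : |hat(1_A)(r)| >= theta} has at most alpha/theta^2
# elements, whatever the set A is; here the spectrum is far smaller still.
theta = alpha / 4
spec_size = int((np.abs(coeffs) >= theta).sum())
print(spec_size, alpha / theta ** 2)
```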
In this sequel a more careful analysis of the large spectrum is produced, and this benefits from knowing whether or not $\rm{Spec}_{\Omega(C/\log N)}(1_A)$ contains most of the Fourier mass in $\rm{Spec}_{\Omega(\alpha)}(1_A)$ or not. The point here is that we are given by the usual Fourier arguments that $\rm{Spec}_{\Omega(\alpha)}(1_A)$ supports a large Fourier mass. Now, if $C/\log N$ is somewhat bigger than $\alpha$ then it is a stronger statement to say that $\rm{Spec}_{\Omega(C/\log N)}(1_A)$ contains most of the Fourier mass. If it does then our plan might be to run one of the known Roth arguments; if it doesn’t then $\rm{Spec}_{\Omega(\alpha)}(1_A) \setminus \rm{Spec}_{\Omega(C/\log N)}(1_A)$ is large and we can hope to run the Bateman-Katz argument.

Hopefully I’ll talk more about Bourgain’s method, which gives $r_3(N) = O(N/\log^{2/3-o(1)}N)$ (and a slight refinement which gives $r_3(N) = O(N/\log^{3/4-o(1)}N)$), because these, along with Bourgain’s original approach, can all make use of the fact that $\rm{Spec}_{\Omega(C/\log N)}(1_A)$ is large, rather than simply the statement that $A$ has no non-trivial three-APs (which is stronger). One of the problems we face is that naively the proof giving $r_3(N) = O(N/\log^{1-o(1)}N)$ cannot make use of the fact that $\rm{Spec}_{\Omega(C/\log N)}(1_A)$ is large unless this can be converted into a meaningful physical-space statement.

I did wonder if there was some slight hope that the $\epsilon$ in the Bateman-Katz result would be sufficiently large, say bigger than $1/4$, that it could be combined with the $r_3(N) = O(N/\log^{3/4-o(1)}N)$ argument to give an improvement. This seems unlikely as I am told $\epsilon$ is rather small.

## 19 Comments
I have a fairly obvious question, which I think other people are likely to have thought about in detail, though I’m also tempted to think about it myself. Tom refers to the weakness of Bourgainization above, by which he means the fact that when you argue with regular Bohr sets, you have to take a “small” Bohr set inside the regular one, and eventually you pass down to that, which means that at each iteration the width of your Bohr set decreases. In some arguments, this is not a problem, but in others, when you have to iterate several times, it starts to affect bounds in a serious way. In particular, it explains why merely having the idea of replacing subspaces by Bohr sets is not enough to give a $1/\log N$ bound for Roth’s theorem. This is one of our main difficulties, since if one could just straightforwardly do that replacement, then we would have a straightforward dictionary from the Roth/Meshulam argument to a version in $\mathbb{Z}_N$, which would presumably make it straightforward to add on the Bateman-Katz improvement. One version of the question I would ask in response to that is a rather vague one: does the “weakness of Bourgainization” reflect a genuinely important difference between $\mathbb{F}_3^n$ and $\mathbb{Z}_N$ or does it reflect the weakness of the arguments we have so far thought of? In the rest of this comment, I’ll try to make this question more precise, though I don’t think I’ll succeed to the extent that I would ideally like. First of all, there obviously is a genuine difference between $\mathbb{F}_3^n$ and $\mathbb{Z}_N,$ which is that $\mathbb{F}_3^n$ has lots of subgroups/subspaces and $\mathbb{Z}_N$ doesn’t. Since I’m perfectly well aware of that, my task is to explain what I mean by this difference being important or not. As a first attempt at that, let me give a sense in which the difference might seem more important than it actually is. 
One nice feature of subspaces of $\mathbb{F}_3^n$ is that they allow one to define averaging projections: given a subspace $V$ and a function $f$ defined on $\mathbb{F}_3^n$ we define $\pi_Vf$ by the formula $\pi_Vf(x)=\mathbb{E}_{v\in V}f(x+v),$ which replaces the value at $x$ by the average over the coset $x+V.$ If $B$ is a Bohr set, we can’t do this: it is not in general possible to partition $\mathbb{Z}_N$ by translates of $B,$ and even if one could, one would have to choose some translates and not others, a choice one does not have to make with subspaces. Nevertheless, there is a useful analogue of averaging projections, which is convolving with characteristic measures. That is, we define $\pi_Bf$ by the formula $\pi_Bf(x)=f*\mu_B(x)=\mathbb{E}_{y\in B}f(x+y).$ (The last equality relies on the fact that $B=-B.$) Note that we can define the averaging projection as $\pi_Vf=f*\mu_V,$ so this is a direct generalization. When it comes to proving Roth’s theorem in $\mathbb{F}_3^n$, we pass to subspaces rather than taking averaging projections, so it starts to look as though it matters more that we are dealing with Bohr sets in $\mathbb{Z}_N.$ But I can’t help wondering whether it might be possible to do something a bit cleverer that would allow one to get away with a smaller sacrifice of width. At this point, my thoughts get a little hazy, which is why I am asking whether anyone else has been down this road already and had clearer thoughts, either negative or positive. The sort of picture I have in my mind is something like this. 
Instead of passing to one Bohr set $B$, where it really is the case that if you pick a random point $x\in B$ and then a difference $d,$ you need $d$ to belong to a much smaller Bohr set in order to be able to say that $x+d$ probably belongs to $B$ as well, could one do something like passing to an ensemble of Bohr sets inside which you have an ensemble of smaller Bohr sets, but this time with widths smaller by a constant factor, so that if you spill out of one Bohr set you find yourself in a neighbouring one? That’s just one suggestion that I haven’t thought about at all (I’m really trying to convey the type of thing I’d like to look for rather than the actual thing I’m looking for). But I can now try to rephrase my question about genuine importance. Does the fact that Bohr sets don’t have sharp boundaries, so to speak, really have a bearing on Roth’s theorem, or is it just a technical irritant that we haven’t yet fully seen how to deal with? One final remark is this. It may seem as though by talking about averaging projections I am advocating an energy-incrementation argument rather than a passing-to-subspaces argument. But that isn’t exactly what I’m trying to do: I’m wondering whether there could be an argument that is global enough to avoid the severe edge-effects associated with Bohr sets, but local enough to have the advantages of the passing-to-subspaces approach. Comment by — February 6, 2011 @ 11:00 am 3. To be slightly more specific, perhaps instead of showing that the set correlates with Bohr sets, one could show that it correlates with non-negative Bohr-continuous functions or something like that. (There are various definitions one could give for this. A natural one would be to say that f is B-continuous if $f\approx f*\mu_B$.) I don’t think this actually works though, or at least not in any straightforward way. 
If you normalize such a function so that it becomes a probability measure $\lambda,$ and you then try to exploit the fact that $A$ contains no progression of length 3, the idea would be to argue that $A\lambda$ has a small AP count as well, and that this would give bias with respect to a new trigonometric function. And the hope would be that if $\lambda\approx\lambda*\mu_B,$ then $B$ could play the role of the small Bohr set in the calculations rather than the large Bohr set. And then a suitable function built out of $\lambda$ and the trigonometric function would become the new $\lambda$ and a Bohr set built out of $B$ and the new frequency would become the new $B.$ And for some reason that I cannot see without doing calculations and have no convincing reason to expect, the width of $B'$ would not have to be substantially smaller than the width of $B.$ Comment by — February 6, 2011 @ 2:39 pm 4. One small question which occurs to me from the question of the limitation of Bourganisation above is whether being in a high dimensional Bohr set helps in the construction of three-term arithmetic progression-free sets or not. Specifically, can one construct a set $A \subset [N]^d$, free of proper three-term progressions, with $A$ larger than the Behrend bound. It’s not clear to me that it helps at all given that the lower bounds for $r_3(\mathbb{F}_3^n)$ are worse than $r_3(3^n)$ but it would be some evidence of a genuine difference between Bohr sets and subspaces. I suppose that there are high-dimensional progressions (rather than Bohr sets) which contain large three-term progression free sets. Indeed, $\{0,1\}^n$ in $\mathbb{F}_5^n$ is a $d$-dimensional progression containing no proper three-term progressions… Comment by Tom — February 6, 2011 @ 3:04 pm 5. A somewhat related remark is this. I think the problem with the kind of proposal I suggested in comment 2 is that one has to show that the B-continuous function $\lambda$ “ought” to contain several APs of length 3. 
But the only ones that it seems to have to have are “local” ones, where you have a common difference belonging to B. But perhaps that is where the gain is: our common difference would be in B rather than in a Bohr set of much smaller width. In the course of writing this comment I have ceased to feel that I can see either that there is a problem or that there isn’t a problem. But I agree with you Tom that once one has got to the stage of considering a Bohr set of dimension $d,$ one is morally living in a d-dimensional lattice, so if there is some sense in which the bounds for Roth are weaker in $d$ dimensions than they are in one dimension, then there would appear to be a genuinely important difference between $\mathbb{Z}_N$ and $\mathbb{F}_3^n.$ Comment by — February 6, 2011 @ 3:18 pm 6. With regard to the comments on Bohr sets above, I think it’s important to distinguish two reasons why one passes to Bohr sets of smaller “width”. In my opinion the first (1) is really crucial: the fact that we want a situation that is roughly closed under addition, thus for example $B + B' \approx B$ if $B'$ has much smaller width than $B$. This “approximate group” property is completely vital and it does not hold for the Bohr set $B$ by itself, where $B + B \sim 2^d B$. The second reason (2) for passing to smaller Bohr sets is one that Tim mentions above – Bohr sets can have nasty “edge effects”, so one needs to work with so-called “regular” Bohr sets. My feeling – one that I think Tom will back up – is that for the purposes of this discussion we should just assume that all Bohr sets are regular. To summarise, then: (1) is a really serious conceptual point, and (2) is a technical irritant that can always be bypassed in practice. Comment by Ben Green — February 6, 2011 @ 4:57 pm 7. 
In our offline email exchanges I mentioned that we probably “should” be working in $[N]$ rather than in $\mathbb{Z}/N\mathbb{Z}$, my reasoning being that the group structure in the latter is a bit artificial (some progressions are not “real” progressions). Now that I’ve thought about it a bit more, I don’t mind working in $\mathbb{Z}/N\mathbb{Z}$ so much. The thing is that there one can talk about “the set of large Fourier coefficients”, something that Bateman and Katz are doing all the time, rather than looking at sets of $1/100N$-separated Fourier coefficients on the circle $\mathbb{R}/\mathbb{Z}$ or something like that. Of course this is all a matter of finding a good language to talk in and there is no genuine mathematical gain to be made from working in one or the other setting (so far as I know). OK, enough from me. I plan to try and present one or two lemmas from Bateman-Katz in the way that I like to understand things. I know other people (for example Tim) are doing the same, and fully concur with him that this should be helpful despite the inevitable duplication of effort. Comment by Ben Green — February 6, 2011 @ 5:02 pm 8. Responding to comment 7: I just want to clarify that the thought I was floating in comment 3 was that if we want approximate closure under addition, we don’t necessarily have to have a pair of Bohr sets B and B’ with B’ smaller than B. Instead, we could have a single Bohr set B and a probability measure $\lambda$ about which we know nothing other than that $\lambda*\mu_B\approx\lambda.$ Thus $\lambda$ would be “approximately closed under B-addition”. I’m not claiming that that actually works, or that if it does then it gives better bounds for anything. But I do want to think about it a little unless someone can persuade me in advance that it’s not worth the bother. Comment by — February 6, 2011 @ 5:40 pm 9. [...] the bounds for Roth’s theorem. See this post. Also there is a page on the polymath blog here. There is also a wiki here. Also see this post and this [...] 
Pingback by — February 6, 2011 @ 9:58 pm 10. Hi all, I am interested in the project of getting useful physical space information from large Fourier coefficients. If this can’t be done for one coefficient as in the cap-set question, perhaps can the presence of many large coefficients help? What would constitute a counterexample convincing us that such an approach is futile? One way I imagined being helpful at this stage was to write some notes helping to explain our argument. I see you have a wiki for this, but I don’t have the appropriate permissions to modify it. I’ve noticed that several of you plan on contributing to it and don’t want to step on your toes. However let me know if an outline of the argument in section 6 and a description of some examples that necessitate Nets Comment by Nets Katz — February 7, 2011 @ 2:53 am • Hi Nets, I created some sections on the wiki for your paper with Bateman and started to add some content based on the discussions we had last week. I don’t want this to dissuade anyone from writing different takes on whatever I have added, however. Indeed, I imagine having several different perspectives on things will be very useful. So please don’t worry about the stepping on of toes on my account. I think you should be able to create an account on the wiki yourself; at least there is a link towards the top right of the site that gives that impression. Olof Comment by Olof Sisask — February 7, 2011 @ 6:06 am • Olof, Thanks. I’m not very experienced with wikis. Nets Comment by Nets Katz — February 7, 2011 @ 6:13 am 11. I would like to join this project. Unfortunately, I don’t know much about Bohr sets yet, which may be something of a handicap. However, I have researched the cap-set problem a little, and have come up with an idea for a computer experiment to test the finite-field heuristic by comparing $\mathbb{F}_3^n$ against $(\mathbb{Z}/3 \mathbb{Z})^n$ for $\Lambda(1_A)$, which I have described in a blog post here. 
Obviously though, I am far from certain whether data from such an experiment would be of interest, so feedback on that would be welcome. Comment by Paul Delhanty — February 7, 2011 @ 6:04 pm • Could I ask what the difference is between $\mathbb{F}_3$ and $\mathbb{Z}/3\mathbb{Z}$ in what you write? Comment by — February 7, 2011 @ 7:06 pm • Regarding your very reasonable question, you are of course correct that there is no difference between ${{\mathbb F}_3}$ and ${{\mathbb Z}/3{\mathbb Z}}$. I stupidly wrote ${{{\mathbb Z}/3{\mathbb Z}}^n}$ when I was thinking of ${{{\mathbb Z}/3^n{\mathbb Z}}}$, and then copied the LaTeX for my mistake all over my blog post. Accordingly, I have updated to refer to ${{{\mathbb Z}/3^n{\mathbb Z}}}$, which is what I meant. So for clarity, I am proposing to compare ${{\mathbb F}_3^n}$ against ${{{\mathbb Z}/3^n{\mathbb Z}}}$, obtaining frequency counts for possible values of ${\Lambda(1_A) - P_G(A)^3}$ at each density level-set in the power set of ${[3^n]}$. I don’t think that I have explained very well why I find the experiment interesting. Certainly, I can update my rather rushed blog post with a clearer explanation. However, even if I explain that clearly, I am far from confident that the experimental data would be of wider interest. If the experiment is of interest, then I can move material to this blog or to the wiki. On the other hand, if the experiment is not of interest, I still feel honour bound to complete the experiment before moving on. Comment by Paul Delhanty — February 8, 2011 @ 5:14 am 12. [...] Gowers very reasonably asked here “Could I ask what the difference is between and and in what you write?” The answer is [...] Pingback by — February 8, 2011 @ 4:51 am 13. This comment is a reply to comment 2 and the need to pass to subspaces rather than taking averaging projections when proving Roth’s theorem for ${{\mathbb F}_3^n}$. 
I have dug out a sketch of an alternative proof of Lemma 10.15 (Non-uniformity implies density increment) from Tao & Vu that uses the dual form of the Poisson summation formula (essentially taking averaging projections). Seeing as Lemma 10.15 is used when passing to a subspace, might it be possible to unpack my proof somehow and then replace ${\pi_V f}$ by ${\pi_B f}$ for the Bohr set case? The proof also improves the constant in Lemma 10.15 from 1/2 to ${\sqrt 2}$, which makes me quite nervous that I have made a silly mistake. However, I am going to post the proof anyway in case something can be salvaged from the idea. Lemma 10.15 + improved constant: Write Z as a shorthand for ${{\mathbb F}_p^n}$. Let ${f: Z \rightarrow {\mathbb R}}$ be a function with mean zero. Then there exists a codimension 1 subspace V of Z and a point ${x_0 \in Z}$, such that $\displaystyle \mathop{\mathbb E}_{y \in x_0+V} f(y) \geq \sqrt 2 ||f||_{u^2(Z)}$ Sketch of proof: Choose ${\xi \in Z}$ such that ${|\hat f(\xi)| = ||f||_{u^2(Z)}}$. Let W be the dimension 1 Fourier subspace generated by ${\xi}$ and let V be the orthogonal complement of ${\xi}$. Let ${h := \pi_V f}$. By the dual form of the Poisson Summation formula (Exercise 4.1.7 in Tao & Vu) ${\hat h = \hat f \cdot 1_W}$. From the Parseval identity and the fact that ${f}$ is real valued: $\displaystyle \mathop{\mathbb E}_{x \in Z} |h(x)|^2 = \sum_{\zeta \in Z} |\hat h(\zeta)|^2 \geq |\hat f(\xi)|^2 + |\hat f(-\xi)|^2 = 2 ||f||_{u^2(Z)}^2$ The result follows from the fact that ${h}$ is constant on codimension 1 affine subspaces and the pigeonhole principle. Comment by Paul Delhanty — February 9, 2011 @ 3:40 am • Because the required bound on ${\mathop{\mathbb E}_{y \in x_0+V} f(y)}$ is signed, I have concluded that my proof is broken at the last point where I invoked the pigeonhole principle on ${\mathop{\mathbb E}_{x \in Z} |h(x)|^2}$. At that point it is necessary to choose a phase as is done in Tao & Vu. 
I think that the proof may be salvageable, but the constant would not be ${\sqrt 2}$. That would still leave the core idea intact though. Comment by Paul Delhanty — February 9, 2011 @ 6:48 am • The problem with using the $U_2$ norm (which would seem to be a very natural thing to do) is that the nonexistence of APs in a set $A$ provides a bound on the $U_2$ norm of the balanced function of $A$ that results, after the above argument, in a weaker density increment than you get if you use a Fourier expansion: the $\ell_\infty$ norm is what you care about if you are looking for a density increment, so you lose information if you use the $\ell_4$ norm. Comment by — February 9, 2011 @ 8:01 am
# Least squares

The method of least squares is a standard approach to the approximate solution of overdetermined systems, i.e., sets of equations in which there are more equations than unknowns. "Least squares" means that the overall solution minimizes the sum of the squares of the errors made in the results of every single equation. The most important application is in data fitting. The best fit in the least-squares sense minimizes the sum of squared residuals, a residual being the difference between an observed value and the fitted value provided by a model. When the problem has substantial uncertainties in the independent variable (the 'x' variable), then simple regression and least squares methods have problems; in such cases, the methodology required for fitting errors-in-variables models may be considered instead of that for least squares. Least squares problems fall into two categories: linear or ordinary least squares and non-linear least squares, depending on whether or not the residuals are linear in all unknowns. The linear least-squares problem occurs in statistical regression analysis; it has a closed-form solution. A closed-form solution (or closed-form expression) is any formula that can be evaluated in a finite number of standard operations. The non-linear problem has no closed-form solution and is usually solved by iterative refinement; at each iteration the system is approximated by a linear one, thus the core calculation is similar in both cases. Least squares corresponds to the maximum likelihood criterion if the experimental errors have a normal distribution and can also be derived as a method of moments estimator. 
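To make the definitions above concrete, here is a minimal numerical sketch (the data are invented): it fits a straight line by minimizing the sum of squared residuals and then checks that nudging the fitted coefficients can only increase that sum.

```python
# Minimal least-squares illustration with invented data: fit y = b0 + b1*x
# by minimizing the sum of squared residuals S(b0, b1).

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.1, 1.9, 3.2, 3.8]
n = len(xs)

def S(b0, b1):
    """Sum of squared residuals for the line y = b0 + b1*x."""
    return sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys))

# Closed-form solution of the simple-regression normal equations.
xbar = sum(xs) / n
ybar = sum(ys) / n
b1 = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
      / sum((x - xbar) ** 2 for x in xs))
b0 = ybar - b1 * xbar

# S is a convex quadratic in (b0, b1), so any perturbation of the fitted
# coefficients gives a residual sum at least as large as the minimum.
print(b0, b1, S(b0, b1))
```

Because the objective is quadratic and convex, the fitted $(b_0, b_1)$ is the unique global minimizer, which is the sense in which the line is the "best fit".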
The following discussion is mostly presented in terms of linear functions but the use of least-squares is valid and practical for more general families of functions. Also, by iteratively applying local quadratic approximation to the likelihood (through the Fisher information), the least-squares method may be used to fit a generalized linear model. For the topic of approximating a function by a sum of others using an objective function based on squared distances, see least squares (function approximation). [Figure: the result of fitting a set of data points with a quadratic function.] The least-squares method is usually credited to Carl Friedrich Gauss (1795),[1] but it was first published by Adrien-Marie Legendre. ## History ### Context The method of least squares grew out of the fields of astronomy and geodesy as scientists and mathematicians sought to provide solutions to the challenges of navigating the Earth's oceans during the Age of Exploration. The accurate description of the behavior of celestial bodies was the key to enabling ships to sail in open seas, where sailors could no longer rely on land sightings for navigation. The method was the culmination of several advances that took place during the course of the eighteenth century:[2] • The combination of different observations taken under the same conditions, rather than simply trying one's best to observe and record a single observation accurately. This approach was notably used by Tobias Mayer while studying the librations of the moon. • The combination of different observations as being the best estimate of the true value; errors decrease with aggregation rather than increase, perhaps first expressed by Roger Cotes. • The combination of different observations taken under different conditions as notably performed by Roger Joseph Boscovich in his work on the shape of the earth and Pierre-Simon Laplace in his work in explaining the differences in motion of Jupiter and Saturn. 
• The development of a criterion that can be evaluated to determine when the solution with the minimum error has been achieved, developed by Laplace in his Method of Least Squares. ### The method Carl Friedrich Gauss is credited with developing the fundamentals of the basis for least-squares analysis in 1795 at the age of eighteen.[1] Legendre was the first to publish the method, however. An early demonstration of the strength of Gauss's method came when it was used to predict the future location of the newly discovered asteroid Ceres. On January 1, 1801, the Italian astronomer Giuseppe Piazzi discovered Ceres and was able to track its path for 40 days before it was lost in the glare of the sun. Based on this data, astronomers desired to determine the location of Ceres after it emerged from behind the sun without solving the complicated Kepler's nonlinear equations of planetary motion. The only predictions that successfully allowed Hungarian astronomer Franz Xaver von Zach to relocate Ceres were those performed by the 24-year-old Gauss using least-squares analysis. Gauss did not publish the method until 1809, when it appeared in volume two of his work on celestial mechanics, Theoria Motus Corporum Coelestium in sectionibus conicis solem ambientium. In 1822, Gauss was able to state that the least-squares approach to regression analysis is optimal in the sense that in a linear model where the errors have a mean of zero, are uncorrelated, and have equal variances, the best linear unbiased estimator of the coefficients is the least-squares estimator. This result is known as the Gauss–Markov theorem. The idea of least-squares analysis was also independently formulated by the Frenchman Adrien-Marie Legendre in 1805 and the American Robert Adrain in 1808. 
In the next two centuries workers in the theory of errors and in statistics found many different ways of implementing least squares.[3] ## Problem statement The objective consists of adjusting the parameters of a model function to best fit a data set. A simple data set consists of n points (data pairs) $(x_i,y_i)\!$, i = 1, ..., n, where $x_i\!$ is an independent variable and $y_i\!$ is a dependent variable whose value is found by observation. The model function has the form $f(x,\beta)$, where the m adjustable parameters are held in the vector $\boldsymbol \beta$. The goal is to find the parameter values for the model which "best" fits the data. The least squares method finds its optimum when the sum, S, of squared residuals $S=\sum_{i=1}^{n}{r_i}^2$ is a minimum. A residual is defined as the difference between the actual value of the dependent variable and the value predicted by the model. $r_i=y_i-f(x_i,\boldsymbol \beta)$. An example of a model is that of the straight line in two dimensions. Denoting the intercept as $\beta_0$ and the slope as $\beta_1$, the model function is given by $f(x,\boldsymbol \beta)=\beta_0+\beta_1 x$. See linear least squares for a fully worked out example of this model. A data point may consist of more than one independent variable. For example, when fitting a plane to a set of height measurements, the plane is a function of two independent variables, x and z, say. In the most general case there may be one or more independent variables and one or more dependent variables at each data point. ## Limitations This regression formulation considers only residuals in the dependent variable. There are two rather different contexts in which different implications apply: • Regression for prediction. Here a model is fitted to provide a prediction rule for application in a similar situation to which the data used for fitting apply. 
Here the dependent variables corresponding to such future application would be subject to the same types of observation error as those in the data used for fitting. It is therefore logically consistent to use the least-squares prediction rule for such data. • Regression for fitting a "true relationship". In standard regression analysis, that leads to fitting by least squares, there is an implicit assumption that errors in the independent variable are zero or strictly controlled so as to be negligible. When errors in the independent variable are non-negligible, models of measurement error can be used; such methods can lead to parameter estimates, hypothesis testing and confidence intervals that take into account the presence of observation errors in the independent variables.[citation needed] An alternative approach is to fit a model by total least squares; this can be viewed as taking a pragmatic approach to balancing the effects of the different sources of error in formulating an objective function for use in model-fitting. ## Solving the least squares problem The minimum of the sum of squares is found by setting the gradient to zero. Since the model contains m parameters there are m gradient equations. $\frac{\partial S}{\partial \beta_j}=2\sum_i r_i\frac{\partial r_i}{\partial \beta_j}=0,\ j=1,\ldots,m$ and since $r_i=y_i-f(x_i,\boldsymbol \beta)\,$ the gradient equations become $-2\sum_i r_i\frac{\partial f(x_i,\boldsymbol \beta)}{\partial \beta_j}=0,\ j=1,\ldots,m$. The gradient equations apply to all least squares problems. Each particular problem requires particular expressions for the model and its partial derivatives. ### Linear least squares Main article: Linear least squares A regression model is a linear one when the model comprises a linear combination of the parameters, i.e., $f(x_i, \beta) = \sum_{j = 1}^{m} \beta_j \phi_j(x_{i})$ where the functions, $\phi_{j}$, are functions of $x_{i}$. 
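As a quick sketch of this definition (the data points are invented), a quadratic fit $f(x,\beta)=\beta_1+\beta_2 x+\beta_3 x^2$ is still a *linear* least squares problem, since it is a linear combination of the basis functions $\phi_1(x)=1$, $\phi_2(x)=x$, $\phi_3(x)=x^2$:

```python
# The quadratic model f(x, beta) = beta1 + beta2*x + beta3*x^2 is linear in
# its parameters: row i of the design matrix holds the basis-function values
# phi_j(x_i) at the i-th data point (the x-values here are made up).

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
basis = [lambda x: 1.0, lambda x: x, lambda x: x * x]

X = [[phi(x) for phi in basis] for x in xs]
# The vector of model values at all data points is the matrix-vector
# product of X with the parameter vector beta.
```

The entries of this matrix do not depend on $\beta$ at all, which is precisely what makes the problem linear in the parameters.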
Letting $X_{ij}= \frac{\partial f(x_i,\boldsymbol \beta)}{\partial \beta_j}= \phi_j(x_{i}), \,$ we can then see that in that case the least squares estimate (or estimator, in the context of a random sample) $\boldsymbol{\hat\beta}$ is given by $\boldsymbol{\hat\beta} =( X ^TX)^{-1}X^{T}\boldsymbol y.$ For a derivation of this estimate see Linear least squares (mathematics). ### Non-linear least squares Main article: Non-linear least squares There is no closed-form solution to a non-linear least squares problem. Instead, numerical algorithms are used to find the value of the parameters $\beta$ which minimize the objective. Most algorithms involve choosing initial values for the parameters. Then, the parameters are refined iteratively, that is, the values are obtained by successive approximation: ${\beta_j}^{k+1}={\beta_j}^k+\Delta \beta_j,$ where $k$ is an iteration number and the vector of increments, $\Delta \beta_j\,$, is known as the shift vector. In some commonly used algorithms, at each iteration the model may be linearized by approximation to a first-order Taylor series expansion about $\boldsymbol \beta^k\!$ $\begin{align} f(x_i,\boldsymbol \beta) & = f^k(x_i,\boldsymbol \beta) +\sum_j \frac{\partial f(x_i,\boldsymbol \beta)}{\partial \beta_j} \left(\beta_j-{\beta_j}^k \right) \\ & = f^k(x_i,\boldsymbol \beta) +\sum_j J_{ij} \Delta\beta_j. \end{align}$ The Jacobian, J, is a function of constants, the independent variable and the parameters, so it changes from one iteration to the next. The residuals are given by $r_i=y_i- f^k(x_i,\boldsymbol \beta)- \sum_{j=1}^{m} J_{ij}\Delta\beta_j=\Delta y_i- \sum_{j=1}^{m} J_{ij}\Delta\beta_j$. To minimize the sum of squares of $r_i$, the gradient equation is set to zero and solved for $\Delta \beta_j\!$ $-2\sum_{i=1}^{n}J_{ij} \left( \Delta y_i-\sum_{k=1}^{m} J_{ik}\Delta \beta_k \right)=0$ which, on rearrangement, become m simultaneous linear equations, the normal equations. 
$\sum_{i=1}^{n}\sum_{k=1}^{m} J_{ij}J_{ik}\Delta \beta_k=\sum_{i=1}^{n} J_{ij}\Delta y_i \qquad (j=1,\ldots,m)\,$ The normal equations are written in matrix notation as $\mathbf{\left(J^TJ\right)\Delta \boldsymbol \beta=J^T\Delta y}.\,$ These are the defining equations of the Gauss–Newton algorithm. ### Differences between linear and non-linear least squares • The model function, f, in LLSQ (linear least squares) is a linear combination of parameters of the form $f = X_{i1}\beta_1 + X_{i2}\beta_2 +\cdots$ The model may represent a straight line, a parabola or any other linear combination of functions. In NLLSQ (non-linear least squares) the parameters appear as functions, such as $\beta^2, e^{\beta x}$ and so forth. If the derivatives $\partial f /\partial \beta_j$ are either constant or depend only on the values of the independent variable, the model is linear in the parameters. Otherwise the model is non-linear. • Algorithms for finding the solution to a NLLSQ problem require initial values for the parameters; LLSQ does not. • Like LLSQ, solution algorithms for NLLSQ often require that the Jacobian be calculated. Analytical expressions for the partial derivatives can be complicated. If analytical expressions are impossible to obtain, either the partial derivatives must be calculated by numerical approximation or an estimate must be made of the Jacobian. • In NLLSQ non-convergence (failure of the algorithm to find a minimum) is a common phenomenon, whereas in LLSQ the objective is globally convex, so non-convergence is not an issue. • NLLSQ is usually an iterative process. The iterative process has to be terminated when a convergence criterion is satisfied. LLSQ solutions can be computed using direct methods, although problems with large numbers of parameters are typically solved with iterative methods, such as the Gauss–Seidel method. • In LLSQ the solution is unique, but in NLLSQ there may be multiple minima in the sum of squares. 
• Under the condition that the errors are uncorrelated with the predictor variables, LLSQ yields unbiased estimates, but even under that condition NLLSQ estimates are generally biased. These differences must be considered whenever the solution to a non-linear least squares problem is being sought. ## Least squares, regression analysis and statistics The methods of least squares and regression analysis are conceptually different. However, the method of least squares is often used to generate estimators and other statistics in regression analysis. Consider a simple example drawn from physics. A spring should obey Hooke's law, which states that the extension of a spring is proportional to the force, F, applied to it. $f(F_i,k)=kF_i\!$ constitutes the model, where F is the independent variable. To estimate the force constant, k, a series of n measurements with different forces will produce a set of data, $(F_i, y_i),\ i=1,\ldots,n$, where $y_i$ is a measured spring extension. Each experimental observation will contain some error. If we denote this error $\varepsilon$, we may specify an empirical model for our observations, $y_i = kF_i + \varepsilon_i. \,$ There are many methods we might use to estimate the unknown parameter k. Noting that our data comprise an overdetermined system of n equations in one unknown, we may choose to estimate k using least squares. The sum of squares to be minimized is $S = \sum_{i=1}^{n} \left(y_i - kF_i\right)^2.$ The least squares estimate of the force constant, k, is given by $\hat k=\frac{\sum_i F_i y_i}{\sum_i {F_i}^2}.$ Here it is assumed that application of the force causes the spring to expand and, having derived the force constant by least squares fitting, the extension can be predicted from Hooke's law. In regression analysis the researcher specifies an empirical model. 
For example, a very common model is the straight line model, which is used to test whether there is a linear relationship between the dependent and independent variables. If a linear relationship is found to exist, the variables are said to be correlated. However, correlation does not prove causation, as both variables may be correlated with other, hidden, variables, or the dependent variable may "reverse" cause the independent variables, or the variables may be otherwise spuriously correlated.

For example, suppose there is a correlation between deaths by drowning and the volume of ice cream sales at a particular beach. Yet, both the number of people going swimming and the volume of ice cream sales increase as the weather gets hotter, and presumably the number of deaths by drowning is correlated with the number of people going swimming. Perhaps an increase in swimmers causes both the other variables to increase.

In order to make statistical tests on the results it is necessary to make assumptions about the nature of the experimental errors. A common (but not necessary) assumption is that the errors belong to a normal distribution. The central limit theorem supports the idea that this is a good approximation in many cases.

• The Gauss–Markov theorem. In a linear model in which the errors have expectation zero conditional on the independent variables, are uncorrelated and have equal variances, the best linear unbiased estimator of any linear combination of the parameters is its least-squares estimator. "Best" means that the least squares estimators of the parameters have minimum variance. The assumption of equal variance is valid when the errors all belong to the same distribution.
• In a linear model, if the errors belong to a normal distribution the least squares estimators are also the maximum likelihood estimators.
However, if the errors are not normally distributed, a central limit theorem often nonetheless implies that the parameter estimates will be approximately normally distributed so long as the sample is reasonably large. For this reason, given the important property that the error mean is independent of the independent variables, the distribution of the error term is not an important issue in regression analysis. Specifically, it is not typically important whether the error term follows a normal distribution.

In a least squares calculation with unit weights, or in linear regression, the variance on the jth parameter, denoted $\text{var}(\hat{\beta}_j)$, is usually estimated with

$\text{var}(\hat{\beta}_j)= \sigma^2\left( \left[X^TX\right]^{-1}\right)_{jj} \approx \frac{S}{n-m}\left( \left[X^TX\right]^{-1}\right)_{jj},$

where the true residual variance σ2 is replaced by an estimate based on the minimised value of the sum of squares objective function S. The denominator, n − m, is the statistical degrees of freedom; see effective degrees of freedom for generalizations.

Confidence limits can be found if the probability distribution of the parameters is known, or an asymptotic approximation is made, or assumed. Likewise statistical tests on the residuals can be made if the probability distribution of the residuals is known or assumed. The probability distribution of any linear combination of the dependent variables can be derived if the probability distribution of experimental errors is known or assumed. Inference is particularly straightforward if the errors are assumed to follow a normal distribution, which implies that the parameter estimates and residuals will also be normally distributed conditional on the values of the independent variables.

## Weighted least squares

A special case of generalized least squares called weighted least squares occurs when all the off-diagonal entries of Ω (the correlation matrix of the residuals) are 0.
The expressions given above are based on the implicit assumption that the errors are uncorrelated with each other and with the independent variables and have equal variance. The Gauss–Markov theorem shows that, when this is so, $\hat{\boldsymbol{\beta}}$ is a best linear unbiased estimator (BLUE). If, however, the measurements are uncorrelated but have different uncertainties, a modified approach might be adopted. Aitken showed that when a weighted sum of squared residuals is minimized, $\hat{\boldsymbol{\beta}}$ is BLUE if each weight is equal to the reciprocal of the variance of the measurement.

$S = \sum_{i=1}^{n} W_{ii}{r_i}^2,\qquad W_{ii}=\frac{1}{{\sigma_i}^2}$

The gradient equations for this sum of squares are

$-2\sum_i W_{ii}\frac{\partial f(x_i,\boldsymbol {\beta})}{\partial \beta_j} r_i=0,\qquad j=1,\ldots,m$

which, in a linear least squares system, give the modified normal equations

$\sum_{i=1}^{n}\sum_{k=1}^{m} X_{ij}W_{ii}X_{ik}\hat{ \beta}_k=\sum_{i=1}^{n} X_{ij}W_{ii}y_i, \qquad j=1,\ldots,m\,.$

When the observational errors are uncorrelated and the weight matrix, W, is diagonal, these may be written as

$\mathbf{\left(X^TWX\right)\hat {\boldsymbol {\beta}}=X^TWy}.$

If the errors are correlated, the resulting estimator is BLUE if the weight matrix is equal to the inverse of the variance-covariance matrix of the observations.

When the errors are uncorrelated, it is convenient to simplify the calculations by factoring the weight matrix as $\mathbf{w_{ii}}=\sqrt \mathbf{W_{ii}}$. The normal equations can then be written as

$\mathbf{\left(X'^TX'\right)\hat{\boldsymbol{\beta}}=X'^Ty'}\,$

where $\mathbf{X'}=\mathbf{wX}, \mathbf{y'}=\mathbf{wy}.\,$

For non-linear least squares systems a similar argument shows that the normal equations should be modified as follows:

$\mathbf{\left(J^TWJ\right)\boldsymbol \Delta \beta=J^TW \boldsymbol\Delta y}.\,$

Note that for empirical tests, the appropriate W is not known for sure and must be estimated.
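The weighted normal equations $\left(X^TWX\right)\hat{\boldsymbol{\beta}}=X^TWy$ and the row-scaling trick $w_{ii}=\sqrt{W_{ii}}$ can be sketched numerically; the heteroscedastic data below are made up for illustration:

```python
import numpy as np

# Made-up data: straight-line model y = 2 + 3x with
# heteroscedastic noise (each point has its own sigma_i).
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 8)
sigma = np.array([0.05, 0.05, 0.1, 0.1, 0.2, 0.2, 0.4, 0.4])
y = 2.0 + 3.0 * x + rng.normal(scale=sigma)

X = np.column_stack([np.ones_like(x), x])   # design matrix
W = np.diag(1.0 / sigma**2)                 # W_ii = 1 / sigma_i^2

# Solve the weighted normal equations (X^T W X) beta = X^T W y.
beta_w = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Equivalent formulation: scale each row by w_ii = sqrt(W_ii)
# and solve an ordinary least-squares problem with X' = wX, y' = wy.
w = 1.0 / sigma
beta_scaled = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)[0]

print(beta_w, beta_scaled)  # identical up to round-off
```

The two routes solve the same minimization, so the parameter vectors agree to machine precision.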
For this, Feasible Generalized Least Squares (FGLS) techniques may be used.

## Relationship to principal components

The first principal component about the mean of a set of points can be represented by that line which most closely approaches the data points (as measured by squared distance of closest approach, i.e. perpendicular to the line). In contrast, linear least squares tries to minimize the distance in the $y$ direction only. Thus, although the two use a similar error metric, linear least squares is a method that treats one dimension of the data preferentially, while PCA treats all dimensions equally.

## Regularized versions

### Tikhonov regularization

Main article: Tikhonov regularization

In some contexts a regularized version of the least squares solution may be preferable. Tikhonov regularization (or ridge regression) adds a constraint that $\|\beta\|_2^2$, the squared L2-norm of the parameter vector, is not greater than a given value. Equivalently, it may solve an unconstrained minimization of the least-squares penalty with $\alpha\|\beta\|_2^2$ added, where $\alpha$ is a constant (this is the Lagrangian form of the constrained problem). In a Bayesian context, this is equivalent to placing a zero-mean normally distributed prior on the parameter vector.

### Lasso method

An alternative regularized version of least squares is Lasso (least absolute shrinkage and selection operator), which uses the constraint that $\|\beta\|_1$, the L1-norm of the parameter vector, is no greater than a given value. (As above, this is equivalent to an unconstrained minimization of the least-squares penalty with $\alpha\|\beta\|_1$ added.) In a Bayesian context, this is equivalent to placing a zero-mean Laplace prior distribution on the parameter vector.
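The contrast between the two penalties already shows up in a one-parameter toy problem (an illustration, not from the article): with a normalized regressor the least-squares part reduces to $(b-\hat b)^2$ plus a constant, the ridge minimizer has the closed form $\hat b/(1+\alpha)$, and the lasso minimizer is the soft-thresholded $\operatorname{sign}(\hat b)\max(|\hat b|-\alpha/2,\,0)$. A brute-force grid search confirms both closed forms:

```python
import numpy as np

# One-parameter illustration (a simplifying assumption): with a
# normalized regressor the least-squares objective reduces to
# (b - b_ols)^2 + const, so both penalized problems have closed
# forms that we can verify by brute force.
b_ols = 0.8          # hypothetical unpenalized estimate
alpha = 2.0          # penalty weight

def ridge_obj(b):
    return (b - b_ols) ** 2 + alpha * b ** 2      # L2 penalty

def lasso_obj(b):
    return (b - b_ols) ** 2 + alpha * abs(b)      # L1 penalty

grid = np.linspace(-2, 2, 400001)                 # step 1e-5
b_ridge = grid[np.argmin(ridge_obj(grid))]
b_lasso = grid[np.argmin(lasso_obj(grid))]

# Closed forms: ridge shrinks smoothly, lasso soft-thresholds.
ridge_cf = b_ols / (1 + alpha)                          # = 0.8/3, nonzero
lasso_cf = np.sign(b_ols) * max(abs(b_ols) - alpha / 2, 0)  # = 0 exactly

print(b_ridge, ridge_cf, b_lasso, lasso_cf)
```

With $\alpha = 2$ the ridge estimate is shrunk to about 0.267 but stays nonzero, while the lasso estimate is driven exactly to zero, which is precisely the sparsity behaviour discussed next.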
One of the prime differences between Lasso and ridge regression is that in ridge regression, as the penalty is increased, all parameters are reduced while still remaining non-zero, while in Lasso, increasing the penalty will cause more and more of the parameters to be driven to zero. This problem may be solved using quadratic programming or more general convex optimization methods, as well as by specific algorithms such as the least angle regression algorithm. The L1-regularized formulation is useful in some contexts due to its tendency to prefer solutions with fewer nonzero parameter values, effectively reducing the number of variables upon which the given solution is dependent.[4] For this reason, the Lasso and its variants are fundamental to the field of compressed sensing. An extension of this approach is elastic net regularization.

## Notes

1. ^ a b Bretscher, Otto (1995). Linear Algebra With Applications, 3rd ed. Upper Saddle River, NJ: Prentice Hall.
2. Stigler, Stephen M. (1986). The History of Statistics: The Measurement of Uncertainty Before 1900. Cambridge, MA: Belknap Press of Harvard University Press. ISBN 0-674-40340-1.
3. J. Aldrich (1998). "Doing Least Squares: Perspectives from Gauss and Yule". International Statistical Review 66 (1): 61–81. doi:10.1111/j.1751-5823.1998.tb00406.x.
4. Tibshirani, R. (1996). "Regression shrinkage and selection via the lasso". Journal of the Royal Statistical Society, Series B 58 (1): 267–288.
http://mathhelpforum.com/algebra/102440-approximate-numbers-rounding-print.html
# Approximate numbers and rounding

• September 15th 2009, 02:20 PM
redshirt

Approximate numbers and rounding

I hope this is the proper spot for this! I did a little bit of searching and couldn't quite find what I was looking for, so here goes (it's more of a procedure question than a specific problem):

The chapter I'm on in my text covers calculators and approximate numbers. The rules given for operations with approximate numbers are:

1) When approximate numbers are added/subtracted, the result is expressed with the precision of the least precise number (precision being the number of decimal places, if I'm reading the text correctly).
2) When approximate numbers are multiplied/divided, the result is expressed with the accuracy of the least accurate number (accuracy being the total number of significant digits).
3) When the root of an approximate number is found, the result is expressed with the accuracy of the number.

I'm confused about the procedure when these operations are combined, however. The text only says "where there is a combination of operations, the final operation determines how the final result is to be rounded off" with the example:

$38.3 - 12.9(-3.58) = 84.482 = 84.5$

(which unfortunately isn't very comprehensive). In this example I'm led to believe that the final answer is rounded to tenths because 38.3 only has a single decimal place of precision (the final operation being subtraction), but this assumption doesn't hold true for later questions.

$-(-26.5)^2 - (-9.85)^3$

for example has a book answer of 253, which seems to indicate that the rounding that would take place on the multiplication before the subtraction has an effect on the final rounding. I don't think I can round at each step, though, as that would introduce error into the problem, much like chain rounding? Plus given that this is all in the context of using a calculator, it doesn't make much sense to stop at the end of each operation to round.
Do I just kind of earmark the least number of significant digits if there's multiplication, then apply that to the final result? And vice versa with the precision if the final operation is addition/subtraction? I guess what I really need is a more in-depth explanation of the rounding procedures used with the mixed operations in a problem like this :)

• September 15th 2009, 07:42 PM
Wilmer

Quote: Originally Posted by redshirt
$-(-26.5)^2 - (-9.85)^3$ for example has a book answer of 253, which seems to indicate that the rounding that would take place on the multiplication before the subtraction has an effect on the final rounding.......

Rounding ALWAYS takes place after the full operation is performed.

-(-26.5)^2 - (-9.85)^3 = 253.421625
to nearest whole number: 253
to 1 decimal place: 253.4
to 2 decimal places: 253.42
to 3 decimal places: 253.422
to 4 decimal places: 253.4216
to 5 decimal places: 253.42163
to 6 decimal places: 253.421625

• September 16th 2009, 04:32 PM
redshirt

Quote: Originally Posted by Wilmer
Rounding ALWAYS takes place after the full operation is performed. [...]

Do I still need to keep track of what rounding would have been done to the individual operations that make up the whole, though? I'm still looking for the process to use to figure out what rounding to apply to the final answer for these types of problems in general. In that example's case, for instance, the final result's been rounded to 3 significant digits.
The only way I can see getting that is by following the multiplication rule - but the final operation's subtraction :confused:

• September 27th 2009, 01:34 AM
redshirt

'Scuse the bump, I just noticed that this was buried in the move from the basic forum :)

• September 27th 2009, 04:29 AM
Wilmer

Quote: Originally Posted by redshirt
Do I still need to keep track of what rounding would have been done to the individual operations that make up the whole, ....

No, non, nyet, ...

• October 6th 2009, 09:09 AM
redshirt

Quote: Originally Posted by Wilmer
No, non, nyet, ...

I do understand that I'm not supposed to actually perform these intermediary operations, as that would introduce rounding errors, but what I don't understand is how to figure out what rounding to do on a complex problem based on the rules listed in the first post. Back to the example again: the final answer is supposed to be rounded to 253 - 3 significant digits, no decimal points. As I see it, if I followed the "last operation determines the rounding" line from the book to the letter, I'd have an answer of 253.4 - one decimal point, from the final operation being subtraction and the least precise number being -26.5.

I just need a better explanation of how to carry those rounding rules through problems that have multiple sets of operations. Is it that this kind of thing isn't standard at all? Searching around I've found a few pages and documents that list the same rules for precision and accuracy, but nothing that's gone into any detail at all about how they apply to problems with combined operations. "The final operation determines how the final result is to be rounded off" is the only explanation I've found. Frustrating, as there are instances of rounding throughout the entire text that seem to depend on these rules.
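As a footnote to the thread, it's easy to experiment with these rules numerically. The helper below is an illustration (not from the book); it rounds to a chosen number of significant digits and reproduces both candidate answers for the disputed example:

```python
import math

def round_sig(x, n):
    """Round x to n significant digits (returns 0.0 for x == 0)."""
    if x == 0:
        return 0.0
    exponent = math.floor(math.log10(abs(x)))
    return round(x, n - 1 - exponent)

value = -(-26.5) ** 2 - (-9.85) ** 3   # = 253.421625

print(round_sig(value, 3))   # 253.0  (the book's 3-significant-digit answer)
print(round(value, 1))       # 253.4  (one decimal place of precision instead)
```

So the book's 253 corresponds to the significant-digit (multiplication) rule, while one decimal place, the precision (subtraction) rule, would give 253.4, which is exactly the tension the thread is about.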
http://math.stackexchange.com/questions/210642/probability-of-finding-a-fit-for-bin-packing?answertab=votes
# Probability of finding a fit for bin packing

Given that I know the total available space for a set of bins, and the number of bins, I'm trying to determine how likely it is that an item of size $n$ will fit into one of the bins.

As an example, I figured that if I need to fit a size 4 item and I have 4 bins, then as long as the total available space is > 12, at least one bin must also have room for a size 4 item. However, if the total size is 12, the free space in the bins could have the configuration $[3,3,3,3]$, and the item would not fit in any of the bins. So the interesting cases are the ones where the total size is between $n$ (item size) and $(n-1)\cdot b$, where $b$ is the bin count. I'm trying to determine the probability of success.

I've figured out that the number of ways 12 "spaces" can be distributed across 4 bins is similar to putting $n$ balls into $k$ bins and is given by $$\binom{n+k-1}{k-1} .$$ So in the above example, this means the number of possible configurations for a total size of 12 and 4 bins is $$\binom{15}{3},$$ which yields 455 possible configurations. Of these, clearly only one is incompatible (3 slots in each bin), so the probability of failure is $1/455$, and thus success is $454/455$.

EDIT: I realized that the bins need to have max sizes as well, so for example if the free space is 12, but the max bin size is 10, then the $[12,0,0,0]$ solution is not possible and must not be counted. This means I can use the same approach for finding the possible distributions as the failing ones, only with different max bin sizes.

And this is where I'm completely stuck, I'm not able to generalize this. When the total size is 11, for example, there is more than one case which "fails", namely $[2,3,3,3]$, $[3,2,3,3]$, $[3,3,2,3]$ and $[3,3,3,2]$. I think that to count the "failing" cases I'm asking how many ways you can distribute $n$ balls into $k$ bins, given that no bin can exceed a max size $m$, where $m$ is my item size - 1.
This seems to be answered in Number of ways to put $n$ unlabeled balls in $k$ bins with a max of $m$ balls in each bin, but I can't figure out what to plug into the $r$ and $r2$ for the last formula there. Also, there might be some much easier way of figuring out the probability :)

- You're assuming that the bins fill completely the space, correct? – Jeremy Oct 10 '12 at 20:18
- @Jeremy: Yes, the total space is the sum of free space in the bins. – Hisnessness Oct 10 '12 at 20:26

## 1 Answer

A partial answer: If you have $n$ space and $k$ bins into which you wish to place an object of size $j$, you need to determine the number of ways in which there is at least one box of size at least $j$. For the "one box of size exactly $j$" portion, we have $$\binom{n-j+k-2}{k-2}$$ since we are holding one box of size $j$ fixed and so finding the number of solutions to $x_1+x_2+ \cdots +x_{k-1} = n-j$. And to find the number of ways to have a box of size at least $j$, you could sum up $$\sum_{i=0}^{n-j}{\binom{n-(j+i)+k-2}{k-2}}$$ but this would be overcounting and you'd have to take out the duplicates.

- I've edited my question as I realized I needed to constrain the bin sizes. Now both the total number of ways and the failing ways require the same solution. Sorry about that. – Hisnessness Oct 11 '12 at 5:38
- After trying the solution in the linked question again I found that it seems to give the right answers. – Hisnessness Oct 12 '12 at 18:36
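For completeness, the count the question is after (n unlabeled balls into k bins with at most m balls per bin, the standard bounded stars-and-bars result reached via inclusion–exclusion over the bins that overflow) can be sketched in code; this reproduces the counts worked out in the question:

```python
from math import comb

def bounded_compositions(n, k, m):
    """Number of ways to put n unlabeled balls into k distinguishable
    bins with at most m balls per bin, by inclusion-exclusion:
    sum_i (-1)^i * C(k, i) * C(n - i*(m+1) + k - 1, k - 1)."""
    total = 0
    for i in range(k + 1):
        rest = n - i * (m + 1)
        if rest < 0:
            break
        total += (-1) ** i * comb(k, i) * comb(rest + k - 1, k - 1)
    return total

# Example from the question: 12 units of free space, 4 bins.
assert bounded_compositions(12, 4, 12) == comb(15, 3) == 455   # no effective cap

print(bounded_compositions(12, 4, 3))   # 1  -> only [3,3,3,3] rejects a size-4 item
print(bounded_compositions(11, 4, 3))   # 4  -> the four permutations of [2,3,3,3]
```

With a bin cap of 10 and total free space 12, the same function gives 439 configurations, i.e. the 16 arrangements with some bin holding 11 or 12 are excluded from the 455.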
http://mathhelpforum.com/math-topics/181708-projectiles-print.html
# projectiles

• May 26th 2011, 07:36 AM
Duke

projectiles

A rocket R is fired from the ground with initial speed s at an angle $\alpha$ to the horizontal. There is a resistive force F = -mkv, where m is the mass, v is the velocity vector (v1, v2) and k is a constant. Show that the equations of motion are

$d/dt(v_1)+kv_1=0$ and $d/dt(v_2)+kv_2= -g$

My answer: first, in the horizontal direction there is no resultant force, so I don't get it.

• May 26th 2011, 07:50 AM
TheEmptySet

Quote: Originally Posted by Duke
A rocket R is fired from the ground with initial speed s at an angle $\alpha$ to the horizontal. [...]

That is not true. The air resistance is given by

$F=-mkv=-mk(v_1\mathbf{i}+v_2\mathbf{j})$

This force is proportional to the velocity in both the x and y directions. So in the y direction we have, by Newton's 2nd law,

$m\frac{dv_2}{dt}=-mg-kmv_2$

and in the x direction

$m\frac{dv_1}{dt}=-kmv_1$

• May 26th 2011, 07:56 AM
Duke

How do you know the sign of kmv? Doesn't it need to be negative? Also I need to state initial conditions for v and r (vectors).

• May 26th 2011, 08:18 AM
TheEmptySet

First, I see why you are confused: I factored out the minus signs in the force but forgot to put them in the final answer. Yes, I had a typo and have edited the above post.

The initial conditions are implied, not explicitly given.

Assumption 1: Since we are free to set up our own coordinate system, let's start the rocket at the origin. That would give the initial conditions

$x(0)=0 \quad y(0)=0$

Also, since the rocket is fired with speed s at an angle $\alpha$ to the horizontal, the initial velocity components are

$v_1(0)=s\cos\alpha \quad v_2(0)=s\sin\alpha$

• May 26th 2011, 08:32 AM
Duke

Thanks. If I define up as positive, do the signs change?
• May 26th 2011, 08:39 AM
TheEmptySet

Quote: Originally Posted by Duke
Thanks. If I define up as positive, do the signs change?

No, this set of equations already defines up as positive in the y direction and right as positive in the x direction. You see this from the fact that -mg has a negative sign. If down were positive, this would give the term mg.
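These two decoupled first-order equations integrate in closed form: $v_1(t)=v_1(0)e^{-kt}$ and $v_2(t)=\left(v_2(0)+\tfrac{g}{k}\right)e^{-kt}-\tfrac{g}{k}$. A quick sketch (not part of the thread; the numbers for s, alpha and k are made up) checks the closed forms against a Runge-Kutta integration of the same ODEs:

```python
import math

# Made-up values for illustration: launch speed s, angle alpha,
# drag constant k, gravity g (the symbols follow the thread).
s, alpha, k, g = 50.0, math.radians(40), 0.3, 9.8
v1_0, v2_0 = s * math.cos(alpha), s * math.sin(alpha)  # initial velocity

def exact(t):
    """Closed-form solutions of dv1/dt = -k v1, dv2/dt = -k v2 - g."""
    v1 = v1_0 * math.exp(-k * t)
    v2 = (v2_0 + g / k) * math.exp(-k * t) - g / k
    return v1, v2

def rk4(t_end, steps=2000):
    """Independent check: integrate the same ODEs with classical RK4."""
    h, v1, v2 = t_end / steps, v1_0, v2_0
    f = lambda v1, v2: (-k * v1, -k * v2 - g)
    for _ in range(steps):
        k1 = f(v1, v2)
        k2 = f(v1 + h/2 * k1[0], v2 + h/2 * k1[1])
        k3 = f(v1 + h/2 * k2[0], v2 + h/2 * k2[1])
        k4 = f(v1 + h * k3[0], v2 + h * k3[1])
        v1 += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        v2 += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return v1, v2

print(exact(3.0))
print(rk4(3.0))   # agrees with the closed form to high accuracy
```

Note that as t grows both velocities approach the terminal values (0, -g/k), as expected for linear drag.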
http://oumathclub.wordpress.com/tag/obama/
# Why Math Research Is Practical

Posted on April 30, 2009

We've discussed a few modern areas of research in mathematics: the search for a combinatorial proof of the density Hales-Jewett theorem led by Timothy Gowers, and some of the open questions about the number $\pi$. A fair question is why people should spend their time (and the money of the university/government/companies/etc.) on solving such impractical problems. Shouldn't they be working on a cure for a disease? Even if you enjoy doing math for math's sake, you probably have relatives who wonder why anyone would do basic research which isn't directly related to solving a real world problem!

A few days ago President Obama gave a speech at the National Academy of Sciences in which he gave a good answer to this question:

As Vannevar Bush, who served as scientific advisor to President Franklin Roosevelt, famously said: "Basic scientific research is scientific capital." The fact is, an investigation into a particular physical, chemical, or biological process might not pay off for a year, or a decade, or at all. And when it does, the rewards are often broadly shared, enjoyed by those who bore its costs but also by those who did not. That's why the private sector under-invests in basic science – and why the public sector must invest in this kind of research. Because while the risks may be large, so are the rewards for our economy and our society.

No one can predict what new applications will be born of basic research: new treatments in our hospitals; new sources of efficient energy; new building materials; new kinds of crops more resistant to heat and drought. It was basic research in the photoelectric effect that would one day lead to solar panels. It was basic research in physics that would eventually produce the CAT scan. The calculations of today's GPS satellites are based on the equations that Einstein put to paper more than a century ago.
– President Obama

We should point out that President Obama did make one mistake. He forgot to mention that Einstein's theory of relativity (on which GPS technology depends) itself depends on the work of mathematicians which came even earlier. In fact, a recent article by Alicia Dickenstein in the Bulletin of the American Mathematical Society discusses how, on the first page of Einstein's handwritten notes for his paper on general relativity, he writes about how heavily his work depends on the mathematics done by pure mathematicians (who had no idea it was useful in the "real world"!). Dr. Dickenstein tells the story like this:

These are Albert Einstein's words on the first page of his most important paper on the theory of relativity:

"The theory which is presented in the following pages conceivably constitutes the farthest-reaching generalization of a theory which, today, is generally called the "theory of relativity"; I will call the latter one—in order to distinguish it from the first named—the "special theory of relativity," which I assume to be known. The generalization of the theory of relativity has been facilitated considerably by Minkowski, a mathematician who was the first one to recognize the formal equivalence of space coordinates and the time coordinate, and utilized this in the construction of the theory. The mathematical tools that are necessary for general relativity were readily available in the "absolute differential calculus," which is based upon the research on non-Euclidean manifolds by Gauss, Riemann, and Christoffel, and which has been systematized by Ricci and Levi-Civita and has already been applied to problems of theoretical physics.
In section B of the present paper I developed all the necessary mathematical tools—which cannot be assumed to be known to every physicist—and I tried to do it in as simple and transparent a manner as possible, so that a special study of the mathematical literature is not required for the understanding of the present paper. Finally, I want to acknowledge gratefully my friend, the mathematician Grossmann, whose help not only saved me the effort of studying the pertinent mathematical literature, but who also helped me in my search for the field equations of gravitation."

So, indeed, he was not only paying homage to the work of the differential geometers who had built the geometry theories he used as the basic material for his general physical theory, but he also acknowledged H. Minkowski's idea of a four dimensional "world", with space and time coordinates. In fact, Einstein is even more clear in his recognition of the work of Gauss, Riemann, Levi-Civita and Christoffel in [7], where one could, for instance, read, "Thus it is that mathematicians long ago solved the formal problems to which we are led by the general postulate of relativity."
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 1, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9632507562637329, "perplexity_flag": "middle"}
http://mathhelpforum.com/algebra/212845-factorising-over-complex-numbers.html
# Thread:

1. ## factorising over complex numbers:

Factorize: $3iu^2+26iu+4u^2+3i-4$ over the complex numbers, where $i$ represents the imaginary unit.

Thanks

2. ## Re: factorising over complex numbers:

$\displaystyle \begin{align*} 3i\,u^2 + 26i\,u + 4u^2 + 3i - 4 &= \left( 4 + 3i \right) u^2 + 26i\,u + 3i - 4 \\ &= \left( 4 + 3i \right) \left[ u^2 + \left( \frac{26i}{4 + 3i} \right) u + \left( \frac{3i - 4}{4 + 3i} \right) \right] \\ &= \left( 4 + 3i \right) \left\{ u^2 + \left[ \frac{ 26i \left( 4 - 3i \right) }{\left( 4 + 3i \right) \left( 4 - 3i \right)} \right] u + \left[ \frac{\left( 3i - 4 \right) \left( 4 - 3i \right)}{\left( 4 + 3i \right) \left( 4 - 3i \right) } \right] \right\} \\ &= \left( 4 + 3i \right) \left[ u^2 + \left( \frac{104i - 78i^2 }{25} \right) u + \left( \frac{12i - 9i^2 - 16 + 12i}{25} \right) \right] \\ &= \left( 4 + 3i \right) \left[ u^2 + \left( \frac{78 + 104i}{25} \right) u + \left( \frac{-7 + 24i}{25} \right) \right] \\ &= \left( 4 + 3i \right) \left[ u^2 + \left( \frac{78 + 104i}{25} \right) u + \left( \frac{39 + 52i}{25} \right) ^2 - \left( \frac{39 + 52i}{25} \right) ^2 + \left( \frac{-7 + 24i}{25} \right) \right] \end{align*}$

$\displaystyle \begin{align*} &= \left( 4 + 3i \right) \left[ \left( u + \frac{39 + 52i}{25} \right) ^2 - \left( \frac{ -1183 + 4056i }{625} \right) + \left( \frac{-175 + 600i}{625} \right) \right] \\ &= \left( 4 + 3i \right) \left[ \left( u + \frac{39 + 52i}{25} \right) ^2 - \left( \frac{-1008 + 3456i}{625} \right) \right] \\ &= \left( 4 + 3i \right) \left[ \left( u + \frac{ 39 + 52i }{ 25 } \right) ^2 - \left( \frac{36 + 48i}{25} \right) ^2 \right] \\ &= \left( 4 + 3i \right) \left( u + \frac{39 + 52i - \left( 36 + 48i \right) }{25} \right) \left( u + \frac{ 39 + 52i + 36 + 48i }{ 25 } \right) \\ &= \left( 4 + 3i \right) \left( u + \frac{3 + 4i}{25} \right) \left( u + 3 + 4i \right) \end{align*}$

Here $\left( 36 + 48i \right)^2 = 1296 - 2304 + 3456i = -1008 + 3456i$, so the square root comes out exactly. Absorbing the factor $4 + 3i$ into the first bracket, $\left( 4 + 3i \right) \left( u + \frac{3 + 4i}{25} \right) = \left( 4 + 3i \right) u + i$ since $\frac{\left( 4 + 3i \right)\left( 3 + 4i \right)}{25} = \frac{25i}{25} = i$, which gives

$\left[ \left( 4 + 3i \right) u + i \right] \left( u + 3 + 4i \right).$

3.
## Re: factorising over complex numbers: Originally Posted by Prove It $\displaystyle \begin{align*} 3i\,u^2 + 26i\,u + 4u^2 + 3i - 4 &= \left( 4 + 3i \right) u^2 + 26i\,u + 3i - 4 \\ &= \left( 4 + 3i \right) \left[ u^2 + \left( \frac{26i}{4 + 3i} \right) u + \left( \frac{3i - 4}{4 + 3i} \right) \right] \\ &= \left( 4 + 3i \right) \left\{ u^2 + \left[ \frac{ 26i \left( 4 - 3i \right) }{\left( 4 + 3i \right) \left( 4 - 3i \right)} \right] u + \left[ \frac{\left( 3i - 4 \right) \left( 4 - 3i \right)}{\left( 4 + 3i \right) \left( 4 - 3i \right) } \right] \right\} \\ &= \left( 4 + 3i \right) \left[ u^2 + \left( \frac{104i - 78i^2 }{25} \right) u + \left( \frac{12i - 3i^2 - 16 + 12i}{25} \right) \right] \\ &= \left( 4 + 3i \right) \left[ u^2 + \left( \frac{78 + 104i}{25} \right) u + \left( \frac{-13 + 24i}{25} \right) \right] \\ &= \left( 4 + 3i \right) \left[ u^2 + \left( \frac{78 + 104i}{25} \right) u + \left( \frac{39 + 52i}{25} \right) ^2 - \left( \frac{39 + 52i}{25} \right) ^2 + \left( \frac{-13 + 24i}{25} \right) \right] \end{align*}$ $\displaystyle \begin{align*} &= \left( 4 + 3i \right) \left[ \left( u + \frac{39 + 52i}{25} \right) ^2 - \left( \frac{ -1183 + 4056i }{625} \right) + \left( \frac{-325 + 600i}{625} \right) \right] \\ &= \left( 4 + 3i \right) \left[ \left( u + \frac{39 + 52i}{25} \right) ^2 - \left( \frac{-858 + 3456i}{625} \right) \right] \\ &= \left( 4 + 3i \right) \left[ \left( u + \frac{ 39 + 52i }{ 25 } \right) ^2 - \left( \frac{\sqrt{ -858 + 3456i }}{25} \right) ^2 \right] \\ &= \left( 4 + 3i \right) \left( u + \frac{39 + 52i - \sqrt{ -858 + 3456i } }{25} \right) \left( u + \frac{ 39 + 52i + \sqrt{ -858 + 3456i } }{ 25 } \right) \end{align*}$ If you want you CAN get a proper complex number for this square root, but it will be in terms of sines and cosines. But wolfram alpha gives out this... $\framebox{3iu^2+26iu+4u^2+3i-4 = (u+(3+4i))((4+3i)u+i)}$ Wolfram doesn't give out a solution for this, and I wanted to know how was it done... 
Any idea how it was done? factorise3iu^2+26iu+4u^2+3i-4 - Wolfram|Alpha
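As a quick numeric sanity check (mine, not from the thread; plain Python complex arithmetic, no CAS needed), Wolfram's compact factorisation does agree with the original quadratic in $u$:

```python
# Numeric check that (u + (3+4i)) * ((4+3i)u + i) matches the original
# quadratic in u, using Python's built-in complex type (j is the imaginary unit).

def original(u):
    return 3j*u**2 + 26j*u + 4*u**2 + 3j - 4

def factored(u):
    return (u + (3 + 4j)) * ((4 + 3j)*u + 1j)

# Agreement at several sample points; two degree-2 polynomials that agree
# at three or more points are identical.
for u in (0, 1, -1, 2 + 5j, -3 - 4j):
    assert abs(original(u) - factored(u)) < 1e-9

# The two roots read off from the factors:
roots = (-(3 + 4j), -1j / (4 + 3j))   # the second equals (-3 - 4j)/25
assert all(abs(original(r)) < 1e-9 for r in roots)
```

This confirms the factorisation without deciding how Wolfram derived it; completing the square as in the reply above is one route.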
http://crypto.stackexchange.com/questions/2326/brute-forcing-cardan-grille?answertab=oldest
Brute forcing Cardan grille

Given a "rotating" square Cardan grille with sides $n$ cells long, how could I determine the cost of a brute-force attack? How many configurations should be considered to perform an exhaustive search?

EDIT FOR MORE DETAIL: This was an exam question, so this is all the information given to me, but I think I can suppose that the grille has $(n^2)/4$ holes, the plaintext has $n^2$ letters, letters are not visible in more than one orientation, and the grille has to be rotated 4 times.

- An obvious upper bound is $2^{n^2}$, but it's hard to say anything more specific without knowing more details about the system. For example, do you know how many holes the grille has? And may a given letter in the message be visible through the grille in more than one orientation? (If yes, that means you have to strike out letters you've already used when decoding a message.) – Ilmari Karonen Apr 10 '12 at 22:53
- Please provide more details (rotating, parameters, etc.). – Iceberg Hotspot Apr 11 '12 at 4:38
- thanks for the comment. I added more details, hoping that they are useful. – dciriello Apr 11 '12 at 6:56

1 Answer

With the extra information you've provided, there are $4^{\left\lfloor \frac{n^2}{4} \right\rfloor}$ possible grilles to consider. Specifically, consider an $n \times n$ grille with just one hole, and make a mark on the paper through that hole with the grille in each orientation. If you do that, you'll see that there will always be exactly four marked positions on the paper, placed symmetrically around the center (unless $n$ is odd and the hole is in the middle of the grille, in which case all marks will be in the same position).
Clearly, with the grille in its original orientation, exactly one of these four positions must have a hole: if two or more of them had holes, the same letters would show through all of them, and if none of them had holes, the letters in those positions would not be visible in any orientation (contradicting the claim that the plaintext has $n^2$ letters).

(Incidentally, if the requirement that no letters are visible in more than one orientation is taken strictly, it means that the center position cannot have a hole for odd $n$, meaning that the plaintext length for odd $n$ can be at most $n^2-1$. So we must either relax one of the requirements or rule out odd $n$.)

Each grille has $n^2$ positions (including the center position for odd $n$) and thus $\left\lfloor \frac{n^2}{4} \right\rfloor$ of these 4-position cycles. (The floor is there for odd $n$; if $n$ is even, $n^2$ is divisible by 4.) Assuming maximal plaintext length, each of these cycles must contain a hole (and, as shown above, cannot contain more than one hole). By symmetry, each of these holes can be in any of the four positions within their respective cycle. Thus, we have $\left\lfloor \frac{n^2}{4} \right\rfloor$ independent choices, with four options for each choice, giving us a total of $4^{\left\lfloor \frac{n^2}{4} \right\rfloor}$ possible sets of choices.
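The orbit-counting argument is easy to confirm by brute force for small $n$. A sketch (mine, not from the original answer; plain Python):

```python
def rotation_cycles(n):
    """Partition the n x n cells into orbits under 90-degree rotation,
    which maps cell (r, c) to (c, n - 1 - r)."""
    seen, cycles = set(), []
    for r in range(n):
        for c in range(n):
            if (r, c) in seen:
                continue
            orbit, cur = [], (r, c)
            while cur not in seen:
                seen.add(cur)
                orbit.append(cur)
                cur = (cur[1], n - 1 - cur[0])
            cycles.append(orbit)
    return cycles

def grille_count(n):
    """One hole per 4-cell orbit, four independent choices each,
    giving 4**floor(n^2 / 4) grilles."""
    return 4 ** sum(1 for c in rotation_cycles(n) if len(c) == 4)

assert grille_count(4) == 4 ** 4   # n even: n^2/4 four-cell orbits
assert grille_count(3) == 4 ** 2   # n odd: the centre cell sits in a 1-cell orbit
```

For odd $n$ the centre cell forms a fixed orbit of size 1, matching the aside about odd $n$ in the answer.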
http://mathhelpforum.com/trigonometry/68058-solving-trig-identities.html
# Thread:

1. ## Solving Trig Identities??

For my homework tonight, there is this one problem that is supposed to be easy but I can't figure it out. Solve for exact solutions in the interval [0 degrees, 360 degrees) — [ means include, and ) means exclude:

2sinx - 1 = cscx

I am supposed to find the degree value for x, and I think they want me to use cscx = 1/sinx.

2. Originally Posted by shanzwhaat (question quoted above, snipped here)

They do! Then multiply by $\sin x$ and you have a quadratic equation in $\sin x$ which you can solve.

3. Thanks, but I did that and got 0 degrees, 180 degrees, 30 and 150, but the back of the book said the answers were 90 degrees, 210 degrees, and 330 degrees. Do you know what I'm doing wrong?

4. Originally Posted by shanzwhaat (previous post quoted, snipped here)

If $2\sin x - 1 = \csc x$ then $2\sin x - 1 = \frac{1}{\sin x}$ so $2 \sin^2 x - \sin x -1 = 0$ or $(2 \sin x + 1)(\sin x -1) = 0\;\;\Rightarrow\;\; \sin x = - \frac{1}{2}, 1$ Did you get that?

5. Originally Posted by shanzwhaat (question quoted above, snipped here)

$2\sin{x} - 1 = \csc{x}$ $2\sin{x} - 1 = \frac{1}{\sin{x}}$ $2\sin^2{x} - \sin{x} = 1$ $2\sin^2{x} - \sin{x} - 1 = 0$. Let $X = \sin{x}$ so $2X^2 - X - 1 = 0$.
$2X^2 - 2X + X - 1 = 0$ $2X(X - 1) + 1(X - 1) = 0$ $(X - 1)(2X + 1) = 0$ $X - 1 = 0$ or $2X + 1 = 0$ $X = 1$ or $X = -\frac{1}{2}$ So $\sin{x} = 1$ or $\sin{x} = -\frac{1}{2}$. In the domain $x \in [0^{\circ}, 360^{\circ})$, $\sin{x} = 1$ if $x = 90^{\circ}$. In the domain $x \in [0^{\circ}, 360^{\circ}), \sin{x} = -\frac{1}{2}$ if $x = 210^{\circ}$ (Quadrant 3) or $x = 330^{\circ}$ (Quadrant 4). Does that make sense? 6. YES! thank you everyone who answered!
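The book's answers are also easy to cross-check numerically (a sketch of mine, not part of the thread; the tolerance is arbitrary):

```python
import math

def residual(deg):
    """2 sin x - 1 - csc x; zero exactly at solutions of the original equation."""
    s = math.sin(math.radians(deg))
    return 2*s - 1 - 1/s

# Scan whole degrees in (0, 360); we start at 1 because csc is undefined at 0,
# and 180 drops out on its own since sin there is a tiny non-zero float,
# making the residual huge rather than zero.
solutions = [d for d in range(1, 360) if abs(residual(d)) < 1e-9]
print(solutions)   # -> [90, 210, 330]
```

The scan recovers exactly the book's three solutions and rules out the spurious 0, 180, 30 and 150.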
http://cdsmith.wordpress.com/2009/06/15/thoughts-on-an-associative-metric-for-the-non-negative-reals/?like=1&_wpnonce=24a0599870
software, programming languages, and other ideas June 15, 2009 / cdsmith A recent reddit post posed the question of whether there exists an associative metric on the non-negative reals.  I didn’t see it until after most people moved on, so I’m posting my thoughts on the subject here.  It is likely that none of this is new, but perhaps some people will find it interesting to have it collected in one place. As a brief review just to collect the relevant definitions in one place… a function $d : X^2 \to \mathbb{R}_{\geq 0}$ is a metric if: 1. $d(x,y) = 0$ when $x = y$, and $d(x,y) > 0$ when $x \neq y$.  This is called definiteness. 2. $d(x,y) = d(y,x)$.  This is called symmetry. 3. $d(x,y) \leq d(x,z) + d(z,y)$.  This is the triangle inequality. In particular, if $X = \mathbb{R}_{\geq 0}$, then $d$ is a binary operation.  One may then ask whether it is associative.  That is, if $d(x, d(y,z)) = d(d(x,y),z)$.  The problem is whether there exists a metric $d$ on the non-negative reals that is in fact associative. # A Key Observation To get started, let’s just play some games with manipulating the axioms above.  Let’s consider any non-negative real number $x$, and let $k = d(0,x)$.  That is, $k$ is the distance of $x$ from the origin.  Then something interesting happens.  By the associative property we’ve just discussed, we have: $d(0, d(x,k)) = d(d(0,x), k)$ But remember, $d(0,x)$ on the right-hand side of that equation is the same as $k$, and by the definiteness property of a metric, $d(k,k) = 0$.  So we really have: $d(0, d(x,k)) = d(k,k) = 0$ Now, applying definiteness again in the other direction, we can conclude that $0 = d(x,k)$, and then applying it yet a third time, that $x = k$.  In other words, we have: Proposition 1: Suppose $d$ is an associative metric on the non-negative reals.  Then for all non-negative reals $x$, $d(0,x) = x$. This is not an unusual property of a metric on the reals.  
For example, the standard metric $d(x,y) = |x-y|$ has this property for all non-negative numbers.  However, it's not true of all metrics (it fails for the discrete metric, for example).  Note that so far, we have used only associativity and definiteness.  The really interesting property of a metric, though, is the triangle inequality.  So let's apply that to try to get some bounds on the possible choices of $d$, by decomposing distances into distances from the origin.  Suppose we choose $x$ and $y$ as arbitrary non-negative reals.  Then we have:

1. $d(0,x) \leq d(0,y) + d(y,x)$
2. $d(0,y) \leq d(0,x) + d(x,y)$
3. $d(x,y) \leq d(x,0) + d(0,y)$

Now, just using symmetry and proposition 1 to simplify those expressions, we get:

Proposition 2: Suppose $d$ is an associative metric on the non-negative reals.  Then for all non-negative reals $x$ and $y$, $|x-y| \leq d(x,y) \leq x + y$.

These are not very tight bounds on the possible values of $d$, really.  But they do tell us something interesting: namely, that a metric that works for us will necessarily have something to do with the magnitudes of the numbers involved.  This was not necessarily a given — one can imagine metrics on many sets of numbers that are entirely unrelated to the magnitudes of the numbers involved.  It turns out that in our case, our willingness to interchange the value of the metric with its inputs via the associative property somehow forces the metric to give significance to the magnitude of the numbers.

# Group Structure

The original post to Reddit made the casual observation that it can be shown that if this is true, then $\mathbb{R}_{\geq 0}$ is a group under this operation.  To review another basic definition, a group is any set together with a binary operation (which I'll call $\bullet$), which satisfies:

1. Associativity: that is, $x \bullet (y \bullet z) = (x \bullet y) \bullet z$.
2. Identity: that is, there exists $e$ such that for all $x$, $e \bullet x = x \bullet e = x$.
3.
Inverses: that is, for any $x$, there is a $x^{-1}$ such that $x \bullet x^{-1} = x^{-1} \bullet x = e$. As a quick change of notation (so that we’re more consistent with the infix notation normally used for groups), we define $x \bullet y = d(x,y)$.  Of course, the associativity of $\bullet$ was something we assumed from the very beginning.  Proposition 1, together with the symmetry property, give us the identity, which is zero.  Definiteness gives us inverses: for any $x$, we know $d(x,x) = 0$, so that $x$ is its own inverse.  Therefore, $\langle \mathbb{R}_{\geq 0}, \bullet\rangle$ is a group. A good bit could be said about the structure of this group.  In fact, that every element is its own inverse gives us very specific information about what this group looks like (it’s built from copies of $\mathbb{Z}_2$, for example.)  But even just knowing that it’s a group gives us some interesting ways to think about the problem.  The bounds we found in the previous section were actually compatible with the standard metric on the real numbers.  But seen as a group operation, our associative metric starts to develop some interesting properties that are somewhat different from the standard metric. For example: suppose you give me a non-negative real $x$, and ask me what other non-negative reals are a distance $y$ away from it.  Because distance is a group operation, there is precisely one answer to this question: if $x \bullet z = y$, then $z = x^{-1} \bullet y$ (and in this case, since every number is its own inverse, that’s the same as $x \bullet y$).  In other words the function $d_x(y) = d(x,y)$ is necessarily one-to-one, for any $x$. That’s a pretty significant statement!  In fact, it’s fairly simple now to show that for $x \neq 0$, $d_x$ therefore cannot be a continuous function in the topology induced by the standard metric $|x-y|$.  
We might previously have hoped that perhaps the metric that worked would look something like the standard metric, though perhaps stretched or distorted in some way.  In fact, the earlier proposition gave us extra hope, by limiting the amount by which our metric can differ from the standard metric.  It's now clear, though, that an associative metric will be quite different from the standard metric.

(As an aside, it's worth noting that $d_x$ above is continuous in the topology induced by the metric $d$.  This is basically equivalent to requiring that for any $\epsilon > 0$, there exists $\delta > 0$ such that whenever $y \bullet y_0 < \delta$, we also have $d_x(y) \bullet d_x(y_0) < \epsilon$.  But $d_x(y) \bullet d_x(y_0) = x \bullet y \bullet x \bullet y_0 = y \bullet y_0$, so choosing $\delta = \epsilon$ always works.)

# Where Next?

Who knows.  We have two conclusions now that seem rather at odds with each other: first, that an associative metric would need to overestimate the standard metric by a bounded (though, admittedly, very loosely bounded) amount.  Second, we have that in terms of topological properties, our metric differs significantly.  Resolving these tensions — or showing that they can't be resolved — would perhaps be an important next step in addressing the problem.

Filed under Math

#### 8 Comments

1. Twan van Laarhoven / Jun 15 2009 2:27 pm

I think I have something that almost works. Define d(a,b) as the digit-wise xor (aka sum mod 2) of a and b. So for example:

d(1/3,2/3) = d(0.010101..,0.101010..) = 0.111111.. = 3/3
d(1/2,1/3) = 0.11010101.. = 5/6
d(1/4,1/3) = 0.00010101.. = 1/12

The problem is of course that the binary expansion of real numbers is not uniquely defined, 0.01111.. = 0.10000.., so you get also d(1/2,1/3) = 0.00101010.. = 1/6

Maybe we can just pick one of these? Say, the smallest? Let's try that.
d(1/2,1/3) = d(0.011111,0.0101010) = 0.00101010 = 1/6 or d(0.100000,0.0101010) = 0.11010101 = 5/6
d(1/2,5/6) = d(0.011111,0.1101010) = 0.10101010 = 2/3 or d(0.100000,0.1101010) = 0.01010101 = 1/3

We can't have both d(1/2,1/3)=1/6 and d(1/2,5/6)=1/3, since then 1/6 = d(1/2,1/3) = d(1/2,d(1/2,5/6)) = d(d(1/2,1/2),5/6) = d(0,5/6) = 5/6

Maybe we should consistently pick either .011111 or .100000, let's say we go with the latter. Then d(1/2,1/3) = 5/6 and d(1/2,5/6) = 1/3. But what about d(1/2,1/6) = d(0.100000,0.0010101) = 0.101010101 = 2/3

So far so good, but now d(1/3,2/3) = d(0.010101,0.1010101) = 0.111111111 = 1 but d(1/3,1) = d(0.010101,1.0000000) = 1.010101010 = 4/3

The problem is of course that we switched representation for 0.111111111 to 1.0000000.

• Jade NB / Jun 17 2009 1:43 pm

@Twan: Notice that your operation mirrors very directly the observation made about the structure of the group in question, namely, that it is ‘built up’ from copies of the cyclic group of order 2. Of course, one has to be a little careful with the meaning of ‘built up’, since we don't have a finitely generated group and so can't invoke the fundamental theorem; but the fact that your proposed metric is such a natural reflection of an underlying algebraic structure suggests that it must be nearly the right thing. By the way, note also that the fact that all elements of this putative group square to the identity already gives us the symmetry (since xyxy = (xy)^2 = e = x^2 y^2 = xxyy, so that xy = yx), so that we don't even need all the axioms of a metric!

• Matthew / Jun 22 2009 4:07 am

So we can check the vector space axioms to show that the group is a vector space over Z2. Then by a cardinality argument its dimension must be the cardinality of the reals, 2^omega. So that pins it down up to isomorphism; we know exactly what the group structure must be, the problem is in how to map it to the reals so that the group operation respects the triangle inequality.
Would it help to find a basis for this vector space, using the axiom of choice?

• cdsmith / Jun 22 2009 11:16 am

Actually replying to Matthew's comment. I'm not 100% certain that simply knowing the cardinality, and that {0,x} for any x is a subgroup isomorphic to Z2, is enough to determine the group up to isomorphism. But then again, it's early, and I haven't had my coffee yet. In any case, yes I definitely agree that the hard part is to determine if one can find the specific one-to-one correspondence between this specific group and the real numbers, which happens to preserve the triangle inequality with standard addition on the reals. I have no interesting ideas on how to approach this, though. At the moment, I'm stuck just thinking that the obvious correspondence, which doesn't work, is the only one I can imagine. Though of course any permutation of the reals that maps 0 to the identity of the group would be a candidate… and there are unfathomably many of them.

• Matthew / Jun 22 2009 3:25 pm

cdsmith: Let me try and convince you (and myself) with the details. Checking the axioms for a vector space, from http://en.wikipedia.org/wiki/Vector_space#Definition with Z2 as the field, addition as our group operation/metric, and scalar multiplication defined to be 1.x = x, 0.x = 0:

Associativity of addition: by definition
Commutativity of addition: again by definition, also follows from the other assumptions as Jade NB notes
Additive identity: Proposition 1 in the article
Inverse elements for addition: we've shown that each element is self-inverse

Scalar multiplication axioms are trivial checks of the definition, not sure these even matter to the cardinality argument but here goes:

Distributivity of scalar multiplication with respect to vector addition: 0.(v+w) = 0 = 0.v + 0.w; 1.(v+w) = v+w = 1.v + 1.w
Distributivity of scalar multiplication with respect to field addition: (1+1).v = 0.v = 0 = v+v = 1.v + 1.v; (0+1).v = 1.v = v = 0+v = 0.v + 1.v; (1+0).v = 1.v = v = v+0 = 1.v + 0.v; (0+0).v = 0.v = 0 = 0+0 = 0.v + 0.v
Compatibility of scalar multiplication with field multiplication, a(bv) = (ab)v: (0.0)v = 0v = 0.(0v); (0.1)v = 0v = 0 = 0.(1v); (1.0)v = 0v = 0 = 1.(0v); (1.1)v = 1v = 1.(1v)
Identity element of scalar multiplication: 1.v = v

So we know we're looking for the additive group of a vector space over Z2. Then using a few facts:

– Every vector space has a basis (requires axiom of choice for the infinite-dimensional case)
– All bases of a vector space have equal cardinality, defining the dimension of the space
– A vector space isomorphism can be constructed from a bijection between bases
– All vector spaces of the same dimension are isomorphic
– The dimension of an infinite-dimensional vector space is equal to its cardinality (or that of the field if it's greater, clearly not in this case when it's Z2): http://planetmath.org/encyclopedia/DimensionFormulaeForVectorSpaces.html#SECTION00040000000000000000

We get that the space is THE vector space over Z2 of dimension 2^omega. You might be tempted to think of this space as the space of functions from R -> Z2, ie the space of arbitrary subsets of R. But it is also isomorphic to the space of only /finite/ subsets of R, since this is also a vector space over Z2 with XOR as the operation, and it has the same cardinality. Something a little surprising (as consequences of AC can sometimes be…) Dunno if that helps at all :)

• Matthew / Jun 22 2009 3:34 pm

Mistake in that last paragraph: it's NOT the space of arbitrary subsets of R, since that has even bigger cardinality. You could see it as the space of finite subsets of R with XOR, (or eg of R+, R_>=0 if you prefer) or alternatively as the space of arbitrary subsets of N with XOR. I was looking for a way to use either of these to construct a nice bijection onto the reals, which kept some relationship with magnitude, but failed.
Gut feeling is that some non-constructive magic is needed, perhaps a cleverer application of axiom of choice, perhaps some hard-hitting theorem from topology… also considered fiddling around with the 2-adic numbers, which might lead somewhere but not just yet…

2. Matthew / Jun 21 2009 2:53 pm

I wonder if you could prove this in the negative, by showing that all attempts to construct such a metric must head down this kind of route, and suffer from the same problem. I tried, it felt like the triangle inequality condition wasn't quite strong enough, or my powers too weak :)

3. Matthew / Jun 21 2009 2:54 pm

(clarification: referring to Twan's post)
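A small footnote to the discussion above (mine, not the author's or the commenters'): on the non-negative integers, Twan's digit-wise XOR has no representation problem at all, and a brute-force check confirms that there it really is an associative metric satisfying Propositions 1 and 2. The open difficulty is purely about extending this to all the non-negative reals.

```python
from itertools import product

def d(x, y):
    """Bitwise XOR: the integer analogue of Twan's digit-wise construction."""
    return x ^ y

N = range(32)
for x, y in product(N, repeat=2):
    assert (d(x, y) == 0) == (x == y)        # definiteness
    assert d(x, y) == d(y, x)                # symmetry
    assert abs(x - y) <= d(x, y) <= x + y    # the Proposition 2 bounds
assert all(d(0, x) == x for x in N)          # Proposition 1: 0 is the identity

for x, y, z in product(N, repeat=3):
    assert d(x, y) <= d(x, z) + d(z, y)      # triangle inequality
    assert d(x, d(y, z)) == d(d(x, y), z)    # associativity
```

The triangle inequality here follows from $x \oplus y = (x \oplus z) \oplus (z \oplus y) \leq (x \oplus z) + (z \oplus y)$, since XOR never exceeds addition.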
http://mathhelpforum.com/algebra/4162-roots-equation.html
# Thread:

1. ## Roots of equation

How many negative roots does the equation $x^6 - bx^5 - 2ax^3 - cx + a^2=0, b\geq0, c\geq0$ have?

Keep Smiling Malay

2. Since $f(-x)=x^{6}+bx^{5}+2ax^{3}+cx+a^{2}$ has no variation in signs, there are no negative roots. It's been awhile since I thought about Descartes rule. Isn't that the way it goes?

3. Originally Posted by galactus (post 2 quoted, snipped here)

It would be... $f(-x)=-x^{6}+bx^{5}+2ax^{3}+cx+a^{2}$ which does have a variation.

4. Originally Posted by galactus (post 2 quoted, snipped here)

I thought for Descartes rule to be applicable the coefficients of all the powers up to the maximum must be non-zero.

RonL

5. That would be true, Cap'n. I forgot about that. In Descartes rule, it is assumed that terms with 0 coefficients are deleted and the constant term is not 0. Also, Descartes rule says the number of negative roots is equal to the change in signs or less than that by an even integer. Let's say b and c are 0, then we'd have $(-x)^{6}+2a(-x)^{3}+a^{2}=x^{6}-2ax^{3}+a^{2}$: two changes of sign (taking $a>0$), so it has 2 negative roots or 0 negative roots. If b and c were not 0, then, from before, no change of signs and no negative zeros. BTW, Quick, $(-x)^{6}=x^{6}$. May I ask where you got the negative from? Maybe I am missing something.

6. Originally Posted by galactus (post 5 quoted, snipped here)

I assumed that since you changed all the negatives to positive then you would change the positives to negative, but I forgot that even-numbered exponents are automatically positive

7. I am not familiar with Descartes rule. Please guide me.
What is the answer? What is the use of f(-x)?

Keep Smiling Malay

8. Originally Posted by malaygoel (post 7 quoted, snipped here)

RonL

9. Originally Posted by CaptainBlack (post 8 quoted, snipped here)

What is meant by sign changes? How will you calculate sign changes in $x^7+x^6-x^4-x^3-x^2+x-1$ and $-x^7+x^6-x^4+x^3-x^2-x-1?$ What is the logic behind Descartes rule?

Keep Smiling Malay

11. Originally Posted by malaygoel (question quoted, snipped here)

First I believe Descartes rule does not work for these polynomials. To work I believe that it requires all the powers less than the maximum appear with non-zero coefficients.
So it applies to: $x^7+x^6+2x^5-x^4-x^3-x^2+x-1$, where the signs are +,+,+,-,-,-,+,-, which change three times, and so this polynomial has at most three positive roots, and so has either 1 or 3 positive roots.

Now for the negative roots we switch the signs of the odd powers to give: $-x^7+x^6-2x^5-x^4+x^3-x^2-x-1$, now the signs are -,+,-,-,+,-,-,-, which change sign 4 times, so there are at most four negative roots to the original polynomial. So the original polynomial has 4, 2 or 0 negative roots.

RonL

12. Originally Posted by CaptainBlack (post 11 quoted, snipped here)

Malay

13. Originally Posted by Malay (post 12 quoted, snipped here)

You will note what we call weasel words in my earlier post about the applicability of the rule of signs to polynomials with missing powers. (Weasel words - wording which allows the author to subsequently disavow what they wrote.) It appears that the rule of signs is applicable. I have looked at the problem that I thought might exist with such polynomials again and it turns out it is not real, so we can proceed:

$x^7+x^6-x^4-x^3-x^2+x-1$

has signature +,+,-,-,-,+,-. The signs change from + to - or - to + three times in this signature, so there are at most three positive roots.

Now to investigate the negative roots we change the signs of all the odd power terms in the polynomial and then proceed as before:

$-x^7+x^6-x^4+x^3-x^2-x-1$

which has signature -,+,-,+,-,-,-. The signs change 4 times in this signature, so there are at most four negative roots.

RonL
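The sign counting in the last few posts is mechanical, so here is a small sketch (mine, not from the thread) that applies the rule of signs to CaptainBlack's example:

```python
def sign_changes(coeffs):
    """Count sign changes in a coefficient sequence, skipping zero coefficients."""
    signs = [c for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

def descartes_bounds(coeffs):
    """(max positive roots, max negative roots) by Descartes' rule of signs.
    coeffs run from the highest power down; substituting -x flips the sign
    of every odd-power coefficient."""
    n = len(coeffs) - 1
    neg = [c * (-1) ** (n - i) for i, c in enumerate(coeffs)]
    return sign_changes(coeffs), sign_changes(neg)

# x^7 + x^6 - x^4 - x^3 - x^2 + x - 1 from the thread:
assert descartes_bounds([1, 1, 0, -1, -1, -1, 1, -1]) == (3, 4)
```

This matches the thread's conclusion: at most three positive roots (so 1 or 3 of them) and at most four negative roots (so 4, 2 or 0).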
http://quant.stackexchange.com/questions/tagged/differential-equations
# Tagged Questions

The differential-equations tag has no wiki summary.

1 answer, 243 views
### Connections between random walk and heat equation (Material for ~)
I am preparing an undergraduate lecture in quantitative finance and I am looking for material that combines the topics: random walk and heat equation The material should be accessible ...

0 answers, 126 views
### How to get an analytic result for option price based on this model?
I defined such a model for stock price (1).... $$dS = \mu\ S\ dt + \sigma\ S\ dW + \rho\ S(dH - \mu)$$ , where $H$ is a so-called "resettable poisson process" defined as (2).... dH(t) = ...

1 answer, 169 views
### Can we explain physical similarities between Black Scholes PDE and the Mass Balance PDE (e.g. Advection-Diffusion equation)?
Both the Black-Scholes PDE and the Mass/Material Balance PDE have similar mathematical form of the PDE which is evident from the fact that on change of variables from Black-Scholes PDE we derive the ...

3 answers, 712 views
### What tools are used to numerically solve differential equations in Quantitative Finance?
There are a lot of Quantitative Finance models (e.g. Black-Scholes) which are formulated in terms of partial differential equations. What is a standard approach in Quantitative Finance to solve these ...

1 answer, 262 views
### An equation for European options
So, any European type option we can characterize with a payoff function $P(S)$ where $S$ is a price of an underlying at the maturity. Let us consider some model $M$ such that within this model ...

10 answers, 1k views
### Using Black-Scholes equations to "buy" stocks
From what I understand, Black-Scholes equation in finance is used to price options which are a contract between a potential buyer and a seller. Can I use this mathematical framework to "buy" a stock? ...

3 answers, 584 views
### Deterministic interpretation of stochastic differential equation
In Paul Wilmott on Quantitative Finance Sec. Ed. in vol. 3 on p.
809 the following stochastic differential equation: $$dS=\mu\ S\ dt\ +\sigma \ S\ dX$$ is approximated in discrete time by ... 1answer 2k views ### What is the role of stochastic calculus in day-to-day trading? I work with practical, day-to-day trading: just making money. One of my small clients recently hired a smart, new MFE. We discussed potential trading strategies for a long time. Finally, he expressed ... 1answer 1k views ### Transformation from the Black-Scholes differential equation to the diffusion equation - and back I know the derivation of the Black-Scholes differential equation and I understand (most of) the solution of the diffusion equation. What I am missing is the transformation from the Black-Scholes ...
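Several of the questions above (the "tools" question and the Black-Scholes-to-diffusion transformations) come down to solving the heat equation numerically. As a rough illustration only (not taken from any of the listed questions; the grid sizes and the sine-mode test below are my own choices), here is the simplest explicit finite-difference (FTCS) scheme for $u_t = u_{xx}$:

```python
import math

def heat_step(u, dt, dx):
    # One explicit (FTCS) time step for u_t = u_xx, Dirichlet ends held at 0.
    r = dt / dx**2  # stability of the explicit scheme requires r <= 1/2
    return [0.0] + [
        u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
        for i in range(1, len(u) - 1)
    ] + [0.0]

# Decay of the first sine mode on [0, pi]: exact solution is exp(-t) * sin(x).
N = 50
dx = math.pi / N
dt = 0.4 * dx**2          # r = 0.4, inside the stability limit
u = [math.sin(i * dx) for i in range(N + 1)]
steps = 1000
for _ in range(steps):
    u = heat_step(u, dt, dx)
t = steps * dt
# u[N // 2] should now be close to exp(-t) * sin(pi/2) = exp(-t)
```

In practice one would use an implicit or Crank-Nicolson scheme for the transformed Black-Scholes equation, but the explicit step above is the shortest way to see the mechanics.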
http://mathoverflow.net/questions/99150/testing-isomorphism-of-finitely-generated-algebras
## Testing isomorphism of finitely generated algebras

Let $A=\mathbf{Q}[x_1,\ldots,x_n]$ be the polynomial ring in $n$ variables over the rational numbers. Let $B=\mathbf{Q}[f_1,\ldots,f_r]$ and $C=\mathbf{Q}[g_1,\ldots,g_s]$ be two finitely generated $\mathbf{Q}$-subalgebras of $A$ with explicit generators.

Q1: Is there a finite-time (efficient) algorithm that allows one to decide when $B\simeq C$ as $\mathbf{Q}$-algebras?

Q2: Is there a finite-time (efficient) algorithm that allows one to decide when $Frac(B)\simeq Frac(C)$? Here $Frac(B)$ denotes the fraction field.

In both questions I really mean isomorphic and not equal.

- I don't know much about algorithmic questions like this. But: it would probably help to specify whether you are given a set of generators for $B$ and $C$, or whether you merely have some implicit description of these subalgebras. – MTS Jun 9 at 0:06
- Since $B$ and $C$ are both subalgebras of the same ring $A$, do you want to know whether they are isomorphic, or whether they are equal? – David Speyer Jun 9 at 0:52
- @David, here I really mean isomorphic. Equal would be "easy" since one may compute the relation ideal for $B$ and $C$ and then test for equality. – Hugo Chapdelaine Jun 9 at 2:42
- @MTS, yes in both cases I have explicit sets of generators. – Hugo Chapdelaine Jun 9 at 2:46
- mathoverflow.net/questions/21883/… – MP Jun 9 at 3:18
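For the "easy" subproblem mentioned in the comments, subalgebra membership (and hence equality of subalgebras, by testing each generator of one against the other) can be checked with Gröbner bases via tag variables: $g \in \mathbf{Q}[f_1,\ldots,f_r]$ iff the lex normal form of $g$ modulo $\langle t_i - f_i\rangle$, with the $x_j$ ordered above the $t_i$, involves no $x_j$. A small sketch with SymPy; the subalgebra $\mathbf{Q}[x^2, x^3]$ is my own toy example, not from the question:

```python
from sympy import symbols, groebner, reduced

x, t1, t2 = symbols('x t1 t2')

# Tag-variable trick: g lies in Q[f1, f2] iff the lex normal form of g
# modulo the ideal <t1 - f1, t2 - f2>, with x ordered above t1, t2,
# involves no x.  Here f1 = x**2 and f2 = x**3 (a toy example).
G = groebner([t1 - x**2, t2 - x**3], x, t1, t2, order='lex')

def in_subalgebra(g):
    _, remainder = reduced(g, list(G.exprs), x, t1, t2, order='lex')
    return x not in remainder.free_symbols
```

For instance, `in_subalgebra(x**5)` is true ($x^5 = x^2\cdot x^3$) while `in_subalgebra(x)` is false. Deciding *isomorphism*, as the question asks, is a genuinely harder problem than membership or equality.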
http://mathoverflow.net/questions/117904/elementary-examples-of-the-weil-conjectures
## Elementary examples of the Weil conjectures

I'm looking for examples of the Weil conjectures---specifically rationality of the zeta function---that can be appreciated with minimal background in algebraic geometry. Are there varieties for which one can easily calculate the numbers of points over finite fields and witness the rationality directly? Of course, there are entirely straightforward examples coming from projective space and Grassmannians (or anything with a paving by affines).

- I'm not sure if this is too elementary, but the case of elliptic curves can be worked out pretty concretely. Silverman does this in his first book on elliptic curves, for example. – Ramsey Jan 2 at 21:40
- Artin computed a bunch of examples by hand in his thesis. In Weil's paper, he does "Fermat" varieties (sums of n-th powers). The latter is reproduced in detail in Ireland and Rosen. None of these examples need any algebraic geometry, but they might require a bit of number theory. – Felipe Voloch Jan 2 at 21:45
- Even the Grassmannian example is not entirely trivial, since you get almost for free the generating function (generating polynomial, in this case) for the number of affine cells of each dimension. – Noam D. Elkies Jan 2 at 22:14
- See the exercises at the end of chapter 11 in Ireland and Rosen's "A Classical Introduction to Modern Number Theory". – Stopple Jan 2 at 23:34
- There is an elementary proof of the Riemann hypothesis for curves (of any genus) over a finite field by Stepanov. See the Bourbaki talk by Bombieri (June 1973). – Damian Rössler Jan 3 at 18:38

## 5 Answers

Weil himself verifies the conjectures "by hand" for diagonal hypersurfaces, that is, hypersurfaces defined by an equation of the form $$a_0x_0^{n_0}+a_1x_1^{n_1}+\cdots+a_kx_k^{n_k}=b.$$ The argument is pretty elementary--it essentially uses only character theory.
It seems to me likely that this argument heavily influenced Dwork's original proof of rationality. The paper is quite readable; I learned of it from Akshay Venkatesh.

One elementary example comes from the theory of elliptic curves, as in Silverman's book. Namely, given an elliptic curve over a finite field $\mathbb{F}_q$, the number of $\mathbb{F}_q$-rational points is the degree of $1-F$, for $F$ the Frobenius -- that is, the rational points form the kernel of the isogeny $1-F$, and the size of that kernel equals its degree. Now you can compute the degree as $(1-F)(1 - F)^t$ (where $F^t$ is the dual isogeny), and this is $1 - (F + F^t) + q$ since $q$ is the degree of the Frobenius. So the key quantity to compute is $F + F^t$, which is an integer -- it's secretly the trace of $F$ on $l$-adic cohomology. If you replace $F$ by $F^n$, this gives you the rationality of the zeta function of an elliptic curve. You can also get the Riemann hypothesis by purely elementary means, by using the fact that the degree is a positive definite quadratic form together with a Cauchy-Schwarz type inequality. My guess is that something like this should work with higher-dimensional abelian varieties (the $l$-adic cohomology is an exterior algebra, as with the topological cohomology of a torus) as well.

- Indeed this does work in general for Abelian varieties; one may also deduce the Weil conjectures for curves this way by applying similar arguments to their Jacobians. That said, my feeling is that imitating Tate's thesis is the most "elementary" way to see rationality for curves--since developing the theory of Jacobians is pretty involved.
– Daniel Litt Jan 2 at 21:46
- The argument in Mumford's Abelian Varieties book (which I presume was essentially the same as Weil's) does use the trace of Frobenius on the $\ell$-adic Tate module (which is essentially what you are doing); of course this is secretly dual to the $\ell$-adic $H^1$. That said, if all you care about is rationality and the functional equation for curves, you only need the idele class group. – Daniel Litt Jan 2 at 22:09
- Very interesting! – Akhil Mathew Jan 2 at 23:19
- Akhil, the endnotes of Milne's article on Jacobian varieties in the Cornell-Silverman book "Arithmetic Geometry" give the fascinating history. The functional equation and rationality for curves were known via RR before Weil, whose adelic contribution was an adelic proof of RR (pre-Tate!). The real contribution was RH, which he proved via intersection theory on $X \times X$ (and wrote his book to make intersection theory rigorous in positive characteristic). His 2nd proof was via Jacobians, for which he created yet more fundamental ideas (birational group laws, abstract varieties, etc.). – ayanta Jan 3 at 0:25

The grandfather of all examples is by Gauss: http://en.wikipedia.org/wiki/Weil_conjectures#Background_and_history Of course Gauss didn't mention finite fields other than the prime field. I think it is in the nature of a remark that the method carries over, but I haven't written it down.
Riemann-Roch is done as in Chevalley's book using "repartitions". Rationality and the functional equation follow directly from Riemann-Roch. The RH is done by Bombieri's elegant technique--first one uses Riemann-Roch to get a good upper bound for the number of rational points, then one combines this upper bound with the functional equation to get a good lower bound. (Ayanta thinks the proof is miraculous but uninformative; this may be true of Stepanov's original version, but I find Bombieri's argument to be natural). Of course there are defects to this approach. Algebraic geometry is far more enlightening. But it can be learned later, in whatever form the student finds appropriate. And there are also advantages. To do Weil-style algebraic geometry one would have to worry about "fields of definition". And Grothendieck's version (which continues to intimidate me) would only appeal to the rare student at this level. That such beautiful mathematics can be presented in such an accessible fashion seems to me a boon.

Perhaps the following two examples would be of interest to you; my apologies if they are too simple.

Notation (in accordance with Koblitz's "p-adic Numbers, p-adic Analysis, and Zeta-Functions"): Given $f \in \mathbb{F}_{q}[X_1, \ldots, X_n]$ let us define a sequence $N_s = |H_{f}(\mathbb{F}_{q^s})|$, where $H_f(K) := \{(x_1, \ldots, x_n) \in \mathbb{A}^{n}_{K} \mid f(x_1, \ldots, x_n) = 0\}$. The zeta function of the hypersurface $H_f$ over the field $\mathbb{F}_q$ is then defined by $$Z(T) = \exp\big(\sum_{s=1}^{\infty} N_s T^s /s\big)$$ Before giving a few examples of the rationality of $Z(T)$, we recall the Maclaurin series $$-\log(1 - T) = \sum_{s=1}^{\infty}T^s / s$$ Example 1. $f(x_1, \ldots, x_n) \equiv 0$. Then $N_s =|{\mathbb{A}}_{\mathbb{F}_{q^s}}^{n}| = q^{ns}$, so that $Z(T)$ becomes $$\exp\big(\sum_{s=1}^{\infty} N_s T^s /s\big) = \exp\big(\sum_{s=1}^{\infty} (q^n T)^s /s\big) = \exp(-\log(1-q^n T)) = \frac{1}{1 - q^n T}$$ Example 2.
Let $f = x_1 x_4 - x_2 x_3 - 1$. We now consider two cases:

Case 1. $x_3 = 0$. Then $x_1 x_4 - x_2 x_3 = 1$ becomes $x_1 x_4 = 1$. Since $x_2$ is out of the equation, it can be any element of $\mathbb{F}_{q^s}$. Thus, there are $q^s$ choices for $x_2$. Meanwhile, $x_1$ can be any nonzero element of $\mathbb{F}_{q^s}$, and in each case this will determine $x_4$. Hence there are $q^s(q^s - 1) = q^{2s} - q^s$ points in $H_f$ when $x_3 = 0$.

Case 2. $x_3 \neq 0$. Then $x_1$ and $x_4$ can be any elements of $F_{q^s}$, and $x_3$ can be any nonzero element of $F_{q^s}$. But this completely determines $x_2$, so that there are $q^s q^s (q^s - 1) = q^{3s} - q^{2s}$ points in $H_f$ when $x_3 \neq 0$.

Therefore, $N_s = q^{3s} - q^{2s} + q^{2s} - q^{s} = q^{3s} - q^{s}$, whence the zeta-function $Z(T)$ is $$\frac{\exp(\sum_{s=1}^{\infty}q^{3s}T^s /s)}{\exp(\sum_{s=1}^{\infty}q^s T^s /s)} = \frac{1 - qT}{1 - q^3 T}$$

Note: The case for an affine variety $H_{f_1, \ldots, f_m}$ follows from the affine hypersurface $H_f$ case by a simple application of the Inclusion/Exclusion Principle. Bearing this in mind, it shouldn't be too hard to construct examples similar to those above over a variety for which the rationality can be witnessed directly. I imagine this would be somewhat tedious, though.

- Both of these examples fall into the case of "paving by affines" (or affines minus several points) that Jonathan mentions in his question. Your note is a nice point---this reduction is essential to Dwork's proof of rationality. I see this as an indication that the hypersurface case is quite hard, though. – Daniel Litt Jan 4 at 6:07
- I wondered whether that's what he meant by "paving by affines." Using higher-level methods, one could work out an example like: $q = 5, n = 2, f(X_1, X_2) = X_{1}^3 - X_{2}^2 + X_{1} + 2$, for which $N_s = 5^s - ((1 + 2i)^s + (1-2i)^s)$ and hence $Z(T) = \frac{1- 2T + 5T^2}{1 - 5T}$, unless I have miscalculated.
As another note: I believe Dwork also observed a projective space can be written as a disjoint union of affine hypersurfaces, so that in fact the rationality theorem extends easily to projective hypersurfaces, and by Inclusion/Exclusion extends again to projective varieties. – Benjamin Dickman Jan 4 at 6:37
- Dear Benjamin, One way to think about your example 2 is that you write your 3-fold as $(\mathbb A^1 \setminus \{0\})\times \mathbb A^1 \coprod \mathbb A^2 \times (\mathbb A^1 \setminus \{0\}) = (\mathbb A^2 \setminus \mathbb A^1) \coprod (\mathbb A^3 \setminus \mathbb A^2) = \mathbb A^3 \setminus \mathbb A^1,$ and so the zeta-function is the ratio of the zeta function of $\mathbb A^3$ by that of $\mathbb A^1$. (In other words, zeta functions behave like a multiplicative version of Euler characteristics: they are multiplicative w.r.t. disjoint unions.) Regards, Matthew – Emerton Jan 4 at 6:45
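Example 2 above is easy to sanity-check by machine: the hypersurface $x_1x_4 - x_2x_3 = 1$ is $SL_2$, and its point count over $\mathbb{F}_{q^s}$ should be $q^{3s} - q^s$. The brute-force check below (my own addition; it only covers $s = 1$ over prime fields, since constructing $\mathbb{F}_{q^s}$ for $s > 1$ takes more machinery) confirms $N_1 = q^3 - q$:

```python
def count_points(q):
    # Count points of x1*x4 - x2*x3 = 1 in affine 4-space
    # over the prime field F_q (q prime).
    return sum(
        1
        for x1 in range(q) for x2 in range(q)
        for x3 in range(q) for x4 in range(q)
        if (x1 * x4 - x2 * x3) % q == 1
    )

for q in (2, 3, 5, 7):
    assert count_points(q) == q**3 - q
```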
http://mathhelpforum.com/algebra/22911-minimum-value.html
# Thread:

1. ## the minimum value

Find the minimum value of the function $f(x)=|1001+1000x+999x^2+...+2x^{999}+x^{1000}|$.

2. This problem only works for even-degree exponents; it is hard to state the details in general, so I will do it for special cases to show how to proceed. Let $f_n(x) = x^n + 2x^{n-1}+...+nx+(n+1)$. So you want to know $\min_{x\in \mathbb{R}} \{ |f_{1000}(x)| \}$. Begin by considering the general problem for $f_n(x)$ and treating your case as a special case.

First, start off easy: for $f_2(x) = x^2+2x+3$ we want to minimize $g(x)=|x^2+2x+3|$. Calculus tells us that the minimum will occur where $g'(x)=0$ or where $g'(x)$ does not exist. Thus, we can look for where $f'_2(x) = 0$ or where $f_2(x)=0$; but $f_2(x)>0$ for all $x$, so only the first case occurs. Since $f_2(x)>0$, we look for where $f'_2(x) = 0$, and that happens when $2x+2=0\implies x=-1$. By induction we can prove that $f_n(x)>0$ for even $n$, by reducing to the previous case. The above illustrates that we now need only consider the case $f'_n(x)=0$.

Say $f_6(x) = x^6+2x^5+3x^4+4x^3+5x^2+6x+7$; then $f'_6(x) = 6x^5+10x^4+12x^3+12x^2+10x+6$. It should seem clear that $-1$ is a zero by the symmetry of the coefficients, thus $x+1$ is a factor. By division (we can also reach these results by factoring the same-coefficient terms, but that is too long to type) we get $6x^4+4x^3+8x^2+4x+6\geq x^4+4x^3+6x^2+4x+1 = (x+1)^4 \geq 0$. In fact the difference of the two sides is $5x^4+2x^2+5>0$, so the quotient is strictly positive. Thus, there are no other real zeros, which means $x=-1$ is the minimum point, so the minimum value is $f_{1000}(-1) = 501$.

(So to complete this proof we need to show $f'_n(x)$ has no other zero except for $-1$; that should be doable by using the lower-bound estimate $(x+1)^n\geq 0$, which was specifically demonstrated for $n=6$.)
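The value at the claimed minimizer is easy to verify numerically (my own check, not part of the thread): at $x=-1$ the terms pair up as $(1001-1000)+(999-998)+\cdots+(3-2)+1 = 500 + 1 = 501$, and nearby points give larger values:

```python
def f(x):
    # f(x) = |1001 + 1000 x + 999 x^2 + ... + 2 x^999 + x^1000|:
    # the coefficient of x^k is 1001 - k.
    return abs(sum((1001 - k) * x**k for k in range(1001)))

# Exact integer arithmetic at the claimed minimizer:
value_at_minus_one = f(-1)  # (1001 - 1000) + (999 - 998) + ... + 1 = 501
```

Spot checks at neighboring points (`f(-0.9)` and `f(-1.1)`) both exceed 501, consistent with $x=-1$ being the global minimum.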
http://www.physicsforums.com/showthread.php?p=4254730
## Number of pairings in a set of n objects

This isn't homework, just a combinatorics problem I came across. However, it is a textbook-style problem. So I'm compromising and posting in the math forums, but using the HH template. I asked a mentor and he said it was acceptable.

1. The problem statement, all variables and given/known data

I have a set of n objects, where ##n = 2m, m \in \mathbb{N}##. In other words, I have an even number of objects. How many different ways are there to match up each object in the set with another object from the set, to produce m pairs?

2. Relevant equations

Combination without repetition:$$_nC_2 = \left( \begin{array}{c}n\\2\end{array} \right) = \frac{n!}{(n-2)!\,2!}$$

3. The attempt at a solution

I guess n = 2 is the trivial case (one pair). I started with the simplest non-trivial case of n = 4 and solved it by brute force.

4 objects:$$\bigcirc~\bigcirc~\bigcirc~\bigcirc$$

3 different pairings:$$\bigoplus~\bigoplus~\bigotimes~ \bigotimes$$ $$\bigoplus~\bigotimes~\bigotimes~\bigoplus$$ $$\bigoplus~\bigotimes~\bigoplus~\bigotimes$$where the plusses and crosses are paired together.

Then I asked myself: how would I arrive at a result of 3 combinations by counting? I know that ##_4C_2 = 6##, but as soon as I choose one pair out of the four objects, the remaining pair of objects are together by default. So I guess I have to stick a factor of 1/2 in there:$$N_\textrm{pairings} = \frac{1}{2}\left( \begin{array}{c}4\\2\end{array} \right) = 3$$What if I had 6 objects? I would pick my first pair, and there would be ##_6C_2 = 15## ways of doing that. Then 4 objects would remain, and it would reduce to a previously-solved problem of how many ways to choose pairs from the four remaining objects.
So:$$N_\textrm{pairings} = \frac{1}{2}\left( \begin{array}{c}6\\2\end{array} \right)\left( \begin{array}{c}4\\2\end{array} \right) = 45$$If I had 8 objects, then I guess I would choose my first pair, and there would be 8 choose 2 ways of doing that. Then six objects would remain, and it would reduce to a previously-solved problem:$$N_\textrm{pairings} = \frac{1}{2}\left( \begin{array}{c}8\\2\end{array} \right)\left( \begin{array}{c}6\\2\end{array} \right)\left( \begin{array}{c}4\\2\end{array} \right) = 1260$$By looking at this sequence of products and thinking about it, I arrived at the general equation for arbitrary n:$$N_\textrm{pairings} = \frac{1}{2}\prod_{i=0}^{m-2}\left( \begin{array}{c}n-2i\\2\end{array} \right)$$However, I then noticed that you can write the equation before last as:$$N_\textrm{pairings} = \frac{1}{2}\left( \begin{array}{c}8\\2\end{array} \right)\left( \begin{array}{c}6\\2\end{array} \right)\left( \begin{array}{c}4\\2\end{array} \right) = \frac{1}{2}\frac{8\cdot 7}{2!} \frac{6\cdot 5}{2!} \frac{4\cdot 3}{2!} = \frac{1}{2^m}\frac{8!}{2!} = \frac{8!}{2^{m+1}}~\textrm{(where m = 4 here)}$$which leads me to believe that the general formula is just $$N_\textrm{pairings} = \frac{n!}{2^{m+1}}$$

Question 1: Am I doing this correctly? For n = 100, the case I was originally interested in, the number of pairings is approximately 4e142, which is ridiculously large! On the other hand, I have verified the n = 4 and n = 6 cases by brute force.

Question 2: That factor of 1/2 seems kind of ad hoc. I reasoned my way to it by looking at a specific case. Is there a general argument for why it should be there?

Question 3: Is there a line of reasoning that I can use to arrive at the last equation, in terms of n!, directly?

It's like my probability theory professor used to say: "counting is hard!"
Hi cepheid!

Trying to keep track in dealing with unordered stuff usually gives me a headache. What I have learned to do is first count the ordered stuff, and then divide by the number of duplicate countings.

In your case you can order the objects in ##n!## ways, yielding ##m## ordered pairs. However, since the pairs are supposed to be unordered, we are counting each pair twice. So we need to divide by ##2^m##. Furthermore, the m pairs can be ordered in m! ways, so we need to divide by ##m!## to eliminate the duplicate countings.

I believe the general formula is: $$N_{pairings}={n! \over m! 2^m}$$ I didn't try to brute force count and verify though...

To illustrate with n=6, the first ordering is: (1,2),(3,4),(5,6). However, (2,1),(3,4),(5,6) is the same pairing, which we would be counting separately. We need to divide by 2 for each pair in the pairing, that is, by ##2^3##. Furthermore, (3,4),(1,2),(5,6) is again the same pairing, which we would also be counting separately. We need to divide by 3! to compensate.

Quote by I like Serena:
> Trying to keep track in dealing with unordered stuff usually gives me a headache. [...] I believe the general formula is: $$N_{pairings}={n! \over m! 2^m}$$

Yeah, I see what I did wrong. You're right, I am over-counting due to the ordering of the pairs (not the ordering within the pairs, which is taken care of by my using n choose 2 instead of n permute 2).

The problem first started with my stray factor of 1/2 in the n = 4 step. This wasn't a factor of 1/2, it was a factor of 1/m, because it was saying that if you have four objects, 1 2 3 4, and you choose to pair 12, then the other pair is 34. However, if you choose to pair 34, then the other one is 12, and these are not distinct outcomes. This correction of 1/m needs to occur at every recursion step.

For example, with n = 6, I had 6 choose 2, and I said, if the two initially chosen are 12, then you have

12 34 56
12 35 46
12 36 45

these being the three pairings of the remaining four that weren't initially chosen. The problem is, if you multiply these three sets by 6 choose 2, you're including, for example, the case where you choose 34 initially, which leads to

34 12 56

which is not distinct from one of the above sets. So it's clear that the number of times a duplicate set will appear is just going to be equal to m, the number of pairs (3 in this case). That's because this set of pairs will appear again in the "56" series, when those are the two initially chosen.
So, recasting my product, I end up with:$$N_\textrm{pairings} = \prod_{i=2}^m \frac{1}{i}\left (\begin{array}{c}2i\\2\end{array}\right)$$And for m = 4 this is $$\frac{1}{2}\left (\begin{array}{c}4\\2\end{array}\right)\frac{1}{3}\left (\begin{array}{c}6\\2\end{array}\right)\frac{1}{4}\left (\begin{array}{c}8\\2\end{array}\right) = \frac{1}{4\cdot 3 \cdot 2}\frac{8 \cdot 7}{2}\frac{6\cdot 5}{2}\frac{4\cdot 3}{2} = \frac{1}{4!}\frac{1}{2^3}\frac{8!}{2!} = \frac{8!}{4! \cdot 2^4}$$ reproducing your formula. However, your method is WAY better. Mine is tricky and prone to error.

Hi cepheid and Serena's fan,

I got to the same result you got, but with yet a different way (closer to cepheid's way to make it through). Here is how I went through it. I have $u_1$=1 (only one way to make a pair out of just one pair). Then when adding a new pair to go to $u_{n+1}$, you have to keep one of the two new items fixed, and use the other one to "play" with all the previous results. So for instance for n=2 (4 items) you have 3 ways to play with the previous set of size 1. Then 5 ways to play with the previous set of 3, so $u_n$=(2n-1)!! (the double factorial, the product of odd numbers 1x3x5x7x9x...x(2n-1)).

Well, I am calling n what you called m in fact, but it is clear, so the formula is the same: $u_n=(2n-1)!!=\frac{(2n)!}{2^n n!}$
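The closed form can be checked by brute force for small n. The following sketch (my own verification, not from the thread) enumerates all distinct pairings recursively, which is exactly the "fix one item, pair it with each remaining item" recursion of the last post, and compares the count with $n!/(m!\,2^m) = (2m-1)!!$:

```python
import math

def count_pairings(n):
    # Recursively count perfect matchings of {0, ..., n-1}: pair the first
    # remaining item with each of the others in turn and recurse.
    def rec(items):
        if not items:
            return 1
        rest = items[1:]  # pair items[0] with each element of rest
        return sum(rec(rest[:i] + rest[i + 1:]) for i in range(len(rest)))
    return rec(tuple(range(n)))

def formula(n):
    # n! / (m! 2^m) with m = n/2
    m = n // 2
    return math.factorial(n) // (math.factorial(m) * 2**m)

def double_factorial_odd(m):
    # (2m - 1)!! = 1 * 3 * 5 * ... * (2m - 1)
    return math.prod(range(1, 2 * m, 2))

for n in (2, 4, 6, 8, 10):
    assert count_pairings(n) == formula(n) == double_factorial_odd(n // 2)
```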
http://math.stackexchange.com/questions/172080/int-0-infty-frac-sin2n1xx-mathrm-dx-evaluate-integral
# Evaluate the integral $\int_{0}^{\infty}\frac{\sin^{2n+1}(x)}{x} \,\mathrm {d}x$

Here is a fun integral I am trying to evaluate: $$\int_{0}^{\infty}\frac{\sin^{2n+1}(x)}{x} \ dx=\frac{\pi \binom{2n}{n}}{2^{2n+1}}.$$ I thought about integrating by parts $2n$ times and then using the binomial theorem for $\sin(x)$, that is, using the $\dfrac{e^{ix}-e^{-ix}}{2i}$ form in the binomial series. But I am having a rough time getting it set up correctly. Then again, there is probably a better approach. $$\frac{1}{(2n)!}\int_{0}^{\infty}\frac{1}{(2i)^{2n}}\sum_{k=0}^{n}(-1)^{2n+1-k}\binom{2n}{k}\frac{d^{2n}}{dx^{2n}}(e^{i(2k-2n-1)x})\frac{dx}{x^{1-2n}}$$ or something like that. I doubt that is anywhere close, but is my initial idea of using the binomial series for $\sin$ valid, or is there a better way? Thanks everyone.

- Have you tried it for small $n$, like $n=0$ and $n=1$? – Thomas Andrews Jul 17 '12 at 18:56

## 2 Answers

Using $$\sin^{2n+1}(x) = \sum_{k=0}^n \frac{(-1)^k }{4^n} \binom{2n+1}{n+k+1} \sin\left((2k+1)x\right)$$ We get $$\begin{eqnarray} \int_0^\infty \frac{\sin^{2n+1}(x)}{x}\mathrm{d} x &=& \sum_{k=0}^n \frac{(-1)^k }{4^n} \binom{2n+1}{n+k+1}\int_0^\infty \frac{\sin\left((2k+1)x\right)}{x}\mathrm{d} x\\ &=& \sum_{k=0}^n \frac{(-1)^k }{4^n} \binom{2n+1}{n+k+1}\int_0^\infty \frac{\sin\left(x\right)}{x}\mathrm{d} x \\ &=& \frac{\pi}{2^{2n+1}}\sum_{k=0}^n (-1)^k \binom{2n+1}{n+k+1} = \frac{\pi}{2^{2n+1}} \binom{2n}{n} \end{eqnarray}$$ The latter sum is evaluated using a telescoping trick: set $$g(k) := (-1)^{k+1} \binom{2n}{n+k}$$ Then Pascal's rule, $\binom{2n+1}{n+k+1} = \binom{2n}{n+k} + \binom{2n}{n+k+1}$, gives $$g(k+1) - g(k) = (-1)^k \binom{2n+1}{n+k+1}$$ Hence $$\sum_{k=0}^n (-1)^k \binom{2n+1}{n+k+1} = \sum_{k=0}^n \left(g(k+1)-g(k)\right) = g(n+1) - g(0) = -g(0) = \binom{2n}{n}$$ since $g(n+1) = (-1)^{n+2}\binom{2n}{2n+1} = 0$.

- What did you do to get from the first line to the second? (after "We get") – Dennis Gulko Jul 17 '12 at 19:27
- This was a simple change of variables: for $c>0$, $\int_0^\infty \frac{\sin(c x)}{x} \mathrm{d} x = \int_0^\infty \frac{\sin(c x)}{c x} \mathrm{d} (c x) \stackrel{y=cx}{=} \int_0^\infty \frac{\sin(y)}{y} \mathrm{d} y$. – Sasha Jul 17 '12 at 19:33
- Yeah, sorry. I figured that out, but couldn't delete my comment (from my phone) :-) – Dennis Gulko Jul 17 '12 at 19:43
- Wow, thanks Sasha. Very nice and elegant. – Cody Jul 17 '12 at 20:41
- My method is essentially the same, but steps have been rearranged. With yours, you get to use $\large\int_0^\infty\frac{\sin(x)}{x}\,\mathrm{d}x$. (+1) – robjohn♦ Jul 17 '12 at 23:38
Expand using the binomial theorem, and close the paths of integration in two ways: for the integrands with $e^{+ikx}$ circle back counter-clockwise around the upper half-plane ($\gamma^+$); for the integrands with $e^{-ikx}$ circle back clockwise around the lower half-plane ($\gamma^-$). Note that $\gamma^-$ contains no poles, so those integrals can be ignored. We will use the identity $$\begin{align} \sum_{k=0}^m(-1)^k\binom{n}{k} &=\sum_{k=0}^m(-1)^k\binom{n}{k}\binom{m-k}{m-k}\\ &=(-1)^m\sum_{k=0}^m\binom{n}{k}\binom{-1}{m-k}\\ &=(-1)^m\binom{n-1}{m} \end{align}$$ Finally, to the point: $$\begin{align} \int_0^\infty\sin^{2n+1}(x)\frac{\mathrm{d}x}{x} &=\frac12\int_{-\infty}^\infty\sin^{2n+1}(x)\frac{\mathrm{d}x}{x}\\ &=\left(-\frac14\right)^{n+1}i\int_{-\infty}^\infty\left(e^{ix}-e^{-ix}\right)^{2n+1}\frac{\mathrm{d}x}{x}\\ &=\left(-\frac14\right)^{n+1}i\sum_{k=0}^{n}(-1)^k\binom{2n+1}{k}\int_{\gamma^+}e^{ix(2n-2k+1)}\frac{\mathrm{d}x}{x}\\ &+\left(-\frac14\right)^{n+1}i\sum_{k=n+1}^{2n+1}(-1)^k\binom{2n+1}{k}\int_{\gamma^-}e^{ix(2n-2k+1)}\frac{\mathrm{d}x}{x}\\ &=\left(-\frac14\right)^{n+1}i\sum_{k=0}^{n}(-1)^k\binom{2n+1}{k}2\pi i\\ &=\left(-\frac14\right)^{n}\frac{\pi}{2}\sum_{k=0}^{n}(-1)^k\binom{2n+1}{k}\\ &=\left(-\frac14\right)^{n}\frac{\pi}{2}(-1)^n\binom{2n}{n}\\ &=\frac{1}{4^n}\frac{\pi}{2}\binom{2n}{n} \end{align}$$ - There it is RobJohn!!!. :):) That is along the lines I was thinking, but I got discombobulated in all of that. Thanks much. Your use of contours was clever. – Cody Jul 17 '12 at 23:46
http://physics.aps.org/articles/large_image/f1/10.1103/Physics.4.82
(a) APS/Alan Stonebraker; (b) Y. Zhang et al. [7] Figure 1: (a) The magnetic dipole field produced by an electron spin. Note that the magnetic field experienced by the nuclear spin has a transverse component along the $x$-axis, even though the electron spin is aligned with the external field along the $z$-axis. Therefore, flipping the electron spin would lead to changes in both the magnitude and the direction of the magnetic field experienced by the nuclear spin. This is the essence of the anisotropic hyperfine interaction (AHF). (b) A malonic acid molecule. The red spheres are oxygen nuclei, the larger dark gray spheres are carbon nuclei, and the smaller light gray spheres are protons. One proton has been knocked off by x-ray irradiation, to leave a radical electron localized at the central carbon-13 nucleus. AHF is used in Ref. [7] to couple the nuclear spins of the carbon-13 and the proton via the electron spin.
http://asmeurersympy.wordpress.com/2010/06/11/
# Aaron Meurer's SymPy Blog

My blog on my work on SymPy and other fun stuff.

## Integration of rational functions

June 11, 2010

So for this week's blog post I will try to explain how the general algorithm for integrating rational functions works. Recall that a rational function is the quotient of two polynomials. We know that using common denominators, we can convert the sum of any number of rational functions into a single quotient, $\frac{a_nx^n + a_{n-1}x^{n-1} + \cdots + a_2x^2 + a_1x + a_0}{b_nx^n + b_{n-1}x^{n-1} + \cdots + b_2x^2 + b_1x + b_0}$. Also, using polynomial division we can rewrite any rational function as the sum of a polynomial and the quotient of two polynomials such that the degree of the numerator is less than the degree of the denominator ($F(x) = \frac{b(x)}{c(x)} = p(x) + \frac{r(x)}{g(x)}$, with $\deg(r) < \deg(g)$). Furthermore, we know that the representation of a rational function is not unique. For example, $\frac{(x + 1)(x - 1)}{(x + 2)(x - 1)}$ is the same as $\frac{x + 1}{x + 2}$ except at the point $x = 1$, and $\frac{(x - 1)^2}{x - 1}$ is the same as $x - 1$ except at $x = 1$, where it is undefined. But by using Euclid's algorithm for finding the GCD of polynomials on the numerator and the denominator, along with polynomial division on each, we can cancel all common factors to get a representation that is unique (assuming we expand all factors into one polynomial). Finally, using polynomial division with remainder, we can rewrite any rational function $F(x)$ as $F(x) = \frac{b(x)}{c(x)} = p(x) + \frac{a(x)}{d(x)}$, where $a(x)$, $b(x)$, $c(x)$, $d(x)$, and $p(x)$ are all polynomials, and the degree of $a$ is less than the degree of $d$. We know from calculus that the integral of any rational function consists of three parts: the polynomial part, the rational part, and the logarithmic part (consider arctangents as complex logarithms). The polynomial part is just the integral of $p(x)$ above.
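These normalization steps correspond directly to SymPy's `cancel` and `div`; a quick illustration with my own example (not from the post):

```python
import sympy as sp

x = sp.symbols('x')

# Cancel common factors to get the unique reduced representation.
F = ((x + 1)*(x - 1)) / ((x + 2)*(x - 1))
print(sp.cancel(F))            # (x + 1)/(x + 2)

# Polynomial division splits off the polynomial part:
# x**5 + 1 == (x**2 - 1)*(x**3 + x) + (x + 1)
p, r = sp.div(x**5 + 1, x**2 - 1, x)
print(p, r)
```

Integrating `p` gives the polynomial part, and `r/(x**2 - 1)` is what the rest of the algorithm works on.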
The rational part is another rational function, and the logarithmic part is a sum of logarithms of the form $a\log{s(x)}$, where $a$ is an algebraic constant and $s(x)$ is a polynomial (note that if $s(x)$ is a rational function, we can split it into two logarithms of polynomials using the log identities). To find the rational part, we first need to know about square-free factorizations. An important result in algebra is that any polynomial with rational coefficients can be factored uniquely into irreducible polynomials with rational coefficients, up to multiplication of a non-zero constant and reordering of factors, similar to how any integer can be factored uniquely into primes up to multiplication of 1 and -1 and reordering of factors (technically, it is with coefficients from a unique factorization domain, for which the rationals is a special case, and up to multiplication of a unit, which for rationals is every non-zero constant). A polynomial is square-free if this unique factorization does not have any polynomials with powers greater than 1. Another theorem from algebra tells us that irreducible polynomials over the rationals do not have any repeated roots, and so given this, it is not hard to see that a polynomial being square-free is equivalent to it not having repeated roots. A square-free factorization of a polynomial is a factorization $P = P_1P_2^2 \cdots P_n^n$, where each $P_i$ is square-free (in other words, $P_1$ is the product of all the irreducible factors that appear to the power 1, $P_2$ is the product of all the irreducible factors that appear to the power 2, and so on). There is a relatively simple algorithm to compute the square-free factorization of a polynomial, which is based on the fact that $\gcd(P, \frac{dP}{dx})$ reduces the power of each irreducible factor by 1. That is, $\gcd\left(P, \frac{dP}{dx}\right) = P_2P_3^2 \cdots P_n^{n-1}$. It is not too hard to prove this using the product rule on the factorization of $P$.
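This observation turns into a short algorithm by repeatedly taking gcds with the derivative. Here is a sketch of the naive version in SymPy (my own code, for nonconstant input; SymPy's production implementation is `sqf_list`):

```python
import sympy as sp

def squarefree_factors(P, x):
    """Return [P1, P2, ...] with P = c * P1 * P2**2 * P3**3 * ...,
    via the repeated-gcd scheme: A0 = P, A_{i+1} = gcd(A_i, A_i')."""
    A = [P]
    while sp.degree(A[-1], x) > 0:
        A.append(sp.gcd(A[-1], sp.diff(A[-1], x)))
    # B_i = A_i / A_{i+1} = P_{i+1} * P_{i+2} * ... * P_n
    B = [sp.quo(A[i], A[i + 1], x) for i in range(len(A) - 1)]
    # P_i = B_{i-1} / B_i; the last quotient is P_n itself.
    return [sp.quo(B[i], B[i + 1], x) for i in range(len(B) - 1)] + [B[-1]]

x = sp.symbols('x')
print(squarefree_factors(sp.expand((x + 1)*(x + 2)**2), x))  # [x + 1, x + 2]
```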
So you can see that by computing $\frac{P}{\gcd(P, \frac{dP}{dx})}$, you can obtain $P_1P_2\cdots P_n$. Then, by recursively computing $A_0 = P$, $A_1 = \gcd(A_0, \frac{dA_0}{dx})$, $A_2 = \gcd(A_1, \frac{dA_1}{dx})$, … and taking the quotient each time as above, we can find the square-free factors of $P$. OK, so we know from partial fraction decompositions we learned in calculus that if we have a rational function of the form $\frac{Q(x)}{V(x)^n}$, where $V(x)$ is square-free, the integral will be a rational function if $n > 1$ and a logarithm if $n = 1$. We can use the partial fraction decomposition, which is easy to find once we have the square-free factorization of the denominator, to rewrite the remaining rational function as a sum of terms of the form $\frac{Q_k}{V_k^k}$, where each $V_k$ is square-free. Consider a single such term, and drop the subscripts: $\frac{Q}{V^k}$. Because $V$ is square-free, $\gcd(V, V')=1$, so the Extended Euclidean Algorithm gives us $B_0$ and $C_0$ such that $B_0V' + C_0V=1$ (recall that, by Bézout's identity, if $g$ is the gcd of $p$ and $q$, then there exist $a$ and $b$ such that $ap+bq=g$. This holds for integers as well as polynomials). Thus we can find $B$ and $C$ such that $BV' + CV= \frac{Q}{1-k}$. Multiplying through by $\frac{1-k}{V^k}$, $\frac{Q}{V^k}=-\frac{(k-1)BV'}{V^k} + \frac{(1-k)C}{V^{k-1}}$, which is equal to $\frac{Q}{V^k} = (\frac{B'}{V^{k-1}} - \frac{(k-1)BV'}{V^k}) + \frac{(1-k)C-B'}{V^{k-1}}$. You may notice that the term in the parenthesis is just the derivative of $\frac{B}{V^{k-1}}$, so we get $\int\frac{Q}{V^k}=\frac{B}{V^{k-1}} + \int\frac{(1-k)C - B'}{V^{k-1}}$. This is called Hermite Reduction. We can recursively reduce the integral on the right hand side until $k=1$.
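One round of this reduction is easy to replicate with SymPy's extended Euclidean algorithm `gcdex`. A worked example of my own (not from the post), with $Q = 1$, $V = x^2 + 1$, $k = 2$:

```python
import sympy as sp

x = sp.symbols('x')
Q, V, k = sp.Integer(1), x**2 + 1, 2

# Extended Euclid: gcdex returns (B0, C0, g) with B0*V' + C0*V = g,
# and g = 1 because V is square-free (so gcd(V, V') = 1).
B0, C0, g = sp.gcdex(sp.diff(V, x), V, x)
assert g == 1

B = sp.cancel(Q*B0/(1 - k))
C = sp.cancel(Q*C0/(1 - k))

# One Hermite reduction step:
#   Q/V**k = d/dx(B/V**(k-1)) + ((1-k)*C - B') / V**(k-1)
rational_part = B/V**(k - 1)
remaining = ((1 - k)*C - sp.diff(B, x))/V**(k - 1)
assert sp.simplify(sp.diff(rational_part, x) + remaining - Q/V**k) == 0

print(rational_part)  # the rational part of the antiderivative of 1/(x**2 + 1)**2
```

Here `rational_part` is $\frac{x}{2(x^2+1)}$ and the remaining integrand has a square-free denominator, so it contributes to the logarithmic (here, arctangent) part.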
Note that there are more efficient ways of doing this that do not actually require us to compute the partial fraction decomposition, and there is also a linear version due to Mack (this one is quadratic), and an even more efficient algorithm called the Horowitz-Ostrogradsky Algorithm, that doesn't even require a square-free decomposition. So when we have finished the Hermite Reduction, we are left with integrating rational functions with purely square-free denominators. We know from calculus that these will have logarithmic integrals, so this is the logarithmic part. First, we need to look at resultants and PRSs. The resultant of two polynomials is defined as the product of the differences of their roots, i.e., $resultant(A, B) = \prod_{i=1}^n\prod_{j=1}^m (\alpha_i - \beta_j)$, where $A = (x - \alpha_1)\cdots(x - \alpha_n)$ and $B = (x - \beta_1)\cdots(x - \beta_m)$ are monic polynomials split into linear factors. Clearly, the resultant of two polynomials is 0 if and only if the two polynomials share a root. It is an important result that the resultant of two polynomials can be computed from only their coefficients by taking the determinant of the Sylvester Matrix of the two polynomials. However, it is more efficiently calculated using a polynomial remainder sequence (PRS) (sorry, there doesn't seem to be a Wikipedia article), which in addition to giving the resultant of $A$ and $B$, also gives a sequence of polynomials with some useful properties that I will discuss below. A polynomial remainder sequence is a generalization of the Euclidean algorithm where in each step, the remainder $R_i$ is divided by a constant $\beta_i$. The Fundamental PRS Theorem shows how to compute specific $\beta_i$ such that the resultant can be calculated from the polynomials in the sequence.
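The Sylvester-matrix characterization is easy to spell out and compare against SymPy's built-in `resultant` (my sketch, not code from the post):

```python
import sympy as sp

x = sp.symbols('x')

def sylvester_resultant(f, g, x):
    """Resultant of f and g as the determinant of their Sylvester matrix."""
    fc = sp.Poly(f, x).all_coeffs()
    gc = sp.Poly(g, x).all_coeffs()
    m, n = len(fc) - 1, len(gc) - 1       # degrees of f and g
    rows = []
    for i in range(n):                    # n shifted copies of f's coefficients
        rows.append([0]*i + fc + [0]*(n - 1 - i))
    for i in range(m):                    # m shifted copies of g's coefficients
        rows.append([0]*i + gc + [0]*(m - 1 - i))
    return sp.Matrix(rows).det()

# f has roots +-i, g has roots +-1; the product of the differences is 4.
f, g = x**2 + 1, x**2 - 1
r = sylvester_resultant(f, g, x)
print(r)                                  # 4
assert r == sp.resultant(f, g, x)
```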
Then, if we have $\frac{A}{D}$ left over from the Hermite Reduction (so $D$ square-free), let $R=\mathrm{resultant}_x(A-t\frac{dD}{dx}, D)$, where $t$ is a new variable (so $R$ is a polynomial in $t$), and let $\alpha_i$ be the distinct roots of $R$. Let $p_i=\gcd(A - \alpha_i\frac{dD}{dx}, D)$. Then it turns out that the logarithmic part of the integral is just $\alpha_1\log{p_1} + \alpha_2\log{p_2} + \cdots + \alpha_n\log{p_n}$. This is called the Rothstein-Trager Algorithm. However, this requires finding the prime factorization of the resultant, which can be avoided if a more efficient algorithm called the Lazard-Rioboo-Trager Algorithm is used. I will talk a little bit about it. It works by using subresultant polynomial remainder sequences. It turns out that the above $\gcd(A-\alpha\frac{dD}{dx}, D)$ will appear in the PRS of $D$ and $A-t\frac{dD}{dx}$. Furthermore, we can use the PRS to immediately find the resultant $R=\mathrm{resultant}_x(A-t\frac{dD}{dx}, D)$, which as we saw, is all we need to compute the logarithmic part. So that's rational integration. I hope I haven't bored you too much, and that this made at least a little sense. I also hope that it was all correct. Note that this entire algorithm has already been implemented in SymPy, so if you plug a rational function in to `integrate()`, you should get back a solution. However, I describe it here because the transcendental case of the Risch Algorithm is just a generalization of rational function integration. As for work updates, I found that the Poly version of the heuristic Risch algorithm was considerably slower than the original version, due to inefficiencies in the way the polynomials are currently represented in SymPy. So I have put that aside, and I have started implementing algorithms from the full algorithm. There's not much to say on that front. It's tedious work.
I copy the algorithm from Bronstein's book, then try to make sure that it is correct based on the few examples given and from the mathematical background given, and when I'm satisfied, I move on to the next one. Follow my integration branch if you are interested. In my next post, I'll try to define some terms, like "elementary function," and introduce a little differential algebra, so you can understand a little bit of the nature of the general integration algorithm.
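As an appendix to the Rothstein-Trager step described earlier, here is a worked SymPy example of my own (not from the post) for $\int \frac{dx}{x^2-2}$, i.e. $A = 1$, $D = x^2 - 2$. Note the `extension=True` option, which I believe is needed so the gcd is computed over $\mathbb{Q}(\sqrt{2})$ rather than treating $\sqrt{2}$ as an independent symbol:

```python
import sympy as sp

x, t = sp.symbols('x t')
A, D = sp.Integer(1), x**2 - 2

# R = resultant of A - t*D' and D with respect to x; a polynomial in t.
R = sp.resultant(A - t*sp.diff(D, x), D, x)
alphas = sp.solve(R, t)   # the roots alpha_i of R

# Each root alpha contributes alpha*log(gcd(A - alpha*D', D)).
result = sp.Integer(0)
for a in alphas:
    p = sp.gcd(A - a*sp.diff(D, x), D, extension=True)
    result += a*sp.log(p)

# Sanity check: the logarithmic part differentiates back to A/D.
assert sp.simplify(sp.diff(result, x) - A/D) == 0
print(result)
```

The two roots are $\pm\frac{\sqrt 2}{4}$, giving $\frac{\sqrt 2}{4}\log(x-\sqrt 2) - \frac{\sqrt 2}{4}\log(x+\sqrt 2)$.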
http://mathhelpforum.com/math-challenge-problems/3359-rubik-s-cube-riddle.html
# Thread: 1. ## Rubik's Cube Riddle

Anyone who has ever played with a rubik's cube knows that it is possible to only complete 1 of the 6 sides. Anyone who has ever played with a rubik's cube knows that it is NOT possible to only complete x of the 6 sides. find x (I am proud of this one; I made it up last night!)

2. Originally Posted by MathGuru Anyone who has ever played with a rubik's cube knows that it is possible to only complete 1 of the 6 sides. Anyone who has ever played with a rubik's cube knows that it is NOT possible to only complete x of the 6 sides. find x (I am proud of this one; I made it up last night!) The only way that I ever solved a Rubik's cube is by levering the thing apart, and reassembling it in a solved configuration (better still, reassembling it in a configuration which cannot be solved) RonL

3. Originally Posted by MathGuru Anyone who has ever played with a rubik's cube knows that it is possible to only complete 1 of the 6 sides. Anyone who has ever played with a rubik's cube knows that it is NOT possible to only complete x of the 6 sides. find x I personally have spent a lot of wasted time on my Rubik's Cube. And through the many minutes/hours spent on it I had discovered that two adjacent sides control all the other ones. If two adjacent sides are correct, then the whole cube is correct. Therefore, it is possible to get two opposite sides completed without completing the cube, but a third completed side would automatically be adjacent to both opposite sides, so that side could not be completed unless the cube is. So, $3 \leq x \leq 5$ On a side note, I think there are 1.69267*10^13 different combinations of a Rubik's Cube.

4. I can solve a Rubik in approximately 5 minutes. I believe the solution algorithm was obtained through group theory. Because it is a puzzle my intuition tells me it has something to do with permutation groups. I was always interested to see how mathematicians derived an algorithm.

5.
Originally Posted by Quick I personally have spent a lot of wasted time on my Rubik's Cube. And through the many minutes/hours spent on it I had discovered that two adjacent sides control all the other ones. If two adjacent sides are correct, then the whole cube is correct. Therefore, it is possible to get two opposite sides completed without completing the cube, but a third completed side would automatically be adjacent to both opposite sides, so that side could not be completed unless the cube is. So, $3 \leq x \leq 5$ On a side note, I think there are 1.69267*10^13 different combinations of a Rubik's Cube. I have solved my cube for 3 sides, so this is possible. Note: The Rubik's Revenge (4 squares to a side) CAN be solved for 4 sides. And though I am loath to suggest that I compete with ThePerfectHacker in anything, I can solve the original Rubik's cube in less than 2 and a half minutes. BTW: If you can solve a Rubik's cube and a Revenge, you can also solve the Professor's cube (5 squares) and any other that they might make with a higher number of cubes. (The guy who sold me the Professor's cube would be mortified to know that I've had it out of the box. ) -Dan

6. Originally Posted by ThePerfectHacker I can solve a Rubik in approximately 5 minutes. I believe the solution algorithm was obtained through group theory. Because it is a puzzle my intuition tells me it has something to do with permutation groups. I was always interested to see how mathematicians derived an algorithm. I know of two ways to argue a solution to the Rubik's cube. The first is cheating: figure out how to undo what the person did to mix it up. Obviously if you do it backward, it will be solved. (This is not as trivial as it sounds...there are many physical systems that you can't do this with.) Another one is to do what the person did to mix it up over and over and over and over...
I forget what theorem of group theory this refers to, but if you perform a series of moves on the cube and repeat this series long enough, the cube will eventually (several life-times of the Universe, no doubt!) come back to a solved state. I have verified this for a few simple patterns. Obviously, since we have to know what the person did to mess up the cube in the first place, neither of these methods of proof for the existence of a solution is very satisfying! -Dan

7. Originally Posted by CaptainBlack The only way that I ever solved a Rubik's cube is by levering the thing apart, and reassembling it in a solved configuration (better still, reassembling it in a configuration which cannot be solved) RonL Which, as those who know me will tell you, is my approach to most puzzles/problems. (Don't tell the puzzle editors of the Sunday Times or NewScientist or they may ask for prizes back) RonL

8. Originally Posted by Quick Therefore, it is possible to get two opposite sides completed without completing the cube, but a third completed side would automatically be adjacent to both opposite sides, so that side could not be completed unless the cube is. So, $3 \leq x \leq 5$ This is not correct. You can see a java applet with a rubik's cube that has 3 sides completed here. No one has recognized the obvious answer yet. . .

9. Well, you obviously can't only complete five sides, as then the only piece that would have a chance of being incorrect on the 6th side would be the center square, and these don't even move. If one square is in the wrong place another must also be. Hence if 5 sides are complete then the 6th must also be. I don't think four sides are possible either, though, since if you rotate one corner, you cause 3 sides to be incorrect, and you then must rotate another corner to undo this. Also, if you switch one edge piece with another edge piece you affect three sides. But not sure on this one.

10. Originally Posted by MathGuru This is not correct.
You can see a java applet with a rubik's cube that has 3 sides completed here. Where does it show three sides completed? I couldn't see any that didn't have all the other sides completed as well.

11. Originally Posted by Quick Where does it show three sides completed? I couldn't see any that didn't have all the other sides completed as well. Are you sure? I have never been able to get three sides, and believe me I've tried. Picture attached; this is two views of the same cube. It has to be possible because you can play the video that will solve this cube from this configuration. Aradesh is right about 5 not being possible, because 5 means all 6 are correct. I am not sure about 4 . . .

12. Originally Posted by MathGuru Picture attached; this is two views of the same cube. It has to be possible because you can play the video that will solve this cube from this configuration. Aradesh is right about 5 not being possible, because 5 means all 6 are correct. I am not sure about 4 . . . Darn!

13. By the way Quick, I love your signature. . . maybe it's time to add an avatar!

14. Originally Posted by MathGuru Picture attached; this is two views of the same cube. It has to be possible because you can play the video that will solve this cube from this configuration. Aradesh is right about 5 not being possible, because 5 means all 6 are correct. I am not sure about 4 . . . The only time I have ever solved the cube for 4 sides is when one of the pieces popped out and I put it back in the wrong way. I don't have a proof, but 4 sides are impossible. (That is to say I couldn't solve the cube from the starting position of when it was in a state of 4 sides solved, so the initial state is "unnatural" in some sense.) -Dan

15. ## Is it a good riddle? As far as the riddle is concerned . . . allowing for popping out pieces and rearranging them, only 4 sides is possible. So 5 sides is the answer to the riddle. Is it possible to only complete 4 sides of a natural rubik's cube?
I do not know. Is it a good riddle? It is the first one I ever came up with.
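A footnote on the combinatorics mentioned in the thread (my addition, not a post from the forum): the combination count quoted above is an undercount. The standard count of reachable positions of a 3x3x3 cube is 8! * 3^7 * 12! * 2^11 / 2, roughly 4.33*10^19:

```python
from math import factorial

corners = factorial(8) * 3**7        # corner permutations x corner orientations
edges = factorial(12) * 2**11        # edge permutations x edge orientations
reachable = corners * edges // 2     # corner and edge permutation parities must match
print(reachable)                     # 43252003274489856000
```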
http://mathhelpforum.com/calculus/209365-change-variables-needed-print.html
# Change of variables needed?

• December 8th 2012, 01:26 PM cubejunkies

Change of variables needed? Let there be a solid region bounded below by the cone ${ z }^{ 2 }={ x }^{ 2 }+{ y }^{ 2 }$ and above by the sphere $1={ x }^{ 2 }+{ y }^{ 2 }+{ z }^{ 2 }$. Calculate the flux through the boundary surface of this region due to a vector field F(x, y, z), whose divergence cleanly works out to be 2z. This turns out to be a triple integral $\int _{ 0 }^{ 2\pi }{ \int _{ 0 }^{ \frac { \sqrt { 2 } }{ 2 } }{ \int _{ r }^{ \sqrt { 1-r^{ 2 } } }{ ...\quad dz\quad dr\quad d\theta } } }$ BUT I'm not sure if I should integrate 2z or 2z*r, with the r coming from changing coordinate systems from rectangular to cylindrical. I don't know if I'm even supposed to be changing coordinate systems, since z is already one of the cylindrical coordinates. Do I need to treat this as if I changed variables, or should I just plainly integrate 2z? Thanks Anthony
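For what it's worth (my note, not a reply from the thread): the Jacobian of the change to cylindrical coordinates is needed regardless of z appearing in both systems, since dx dy dz = r dz dr dtheta, so the integrand is 2*z*r. A quick SymPy check of the resulting value:

```python
import sympy as sp

r, theta, z = sp.symbols('r theta z', nonnegative=True)

# Divergence theorem in cylindrical coordinates: dV = r dz dr dtheta,
# so we integrate 2*z*r over the ice-cream-cone region.
flux = sp.integrate(2*z*r,
                    (z, r, sp.sqrt(1 - r**2)),
                    (r, 0, sp.sqrt(2)/2),
                    (theta, 0, 2*sp.pi))
print(flux)   # pi/4
```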
http://calculus7.org/2012/01/31/another-orthonormal-basis-hermite-functions/
being boring

## Another orthonormal basis: Hermite functions

Posted on 2012-01-31

This is an orthonormal basis for $L^2(\mathbb R)$. Since the measure of $\mathbb R$ is infinite, functions will have to decay at infinity in order to be in $L^2$. The Hermite functions are $\displaystyle \Phi_n(x)=(2^n n! \sqrt{\pi})^{-1/2} H_n(x)e^{-x^2/2}$ where $H_n$ is the nth Hermite polynomial, defined by $\displaystyle H_n(x)=(-1)^n e^{x^2} \left(\frac{d}{dx}\right)^n e^{-x^2}$. The goal is to prove that the functions $\Phi_n$ can be obtained from $x^n e^{-x^2/2}$ via the Gram-Schmidt process. (They actually form a basis, but I won't prove that.)

One can observe that the term $e^{-x^2/2}$ would be unnecessary if we considered the weighted space $L^2(\mathbb R, w)$ with weight $w(x)=e^{-x^2}$ and the inner product $\langle f,g\rangle=\int_{\mathbb R} fg\,w\,dx$. In this language, we orthogonalize the sequence of monomials $\lbrace x^n\rbrace\subset L^2(\mathbb R, w)$ and get the ON basis of polynomials $\{c_n H_n\}$ with $c_n = (2^n n! \sqrt{\pi})^{-1/2}$ being a normalizing constant. But since weighted spaces were never introduced in class, I'll proceed with the original formulation.

First, an unnecessary graph of $\Phi_0,\dots,\Phi_4$; the order is red, green, yellow, blue, magenta. [Figure: Hermite functions]

Claim 1. $H_n$ is a polynomial of degree $n$ with the leading term $2^n x^n$. Proof by induction, starting with $H_0=1$. Observe that $\displaystyle H_{n+1}=- e^{x^2} \frac{d}{dx}\left(e^{-x^2} H_n\right) =2x H_n - H_n'$ where the first term has degree $n+1$ and the second $n-1$. So, their sum has degree exactly $n+1$, and the leading coefficient is $2^{n+1}$. Claim 1 is proved.

In particular, Claim 1 tells us that the span of the $\Phi_0,\dots,\Phi_n$ is the same as the span of $\lbrace x^k e^{-x^2/2}\colon 0\le k\le n\rbrace$.

Claim 2. $\Phi_m\perp \Phi_n$ for $m\ne n$. We may assume $m<n$. Must show $\int_{\mathbb R} H_m(x) H_n(x) e^{-x^2}\,dx=0$.
Since $H_m$ is a polynomial of degree $m<n$, it suffices to prove (*) $\displaystyle \int_{\mathbb R} x^k H_n(x) e^{-x^2}\,dx=0$ for integers $0\le k<n$. Rewrite (*) as $\int_{\mathbb R} x^k \left(\frac{d}{dx}\right)^n e^{-x^2} \,dx=0$ and integrate by parts repeatedly, throwing the derivatives onto $x^k$ until the poor guy can't handle it anymore and dies. No boundary terms appear because $e^{-x^2}$ decays superexponentially at infinity, easily beating any polynomial factors. Claim 2 is proved.

Combining Claim 1 and Claim 2, we see that $\Phi_n$ belongs to the $(n+1)$-dimensional space $\mathrm{span}\,\lbrace x^k e^{-x^2/2}\colon 0\le k\le n\rbrace$, and is orthogonal to the $n$-dimensional subspace $\mathrm{span}\,\lbrace x^k e^{-x^2/2}\colon 0\le k\le n-1\rbrace$. Since the "Gram-Schmidtization" of $x^n e^{-x^2/2}$ has the same properties, we conclude that $\Phi_n$ agrees with this "Gram-Schmidtization" up to a scalar factor. It remains to prove that the scalar factor is unimodular ($\pm 1$ since we are over reals).

Claim 3. $\langle \Phi_n, \Phi_n\rangle=1$ for all $n$. To this end we must show $\int_{\mathbb R} H_n(x)H_n(x)e^{-x^2}\,dx =2^n n! \sqrt{\pi}$. Expand the first factor $H_n$ into monomials, use (*) to kill the degrees less than $n$, and recall Claim 1 to obtain $\int_{\mathbb R} H_n(x)H_n(x)e^{-x^2}\,dx = 2^n \int_{\mathbb R} x^n H_n(x)e^{-x^2}\,dx = (-1)^n 2^n\int_{\mathbb R} x^n \left(\frac{d}{dx}\right)^n e^{-x^2} \,dx$. As in the proof of Claim 2, we integrate by parts, throwing the derivatives onto $x^n$. After $n$ integrations the result is $2^n \int_{\mathbb R} n! e^{-x^2} \,dx = 2^n n! \sqrt{\pi}$, as claimed.

P.S. According to Wikipedia, these are the "physicists' Hermite polynomials". The "probabilists' Hermite polynomials" are normalized to have the leading coefficient 1.
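Claims 2 and 3 can be spot-checked for small $n$ with SymPy's built-in physicists' Hermite polynomials (a quick sketch of mine, not part of the original post):

```python
import sympy as sp

x = sp.symbols('x', real=True)

def Phi(n):
    """The n-th Hermite function (2^n n! sqrt(pi))^(-1/2) H_n(x) e^(-x^2/2)."""
    c = 1 / sp.sqrt(2**n * sp.factorial(n) * sp.sqrt(sp.pi))
    return c * sp.hermite(n, x) * sp.exp(-x**2 / 2)

# <Phi_m, Phi_n> should be 1 when m == n and 0 otherwise.
for m in range(3):
    for n in range(3):
        ip = sp.integrate(Phi(m)*Phi(n), (x, -sp.oo, sp.oo))
        assert sp.simplify(ip - (1 if m == n else 0)) == 0

print("Phi_0..Phi_2 are orthonormal")
```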
http://mathoverflow.net/questions/109480?sort=votes
## The highest root of an ADE quiver

Let $\Gamma$ be a finite subgroup of $SL_2({\mathbb C})$, and $Q$ the set of its irreducible representations. McKay makes $Q$ into a directed graph by having $V \to W$ if $W \leq V \otimes {\mathbb C}^2$, where the latter comes from the natural action of $\Gamma$ on ${\mathbb C}^2$. (But since ${\mathbb C}^2 \otimes {\mathbb C}^2$ has an $SL_2$-invariant hence $\Gamma$-invariant vector, the directed graph is actually undirected: each edge comes with its reverse.) In this way we get a graph with a vector space at each vertex. McKay observes that the graphs so arising are exactly the simply-laced affine Dynkin diagrams, with the trivial rep as the affine node. (In particular, the extra symmetry of the affine diagram comes here from $(\Gamma / \Gamma')^*$, which is therefore identifiable with $Z(G)$, for $G$ the corresponding simply-connected Lie group. I wonder if there's some larger correspondence there... but that's not my question.)

If we toss that node, and orient the edges (i.e. throw out half), we can look at the "roots", or indecomposable representations, of the resulting quiver. McKay observes further (in effect) that the largest such quiver representation has the same-dimensional vector spaces as in the first construction. But now, since it's a quiver representation, there are maps between the spaces. So my question:

In McKay's construction, the vertices of a Dynkin diagram are labeled by nontrivial irreps ${V}$ of $\Gamma$. Given an orientation on the diagram and an edge $V \to W$, is there a natural linear map $V \to W$, such that the result is the largest indecomposable quiver representation?

Obviously these maps aren't $\Gamma$-equivariant. The natural map is $V \otimes {\mathbb C}^2 \to W$, so maybe these other maps correspond to choosing a vector, or a list of vectors, in ${\mathbb C}^2$.
So a more specific version of the question: If $\vec x$ is a generic vector in ${\mathbb C}^2$, e.g. with no $\Gamma$-stabilizer, do the resulting composite maps $V \cong V \otimes \vec x \hookrightarrow V \otimes {\mathbb C}^2 \twoheadrightarrow W$ give the largest indecomposable? If so, what if $\vec x$ isn't generic? Feel free to add tags; I couldn't think of anything other than rt.representation-theory. -

This is true (and easy) in the type A case (just avoid the coordinate lines). I think it's also pretty easy in type D (you have to avoid all the reflecting lines). Of course, in type E, it's less clear; probably it works, but I wouldn't be that shocked if it failed. – Ben Webster♦ Oct 12 at 20:51 There's an equivalence of derived categories (G-equivariant coherent sheaves on C^2) = (coherent sheaves on a minimal resolution of the du Val singularity). It sends a nontrivial irrep of G (supported at the origin in C^2) to the structure sheaf of a component of the exceptional fiber. I wonder which sheaf on C^2 corresponds to a skyscraper on a node in the exceptional fiber. Whatever it is, it has maps to and from the irrep. – David Treumann Oct 12 at 21:32 The structure sheaf of the primary component, or the irreducible component? I'm pretty sure the exceptional fiber isn't reduced for D,E. – Allen Knutson Oct 12 at 23:22 1 I was a little off: a nontrivial irrep matches to O(-1) on a reduced P^1. The trivial irrep matches to the structure sheaf of the scheme-theoretic exceptional fiber, shifted by 1. More stuff like this available here arxiv.org/abs/math/9812016 – David Treumann Oct 13 at 3:39

## 1 Answer

Part of what makes this question more interesting outside type $A$ is that the highest root can't be projective or injective. A test case to consider is $D_4$, say with all three arrows pointing to the central vertex.
A representation of $D_4$ with dimension vector (1,1,1,2) will be indecomposable provided you don't choose any zero maps or have any two of the maps be multiples. In your setting, this translates into saying that you shouldn't choose the zero vector at any of the three vertices, and they shouldn't map to the same line in the 2-dimensional vector space over the central vertex either. It's not quite clear to me how to express this as a condition on the choice of vectors at the three vertices, but independent generic choices would work. It's a general fact about quiver representations that if the dimensions come from a real root, then if you choose the maps generically, you will get the indecomposable representation. So one way to view the question is whether choosing a vector $v$ for each vertex is sufficiently generic. For $D_n$ with $n>4$, I find it plausible that someone a bit more comfortable with representation theory of finite groups than I am could convince themselves that this works fine as well. The situation for $E$ seems more complicated.

The following is not an answer to the question, but if it's unfamiliar to you, it might be of interest. This setup is called the "algebraic McKay correspondence", and is due to Auslander. Let $S=\mathbb C[x,y]$. Let $R=S^G$. As an $R$-module, $S$ is a direct sum of the maximal Cohen-Macaulay $R$-modules. The maximal CM $R$-modules can also be constructed from the irreps of $\Gamma$ as follows: for $V$ an irrep, take $(V\otimes_{\mathbb C} S)^G$. The endomorphism ring of $S$ as an $R$-module is the preprojective algebra of the corresponding affine type. (If we throw away the node for $R$, we get the finite type.) From this point of view, the arrows of the McKay quiver come with maps for free. In the $A_n$ case, they are just multiplication by $x$ and $y$, so the preprojective relation is just the fact that $xy=yx$. However, I don't see how to get back to finite-dimensional $\mathbb C$-vector spaces in a natural way now. -
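To make the $D_4$ test case concrete, here is a small exact-arithmetic check of my own (not part of the answer above). For dimension vector $(1,1,1,2)$ with the three arrows sending $1 \mapsto v_i$ in the central 2-dimensional space, an endomorphism of the representation is a scalar $a_i$ at each outer vertex together with a $2\times 2$ matrix $M$ at the center satisfying $M v_i = a_i v_i$; the representation is indecomposable exactly when this endomorphism space is one-dimensional (scalars only). The helper names `rank` and `end_dim` are ad hoc.

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix with Fraction entries, by Gaussian elimination."""
    rows = [[Fraction(x) for x in r] for r in rows]
    rk = 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(rk, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            continue
        rows[rk], rows[piv] = rows[piv], rows[rk]
        for i in range(len(rows)):
            if i != rk and rows[i][col] != 0:
                f = rows[i][col] / rows[rk][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[rk])]
        rk += 1
    return rk

def end_dim(v1, v2, v3):
    """Dimension of End of the D4 rep (1,1,1,2).

    Unknowns, in order: (a1, a2, a3, m11, m12, m21, m22); for each outer
    vertex the condition M v_i = a_i v_i contributes two linear equations.
    """
    rows = []
    for i, (x, y) in enumerate((v1, v2, v3)):
        r1 = [0, 0, 0, x, y, 0, 0]; r1[i] = -x   # first component of M v_i - a_i v_i
        r2 = [0, 0, 0, 0, 0, x, y]; r2[i] = -y   # second component
        rows += [r1, r2]
    return 7 - rank(rows)

# Three pairwise non-proportional vectors: End = scalars, so indecomposable.
assert end_dim((1, 0), (0, 1), (1, 1)) == 1
# Two arrows landing on the same line: the representation decomposes.
assert end_dim((1, 0), (1, 0), (0, 1)) == 2
```

The second assertion illustrates the failure mode named in the answer: once two of the maps hit the same line in the central space, the endomorphism algebra jumps in dimension and the representation splits.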
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 50, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9284155964851379, "perplexity_flag": "head"}
http://www.physicsforums.com/showpost.php?p=2763391&postcount=2
The rest-mass energy of the Z is about 91 GeV = 91000 MeV. If you can find a $\beta^{-}$ emitter and a $\beta^{+}$ emitter with decay energies that add to give 91000 MeV, I will be very surprised. Beta decay energies are generally only a few MeV. I don't know what the maximum is, but I'd be surprised if it's larger than 10 MeV.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9419930577278137, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/16243/tetrahedra-with-prescribed-face-angles
## Tetrahedra with prescribed face angles

I am looking for an analogue of the following 2-dimensional fact: Given 3 angles $\alpha,\beta,\gamma\in (0;\pi)$ there is always a triangle with these prescribed angles. It is spherical/Euclidean/hyperbolic, iff the angle sum is bigger than/equal to/smaller than $\pi$. And the lengths of the sides (resp. their ratio in the Euclidean case) can be computed with the sine and cosine laws.

The analogous problem in 3 dimensions would be: Assign to each edge of a tetrahedron a number in $(0;\pi)$. Does there exist a tetrahedron with these numbers as face angles at those edges? And when is it spherical/Euclidean/hyperbolic? Is there a similar invariant to the angle sum? And are there formulas to compute the lengths of the edges? -

## 3 Answers

The short answer is no - there is no single inequality criterion. Already in $\mathbb{R}^3$ everything is much more complicated. Let me give a sample of inequalities the angles should satisfy. Denote by $\gamma_{ij}, 1\leq i < j \leq 4$ the six dihedral angles of a Euclidean tetrahedron. Then:

$$\gamma_{12}+\gamma_{23} + \gamma_{34}+\gamma_{14} \le 2 \pi$$

$$2\pi \le \gamma_{12} + \gamma_{13} + \gamma_{14}+\gamma_{23} + \gamma_{24}+\gamma_{34} \le 3\pi$$

$$0 \le \cos \gamma_{12} + \cos\gamma_{13} + \cos\gamma_{14}+ \cos\gamma_{23} + \cos\gamma_{24}+ \cos\gamma_{34} \le 2$$

(See my book ex. 42.27 for the proofs of these inequalities - they are not terribly difficult, so you might enjoy proving them yourself). This shows that the set of allowed sextuples of angles is rather complicated (for spherical/hyperbolic tetrahedra with angles close to $\gamma_{ij}$, these angles will have to satisfy these inequalities as well). The "invariant" you mention corresponds to the unique equation the angles satisfy in the Euclidean space. 
The latter is also rather delicate: it is the Gauss-Bonnet equation $\omega_1+...+\omega_4=4\pi$, where $\omega_i$ is the curvature of the $i$-th vertex - you need to use the spherical cosine theorem to compute it from the dihedral angles (see e.g. Prop. 41.3 in my book). Finally, you might like to take a look at this interesting paper by Rivin, to see that a similar generalization of the triangle inequality is just as difficult. To answer your last question (edge lengths from dihedral angles), yes, this is known. I am not an expert on this, but I would start with this recent paper. - Your third inequality contains a misprint I guess. – Petya Feb 25 2010 at 17:09 Right. Fixed now. – Igor Pak Feb 25 2010 at 17:54

There is an article by K. Wirth and A. Dreiding which you might find helpful: Edge lengths determining tetrahedrons, Elemente der Mathematik, volume 64 (2009), 160-170. The title talks about edge lengths, but the approach taken involves taking a triangle drawn in the plane and placing three triangles along its edges to form a "net" with which to try to fold the result into a tetrahedron. The paper discusses circumstances under which this can be done. - There has been work on Gram matrices that appears relevant, see e.g. Theorems 14-5 on p. 24-5. Also of peripheral interest: there is a hyperbolic generalization of the Dehn invariant. But as far as I can tell this sort of thing can't really be a generalization of any 2D construction. -
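As a quick numerical sanity check (mine, not from the answers): the regular Euclidean tetrahedron, whose six dihedral angles all equal $\arccos(1/3)\approx 70.53^\circ$, satisfies all three inequalities above, and it hits the upper bound of the cosine inequality exactly, since $6\cdot\cos(\arccos(1/3))=2$.

```python
import math

g = math.acos(1.0 / 3.0)    # dihedral angle of the regular tetrahedron
angles = [g] * 6            # all six dihedral angles are equal

# First inequality: four angles around a 4-cycle of edges (12, 23, 34, 14).
assert 4 * g <= 2 * math.pi

# Second inequality: the total dihedral angle sum lies between 2*pi and 3*pi.
s = sum(angles)
assert 2 * math.pi <= s <= 3 * math.pi

# Third inequality: the cosines sum to 6 * (1/3) = 2, the extreme case.
c = sum(math.cos(a) for a in angles)
assert 0 <= c <= 2 + 1e-12

print(round(math.degrees(g), 2))   # 70.53
```

That the regular tetrahedron sits exactly on the boundary of the cosine inequality is consistent with it being the most "balanced" configuration.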
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9389889240264893, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/138056/is-the-right-shift-operator-bounded
# Is the right shift operator bounded?

I was reading my lecture notes for functional analysis when I came across the following statement: Let $(e_{n})$ be a total orthonormal sequence in a separable Hilbert space H. The right shift operator, defined as the linear operator $T: H\rightarrow{}H$ such that $Te_{n} = e_{n+1}$ for all $n$, is bounded. The statement seems intuitively correct to me, but I find the proof of it quite confusing. The proof goes like this:

Proof: For every $x\in H$, since $(e_{n})$ is total, write $\displaystyle x=\lim_{n\rightarrow{\infty}}x_{n}$, where $\displaystyle x_{n}=\sum_{k=1}^{n}<x,e_{k}>e_{k}$. Then we have $||Tx_{n}||^{2}=||\sum_{k=1}^{n}<x,e_{k}>Te_{k}||^{2}=||\sum_{k=1}^{n}<x,e_{k}>e_{k+1}||^{2}= \sum_{k=1}^{n}|<x,e_{k}>|^{2}$. Therefore $||Tx||^{2}\stackrel{(\ast)}{=}\lim_{n\rightarrow{\infty}}||Tx_{n}||^{2}=\sum_{k=1}^{\infty}|<x,e_{k}>|^{2}=||x||^{2}$. Thus, $T$ is bounded and isometric.

However, I think there is something fishy with the proof: In the equality $(\ast)$, I believe the proof is using that $\displaystyle ||Tx||=||T(\lim_{n\rightarrow\infty}x_{n})||=||\lim_{n\rightarrow\infty}Tx_{n}||=\lim_{n\rightarrow{\infty}}||Tx_{n}||$. But for the second equality to hold, it is already assuming that T is indeed continuous, which implies boundedness. And that makes the reasoning circular here... Is my judgement about the proof right? If this proof is indeed wrong, can anybody suggest a correct way to prove the statement? Thank you! - There is a mistake: $Te_n = e_{n+1}$. – Siminore Apr 28 '12 at 14:53 @Siminore Oops :) – Vokram Apr 28 '12 at 14:54 A question: how do you extend the definition of $T$ on the basis to any element of $H$? I mean: what is $Tx$ if $x \in H$? – Siminore Apr 28 '12 at 14:56 @Siminore Yeah your question got me... It seems without continuity you can't really define $Tx$ for arbitrary $x$... 
Hmm so are you suggesting that if we ever want to define the right shift operator, we should assume already that it is continuous and bounded? – Vokram Apr 28 '12 at 15:01 I think this proof is not so meaningless, because with the assumption that $T$ is bounded it also proves that $T$ is isometric. – Norbert Apr 28 '12 at 15:05

## 1 Answer

You are right that there is circularity here. The problem is in your definition of the right shift operator as "the" linear operator such that $T e_n = e_{n+1}$. In fact, there are many such linear operators. (Using Zorn's lemma, we can extend $\{e_n\}$ to a Hamel basis for $H$ by adding some additional vectors $\{u_\alpha\}$. Then we can define an operator $T$ by setting $T e_n = e_{n+1}$ and setting $Tu_\alpha$ to be whatever we want, and this uniquely defines a linear operator.) So the statement that $T x = \lim T x_n$ will have to be part of the definition of $T$.

Following your approach, given $x \in H$, let $x_n = \sum_{k=1}^n \langle x,e_k \rangle e_k$. Then $T x_n$ is unambiguously given by $\sum_{k=1}^n \langle x, e_k \rangle e_{k+1}$. Show that the sequence $\{T x_n\}$ is Cauchy and hence converges to some $y \in H$. Then we can define $Tx$ to be $y$. Now that $T$ is well defined, one can go ahead and check that $T$ is linear, bounded, and an isometry. The moral is that defining a linear operator on a total orthonormal set is only well defined if the operator is assumed to be bounded. - Thank you so much Nate, it is an enlightening explanation. – Vokram Apr 28 '12 at 19:38
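A finite-dimensional toy model of this construction (an illustration only; it does not replace the Cauchy argument, which is what boundedness on all of $H$ actually requires): acting on truncated coefficient vectors $(\langle x,e_1\rangle,\ldots,\langle x,e_n\rangle)$, the shift prepends a zero, its adjoint drops the first coefficient, and the isometry identity $\|Tx_n\|=\|x_n\|$ is immediate.

```python
import math

def norm(x):
    return math.sqrt(sum(abs(c) ** 2 for c in x))

def T(x):
    """Right shift on truncated coefficient vectors: e_k -> e_{k+1}."""
    return [0] + list(x)

def T_adj(x):
    """Left shift, the adjoint of T."""
    return list(x[1:])

x = [3.0, -1.0, 4.0, 1.5]
assert norm(T(x)) == norm(x)                 # T is an isometry
assert T_adj(T(x)) == x                      # T*T = I
assert T(T_adj([1.0, 2.0])) != [1.0, 2.0]    # TT* != I: T is not surjective
```

The last two lines also show the familiar fact that the shift is an isometry but not unitary: it has a left inverse (its adjoint) but no right inverse.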
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 40, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9508402943611145, "perplexity_flag": "head"}
http://mathematica.stackexchange.com/questions/18363/how-to-evaluate-the-sum-over-a-hyperplane
# How to evaluate the sum over a hyperplane I have difficulties in evaluating the following expression: $$\sum_{\small n_1+...+n_{k}=m-k}\; \prod_{i=1}^{k}\frac{1}{(n_i+1)(n_i+2)}$$ I have tried the function `RootSum`, but it doesn't work and gives the following warning: ````RootSum[Sum[Subscript[n, j], {j, 1, k}] - m + k, Product[1/((Subscript[n, i] + 2) (Subscript[n, i] + 1)), {i, 1, k}]] ```` "RootSum::pfn - m + k + $\sum\limits_{j=1}^{k} n_j$ is not a pure function". I will really appreciate it if someone could give me some hints. Thanks! - Could you please write down here the Mathematica expression you've tried to evaluate? – belisarius Jan 24 at 17:00 @belisarius: RootSum[Sum[Subscript[n, j], {j, 1, k}] - m + k, Product[1/((Subscript[n, i] + 2) (Subscript[n, i] + 1)), {i, 1, k}]] – user5551 Jan 24 at 17:27 `RootSum` expects both arguments to be a pure function of a single variable. That's not the case here, and I don't think you can state your problem that way. BTW Any restrictions on the $n_i$'s? Can we assume they are integer or so? For reals it doesn't seem to make sense at all. – Sjoerd C. de Vries Jan 24 at 18:18 ## 1 Answer The formula is a little vague because starting limits for the $n_i$ are not given. However, its form suggests it started out as $$\sum_{(n_1, n_2, \ldots, n_k) \mid 1 \le n_i \wedge n_1+n_2+\cdots+n_k=m} \prod_{i=1}^k \frac{1}{n_i (n_i+1)}$$ for a fixed value of $k$. The relationship to the expression in the question is that we can replace each $n_i$ by $n_i-1$ and allow their values to start at $0$ rather than $1$; their sum is thereby reduced by $k \times 1 = k$ and we obtain precisely the sum in the question, assuming its $n_i$ are allowed to start at $0$ rather than $1$. In any event, the sum appears to be over all ordered partitions $(n_1, n_2, \ldots, n_k)$ of $m$ consisting of exactly $k$ nonzero values or else it can be re-expressed in such a manner. The crux of the problem is to generate these partitions. 
This solution relies on the one-to-one correspondence between any such partition and the set of its distinct partial sums $0, n_1, n_1+n_2, \ldots, n_1+n_2+\cdots+n_k=m$. The intermediate terms determine a $k-1$ element subset of $\{1,2,\ldots,m-1\}$ and, conversely, each such subset when ordered can be construed as such a sequence of partial sums from which all the $n_i$ can be recovered by taking successive differences. Mathematica has a function to produce all such subsets, naturally called `Subsets`. All these key elements can be found in the following expression, which comprises `Subsets` to generate the partition data (already sorted), `Append` and `Prepend` to tack on the final $m$ and initial $0$, `Differences` to recover all the $n_i$, `Product` to multiply the reciprocals, and finally `Total` to sum the terms: 

````f[m_Integer, k_Integer] /; 0 < k <= m := Product[1/(i (i + 1)), {i, #}] & /@ (Differences[Append[Prepend[#, 0], m]] & /@ Subsets[Range[m - 1], {k - 1}]) // Total 
````

For example, consider the case $m=5, k=2$. The ordered two-partitions of $5$ are $(n_1,n_2)$ = $(1,4), (2,3), (3,2), (4,1)$, introducing terms $\frac{1}{1 \cdot 2}\frac{1}{4 \cdot 5} = \frac{1}{40}$, $\frac{1}{2 \cdot 3}\frac{1}{3 \cdot 4} = \frac{1}{72}$, $\frac{1}{3 \cdot 4}\frac{1}{2 \cdot 3} = \frac{1}{72}$, and $\frac{1}{4 \cdot 5}\frac{1}{1 \cdot 2} = \frac{1}{40}$, respectively. They sum to $\frac{2}{40} + \frac{2}{72}= \frac{7}{90}$. And indeed, 

````f[5,2] 
````

$\frac{7}{90}$ -
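As an independent cross-check of the same construction (my addition, in Python rather than Mathematica), one can enumerate the $k-1$ cut points directly and accumulate the terms with exact rational arithmetic; it reproduces $f[5,2]=\frac{7}{90}$.

```python
from fractions import Fraction
from itertools import combinations

def f(m, k):
    """Sum over ordered compositions n1+...+nk = m (each ni >= 1) of
    prod_i 1/(ni*(ni+1)), mirroring the Subsets-of-partial-sums idea."""
    total = Fraction(0)
    for cut in combinations(range(1, m), k - 1):
        bounds = (0,) + cut + (m,)
        parts = [b - a for a, b in zip(bounds, bounds[1:])]  # successive differences
        term = Fraction(1)
        for n in parts:
            term *= Fraction(1, n * (n + 1))
        total += term
    return total

assert f(5, 2) == Fraction(7, 90)   # matches the worked example
assert f(3, 1) == Fraction(1, 12)   # single part: 1/(3*4)
assert f(2, 2) == Fraction(1, 4)    # parts (1,1): (1/2)*(1/2)
```

Using `Fraction` keeps the arithmetic exact, so the comparison with the hand computation is an equality rather than a floating-point approximation.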
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 32, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9042755961418152, "perplexity_flag": "head"}
http://mathoverflow.net/questions/64510/classes-of-metric-spaces-with-additional-structure/64570
## Classes of metric spaces with additional structure [closed]

As is often the case in mathematics, there is an option of studying a more general topic, but this comes with the price of losing some interesting properties which are only present in the more specialized area. In this case, I am interested in learning about the relative usefulness of a metric space carrying some additional structure. To give some obvious examples:

• Polish space (a separable and completely metrizable space) is very useful in measure and probability theory
• Banach space (a complete normed vector space) is a cornerstone of functional analysis

I want to learn about more examples of this kind (not necessarily as important though) to get a better idea of what different kinds of metric spaces look like and also about the fields that are based on them.

1.  What are some other structures one might impose on a metric space to obtain interesting classes of spaces?

In a related spirit

2.  Are there properties (e.g. completeness, separability) that are so important that they are almost always required in some application (or even some field)? If so, what is that application, what are those properties and why are they important (or equivalently, what fails when they are not present)?

Note: sorry if this is too vague or too basic. But I don't have a solid mathematical background, so it's hard to know where to look for answers to these questions. I am just trying to learn about topology in general and about metric spaces in particular (mainly because of their applications in probability theory) and I will definitely accept as an answer a reference to the standard literature. - 6 Honestly, have you read the wikipedia page on metric spaces before posting? 
It seems to me that more or less any reasonable property can be found there en.wikipedia.org/wiki/Metric_space and then by browsing through en.wikipedia.org/wiki/Category:Metric_geometry – Theo Buehler May 10 2011 at 16:07 2 Metric spaces are an extremely useful abstraction of structure found in many interesting examples. It is better to learn about these examples rather than ask "what can we add to the definition of a metric space to obtain interesting things"? – Yemon Choi May 10 2011 at 19:28 2 On looking at that wikipedia entry, I think that the examples given there are worth looking at. Probably the reason why you don't see a mention of Polish spaces is that they are topological spaces that happen to be separable and metrizable; they are not metric spaces per se, because different metrics can give the same topology. Unsurprisingly, Wikipedia's entry on Polish spaces en.wikipedia.org/wiki/Polish_space is a better place to look for those examples. – Yemon Choi May 10 2011 at 19:32 3 A POLISH SPACE IS NOT A METRIC SPACE. It is a separable topological space which can be equipped with a complete metric. There are many different metrics on $R^2$ which yield the same (usual) topology, and making a list of all of them will tell you very little of worth about the Polish space $R^2$ that is not already obvious from using the "usual" metric – Yemon Choi May 10 2011 at 23:40 2 @Marek: Your comment "every complete separable metric space is Polish" suggests (to me) that you missed the point of Yemon's previous comment. Although what you wrote is true, it's also true that some incomplete separable metric spaces are Polish (because the same topology can be induced by a different, complete metric). The property of being Polish is a matter of topology; even if you're given a topology as induced by a metric, you may need to change the metric (but not the topology) to see that your space is Polish. 
– Andreas Blass May 10 2011 at 23:45

## 1 Answer

Another class of metric spaces that is of interest is length spaces. Roughly speaking, these are spaces in which you can measure the length of paths. The distance is then the inf of the lengths of all paths pinned at the two points. Gromov talks about these, as does a book by Burago and Burago (the exact citations elude me). - Interesting, thank you. – Marek May 11 2011 at 6:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9472495317459106, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/65227/square-root-of-a-matrix/65257
# Square root of a matrix Under what conditions does a matrix $A$ have a square root? I saw somewhere that this is true for Hermitian positive definite matrices(whose definition I just looked up). Moreover, is it possible that for some subspace $X \subset M_n(\mathbb R)$ of $n\times n$ matrices over $\mathbb R$, the map $A \mapsto \sqrt{A}$ is continuous? People who want to consider more generality can also look at matrices over $\mathbb C$. Thank you. - 5 – J. M. Sep 17 '11 at 6:15 @J.M.: Is looking at the Jordan form a continuous operation? – Derek Sep 17 '11 at 6:35 3 @J.M. That is not true. For all nilpotent Jordan blocks larger than $2\times 2$, squaring them yields a nontrivial endomorphism. This generates tons of nilpotent matrices which have square roots, even though no individual Jordan block has a square root. There is, in fact, a nice combinatorial description of the nilpotent matrices which have square roots, and which gives the Jordan form of all possible square roots (in general, there are several). Namely, if $a_i=\dim \ker A^i-\dim \ker A^{i-1}$, then $A$ will have a square root iff $a_i=a_{i+1}\Rightarrow a_i$ is even. – Aaron Sep 20 '11 at 16:38 I suppose you're right, @Aaron, but would you happen to have an explicit example of a "rootable" nilpotent on hand, just so I can check my intuition? :) – J. M. Sep 20 '11 at 16:46 1 @J.M. Consider the $4\times 4$ matrix with two $2\times 2$ blocks. This is the square of a matrix conjugate to a $4\times 4$ Jordan block, unless I've done my calculations wrong. – Aaron Sep 20 '11 at 16:49 ## 4 Answers Expanding on J.M.'s comment, if we look at a Jordan block $J$ of size $n$ with eigenvalue 0, then I claim that if $n>1$, $J$ has no square root. Suppose $K^2 = J$. Then $K^{2n} = J^n = 0$, but $K$ is an $n$ by $n$ matrix, so in fact $K^n=0$. If $n$ is even, we conclude that $J^{n/2}=0$ (since $K^2=J$), a contradiction since no power of $J$ less than $n$ can be 0. 
If $n$ is odd, then we conclude $J^{(n-1)/2} K = 0$, and multiplying by $K$ on the right gives $J^{(n+1)/2} = 0$. If $n>1$ this is again a contradiction. On the other hand, if we look at a Jordan block $J$ with eigenvalue $\lambda \ne 0$, then we may write $J = \lambda I + N$ where $N$ is a nilpotent matrix. To find a square root of $J$, expand $(\lambda I + N)^{1/2}$ using the usual Taylor series for the square root around the point $\lambda$, with an increment of $N$. Since $\lambda \ne 0$, all derivatives of the square root are defined at $\lambda$ and since $N$ is nilpotent, the Taylor series terminates and no convergence issues arise. - 1 – Marc van Leeuwen Apr 1 at 10:50

Over the complex numbers (or any other algebraically closed field with $\operatorname{char} k\neq 2$), every invertible matrix has a square root. In fact, over $\mathbb C$, since every invertible matrix has a logarithm, we can take a one-parameter family of matrices $e^{t\log A}$, and taking $t=1/2$ yields a square root of $A$. To see the existence of matrix logarithms, it suffices to show that $I+N$ has a logarithm, where $N$ is nilpotent, and this follows from Taylor series (similar to Ted's proof of the existence of square roots). Thus, we can determine if a matrix $A$ has a square root by restricting to $\displaystyle\bigcup_n \ker A^n$, which is the largest subspace on which $A$ acts nilpotently. In what follows, we will assume that $A$ is nilpotent. Up to conjugation, $A$ is determined by its Jordan normal form. However, equivalent to JNF for a nilpotent matrix is the data $a_i'=\dim \ker A^i$ for all $i$. This is obviously an increasing sequence. Less obvious is that the sequence $(a_i)$ where $a_i=a'_i-a'_{i-1}$ is a decreasing sequence, and hence forms a partition of $\dim V$ where $A:V\to V$. We note that this data is equivalent to the data in JNF, as $a_i-a_{i+1}$ will be the number of Jordan blocks of size $i$. 
More explicitly, a Jordan block of size $k$ corresponds to the partition $(1,1,1,1,1\ldots, 0,0,0,\ldots)$ with $k$ $1's$, and if a nilpotent matrix $A=\oplus A_i$ is written in block form where each block $A_i$ corresponds to a partition $\pi_i$, then $A$ corresponds to the partition $\pi=\sum \pi_i$, where the sum is taken termwise, e.g. $(2,1)+(1,1)+(7,4,2)=(10,6,2)$. Moreover, $A^2$ corresponds to the partition $(a_1+a_2, a_3+a_4,\ldots, a_{2i-1}+a_{2i}, \ldots).$ Because every matrix will be conjugate to a JNF matrix and $\sqrt{SAS^{-1}}=S\sqrt{A}S^{-1}$, we see that a matrix will have a square root if and only if the corresponding partition has a "square root." The only obstruction to a partition having a square root is if two consecutive odd entries are equal. Otherwise, we can take one (of many) square roots by replacing each $a_i$ with the pair $\lceil a_i/2 \rceil, \lfloor a_i/2 \rfloor$. -

Have a look at the Wikipedia article http://en.wikipedia.org/wiki/Matrix_square_root which I found exceptionally good ---- much better than average for Wikipedia! -

Some partial answers to your question can be derived from this answer. Here are two examples: (1) If $A\in M_n(\mathbb C)$ is invertible, then there is an open neighborhood $U$ of $A$ in $M_n(\mathbb C)$ and a holomorphic function $f:U\to M_n(\mathbb C)$ such that $f(B)^2=B$ and $f(B)\in\mathbb C[B]$ for all $B$ in $U$. (2) Let $U$ be the set of all $A$ in $M_n(\mathbb C)$ such that no eigenvalue of $A$ is a nonpositive real number. Then $U$ is open in $M_n(\mathbb C)$. Moreover, there is a holomorphic function $f:U\to M_n(\mathbb C)$ such that we have for all $A$ in $U$: • $f(A)^2=A$, • $f(A)\in\mathbb C[A]$, • if $A$ is a positive definite Hermitian matrix, then $f(A)$ is the usual square root of $A$. (Marc van Leeuwen's first comment below refers to a former version of the answer.) - Fair enough. But I don't really see which part of the question this answers. 
– Marc van Leeuwen Apr 4 at 5:41 Dear Marc: Thanks for your comment. I'll answer it as soon as possible, but I'll be very busy today and tomorrow. I'll try to get back to you Saturday (Paris time) at the latest. – Pierre-Yves Gaillard Apr 4 at 12:46 Dear Marc: I've just edited the answer. Thank you again for your interest! – Pierre-Yves Gaillard Apr 6 at 7:39
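Aaron's combinatorial description of the nilpotent case can be packaged into a few lines of code (my own packaging; the function names are ad hoc). Working with the sequence $a_i=\dim\ker A^i-\dim\ker A^{i-1}$ attached to a nilpotent Jordan type, squaring pairs up consecutive entries, and the square-root criterion says consecutive equal entries must be even. This confirms, for instance, that the square of a single $4\times 4$ Jordan block has type $(2,2)$ — the two-block example from the comments — while a single nilpotent block of size $2$ or $4$ has no square root.

```python
def a_seq(blocks):
    """a_i = dim ker A^i - dim ker A^{i-1} for a nilpotent matrix with the
    given Jordan block sizes (this is the conjugate partition)."""
    return [sum(1 for b in blocks if b >= i) for i in range(1, max(blocks) + 1)]

def square(a):
    """The a-sequence of A^2, given the a-sequence of A: pair up entries."""
    a = list(a) + [0]  # pad so an odd number of entries still pairs up
    return [x for x in (a[i] + a[i + 1] for i in range(0, len(a) - 1, 2)) if x > 0]

def has_sqrt(a):
    """A nilpotent Jordan type has a square root iff any two consecutive
    equal entries of its a-sequence are even."""
    a = list(a) + [0]
    return all(a[i] % 2 == 0 for i in range(len(a) - 1) if a[i] == a[i + 1])

assert a_seq([4]) == [1, 1, 1, 1]       # one 4x4 Jordan block
assert square([1, 1, 1, 1]) == [2, 2]   # its square: two 2x2 blocks
assert square([1, 1, 1]) == [2, 1]      # J_3 squared is J_2 + J_1
assert not has_sqrt([1, 1]) and not has_sqrt([1, 1, 1, 1])
assert has_sqrt([2, 2]) and has_sqrt([2, 1])
```

The first three assertions check the squaring rule against the Jordan-block facts proved in the answers; the last two check the obstruction on both sides.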
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 102, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9269235730171204, "perplexity_flag": "head"}
http://mathematica.stackexchange.com/questions/11342/union-and-intersection-of-intervals/11343
# Union and Intersection of intervals

I have two sets $A = \{1 \leq x \leq 5\}$ and $B = \{5 \leq x \leq 8\}$. Now I want to find the `Union` and `Intersection` of $A$ and $B$. I tried `Union[A, B]` and got `{1 <= x <= 5, 5 <= x <= 8}`, and for `Intersection[A, B]` I got `{}`. The correct answer for $A \cup B$ is `[1, 8]` and $A \cap B$ is `{5}`. How do I tell Mathematica to do that? And if $A = \{1 < x < 5\}$ and $B = \{x > 5\}$, how do I find the `Union` and `Intersection` of $A$ and $B$? - 1 You will want to use `Interval`. – David Carraher Oct 1 '12 at 1:13

## 1 Answer

````a = Interval[{1, 5}]; b = Interval[{5, 8}]; IntervalUnion[a, b] 
````

Interval[{1, 8}]

````IntervalIntersection[a, b] 
````

Interval[{5, 5}] - Thank you very much. – minthao_2011 Oct 1 '12 at 1:23 2 `Interval[{1,5},{5,8}]` also gives `IntervalUnion[a,b]` (+1) – kguler Oct 1 '12 at 1:28 Nice. I wasn't aware of that. – David Carraher Oct 1 '12 at 1:30 1 @minthao: `B = Interval[{1, ∞}]`. – J. M.♦ Oct 1 '12 at 1:52 1 @minthao. Mathematica seems to work only with closed intervals. `Interval[{1,5}]` includes the numbers 1 and 5, and therefore does not match your original inequality, 1<x<5. I don't know whether there is a way to not include the end points. Perhaps this is a question worth asking. – David Carraher Oct 1 '12 at 2:31
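For comparison (my addition, not from the thread), the same closed-interval arithmetic that `Interval` performs is only a few lines in any language; here is a Python sketch with ad hoc helper names, matching the closed-endpoint semantics noted in the comments.

```python
def intersect(a, b):
    """Intersection of two closed intervals (lo, hi), or None if disjoint."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

def union(a, b):
    """Union of two closed intervals: one interval when they overlap or
    touch, otherwise the two pieces in sorted order."""
    if intersect(a, b) is None:
        return sorted([a, b])
    return (min(a[0], b[0]), max(a[1], b[1]))

assert union((1, 5), (5, 8)) == (1, 8)
assert intersect((1, 5), (5, 8)) == (5, 5)   # the single point x = 5
assert intersect((1, 5), (6, 8)) is None
```

As in Mathematica, the degenerate intersection `(5, 5)` represents the one-point set $\{5\}$, mirroring `Interval[{5, 5}]`.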
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9314320683479309, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2011/04/06/derivatives-in-coordinates/?like=1&source=post_flair&_wpnonce=a832ba55c5
# The Unapologetic Mathematician ## Derivatives in Coordinates Let’s take the derivative and see what it looks like in terms of coordinates. Say we have a smooth manifold $M$ and a smooth map $f:U\to N$ from an open subset of $M$ to another smooth manifold $N$. If $p\in U$ is any point, we define the derivative $f_{*p}:\mathcal{T}_pM\to\mathcal{T}_{f(p)}N$ as before. Now, if $(U,x)$ is a coordinate patch — even if there isn’t a single coordinate patch on the whole domain of $f$ we can restrict $f$ down to a coordinate patch containing $p$ — we get a basis of coordinate vectors at $p$. Similarly, if $(V,y)$ is a coordinate patch around $f(p)$ we get a basis of coordinate vectors at $f(p)$. We want to write down the matrix of $f_{*p}$ in terms of these two bases. So, the obvious path is to take one of the coordinate vectors at $p$, hit it with $f_{*p}$, and write the result out in terms of the coordinate vectors at $f(p)$. The generic problem, then, is to calculate the $j$th component — the one corresponding to $\frac{\partial}{\partial y^j}(f(p))$ — of $f_{*p}\left(\frac{\partial}{\partial x^i}(p)\right)$. But we know that this coefficient comes from sticking $y^j$ into this vector and seeing what pops out! $\displaystyle\begin{aligned}\left[f_{*p}\left(\frac{\partial}{\partial x^i}(p)\right)\right](y^j)&=\left[\frac{\partial}{\partial x^i}(p)\right](y^j\circ f)\\&=D_i\left(y^j\circ f\circ x^{-1}\right)\\&=D_i\left(u^j\circ(y\circ f\circ x^{-1})\right)\end{aligned}$ We’re taking the $i$th partial derivative of the $j$th component of the function $y\circ f\circ x^{-1}$, which goes from the open set $x(U)\in\mathbb{R}^m$ into $\mathbb{R}^n$, where $m$ and $n$ are the dimensions of $M$ and $N$, respectively. Like we saw for coordinate transforms in place, this is just the Jacobian again. 
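To make the recipe concrete (a worked example of my own, not from the post): take $M=N=\mathbb{R}^2$ with $f$ the polar-coordinates map $(r,\theta)\mapsto(r\cos\theta, r\sin\theta)$. The matrix of $f_{*p}$ in the standard coordinates is the Jacobian of this map, whose determinant is $r$; one can approximate it numerically with central differences.

```python
import math

def jacobian(f, p, h=1e-6):
    """Numerical Jacobian of f: R^n -> R^m at the point p, entry
    J[i][j] = d f_i / d x_j, by central differences."""
    n = len(p)
    cols = []
    for j in range(n):
        plus = list(p);  plus[j] += h
        minus = list(p); minus[j] -= h
        fp, fm = f(plus), f(minus)
        cols.append([(a - b) / (2 * h) for a, b in zip(fp, fm)])
    return [[cols[j][i] for j in range(n)] for i in range(len(cols[0]))]

def polar(p):
    r, t = p
    return (r * math.cos(t), r * math.sin(t))

J = jacobian(polar, (2.0, 0.7))
det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
assert abs(det - 2.0) < 1e-6               # det of the polar Jacobian is r
assert abs(J[0][0] - math.cos(0.7)) < 1e-6 # d(r cos t)/dr = cos t
```

The assertions check the numerical matrix against the symbolic Jacobian, which is exactly the "write $f$ in coordinates, then differentiate" prescription of the post.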
So if we want to write out the derivative $f_{*p}$ in terms of local coordinates, we first write out our local coordinate version of $f$ as a function from one Euclidean space to another, and then we take the Jacobian of that function at the appropriate point. Posted by John Armstrong | Differential Topology, Topology
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 34, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9028880596160889, "perplexity_flag": "head"}
http://scicomp.stackexchange.com/questions/3564/affecting-the-rank-of-a-gram-matrix-by-configuration-shift
# Affecting the rank of a Gram matrix by configuration shift Let a certain configuration of $n$ points exist in $d$-dimensional space, $X\in\mathbb{R}^{n\times d}$, $d\ll n$. Also, let the corresponding Gram matrix be defined as $G=XX^T$. Since $X$ exists in Euclidean space, the rank of $G$ is $rank(G)=rank(X)=d$. Now, suppose that the configuration $X$ is shifted to a matrix $X'$ (the shift corresponds to a translation of the origin). Could the Gram matrix $G'=X'(X')^T$ have a rank different from that of $G$, i.e., $rank(G)\neq rank(G')$? - 1 The statement "Since $X$ exists in Euclidean space, rank of $G$ is, $rank(G)=rank(X)=d$" is not obvious to me. Are you assuming the rank of the original configuration $G$ is $d$ and interested in whether it could ever decrease by a linear shift? This is trivially true if there are $d$ points and one of the points is shifted onto the origin. – Aron Ahmadia Oct 27 '12 at 15:25 @AronAhmadia: The wording is a little strange, but $d \le n$ does imply that $\text{rank}(X) \le \min\{n,d\} = d$. – Jack Poulson Oct 27 '12 at 21:00 No. The $n\le d$ points could be linearly dependent, in which case $rank(X)<n\le d$. – Wolfgang Bangerth Oct 27 '12 at 23:20 Interpret ''$X$ exists in Euclidean space'' as $X$ spans the space, and the formula becomes correct. – Arnold Neumaier Oct 28 '12 at 11:04 @usero, you've effectively written a new, more interesting question, down in your third paragraph. Do you mind splitting this off into a new question? Continuously modifying a question makes it very difficult for people who come by this page later to understand the answers (because they are targeting the first paragraph). – Aron Ahmadia Oct 30 '12 at 8:54 ## 2 Answers As @AronAhmadia mentioned, shifting one of the points to the origin provides a simple example where the rank changes. Consider the following case where $n=2$ and $d=2$: $$X = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I,$$ where clearly $\text{rank}(X)=2$, and $G=XX^T=I$ has the same rank. 
But, if we shift the origin to the point $[1,0]^T$ to define $$\tilde X = \begin{pmatrix} 0 & 0 \\ -1 & 1 \end{pmatrix},$$ then $\text{rank}(\tilde X)=\text{rank}(\tilde G)=1$. - Thanks. However, this is a bit confusing: the original Gram matrix has $rank(G)=2$, meaning that it has 2 positive eigenvalues. The configuration $X$ is therefore in 2D. But, by eigendecomposition of $\tilde{G}$, which has only one eigenvalue, the configuration is 1D. Am I missing something? – usero Oct 29 '12 at 10:29 I think your confusion comes from mixing the spatial dimension (in this case, the width of $X$) with the rank of the matrix. Consider the case where all of the points are sampled from the origin: then, no matter what the dimension of the points, the rank of $X$ and $XX^T$ is zero. – Jack Poulson Oct 29 '12 at 15:45 – usero Oct 29 '12 at 18:01 According to that paper, the "dimension" of the distance matrix $D$ is the minimum rank over the entire set of matrices of configurations $X$ which generate $D$. Thus, in the previous case, we know that the dimension of the distance matrix $D$, which is generated by both $X$ and $\tilde X$, is at most $1$, since the rank of $\tilde X$ is 1. If you would like to directly compute the dimension of $D$, it is equivalent to the rank of $F$, which is defined by Equation 1 of the paper you linked. – Jack Poulson Oct 29 '12 at 18:33 With your updated question, the answer still remains the same: you can still lose rank due to a configuration shift if the points are trivially distributed. Appealing back to Jack Poulson's answer, imagine now a case where $n \gg 2$, but every point except for one is along the axis $(1,0)$. Assume this last point is on the other axis $(0,1)$. 
If your linear shift moves the point on $(0,1)$ to $(0,0)$, the shifted Gram operator drops to rank one. - I'll try to simplify the question: Suppose a configuration $X\in\mathbb{R}^{n\times 3}$ is given, and $G=XX^T$, $rank(G)=rank(X)=3$. This would mean that a 3D configuration is obtained by eigendecomp. of $G=(U\Lambda^{1/2})(U\Lambda^{1/2})^T$ because there are 3 eigenvalues. However, the above statements indicate that by just shifting $X$, one could obtain $G_1$ that has rank of 2, hence with a configuration actually being 2D. Does there exist a particular choice of shift (origin) for which $rank(X)=rank(XX^T)$ is the true dimensionality of $X$? – usero Oct 29 '12 at 17:28 As long as we are talking about shifts and not rotations, any shift that does not move one of the points onto the origin is guaranteed to preserve dimensionality of the space. By the same principle, the most dimensionality you could lose is 1. – Aron Ahmadia Oct 30 '12 at 8:52 – usero Oct 30 '12 at 11:02
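The 2×2 example from the accepted answer is easy to verify numerically. A minimal NumPy check; nothing here goes beyond what the answer already states:

```python
import numpy as np

X = np.eye(2)                         # two points: (1, 0) and (0, 1)
G = X @ X.T
Xs = X - np.array([1.0, 0.0])         # translate the origin to [1, 0]^T
Gs = Xs @ Xs.T                        # Gram matrix of the shifted configuration

print(np.linalg.matrix_rank(G), np.linalg.matrix_rank(Gs))   # 2 1
```

The shifted configuration is exactly the answer's $\tilde X$, with rows $(0,0)$ and $(-1,1)$, so both it and its Gram matrix have rank one.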
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 60, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9306378960609436, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/307988/cleaning-a-signal-and-computing-period
# Cleaning a signal and computing period I am working with a signal which is a periodic square signal with some kind of noise and some outliers. I would like to know which is the best solution in order to get the period and clean the outliers that can be seen in the image: The final goal is to binarize the signal. Thanks in advance, - 2 You should not ask the same question on two different stackexchanges. Please contact the moderators (click on the link marked `flag` at the bottom of your post) and ask them to migrate this question to dsp.SE which is a better fit. – Dilip Sarwate Feb 19 at 14:11 ## 1 Answer Here's what I would do: 1. Lowpass filter the signal to remove the high-frequency noise. 2. Once you lowpass filter the original signal, it won't look like a square wave anymore. The infinitely sharp transitions will be damped. Thus, apply a $1$-bit quantizer (with hysteresis) to the output of the lowpass filter, so that you "binarize" the lowpass filtered signal. 3. Differentiate the "binarized" signal, in order to obtain a train of impulses. Compute the duration of the time intervals between successive impulses, compile a list of such durations, then compute a histogram, normalize the histogram, and then compute the 1st moment of the normalized histogram (which is an estimate of the expected value of the period). - Dear Rod, I fully understand your answer and I guess it will work well. I will try it immediately. Thanks for your answer. – Dídac Pérez Feb 19 at 11:49 1 A median filter (instead of an average) could be better for step 1. – leonbloy Feb 19 at 14:57 @leonbloy: It's hard to implement a median filter using analog electronics, though. – Rod Carvalho Feb 19 at 19:12 @RodCarvalho Sure - it's also hard to compute histograms. But I don't see anything about analog electronics in the OP. – leonbloy Feb 19 at 20:43 @leonbloy: You're right. The solution I proposed is arguably ugly, but implementable using analog electronics. 
It's how digital signals are regenerated (if I remember my communication systems classes properly). – Rod Carvalho Feb 19 at 20:56
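The three steps of the accepted answer are easy to prototype digitally. A rough NumPy sketch, where the square wave, noise level, window size, and thresholds are made-up test values, and a moving average stands in for the lowpass filter:

```python
import numpy as np

rng = np.random.default_rng(0)
period = 100                                       # true period, in samples
t = np.arange(5000)
clean = (t % period < period // 2).astype(float)   # ideal square wave
noisy = clean + 0.2 * rng.standard_normal(t.size)  # noisy observation

# 1. Lowpass filter: a simple moving average (a median filter, as suggested
#    in the comments, would handle outliers even better).
win = 9
smooth = np.convolve(noisy, np.ones(win) / win, mode="same")

# 2. 1-bit quantizer with hysteresis around the halfway level.
hi_th, lo_th = 0.6, 0.4
bits = np.zeros(smooth.size, dtype=int)
state = 0
for i, v in enumerate(smooth):
    if state == 0 and v > hi_th:
        state = 1
    elif state == 1 and v < lo_th:
        state = 0
    bits[i] = state

# 3. Period estimate: mean spacing between consecutive rising edges
#    (the first moment of the histogram of interval durations).
edges = np.flatnonzero(np.diff(bits) == 1)
est_period = np.diff(edges).mean()
```

The hysteresis band is what keeps residual noise from producing spurious edges; without it, a plain threshold at 0.5 chatters near every transition.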
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9218964576721191, "perplexity_flag": "middle"}
http://crypto.stackexchange.com/questions/tagged/pairings
# Tagged Questions Pairing-based cryptography uses bilinear maps to create a gap group that allows efficient constructions of certain primitives. 1 answer · 55 views ### Exponentiation In the PBC library I need to compute a function $h^l$, where $h$ is an element of $G_2$ and $l$ is a rational number. How can this be done using the PBC library? I have converted the h to ... 1 answer · 54 views ### How are Elliptic Curve Cryptography and Pairing Based Cryptography related? I have been doing a project that uses the PBC library developed by Ben Lynn. But I am still not clear on how PBC is related to ECC. I know that this is a site for complex crypto QA, but I did not know ... 1 answer · 45 views ### Is it possible to create a Bilinear Function with Already Assigned “Multiplicative” Input Groups? Assume that we have an already assigned multiplicative cyclic group $\mathbb Z_p^*$ with order $q=p-1$, where $p$ is a prime number. Is it possible to create a bilinear function $\hat{e}: \mathbb Z_p^*$ ... 1 answer · 138 views ### Simple example for CP-ABE (Ciphertext policy attribute-based encryption) I'm currently working on Ciphertext Policy Attribute-Based Encryption (CP-ABE). So far I'm only using it with a basic understanding of how it actually works. Now I want to understand it a bit better, but ... 0 answers · 158 views ### Security of pairing-based cryptography over binary fields regarding new attacks In the last week, the discrete logarithm problem was broken for the binary fields $\mathbb{F}_{2^{(14 \times 127)}}$ and $\mathbb{F}_{2^{(27 \times 73)}}$. Pairing-based cryptography using binary ... 0 answers · 37 views ### Generating non-supersingular elliptic curves for symmetric pairings I am looking into the application of pairings in CP-ABE in particular. I've noticed that the scheme uses a supersingular curve as the basis of the pairing. Looking through Ben Lynn's thesis for the ... 1 answer · 51 views ### Discrete logs on elliptic curve with embedding degree 3 with the 'MOV' attack The curve $E(\mathbb{F}_{47}):y^2=x^3+x+38$ has order $61$ and $61|47^3-1$, so the embedding degree of $E$ is $3$ and therefore the MOV attack, presumably using some sort of distortion map and a ... 2 answers · 137 views ### Modulus for elliptic curve point multiplication I want to implement a point multiplication ($k \cdot P$) operation on an FPGA. I have a BN curve $y^2=x^3+2$ and a scalar value $k$. The $x$ and $y$ coordinates of the point $P$ are of 256 bits. In the ... 1 answer · 92 views ### Using pairings to verify an extended euclidean relation without leaking the values? Let $P_i(x)$ be polynomials, $i=1,...,n$, $s$ some value, and $g$ a generator of a group $G$ where the discrete logarithm is hard. Assume a prover wants to convince a verifier having access to the ... 1 answer · 75 views ### Why does $e(g,g)^N=1$ hold in bilinear pairings? I can't get the point of prime order bilinear pairings: $\mathbb{G}\times\mathbb{G}\rightarrow\mathbb{G}_T$, $g=$ generator of $\mathbb{G}$, $N=pq$, $p$ and $q$ primes, and $e(g,g)^N=1$. Why ... 2 answers · 156 views ### When do we need composite order groups for bilinear maps and when prime order? Why do we need bilinear groups of composite order? What's the special security property of the composite order group in comparison with one of prime order? To put it another way, when do we need ... 1 answer · 129 views ### Why is pairing based crypto suitable for some particular cryptographic primitives? Why is pairing based crypto being widely used in some special crypto primitives such as ID based crypto and variations of standard signatures? I mean, taking it as deep as possible, what makes it suitable for ... 1 answer · 152 views ### Does Identity-Based Encryption actually solve any problem? Identity based encryption schemes [*] seem to have great potential in high-latency Delay-Tolerant and mobile, ad-hoc networks since they apparently seem to avoid the need for key negotiation and ... 1 answer · 160 views ### Useful pairings for cryptography I've recently looked a bit at pairing based cryptography and I was wondering what properties the groups involved should have in order to be useful for cryptographic purposes? Has anything more exact ... 2 answers · 250 views ### Alternatives to FHE for secure function evaluation As a follow-up to a previous question I asked which was more related to Fully Homomorphic Encryption (FHE), what other cryptographic methods are available for computing a private function on public ... 1 answer · 126 views ### Must the order of the groups in a bilinear map be the same? I've been reading up on bilinear maps and their application to cryptography and one thing I keep seeing hasn't yet clicked. If $e:G_1\times G_2\to G_n$ is a bilinear map, $G_1,G_2,G_n$ are always ... 3 answers · 436 views ### What is Identity-Based Encryption (IBE) and why is it “better”? Most CS/Math undergrads run into the well-known RSA cryptosystem at some point. But about 10 years ago Boneh and Franklin introduced a practical Identity-Based Encryption system (IBE) that has ... 2 answers · 244 views ### Pairing-friendly curves in small characteristic fields There are several well-known techniques to generate pairing-friendly curves of degrees 1 to 36 on prime fields GF(p): Cocks-Pinch, MNT, Brezing-Weng, and several others. In extension fields GF(p^n), ... 1 answer · 395 views ### Mapping points between elliptic curves and the integers My primary question is: Is there an easy way to create a bijective mapping from points on an elliptic curve E (over a finite field) to the integers (desirably to $\mathbb{Z}^*_q$ where $q$ is the ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 34, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9181519150733948, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/52397?sort=newest
## Are there (-2)-curves on an Enriques surface? Let $X$ be an Enriques surface. A $(-2)$-curve is an irreducible rational curve $C$ on $X$ such that $C^2 = -2$. By Proposition [VIII,16.1] from Barth-Peters-Van de Ven, we have that if $D^2 = -2$, then it is a $(-2)$-curve, but do such curves exist? - ## 3 Answers As explained in J.C. Ottem's answer, the generic Enriques surface contains no smooth rational curves at all. However, it can happen that some special Enriques surface $X$ contains $(-2)$-curves, and also infinitely many of them (see this paper by Cossec and Dolgachev). The maximal number of disjoint $(-2)$-curves on $X$ is eight, and Enriques surfaces with eight disjoint $(-2)$-curves are classified in the article Mendes Lopes, Margarida; Pardini, Rita: Enriques surfaces with eight nodes. Math. Z. 241 (2002), no. 4, 673–683. The authors first show that, setting $C_1, \dots, C_8$ to be the exceptional $(-2)$-curves of $X$, the divisor $C_1+\dots+C_8$ is divisible by $2$ in the Picard group of $X$, or equivalently there exists a double cover $\widetilde{X} \to X$ branched exactly over them. The main theorem then states that an Enriques surface with eight disjoint $(-2)$-curves is isomorphic to $X=D_1\times D_2/G$, where $D_1,D_2$ are elliptic curves and $G$ is either $\mathbb{Z}_2^2$ or $\mathbb{Z}_2^3$. - 1 Enriques surfaces always have Picard rank equal to 10, as follows from the Lefschetz (1,1) theorem. However, the effective cone is not constant in moduli, which is what allows the number of (-2)-curves to vary. – ulrich Jan 18 2011 at 12:16 Yes, you are right. I added an answer below explaining this. – J.C. Ottem Jan 18 2011 at 12:20 You are definitely right. I corrected the answer, thank you. – Francesco Polizzi Jan 18 2011 at 13:13
It is well-known (at least over $k=\mathbb{C}$) that the generic Enriques surface does not contain any smooth rational curves at all. This can be seen, for example, using the global Torelli theorem for Enriques surfaces. For a complete proof, see Barth, W., Peters, C.: Automorphisms of Enriques surfaces. Invent. Math. 73, 383–411 (1983). However, as Francesco's answer shows, there are Enriques surfaces containing rational curves. Moreover, it is also known that once $S$ contains a rational curve, then generically it contains infinitely many. The reason is basically that the automorphism groups of Enriques surfaces tend to be very large. In fact, Cossec and Dolgachev proved the following surprising result about rational curves on an Enriques surface: Let $S$ be an Enriques surface of degree $d$ in a projective space $\mathbb{P}^n$. If $S$ contains a smooth rational curve, then it contains such a curve of degree less than or equal to $d$. This implies for example that the subset of the Hilbert scheme parametrizing Enriques surfaces of degree $d$ in $\mathbb{P}^n$ containing smooth rational curves is a constructible subset. - I think that the correct statement is: "once $X$ contains a smooth rational curve, then $generically$ it contains infinitely many of them". In fact, the general nodal Enriques surface has a big automorphism group, but some special Enriques surfaces have finite automorphism group (examples were given by Dolgachev, and are contained in the paper of Barth and Peters that you cited). – Francesco Polizzi Jan 18 2011 at 13:23 Good point, fixed. – J.C. Ottem Jan 18 2011 at 13:38 Since an Enriques surface is elliptic, it may have $(-2)$-curves as components of singular fibers. You may refer to S. Kondo, Enriques surfaces with finite automorphism groups. Japan. J. Math. (N.S.) 12 (1986), no. 2, 191–282. 
In that paper Kondo explicitly constructed many examples of Enriques surfaces with finitely many $(-2)$-curves. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 33, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9183237552642822, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/251558/original-papers-on-the-subject-of-group-actions/251564
# Original papers on the subject of group actions Does anyone know if there are any original paper(s) that first introduced the notion of group action or permutation representation, and who the author(s) were? Any references I have found so far on e.g. Wikipedia are to current textbooks. I am interested in finding out the original motivation for introducing the concept. That is, whether group action was first introduced in its own right as a binary operation, which is a generalisation of the action of a permutation on a set, or whether the representation of an abstract group by a permutation group was considered first, and the action defined later as an axiomatisation. (I realise these are equivalent concepts, but I am more interested in precisely which came first - of course this may be a 'chicken and egg' situation, but I just thought I would ask.) Many thanks Just to clarify, the 'picture' I have in mind when explaining the possible origin of the notion is this: someone looked at the behaviour of a permutation $\sigma$ acting in the natural way on an element $x$ in a set $X$ to give $\sigma(x)$, with its properties such as $\sigma_1(\sigma_2(x))=(\sigma_1\sigma_2)(x)$, and then decided to generalise this to an arbitrary group by inventing a binary operation and forcing the group elements to obey the same sort of relations as the permutations, i.e. the 'composition of maps' property $g.(h.x) = (gh).x$. - Would it be surprising if it went the other way around? It seems possible that "group theory" sprang out of the study of permutations. – Thomas Andrews Dec 5 '12 at 14:13 ## 2 Answers I think it all goes back to Abel, Ruffini, Lagrange and of course Galois, although the notion of a group was not totally formalized at that time. What I am 90% sure about is that the notion of group action (that is, transformation groups) came before the notion of abstract group. - Thanks, I hadn't heard of Ruffini before. 
I suppose I am thinking more about the more formal definition of a group action as a map $G \times X \rightarrow X$ with the two usual axioms, as opposed to the general idea, which I am sure existed previously in a less formal form. – user50229 Dec 5 '12 at 14:32 I found a link to Arthur Cayley's first paper on group theory with comments, in an answer to a different question on MSE. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9510745406150818, "perplexity_flag": "head"}
http://mathoverflow.net/questions/59413/convergence-of-eigenvectors/59542
## Convergence of eigenvectors Let $T$ be a compact operator on $l^2$. Let $T_n$ be finite-rank operators and $T_n \to T$ in the operator norm. Is it true that the eigenvalues and eigenvectors of $T_n$ converge to eigenvalues and eigenvectors of $T$? - 1 Which eigenvectors of $T_n$ are supposed to converge to eigenvectors of $T$? All of them? Some of them? – Yemon Choi Mar 24 2011 at 10:15 1 Presumably you are motivated by the self-adjoint case. Which other cases have you tried, or heard of? – Yemon Choi Mar 24 2011 at 10:18 This looks like a homework problem that was slightly open ended. I vote to close. – Bill Johnson Mar 24 2011 at 11:26 1 I am not so sure this is homework; but I think a more precise question would be better received. Perhaps the original question is motivated by particular examples that have extra structure not present for general compact operators on Hilbert space? (E.g. integral kernel, Toeplitz or band structure.) – Yemon Choi Mar 24 2011 at 19:20 1 This is not homework. It comes from expanding a solution to a PDE with a Robin boundary condition in some basis. I've obtained an infinite system of equations $Av=v$, where $A$ is a band matrix and is square summable (entries in the $k$-th row are of the order $1/k$), so $A$ is a compact operator on $l^2$. In general I want to show that the eigenvector of the truncated matrix $A$ is a good approximation of the solution to the original problem if the truncation rank is large enough. – Szopa Mar 25 2011 at 8:25 ## 2 Answers For any compact set $K$ of complex numbers disjoint from the spectrum of $T$, there is $\epsilon > 0$ such that for every operator $S$ with $\|S-T\| < \epsilon$, $K$ is disjoint from the spectrum of $S$. Namely, you can take $\epsilon = \inf_{\lambda \in K} \|(T-\lambda)^{-1}\|^{-1}$. 
So the eigenvalues of $T_n$ do converge in that sense to the spectrum of $T$ (not necessarily to eigenvalues, because $T$ may not have any). - 1 If $T$ has nonzero eigenvalues, then convergence of the resolvent and the Dunford calculus can be used to show that the spectral projections converge. – Michael Renardy Mar 24 2011 at 18:19 For eigenvectors there is no chance. One may approximate the identity map $T$ on $\mathbb{R}^2$ with a symmetric matrix $T_n$ whose eigenvalues are $1$ and $1-1/n$. The eigenvectors are perpendicular to each other, but otherwise their direction is entirely arbitrary. So by choosing directions erratically one can avoid convergence. Of course some subsequence will converge. -
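The counterexample in the second answer can be simulated directly. A NumPy sketch with illustrative parameter choices: the "erratic" eigenvector directions simply alternate between two fixed angles.

```python
import numpy as np

def T(n, theta):
    """Symmetric 2x2 matrix with eigenvalues 1 and 1 - 1/n, whose
    eigenvectors are rotated by the angle theta."""
    Q = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return Q @ np.diag([1.0, 1.0 - 1.0 / n]) @ Q.T

ns = range(2, 50)
thetas = [0.0 if n % 2 == 0 else np.pi / 4 for n in ns]
mats = [T(n, th) for n, th in zip(ns, thetas)]

# ||T_n - I|| = 1/n -> 0, so T_n converges to the identity in operator norm...
dists = [np.linalg.norm(M - np.eye(2), 2) for M in mats]

# ...but the eigenvector of the top eigenvalue keeps jumping between
# e_1 and (e_1 + e_2)/sqrt(2), so it has no limit (only convergent
# subsequences, e.g. along even n).
tops = [np.linalg.eigh(M)[1][:, -1] for M in mats]
```

`eigh` returns eigenvalues in ascending order, so the last column is the eigenvector for the eigenvalue $1$; comparing consecutive `tops` entries shows the direction jumping by 45 degrees forever.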
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9158838987350464, "perplexity_flag": "head"}
http://mathhelpforum.com/algebra/85268-exponentials-i-need-help.html
# Thread: 1. ## exponentials....i need help I have two problems I am stuck on. I missed lecture and need help with both 1. 256^x=4 2. 1/9x=6561 Thanks to anyone who can help me. Leilani 2. Originally Posted by leilani I have two problems I am stuck on. I missed lecture and need help with both 1. 256^x=4 2. 1/9x=6561 Thanks to anyone who can help me. Leilani 1. Take the log of both sides (you can pick any base you like). You can use the rule $\log_b(a^k) = k \log_b(a)$ ($b > 0$, $b \neq 1$) to bring $x$ to the front and remove it as a power. As $256 = 2^8$ and $4 = 2^2$ we can write the equation as $2^{8x} = 2^2$ Take logs of both sides (I will use base $e$): $8x\ln(2) = 2\ln(2)$ $x = \frac{1}{4}$ (note that using base 2 in your logarithm gives $\log_2(2) = 1$, so the logs cancel a step earlier) 2. Do you mean $\frac{1}{9}x$ or $\frac{1}{9x}$? 3. Originally Posted by leilani I have two problems I am stuck on. I missed lecture and need help with both 1. 256^x=4 2. 1/9x=6561 Thanks to anyone who can help me. Leilani Hi leilani, $2^8=256$ (1) $256^x=4$ $2^{8x}=2^2$ $8x=2$ $x=\frac{1}{4}$ (2) Did you mean to write this: $\left(\frac{1}{9}\right)^x=6561$? I can't see your picture for some reason. If so, know that $3^8=6561$ and $\frac{1}{9}=\left(\frac{1}{3}\right)^2=3^{-2}$ Then, $\left(\frac{1}{3}\right)^{2x}=3^8$ $3^{-2x}=3^8$ $-2x=8$ $x=-4$ 4. (1/9)x=6561 It doesn't have the x as an exponent, just as a multiplier 5. Originally Posted by leilani It doesn't have the x as an exponent, just as a multiplier Like this $\frac{1}{9}x=6561$?? This one is easier than the other one, then. Just multiply both sides by 9. 6. Thanks, I got it. I set it up just like the other one. I think there was a mistake in how they wrote the problem. 
When I worked it out just like the other one I got the right answer. Thanks very much! Leilani What was the right answer since we don't know how it was meant to be written in the first place?? Just for yuks. 8. Originally Posted by masters What was the right answer since we don't know how it was meant to be written in the first place?? Just for yuks. X=-4 Thanks for the help
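Both answers worked out in the thread can be double-checked numerically with Python's math module, using the same change-of-base idea:

```python
import math

# 256^x = 4  =>  x = log(4)/log(256) = (2 ln 2)/(8 ln 2) = 1/4
x1 = math.log(4) / math.log(256)

# (1/9)^x = 6561  =>  x = log(6561)/log(1/9) = (8 ln 3)/(-2 ln 3) = -4
x2 = math.log(6561) / math.log(1 / 9)

print(x1, x2)   # ≈ 0.25, ≈ -4.0
```

Any logarithm base works here, exactly as post 2 notes; the base cancels in the ratio.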
http://mathoverflow.net/questions/61802/projecting-the-unit-cube-onto-a-very-special-subspace
## Projecting the unit cube onto a (very special) subspace

Let $n>1$ be an integer, and $a>1$ a real number. Consider the subspace $L < R^{2^n}$ generated by the $n$ possible tensor products of the $n-1$ copies of the vector $(1,a)$ and one copy of $(a,-1)$. (The $n$ generators of $L$ correspond to the $n$ positions where the factor $(a,-1)$ can be inserted into the product. Say, if $n=2$, then $L$ is generated by the two vectors $(a,-1,a^2,-a)$ and $(a,a^2,-1,-a)$.)

Now for a vector $l\in L$ and a positive integer $k\le 2^n$, choose arbitrarily $k$ coordinates of $l$ and let $\sigma$ denote their sum. Since $\sigma$ is the scalar product of $l$ and a vector of norm $\sqrt k$, we have $$|\sigma| \le \|l\| \sqrt k.$$ My question is whether this trivial estimate can be improved by a growing factor; say,

Is it true that for any $l\in L$ and $k\le 2^n$, the sum of any $k$ coordinates of $l$ is at most $C_a\|l\| \sqrt k / \log\log n$ in absolute value, with a constant $C_a$ depending only on $a$?

- 1 What are the coordinates of a vector in $R^{2^n}$ (that should be defined uniformly in $n$, for your question to make sense)? Do you consider it as the $n$-th tensor power of $R^2$, generated by the elements $e_{i_1}\otimes e_{i_2}\otimes \dots \otimes e_{i_n}$? – Maurizio Monge Apr 15 2011 at 12:03
- @Maurizio: the vectors generating $L$ are vectors of the form $e_1\otimes\dots\otimes e_n$, where $n-1$ of the vectors $e_i$ are equal to $(1,a)$, and one of these vectors is equal to $(a,-1)$. So, all vectors $e_i$ lie in $R^2$, and their product lies in $R^{2^n}$. – Seva Apr 15 2011 at 14:28

## 1 Answer

I don't think so. Each generator $l$ of $L$ is a vector of length $2^n$ with $\binom{n}{i}$ entries equal to $\pm a^i$ for $0\le i \le n$. So $\|l\|=(a^2+1)^{n/2}$, and there is only one entry which is $a^n$. If $a$ is large enough, then wouldn't that $k=1$ entry be larger than $\|l\| \sqrt 1 / \log\log n$?

- Correct - but, in fact, I didn't mean uniformity in $a$ here. I have inserted a minor, but important refinement. – Seva Apr 17 2011 at 6:17
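The counting in the answer is easy to sanity-check numerically. Below is a small sketch (plain Python, written for this note, not from the thread) that builds the generators for a small $n$ and verifies $\|l\|=(a^2+1)^{n/2}$, the single $\pm a^n$ entry, and the trivial bound $|\sigma| \le \|l\|\sqrt k$:

```python
import itertools
import math

def generator(n, j, a):
    """The j-th generator of L: tensor product of n two-vectors,
    where factor j is (a, -1) and the other n-1 factors are (1, a)."""
    factors = [(a, -1.0) if i == j else (1.0, a) for i in range(n)]
    # Coordinates are indexed by bit strings: from factor i take entry b_i.
    return [math.prod(f[b] for f, b in zip(factors, bits))
            for bits in itertools.product((0, 1), repeat=n)]

n, a = 3, 2.0
for j in range(n):
    l = generator(n, j, a)
    norm = math.sqrt(sum(c * c for c in l))
    # ||l|| = (a**2 + 1)**(n/2), and exactly one coordinate is +/- a**n:
    assert math.isclose(norm, (a * a + 1) ** (n / 2))
    assert sum(1 for c in l if math.isclose(abs(c), a ** n)) == 1
    # Trivial Cauchy-Schwarz bound: any k coordinates sum to <= ||l|| sqrt(k).
    for k in range(1, 2 ** n + 1):
        for subset in itertools.combinations(l, k):
            assert abs(sum(subset)) <= norm * math.sqrt(k) + 1e-9
print("checks passed for n =", n, "and a =", a)
```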
http://physics.stackexchange.com/questions/tagged/general-relativity+black-holes
# Tagged Questions 1answer 9 views ### Can we build a synthetic event horizon? If we imagine ourselves to be a civilization capable of manipulating very heavy masses in arbitrary spatial and momentum configurations (because we have access to large amounts of motive force, for ... 0answers 51 views ### Gravitational redshift of Hawking radiation How can Hawking radiation with a finite (greater than zero) temperature come from the event horizon of a black hole? A redshifted thermal radiation still has Planck spectrum but with the lower ... 1answer 48 views ### Why does the Kruskal diagram extend to all 4 quadrants? Why is it that the Kruskal diagram is always seen extended to all 4 quadrants when the definitions of the $U,V$ coordinates don't seem to suggest that the coordinates are not defined in, say, the 3rd ... 2answers 94 views ### Is time going backwards beyond the event horizon of a black hole? For an outside observer the time seems to stop at the event horizon. My intuition suggests, that if it stops there, then it must go backwards inside. Is this the case? This question is a followup ... 2answers 104 views ### What is a sudden singularity? I've seen references to some sort of black hole (or something) referred to as a sudden singularity, but I haven't seen a short clear definition of what this is for the layman. 1answer 42 views ### “WLOG” re Schwarzschild geodesics Why, when studying geodesics in the Schwarzschild metric, one can WLOG set $$\theta=\frac{\pi}{2}$$ to be equatorial? I assume it is so because when digging around the internet, most references seem ... 0answers 49 views ### Singularities in Schwarzschild space-time Can anyone explain when a co-ordinate and geometric singularity arise in Schwarzschild space-time with the element ... 1answer 50 views ### Does non-mass-energy generate a gravitational field?
At a very basic level I know that gravity isn't generated by mass but rather the stress-energy tensor and when I wave my hands a lot it seems like that implies that energy in $E^2 = (pc)^2 + (mc^2)^2$ ... 1answer 105 views ### General definition of an event horizon? Horizons are in general observer-dependent. For example, in Minkowski space, an observer who experiences constant proper acceleration has a horizon. Black hole horizons are usually defined as ... 1answer 52 views ### What is mathematical definition of a strong gravity? Mathematical definition of a weak gravity is simple $g=\frac{GM}{r^2}$ but what is mathematical definition of a strong gravity? (blackhole-like or close to a blackhole-like object) 3answers 124 views ### Why are black holes special? A black hole is where it's mass is great enough that light can't escape at a radius above the surface of the mass? I've been told that strange things happen inside the event horizon such as ... 3answers 113 views ### What would happen to the Moon if Earth is turned into a black hole? Assume that all of sudden the Earth is turned into a black hole. And the moon revolves around the Earth (before turning into a black hole). What would happen to the Moon after earth changes to black ... 1answer 307 views ### Overcharging a black hole Hubeny's 1998 paper got a lot of people interested in determining whether cosmic censorship can be violated by dropping too much charge onto a black hole. It suggested that you might be able to get a ... 4answers 203 views ### Time inside a Black hole If time stops inside a black hole, due to gravitational time dilation, how can it's life end after a very long time? If time doesn't pass inside a black hole, then an event to occur inside a black ... 0answers 78 views ### Do semiclassical GR and charge quantisation imply magnetic monopoles? 
Assuming charge quantisation and semiclassical gravity, would the absence of magnetically charged black holes lead to a violation of locality, or some other inconsistency? If so, how? (I am not ... 0answers 157 views ### Spacetime around a Black Hole If we consider the sun, then space-time is curve around it. My question is that what is the kind of curvature of space and time around the black hole. Is that space and time more curved around the ... 1answer 52 views ### Can you enter a timelike hypersurface? As I understand it, a timelike hypersurface is one that has only spacelike normal vectors. But does this not imply that a the geodesic of a particle crossing it must be spacelike at that point? But ... 1answer 86 views ### Diving into a charged (Reissner-Nordstrom) Black hole Apparently there are two event horizons in this type of black hole, where the second one is known as the Cauchy horizon. According to Carroll, if you go into the first one, you will fall until you ... 2answers 106 views ### Future light cones inside black hole In Caroll's Spacetime and Geometry, page 227, he says that from the Schwarzschild metric, you can see than from inside a black hole future events all lead to the singularity. He says you can see this ... 0answers 109 views ### Going through a ring of black holes Mathematician here with a speculative physical question -- feel free to boot me if the level isn't right. Suppose one finds, or builds, a constellation of several black holes arranged in a circle. ... 0answers 23 views ### Why photons can't escape black hole? [duplicate] Photons do not have (rest) mass. Then why are they attracted by the black hole? And is it possible that a photon crossing a black hole from a little distance could get accelerated due to the force? 2answers 321 views ### The Uncertainty Principle and Black Holes What are the consequences of applying the uncertainty principle to black holes? 
Does the uncertainty principle need to be modified in the context of a black hole and if so what are the implications ... 1answer 141 views ### Does the curvature of space-time cause objects to look smaller than they really are? What's the difference between looking at a star from a black hole and looking at it from empty space? My guess is that the curvature of space-time distorts the wavelength of light thus changing the ... 0answers 89 views ### Falling into a black hole emitter vs observer Let's say we are working with the Schwarzschild metric and we have an emitter of light falling into a Schwarzschild black hole. Suppose we define the quantity $$u=t- v$$ where dv/dr= ... 1answer 77 views ### Spacelike slicing of Schwarzschild geometry I am having trouble understanding how to obtain a spacelike slicing of the Schwarchild black hole. I understand there is not a globally well defined timelike killing vector, so we can define t=cte ... 1answer 251 views ### Gravitational Redshift around a Schwarzschild Black Hole Let's say that I'm hovering in a rocket at constant spatial coordinates outside a Schwarzschild black hole. I drop a bulb into the black hole, and it emits some light at a distance of $r_e$ from the ... 1answer 142 views ### Hawking Radiation: how does a particle ever cross the event horizon? The heuristic argument for Hawking Radiation is, that a virtual pair-production happens just at the event horizon. One particle goes into the black hole, while the other can be observed as radiation. ... 2answers 90 views ### about the 1D singularity of black hole I saw some responses here saying that the singularity into the black hole is one dimension object so my question is : is it possible that the singularity is simply a merger of the 4 dimensions of the ... 1answer 59 views ### An infalling object in a black hole looks “paused” for a far away observer, for how long? 
As I understand, to an observer well outside a black hole, anything going towards it will appear to slow down, and eventually come to a halt, never even touching the event horizon. What happens if ... 1answer 124 views ### Could an ultra-relativistic particle tunnel directly through a stellar mass black hole? It occurred to me in passing that the Lorentz contraction of a black hole from the perspective of an ultra-relativistic (Lorentz factor larger than about 10^16) particle could reduce the thickness of ... 3answers 281 views ### What is the capture cross-section of a black hole region for ultra-relativistic particles? What is the capture cross-section of a black hole region for ultra-relativistic particles? I have read that it is $$\sigma ~=~ \frac{27}{4}\pi R^{2}_{s}$$ for a Schwarzschild BH in the geometric ... 4answers 197 views ### The bigger the mass, the more time slows down. Why is this? If I were to stand by a pyramid, which weighs about 20 million tons, I would slow down by a trillion million million million of second. Don't know if that's exactly right, but you get the point. Also, ... 0answers 115 views ### Is it mathematically possible or topologically allowable for cutouts, or cavities, to exist in a 3-manifold? A few weeks back, I posted a related question, Could metric expansion create holes, or cavities in the fabric of spacetime?, asking if metric stretching could create cutouts in the spacetime manifold. ... 2answers 205 views ### How does the evaporation of a black hole look for a distant observer? Let's assume an observer looking at a distant black hole that is created by collapsing star. In observer frame of reference time near black hole horizon asymptotically slows down and he never see ... 
0answers 67 views ### Black hole entropy from collapsed entangled pure light Consider the following scenario, very similar to the one proposed in this question, but this time, the pure quantum radiation used for the black hole collapse, is now being split with down-converter ... 1answer 71 views ### Killing Vectors of BTZ black hole and their calculation in general I was wondering what are the Killing vectors of BTZ black hole and how to guess them easily? Will it be the same as of AdS? What then will be Killing vectors for AdS-Schwarzschild e.g.? 1answer 101 views ### Would dense matter around a black hole event horizon eventually form a secondary black hole? [duplicate] Possible Duplicate: Black hole formation as seen by a distant observer Given that matter can never cross the event horizon of a black hole (from an external observer point of view), if a ... 0answers 36 views ### Can a black hole actually grow, from the point of view of a distant observer? [duplicate] Possible Duplicate: Black hole formation as seen by a distant observer I've read in several places that from the PoV of a distant observer it will take an infinite amount of time for new ... 1answer 112 views ### How would you detect Hawking radiation? Hawking theorized that a black hole must radiate and therefore lose mass (Hawking radiation). According to classical relativity though, nothing can escape a black hole, the hawking radiation would ... 1answer 209 views ### In general relativity (GR), does time stop at the event horizon or in the central singularity of a black hole? I was reading through this question on time and big bang, and @John Rennie's answer surprised me. In the immediate environment of a black hole, where does time stop ticking if one were to follow a ... 2answers 134 views ### Is Brian Cox right to claim that Gravity is a strong force for large masses, is it wrong, or is it only a matter of interpretation? 
I watched a program of his in which it was claimed that since mass bends space in accordance to General Relativity, then in the case of very large stars it becomes a strong force to the point of being ... 3answers 179 views ### Black hole formation as seen by a distant observer [duplicate] Possible Duplicate: How can anything ever fall into a black hole as seen from an outside observer? Is black hole formation observable for a distant observer in finite amount of time? ... 2answers 400 views ### Extremal black hole with no angular momentum and no electric charge A black hole will have a temperature that is a function of the mass, the angular momentum and the electric charge. For a fixed mass, Angular momentum and electric charge are bounded by the extremality ... 0answers 89 views ### Alternate geodesic completions of a Schwarzschild black hole The Kruskal-Szekeres solution extends the exterior Schwarzschild solution maximally, so that every geodesic not contacting a curvature singularity can be extended arbitrarily far in either direction. ... 1answer 62 views ### does the background spacetime of a black hole affects its thermodynamic properties? The question is this: will the thermodynamic properties of a black hole (Hawking radiation spectra and temperature, entropy, area, etc.) depend if the black hole sits in a DeSitter or an Anti-DeSitter ... 0answers 54 views ### Why don't black holes have magnetic hair? [duplicate] Possible Duplicate: What happens to an embedded magnetic field when a black hole is formed from rotating charged dust? It is well stablished that the only hair a black hole can have is: ... 2answers 212 views ### What happens to orbits at small radii in general relativity? I know that (most) elliptic orbits precess due to the math of general relativity, like this: source: http://en.wikipedia.org/wiki/Two-body_problem_in_general_relativity I also know that something ... 
0answers 45 views ### Kerr solution for finite collapse time The Kerr black hole solutions gives an analytic continuation that is asymptotically flat. Some people have argued that this is another universe, but others state that the analytic continuation ... 0answers 43 views ### transition between extremal and nonextremal black hole states Extremal black holes are at zero temperature, hence they do not radiate. my question is twofold: 1) is extremality of micro black holes a stable property? electric charge is quickly emitted from ... 2answers 216 views ### Cosmology questions from a novice These ideas/questions probably represent a lack of understanding on my part, but here they are: 1) Cosmologists talk about the increasing speed of expansion of the universe and talk of dark energy as ...
http://mathoverflow.net/questions/105304/criteria-for-irreducibility-of-polynomial/105323
## Criteria for irreducibility of polynomial

If $f, g\in \mathbb C[a,b]$ are polynomials in two variables, are there easy criteria that allow one to see if $f(x,y)-g(t,z)\in \mathbb C[x,y,t,z]$ is irreducible? Thank you very much, best

- @Thomas, what you say holds only for polynomials in one variable. – KotelKanim Aug 23 at 9:50
- 1 I added the tag gr.group-theory since investigations on this problem are intimately linked to group theoretic questions. I thus thought it could be good to give it this extra visibility to the relevant experts. – quid Aug 23 at 14:44
- 2 see this related question mathoverflow.net/questions/14076/… and the nice answer provided for it. – Camilo Sarmiento Aug 24 at 7:22

## 2 Answers

There has been a lot of work on establishing when polynomials of the form $$f(x_1, \dots, x_r) - g(y_1, \dots, y_s)$$ are reducible (over the complex numbers, but also over other fields). (Negating these results, or special cases thereof, would thus yield criteria, in the sense of conditions, for when such a polynomial is irreducible.)

To a considerable extent one can reduce this problem to the case $r=s=1$. Namely, Davenport and Schinzel (Two Problems Concerning Polynomials, J. reine angew. Math., 1964) proved (this is Theorem 2 of the paper, up to a sign change to match the question): The polynomial, over a field of characteristic $0$, $$f(x_1, \dots, x_r) - g(y_1, \dots, y_s)$$ is reducible if and only if $f(x_1, \dots, x_r) = F(R(x_1, \dots, x_r))$ and $g(y_1, \dots, y_s) = G(S(y_1, \dots, y_s))$ with polynomials $F,G,R,S$ over the same field and $$F(u) - G(v)$$ is reducible over the same field. Note that the main case the authors care about is indeed the complex one.
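To make the shape of the Davenport–Schinzel theorem concrete, here is a toy instance (the polynomials are my own choice, not from the answer): take $F(u)=u^2$, $G(v)=v^2$, $R(x,y)=x+y$, $S(t,z)=tz$. Since $F(u)-G(v)=u^2-v^2$ is reducible, so is $f-g=(x+y)^2-(tz)^2$, and the factorization can be checked as a numerical identity:

```python
import random

# Toy instance of the Davenport-Schinzel shape:
#   f(x, y) = F(R(x, y)) = (x + y)**2,  g(t, z) = G(S(t, z)) = (t*z)**2,
# where F(u) - G(v) = u**2 - v**2 = (u - v)(u + v) is reducible, hence
#   f - g = (x + y - t*z) * (x + y + t*z).
random.seed(1)
for _ in range(100):
    x, y, t, z = (random.uniform(-5, 5) for _ in range(4))
    lhs = (x + y) ** 2 - (t * z) ** 2
    rhs = (x + y - t * z) * (x + y + t * z)
    assert abs(lhs - rhs) < 1e-9
print("factorization identity verified at 100 random points")
```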
In particular, it follows that if one cannot write $f,g$ in such a way with nontrivial (that is, degree at least 2) $F$ and $G$, then the original polynomial is irreducible [except if one of $f,g$ is constant, but this should not be the case in view of the question]. (I am not sure how to check this most efficiently, but if the polynomials are not too complex, just starting from maximal terms and inferring conditions on the rank and then working one's way down could be a viable, though likely not optimal, strategy.)

If one wishes to have more complete information, one is now faced with the question of when a polynomial $$F(u)-G(v)$$ is (ir)reducible. Various interesting results on this problem were obtained (see below for some recent ones), but if one wishes to have an answer for specific polynomials, one might get by using not these results but general irreducibility criteria for polynomials in two variables, or easier-to-apply criteria for this polynomial. For the former, the question pointed out in a comment by Camilo Sarmiento seems like a good resource (I reproduce the link for simplicity: http://mathoverflow.net/questions/14076/irreducibility-of-polynomials-in-two-variables ). For example, the Ehrenfeucht criterion mentioned there might allow one to directly exclude further cases. Also the Eisenstein(-like) criteria could be quite useful. Also, the paper by Davenport, Lewis, Schinzel mentioned in my comment below should contain some test, but as I have no access to the paper I do not know what exactly.

Now for more recent results specifically on this problem: Under the assumption that $F$ and $G$ are indecomposable (indecomposable meaning the polynomial is not the composition of two polynomials), there is a complete answer known (over the complex numbers). In particular, Pierrette Cassou-Noguès and Jean-Marc Couveignes (Factorisations explicites de $g(y)-h(z)$, Acta Arith.
87 (1999)), based on earlier work by Fried and Feit and others, established an explicit finite set of pairs of polynomials such that any pair $(F,G)$, with $F,G$ indecomposable and not linearly related (this means $F(x)$ is not of the form $AG(ax+b)+B$ with constants $A,a,B,b$ and $A,a$ non-zero; this is to avoid corner cases), with $F(u)-G(v)$ reducible, is weakly linearly related (I skip the definition, but it is similar to linearly related) to one of them. This set is too large to give here (yet the paper is linked anyway), but what can be said briefly (and was already known before) is that the degrees of both polynomials are equal, and equal to one of $7, 11, 13, 15, 21, 31$. Note that this result uses the Classification of Finite Simple Groups.

There are also other results related to this. For example, Yuri Bilu (Acta Arith. 90 (1999)) studied when $F(u)-G(v)$ has a factor of degree at most two (where there is no assumption of indecomposability). Roughly, there can essentially only be a quadratic factor if both are Chebyshev polynomials of degree a power of two.

- That's quite surprising! What was the motivation for the Davenport/Schinzel work? – Igor Rivin Aug 23 at 15:23
- @Igor Rivin: The direct motivation given in the paper is that this question was raised by one of them (Schinzel) earlier: Some unsolved problems on polynomials, Matematicka Biblioteka 25 (1963), 63-70. Now, for the actual motivation, I think that this derives ultimately mainly from Diophantine Equations/Geometry. For example, Davenport, Lewis, Schinzel wrote together somewhat earlier (1961) a paper "Equations of the form f(x)=g(y)" where the f,g are polynomials (with integral coefficients), and while I cannot see the paper the MR review says roughly that if irreducible and genus cond... – quid Aug 23 at 16:00
- 1 ...then by Siegel's theorem only finitely many solutions; thus the question when reducible; and then they study this; giving a nontrivial case of reducibility and conditions when irreducible.
So, I think this is how they came to this type of problem (but this is a guess). Several classical Diophantine equations fall into this category f(x)=g(y). – quid Aug 23 at 16:12

- 2 "Under the assumption that $F$ and $G$ are indecomposable (indecomposable meaning the polynomial is not the composition of two polynomials) there is a complete answer known (over the complex numbers)." Mike Zieve's REU (an incredible group of 6 undergrads) recently announced that they have a classification for all $(F,G)$, without the indecomposable hypothesis. If the indecomposability issue turns out to be crucial for you, you might want to e-mail Zieve. – David Speyer Aug 24 at 11:52
- 1 My above comment is not quite right. The question Zieve's group was working on is when $F(x)-G(y)$ has a factor of genus $\leq 1$. I don't know whether they know when there is a factorization with both factors of high genus. – David Speyer Aug 28 at 19:33

David is correct: my REU students have determined the complex polynomials $F(x)$ and $G(y)$ for which $F(x)-G(y)$ has an irreducible factor defining a curve of genus 0 or 1. By Faltings' theorem (a.k.a. Mordell's conjecture), this lets us write down all $F(x)$ and $G(y)$ with algebraic coefficients for which there is a number field $K$ such that $F(K)$ and $G(K)$ have infinite intersection. By one of Picard's theorems, it also means we've solved the functional equation $F\circ A = G \circ B$ in complex polynomials $F,G$ and meromorphic functions $A,B$.
This last problem has been studied in the context of finding variants of Nevanlinna's theorem that a nonconstant meromorphic function is uniquely determined by its preimages at each of five points: our equation implies that the preimage under $A$ of the multiset of zeroes of $F(x)$ comprises the same multiset as the preimage under $B$ of the multiset of zeroes of $G(x)$. We crucially use the genus condition. If one asks for reducibility of $F(x)-G(y)$ without imposing any hypothesis on the genus, then (as noted previously) one can find all examples when $F$ and $G$ are indecomposable: this was the goal of the paper by Cassou-Nogues and Couveignes, although it should be noted that their list of pairs $(F,G)$ is incomplete, so another thing we did this summer was to find the full list of pairs $(F,G)$ in this situation. In the decomposable case, much remains to be discovered about reducibility of $F(x)-G(y)$, although progress has been made. In particular, Mike Fried showed that, if $F$ is indecomposable but $G$ is decomposable, then reducibility of $F(x)-G(y)$ implies that $G=A \circ B$ for some polynomials $A$ and $B$ such that $A$ is indecomposable and $F(x)-A(y)$ is reducible. It follows that the pair $(F,A)$ occurs on the (corrected) Cassou-Nogues--Couveignes list, and in particular, we may assume that either $F = A$ or $\deg(F)=\deg(A)\le 31$. This is shown in [Michael Fried, The field of definition of function fields and a problem in the reducibility of polynomials in two variables], via a novel argument combining Galois theory and representation theory (if you're interested in this, please ask me, since one of my REU students found a very simple proof of Fried's result that I'd love to share). 
I note that the Cassou-Nogues--Couveignes result depends on the classification of finite simple groups (via the classification of finite groups $G$ that have a cyclic subgroup $C$ which acts transitively in two inequivalent doubly transitive permutation representations of $G$); Fried's result, however, is elementary.

When both $F$ and $G$ are decomposable, in the same paper Fried proved the following result: if $F(x)-G(y)$ is reducible, then we can write $F = A \circ B$ and $G = C \circ D$ in such a way that $A(x)-C(y)$ is reducible and the splitting field of $A(x)-t$ over $\mathbb{C}(t)$ equals the splitting field of $C(x)-t$ over $\mathbb{C}(t)$ (where $t$ is transcendental over $\mathbb{C}$). The condition on equal splitting fields is extremely restrictive -- for instance, it implies that $\deg(A)=\deg(C)$ (by considering the size of the inertia groups at infinite places), and also that $A$ and $C$ have the same critical values, or more precisely that, for each complex number $\theta$, the least common multiple of the multiplicities of the roots of $A(x)-\theta$ equals the corresponding least common multiple for $C(x)-\theta$. Fried's proof is remarkably simple; a nice exposition of it is Theorem 8.1 in [Yuri Bilu and Robert Tichy, The Diophantine equation $f(x)=g(y)$, Acta Arith. 95 (2000), 261--288].

Surprising phenomena in the decomposable case can be found in Peter Müller's papers [Kronecker conjugacy of polynomials, Trans. Amer. Math. Soc. 350 (1998), 1823--1850] and [An infinite series of Kronecker conjugate polynomials, Proc. Amer. Math. Soc. 125 (1997), 1933--1940]. The most recent work on the decomposable case (to my knowledge) is [Michael Fried and Ivica Gusic, Schinzel's problem: Imprimitive covers and the monodromy method, arXiv:1104.1740]. I have not digested everything in these three papers, so I would be thrilled if someone wanted to summarize their main achievements.
- @Rurik: in practice, even if one only has partial or approximate information about two polynomials $F(x)$ and $G(x)$, one can usually prove that $F(x)-G(y)$ is irreducible. The reason is that one can usually show that both $F$ and $G$ are indecomposable, for instance just by writing $F = A \circ B$ where $A,B$ have undetermined coefficients, and then solving for the coefficients of $A$ and $B$. Once this has been done, Fried's "same splitting field" result will imply irreducibility. Please feel free to email me about how to do this for your specific polynomials. – Michael Zieve Aug 29 at 12:37
- Thank you very much. As a matter of fact I think I managed: in fact my 27 polynomials, even if scary-looking as a whole, turned out to be quite simple. For example, for many of them both $F$ and $G$ are of prime degree, and for the others, they have a degree written as the product of just two primes... this allowed me to exclude at once many decompositions... – Rurik Sep 4 at 6:56
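Zieve's undetermined-coefficients suggestion can be illustrated on the smallest interesting case: writing a monic quartic as a composition of two quadratics. The sketch below is my own (the normalizations are choices, not from the thread):

```python
import math

def quartic_decomposition(p, q, r, s):
    """Try to write F(x) = x**4 + p*x**3 + q*x**2 + r*x + s as A(B(x)) with
    monic quadratics A(u) = u**2 + beta*u + gamma and B(x) = x**2 + b*x
    (the constant term of B can be absorbed into A).  Expanding A(B(x)) and
    matching coefficients gives p = 2*b, q = b**2 + beta, r = beta*b,
    s = gamma; the x-coefficient is the one genuine constraint.
    Returns (A, B) as coefficient tuples, or None if no such decomposition."""
    b = p / 2
    beta = q - b * b
    if not math.isclose(r, beta * b, abs_tol=1e-12):
        return None
    return (1.0, beta, float(s)), (1.0, b, 0.0)

# x**4 + 2*x**2 + 5 = (x**2)**2 + 2*(x**2) + 5 is decomposable:
assert quartic_decomposition(0, 2, 0, 5) is not None
# x**4 + x is not a composition of two quadratics (r = beta*b fails):
assert quartic_decomposition(0, 0, 1, 0) is None
```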
http://mathhelpforum.com/math-challenge-problems/117052-i-identity.html
Thread: i identity

1. Demonstrate that $\left(\arccos(\cosh 1)\right)^i = e^{-\frac{\pi}{2}}$.

    I am not a mathematician, so sorry if this is not very challenging. Basically I just found this identity to be interesting and wonder how obvious/obscure it is to real mathematicians.

2. Note that $\cosh(1) = \cos(i)$, so $\cos^{-1}(\cos(i)) = i$. Therefore, we have $i^{i}$. Now, show this equals $e^{-\frac{\pi}{2}}$.

3. Originally Posted by galactus
    > Note that $\cosh(1)=\cos(i)$, $\cos^{-1}(\cos(i))=i$. Therefore, we have $i^{i}$. Now, show this equals $e^{-\frac{\pi}{2}}$.

    Yep. Just to tidy up a bit: $\left(\cos^{-1}(\cosh 1)\right)^i = i^i = e^{-\frac{\pi}{2}}$.
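The identity is easy to test numerically with Python's `cmath`. One caveat: whether `acos(cosh 1)` comes out as $+i$ or $-i$ depends on the library's branch-cut convention, so the sketch below only asserts branch-independent facts:

```python
import cmath
import math

# cos(i) = cosh(1), the key step in the thread:
assert cmath.isclose(cmath.cos(1j), math.cosh(1))

# arccos(cosh 1) squares to -1, i.e. it is +/- i (the sign depends on the
# branch-cut convention, so we only assert the square):
z = cmath.acos(math.cosh(1))
assert cmath.isclose(z * z, -1)

# With the principal logarithm, i**i = exp(i * log(i)) = exp(i * (i*pi/2))
# = exp(-pi/2), a real number:
assert cmath.isclose(1j ** 1j, math.exp(-math.pi / 2))
```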
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8830580711364746, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/73145?sort=newest
## “Measuring” how far is one Banach space from being surjectively isometric to another

Hello/good evening to one and all. Assume that `$\mathbf{V} \equiv (V, \|\cdot\|_V)$` and `$\mathbf{W} \equiv (W, \|\cdot\|_W)$` are Banach spaces (over the real or complex field).

Question 1. What are some appropriate indices you might use to "measure" how far `$\mathbf{V}$` is from being i) surjectively isometric (see note N1) or ii) isometrically isomorphic to `$\mathbf{W}$` (see note N2)? I am conscious that the question may sound a little vague, so take the Banach-Mazur distance as a practical example of what I (am trying to) mean.

Added later. After an answer by Bill Johnson (see below), I'm adding here that another index (in the sense of Question 1) is given, for the non-linear case, by the Lipschitz distance (or Lipschitz distortion). This is known to be the same as the Banach-Mazur distance as long as $\mathbf{V}$ and $\mathbf{W}$ are (isomorphic and) finite-dimensional. Yet, as still pointed out by BJ, the same question, when raised in the infinite-dimensional setting with regard to the separable case, is an open problem to date. A further possibility, when $\dim(V) = \dim(W) < \infty$, is given by the so-called weak Banach-Mazur distance (see my comment to Bill's first answer for a reference).

Question 2. Could you provide some concrete examples illustrating why, depending on the case, the one index should be preferred to the others (if any)?

My apologies in advance if the question has been already asked.

Notes. (N1) Following a comment by Yemon Choi, I emphasize that, unless differently stated, I am using the term isometry to refer to both linear and non-linear isometries. (N2) Of course, in the real case, there is no true need to distinguish between conditions i) and ii) in the statement of Question 1 (by the Mazur-Ulam theorem).
- I am slightly confused about terminology. In Question 1, are you interested in surjective, isometric, non-linear maps? – Yemon Choi Aug 18 2011 at 18:47 @Yemon Choi. Yes, I will edit the original post to make this definitely clear. – Salvo Tringali Aug 18 2011 at 19:44 ## 2 Answers Rather than talk about the weak distance and distance, it is better to discuss the weak factorization constant and the factorization constant of an operator $u$ through an operator $T$. The factorization constant of $u: X\to Y$ through $T:Z\to W$, `$\gamma_T(u)$`, is the infimum of `$\|\alpha\|\cdot \|\beta\|$` over all `$\alpha:X\to Z$` and `$\beta:W\to Y$` for which `$\beta T \alpha =u$`. This measurement of the size of $u$ is generally not a norm, but you can convexify it to get the weak factorization constant, `$\hat{\gamma}_T(u)$`, of $u$ through $T$, which is defined to be the infimum of `$\sum_i \gamma_T(u_i)$` s.t. `$u=\sum_i u_i$`. The (weak) factorization constant of $u$ through a space $Z$ is just the (weak) factorization constant of $u$ through $I_Z$. Obviously you can write down the distance and weak distance in terms of factorization and weak factorization constants. One classical situation in which these parameters differ a lot is in my Studia Math. 89 (1988), 79--103 paper with Figiel and Schechtman. Let $u$ be the basis to basis mapping from $\ell_2^n$ to the first $n$ Rademacher functions in $L_1$. The factorization constant of this operator through $\ell_1^{Cn}$ is large for any fixed $C$, but the weak factorization constant through $\ell_1^n$ is bounded independently of $n$. That is, you cannot well factor this embedding of $\ell_2^n$ through a low dimensional $L_1$ space, but you can well weakly factor it through $\ell_1^n$ (in fact, any operator from $\ell_2^n$ into $L_1$ well weakly factors through $\ell_1^n$; see Proposition 5.5 of the paper I mentioned above). 
We also show that if you want to well factor this Rademacher embedding $u$ through $\ell_1^k$, then $k$ must be at least exponential in $n$. - Bill, I suspect that there is a typo in your answer, in particular the year of publication of your Studia paper (I don't have editing privileges to fix it myself). – Philip Brooker Aug 19 2011 at 21:47 Thanks, Phil. I corrected the date. – Bill Johnson Aug 19 2011 at 22:05 Thank you, Bill, this is very useful. Just let me add a link to your Studia paper (through Project Euclid): projecteuclid.org/… – Salvo Tringali Aug 22 2011 at 7:40 Salvo, that link is to a follow up paper in PJM which improves many of the results of part I but not the weak factorization one. I could not find the Studia paper online. – Bill Johnson Aug 22 2011 at 14:57 Ops! I just missed the "II" in the title... This is supposed to explain why I couldn't find a lot of the things that you had mentioned in your (second) answer. :) – Salvo Tringali Aug 22 2011 at 15:26 For (i) the usual thing is to take the Lipschitz analogue of the Banach-Mazur distance; namely, the infimum over injective and surjective maps $T$ from $V$ to $W$ of the Lipschitz constant of $T$ times the Lipschitz constant of $T^{-1}$. Whether this is equivalent to the Banach-Mazur distance for separable Banach spaces is a well known open problem. See the book by Benyamini and Lindenstrauss. - @Bill Johnson. Thank you for your contribution and the reference. Just for the record, a further possibility, in the finite-dimensional case, is provided by the so-called weak Banach-Mazur distance as given in N. Tomczak-Jaegermann, "The weak distance between Banach spaces", Math. Nachr., 119 (1984), pp. 291-307.
– Salvo Tringali Aug 18 2011 at 21:16 Well, sure, but the weak distance does not well measure farness from being isometric (nor does, e.g., the Gromov-Hausdorff distance). – Bill Johnson Aug 19 2011 at 0:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 37, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9295255541801453, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/20685/in-differential-calculus-why-is-dy-dx-written-as-d-dx-y
# In differential calculus, why is dy/dx written as d/dx (y)?

In differential calculus, we know that dy/dx is the ratio between the rate of change in y and the rate of change in x. In other words, the rate of change in y with respect to x. Then, why is dy/dx written as d/dx (y)? That is, why and how is d/dx considered an operator? -

## 2 Answers

It is productive to regard $D = \frac{d}{dx}$ as a linear operator, say from the space of smooth functions on $\mathbb{R}$ to itself, for several reasons. The simplest reason I can think of is that it makes the theory of linear homogeneous differential equations very simple. For a linear homogeneous differential equation is nothing more than an attempt to find the nullspace of the operator $p(D)$ where $p$ is some polynomial. To do this we need to find the spectrum of $D$. It's not hard to see that there is a unique eigenvector (up to scaling) with eigenvalue $\lambda$, given by $e^{\lambda x}$, and from here it follows that the nullspace of $p(D)$ at least contains (and, if $p$ has distinct roots, is entirely made of) the functions $e^{\lambda x}$ where $p(\lambda) = 0$. Said another way, if $p(x) = \prod_{i=1}^n (x - \lambda_i)$ then we can factor the operator $p(D)$ as $\prod_{i=1}^n (D - \lambda_i)$, and it's not hard to see that $f$ is in the nullspace of this operator whenever $(D - \lambda_i) f = 0$, or $f(x) = e^{\lambda_i x}$ (up to initial conditions). In fact, we get a solution $f$ whenever $(D - \lambda_i)^{e_i} f = 0$ where $e_i$ is the multiplicity of $\lambda_i$, and studying this condition readily leads to the complete set of solutions. In other words, thinking of $D$ as an operator in its own right essentially reduces the study of linear homogeneous differential equations to linear algebra (modulo some existence and uniqueness arguments), specifically the study of the Jordan decomposition.
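As a quick numerical illustration of the nullspace claim (my own sketch, not part of the original answer): take $p(D) = D^2 - 3D + 2 = (D - 1)(D - 2)$; applying $p(D)$ via central finite differences annihilates $e^{x}$ and $e^{2x}$ but not $e^{3x}$:

```python
import math

def apply_pD(f, x, h=1e-5):
    """Apply p(D) = D^2 - 3D + 2 to f at x using central differences."""
    d2 = (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2  # approximates f''(x)
    d1 = (f(x + h) - f(x - h)) / (2 * h)            # approximates f'(x)
    return d2 - 3 * d1 + 2 * f(x)

x0 = 0.7
in_nullspace = apply_pD(lambda t: math.exp(2 * t), x0)   # ~0, since p(2) = 0
not_nullspace = apply_pD(lambda t: math.exp(3 * t), x0)  # ~2*e^(2.1), since p(3) = 2
```

With distinct roots 1 and 2, every solution of $f'' - 3f' + 2f = 0$ is a combination $C_1 e^{x} + C_2 e^{2x}$, exactly as the linear-algebra picture predicts.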
Of course one can go much, much further with this idea: for example we can factor differential operators in more than one variable in the same way. The Laplacian $D_x^2 + D_y^2$ where $D_x$ is the derivative with respect to $x$ and $D_y$ the derivative with respect to $y$ factors as $\left( D_x + D_y i \right) \left( D_x - D_y i \right)$ and this immediately gives the connection between harmonic functions and holomorphic functions via the Cauchy-Riemann equations. And the Dirac equation in quantum mechanics was discovered through a similar factorization process, but with matrix rather than merely complex coefficients. - The way you're phrasing it, $x$ and $y$ play similar roles, and the question naturally arises why they should be treated differently, as in $\mbox{d}/\mbox{d}x (y)$. However, in calculus, one usually considers functions of variables, such as $f (x)$ or $y (x)$ -- here the symbols for the independent variable and the function play quite different roles, and in order to be able to think of differentiation more abstractly as an operation applied to functions (and yielding new functions), it is helpful to "factorize" the notation so that the function stands alone at the right and "what is being done to it", the operator, is separate and applied from the left -- hence this notation. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 33, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9616913199424744, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/19/hows-the-energy-of-particle-collisions-calculated
# How's the energy of particle collisions calculated?

Physicists often refer to the energy of collisions between different particles. My question is: how is that energy calculated? Is that kinetic energy? Also, related to this question, I know that the aim is to have higher and higher energy collisions (e.g. to test for the Higgs boson). My understanding is that to have higher energy you can either accelerate the particles more or use particles with higher mass. Is that correct? - 2 Pardon the retag. Accelerators aren't necessarily what we're talking about, and electron-volt is more of an answer than a question tag. – Nick Nov 2 '10 at 21:05

## 3 Answers

I think your question is divided into two parts. 1. When talking about energy, in the field of HEP or accelerator physics we can talk about

• total energy
• kinetic energy
• momentum

As relativistic effects manifest themselves almost all the time for elementary particles, you need to use a relativistic form for the energy: $$E = \sqrt{p^2c^2+m^2c^4}$$ instead of the bare $E = p^2/2m$ (as in classical mechanics, where $p = m v$). This relation has two parts: one depending on the momentum and one (constant) given by the mass of the particle. It should also be noted that for ultra-relativistic cases, where $E \gg E_0$, we have $E = p c$. Usually for low energy applications, like linear accelerators or low energy experiments, we talk about the kinetic energy, which is $E_k = \sqrt{p^2c^2+m^2c^4} - mc^2$. For example, if you talk about protons of 160 MeV it is obviously kinetic energy, as the rest mass of the proton is roughly 1 GeV. For higher energy applications you can usually make the ultra-relativistic approximation and you then talk about total energy (in eV) or about the momentum in $\mathrm{eV}/c$; taking $c=1$, both are numerically equal. When you are not sure about which approximation you can take, it is better to explain which one you take.
Example: For a proton in the LHC with a momentum of 3.5 TeV/c you can calculate its total energy which is ... 2. How is this energy "calculated"? (I assume you meant "experimentally" or something like that.) In HEP physics we use the electron-volt as a unit of energy. A particle of unit charge will gain an energy of 1 eV if it is accelerated from rest through a potential difference of 1 V. So for example, when you accelerate protons in the LHC, if you have cavities giving you 10 MV, the particle will gain 10 MeV every turn. - I was referring to the case of the LHC. So in your example of a proton with an energy of 3.5 TeV, what speed should the proton have? – Albert Nov 3 '10 at 17:11 – Cedric H. Nov 3 '10 at 17:18 OK, so if you use another particle, let's say with a rest mass twice that of the proton, you achieve a bigger energy, right? – Albert Nov 3 '10 at 17:49 No, because you won't be able to accelerate it to higher energy, because this other particle is heavier and then if you accelerate it with say the LHC, you would need a higher magnetic field, which you don't have. – Cedric H. Nov 3 '10 at 17:51 Yes, I understand that... sorry for the confusion. My question was, if we use a particle with twice the rest mass and "somehow" with the same speed. – Albert Nov 3 '10 at 18:01 show 3 more comments In experimental high energy physics, the beam energy is well known. For example, if you have a beam of protons, you know the energy because the engineers controlling the accelerator make sure that the beam is very well collimated, and goes on track, etc. If it weren't so, the beam would hit the pipe walls and you'd lose it... There are also instruments along the beam pipe which measure the current, so all that is used to control the total energy of the beam.
Secondly, experimental particle physicists seldom look into individual collisions: because there are millions of collisions per second, with millions of electronic channels, the pile up of collision events is not negligible (typically 10-20 events per "frame"). It's simply too complicated (and error prone) to look at individual events. To make the potentially interesting events stand out from the manifestly uninteresting background events (those too common and already studied, such as low-mass-particle decays), they make specific cuts in quantities that they know (from numerical simulations) will exclude manifestly uninteresting events, and in the end you have events which are likely to be the "type" you're looking for. One example is this: if you only accept reconstructed tracks which have a linear momentum higher than a certain value, you exclude a lot of particles ("good" and "bad") in the direction of the collision (beam-wise), but those that are scattered perpendicularly to that direction (i.e. away from the beam), and have a high energy, are likely to be interesting (as expected from numerical simulations, i.e. "Monte Carlo" as it is called). Bottom line (to make this short): They know very well what goes in, but they do not know so well what goes out for individual events. When you only count high-energy particles (i.e. those that don't bend very much under the detector's strong magnetic field) and you start superimposing (piling up!!) all the events which have a couple (in about 100 or so) of "promising" particle tracks, they start accumulating around the "correct value". That's how they know that "when two particles collided, a heavy one was created momentarily and it decayed into lighter ones". You can have a clue about how piling up individual events might give an approximately good answer from this illustration: Suppose you have a glass of sand and you let it drop slowly onto the floor.
You then ask a friend to come into the room and tell him to make an estimation of where in the room you dropped the sand from. That should be easy. He might even make a more or less good prediction of how high you dropped the sand by how much it spreads on the floor (it should spread wider if dropped higher). Even if your friend knows exactly how much sand was dropped (from knowing how much sand was in the glass), he only has a "good enough" hint at what happened (where in the room, how high) when the sand was dropped from the glass. - 1 Not a bad discussion, and you get my vote, but to say that particle physicists rarely look at single collisions is to mistake the collider guys for all of particle physics. When we're talking about neutrinos or ultra-high energy cosmic rays or the non-perturbative energy regime (as at JLAB) we tend to look at one event at a time. – dmckee♦ Mar 6 '11 at 1:16 It is worth remembering that at velocities close to c, a particle's kinetic energy is intertwined with its rest mass. So the actual energy equation is $E = \sqrt{p^2c^2+m^2c^4}$ So the energy of a collision is the sum of the above energy for the two colliding particles. That is why you construct improved accelerators, as it is the only way to tune up the energy. To the extent of my knowledge, we don't know how to tune up mass. - Isn't p = m*c ? – Albert Nov 2 '10 at 20:21 sorry... I mean p = m * v – Albert Nov 2 '10 at 20:25 "To the extent of my knowledge, we don't know how to tune up mass.": just by taking another particle – Cedric H. Nov 2 '10 at 20:53 @Albert: No when the speed of the particle is very fast. (p = mv/√(1-v^2/c^2), m is rest mass.) – KennyTM Nov 2 '10 at 20:57 @Robert: you're missing a closing brace in your formula, I thought you might want to correct it. (I can delete this comment afterwards) – David Zaslavsky♦ Nov 3 '10 at 22:38 show 1 more comment
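As a numeric footnote to the formula $E = \sqrt{p^2c^2+m^2c^4}$ and the 3.5 TeV/c proton example above (a sketch in Python; the proton rest energy of roughly 938.272 MeV is the only physical input, and eV-based units with $c=1$ are assumed):

```python
import math

m_p = 938.272e6   # proton rest energy m*c^2 in eV (approximate value)
p_c = 3.5e12      # momentum times c, in eV: a 3.5 TeV/c proton

E_total = math.sqrt(p_c ** 2 + m_p ** 2)  # E = sqrt(p^2 c^2 + m^2 c^4)
E_kin = E_total - m_p                     # kinetic energy
beta = p_c / E_total                      # v/c = pc / E
gamma = E_total / m_p                     # Lorentz factor

# Ultra-relativistic: E_total exceeds pc by only ~0.13 MeV here, so quoting
# "3.5 TeV" for either the energy or the momentum agrees to ~7 digits.
```

This also answers the speed question in the comments: $v/c = pc/E$ comes out a few parts in $10^8$ below 1.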
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9485034942626953, "perplexity_flag": "middle"}
http://nrich.maths.org/349/solution
# Platonic Planet

##### Stage: 4 Challenge Level:

Elijah sent us a clear explanation of how he tackled this problem, along with some diagrams showing his solutions.

I made a dodecahedron and made my paths with string and Blu-Tac. My first assumption was that Glarsynost was not extremely freakishly tall so she can't see over the horizon. Each edge is 1 flib long (a real alien word).

From the middle of a face she can see $\frac{1}{12}$ of the planet. From an edge she can see $\frac{1}{6}$ of the planet. From a vertex she can see $\frac{1}{4}$ of the planet.

FIRST PATH: Start at a vertex and keep going along a new edge so you can see a new face every time. The blue path can be seen in the picture - Path 1. Each face has at least one blue edge. The green lines show edges that are actually joined up to blue ones. The path is 12 flibs long.

After that I wanted to cut across the faces so the path would be shorter and she could see new faces more quickly. So I wondered how long a diagonal of a pentagon is. My mum told me that it is $\frac{1 + \sqrt5}{2}$ flibs.

SECOND PATH: Start at a vertex and go across a diagonal so you can see two new faces. When you have seen them all, get back to the start. This is Path 2 in the picture. Each face has at least one blue vertex. The path is 6 diagonals long, $3(1 + \sqrt5)$ flibs which is about 9.7 flibs. This is the shortest path I could find, but there are other routes the same length.
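Elijah's numbers are easy to check with a short script (a sketch in Python; it uses the fact that the diagonal of a regular pentagon with unit side is $2\cos(\pi/5)$, which equals the golden ratio $\frac{1+\sqrt5}{2}$):

```python
import math

edge = 1.0
# Diagonal of a regular pentagon with unit side: 2*cos(pi/5) = (1 + sqrt(5))/2
diagonal = 2 * math.cos(math.pi / 5)
golden = (1 + math.sqrt(5)) / 2

first_path = 12 * edge        # twelve edges, as in the first path
second_path = 6 * diagonal    # six diagonals = 3*(1 + sqrt(5)) flibs
```

So the second path, $3(1+\sqrt5) \approx 9.71$ flibs, is indeed shorter than the 12-flib path along edges.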
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9498947262763977, "perplexity_flag": "middle"}
http://quant.stackexchange.com/questions/4944/regime-switching-in-mean-reverting-stochastic-process/4949
# Regime switching in mean reverting stochastic process

Suppose you have a mean reverting stochastic process with a statistically significant autocorrelation coefficient, and suppose it looks like you can model it well using an $ARMA(p,q)$. This time series could be described by a mean reverting stochastic process like $dS=k(\theta-S_{t})dt+\sigma S_{t}^{\beta}dz$ where $\theta$ is the mean reversion level, $k$ is the speed of mean reversion and $\beta$ determines the structural form of the diffusion term (so $\beta=0$ yields the normally distributed mean reversion model, aka the Ornstein-Uhlenbeck process). Regardless of the actual value of $S_{0}$, we know $S_{t}$ will go to $\theta$ in the long run, right? Now suppose there's an unlikely event which can drastically change $\theta$'s value: e.g. let $\theta=100$, you model the process, ok, then... bang! Starting from $t=\tau$ it happens that $\theta=30$ and you will have to deal with this new scenario. My question: is there any model which can deal with such a situation? - – gnometorule Jan 7 at 17:08

## 1 Answer

As far as I can tell, you've essentially written the model that you are concerned with. The only difference is that you would instead have $\theta_{i}$ when $s_{t}=i$ where $s_{t}$ is a latent variable that reflects the probability of being in state $i$. You would also need to include the dynamics that drive the probability transitions as another part of the model. You could set them up as standard Markov Regime-Switching models are set up, though there are other options. So the question becomes: what do you want to do with the model? If you are concerned with estimating the parameters of such a model, you would begin by setting this up as a regime-switching AR(p) model (these are more popular to use than ARMA models). You could set it up in levels and allow the coefficients on all the variables (and the variance) to switch between states.
You could also set it up in differences and include the lag of the level as an independent variable. To estimate the parameters, the simplest approach is to apply maximum likelihood using the Hamilton filter. There is a Matlab implementation that I have used to implement this approach. You could also estimate the regime-switching model by Bayesian MCMC. - I like your answer, John, and I think your approach may produce suitable results. Let me answer your question: «So the question becomes what do you want to do with the model?». I'm working with a 2-regime model and my final goal is to estimate the $S_{t}$ value which is the threshold between the first and the second regime. E.g.: if $S_{t}>60$ it's likely it will go to $\theta=100$ BUT if $S_{t}<60$ it's likely it will go to $\theta=25$. What should I do? – Lisa Ann Jan 10 at 14:03 You might want to learn more about regime-switching. Also, try to fit one of the models using that Matlab package to get a better sense of what you're dealing with. – John Jan 10 at 17:04 I'm used to `R` for quantitative analysis. Please, have a look at : it is a blog article on regime switching detection. Is that the kind of analysis you're suggesting to perform? – Lisa Ann Jan 11 at 14:26 This is closely related, but I'm not sure whether that R package can fit the autoregressive models that you are looking to fit. – John Jan 11 at 17:55 The short name usually used to describe such a model is "TAR" (Threshold Autoregressive Model), isn't it? If so, how can a TAR model perform if the time series sample shows just the first regime (but we know the second one also exists)? – Lisa Ann Jan 14 at 7:36 show 1 more comment
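For intuition, the scenario in the question (a mean-reverting process whose level $\theta$ jumps from 100 to 30 at $t=\tau$) can be simulated with a simple Euler-Maruyama discretisation of the SDE. This is only an illustrative sketch with made-up parameter values, not a fitted model:

```python
import math
import random

def simulate(S0=100.0, k=2.0, sigma=0.02, beta=1.0,
             theta_before=100.0, theta_after=30.0,
             tau=2.5, T=5.0, n=5000, seed=42):
    """Euler-Maruyama for dS = k(theta - S)dt + sigma*S^beta dz,
    with the mean-reversion level theta jumping at t = tau."""
    rng = random.Random(seed)
    dt = T / n
    S = S0
    path = [S]
    for i in range(1, n + 1):
        theta = theta_before if i * dt < tau else theta_after
        dW = rng.gauss(0.0, math.sqrt(dt))
        S += k * (theta - S) * dt + sigma * abs(S) ** beta * dW
        path.append(S)
    return path

path = simulate()
# Before t = tau the path hovers near theta = 100; afterwards it relaxes
# exponentially (at rate k) toward the new level theta = 30.
```

A regime-switching estimator of the kind described in the answer would then try to recover the two levels and the switch dynamics from such a path.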
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9523858428001404, "perplexity_flag": "head"}
http://stats.stackexchange.com/questions/38118/parallel-lines-on-residual-vs-fitted-plot
Parallel lines on residual vs fitted plot

I have a multiple regression problem, which I tried to solve using simple multiple regression:

````
model1 <- lm(Y ~ X1 + X2 + X3 + X4 + X5, data=data)
````

This seems to explain 85% of the variance (according to R-squared), which seems pretty good. However, what worries me is the weird looking Residuals vs Fitted plot, see below:

I suspect the reason why we have such parallel lines is because the Y value has only 10 unique values corresponding to about 160 X values. Perhaps I should use a different type of regression in this case?

Edit: I've seen a similar behavior in the following paper. Note it's a one-page-only paper, so when you preview it you can read it all. I think it explains pretty well why I observe this behavior, but I'm still not sure if any other regression would work better here?

Edit2: The closest example to our case I can think of is the change in interest rates. The FED announces a new interest rate every few months (we don't know when and how often). In the meantime we gather our independent variables on a daily basis (such as daily inflation rate, stock market data, etc.). As a result we will have a situation where we can have many measurements for one interest rate.

- 1 You almost certainly do need some other form of regression. If the Y data are ordinal (which I suspect) then you probably want ordinal logistic regression. One `R` package that does this is `ordinal`, but there are others as well – Peter Flom Sep 27 '12 at 15:20 Actually the Y is the price we try to predict, which changes every few months. We have weekly-recorded variables (X) for the corresponding price (Y) that changes every few months. Would logistic regression work in this case when we don't know the future price? – Radek Sep 27 '12 at 15:51 2 You're right about the explanation; your reference nailed it.
But your situation looks unusual: it appears you have only ten or so independent responses (which lie on a continuous scale, not a discrete one) but you are using multiple explanatory variables that vary over time. This is not a situation contemplated by most regression techniques. More information about what these variables mean and how they are measured might help us identify a good analytical approach. – whuber♦ Sep 27 '12 at 16:09 1 Answer One possible model is one of a "rounded" or "censored" variable: let $y_1,\ldots,y_{10}$ be your 10 observed values. One could suppose that there is a latent variable $Z$ representing the "real" price, which you do not fully know. However, you can write $Y_i=y_j\Rightarrow{}y_{j-1}\leq{}Z_i\leq{}y_{j+1}$ (with $y_0=-\infty, y_{11}=+\infty$, if you forgive this abuse of notation). If you are willing to risk a statement about the distribution of $Z$ in each of these intervals, a Bayesian regression becomes trivial; a maximum likelihood estimation needs a bit more work (but not much, as far as I can tell). Analogues of this problem are treated by Gelman & Hill (2007). - 1 This is a good idea. It takes care of the phenomenon but I wonder whether it might miss a bigger problem: even if the prices can be considered censored, they most likely are highly serially correlated. – whuber♦ Sep 27 '12 at 20:10 I've tried the censReg R package but wasn't able to make it work. It's possible that I didn't understand your idea though. The thing is that we know all the dependent variable values, so we don't have a situation where Y = 0 (censored); it's just that Y stays stable for a few months. I just made another edit so hopefully this explains our use case better. – Radek Sep 28 '12 at 15:18 1 Radek, I think the idea is this: suppose the price $Y(t)$ depends on time but only changes at discrete times $t_1,t_2,\ldots$.
We conceive of this as the manifestation of some unobserved underlying variable (the "real price") $Z(t)$ and we hope that between times $t_i$ and $t_{i+1}$ $Z(t)$ will always lie between $Y(t_i)$ and $Y(t_{i+1})$. In effect, then, we view the observed price at any time $t$ in this interval as being $Z(t)$ as censored both at the left and the right by $Y(t_i)$ and $Y(t_{i+1})$. (I must emphasize "hope": this is the "risky statement" referred to.) – whuber♦ Sep 28 '12 at 17:00 1 whuber: you are right. The original post didn't allude to a time series, so I overlooked that. I think that in order to answer the question, we have to risk two statements: one about the distribution of $Z$ in the intervals $(y_{j-1}, y_{j+1})$, and one about the shape of the temporal model, i.e. the function $f$ binding $Z(t)$ to $f(Z(1), Z(2),\ldots,Z(t-1))$. In a BUGS model, both of these aspects would be expressed in statements about $Z$. Not so simple anymore... – Emmanuel Charpentier Sep 28 '12 at 18:55
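The mechanism behind the bands is easy to reproduce: when the response takes only a few distinct values, all points sharing one value of $Y$ satisfy residual $= Y - \hat{Y}$, i.e. they lie on a line of slope $-1$ in the residual-vs-fitted plane. A self-contained sketch with synthetic data (in Python for illustration, though the question uses R):

```python
import random

random.seed(0)
n = 160
x = [random.uniform(0, 10) for _ in range(n)]
# The response takes only a handful of distinct values:
y = [float(round(0.5 * xi)) for xi in x]

# Ordinary least squares slope/intercept via the normal equations
mx = sum(x) / n
my = sum(y) / n
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
intercept = my - slope * mx

fitted = [intercept + slope * xi for xi in x]
resid = [yi - fi for yi, fi in zip(y, fitted)]
# All points with the same y satisfy resid = y - fitted: parallel
# lines of slope -1 in the (fitted, resid) plane.
```

Plotting `resid` against `fitted` for these data shows exactly the parallel stripes in the question.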
http://mathhelpforum.com/discrete-math/200926-help-modular-arithmetic-print.html
# Help with modular arithmetic

• July 12th 2012, 08:23 PM joatmon

I'm doing some self-study in cryptography and have come across something that I don't understand. Hoping somebody can help me with the gap in my knowledge. I'm trying to follow an algorithm to compute exponentiation using Euler (really Fermat), and am stuck on some modular arithmetic. The problem before me is to compute $2^{110001^{11^{1100001}}} \bmod 23$

The example that I am studying right now says that this is equal to: $2^{\left(110001^{11^{1100001}} \bmod 22\right)} \bmod 23$, which I can see is an application of Fermat's theorem (since 23 is prime). The next step is what I don't understand. They go on to apply Fermat again, saying that: $110001^{11^{1100001}} \equiv 110001^{\left(11^{1100001} \bmod 10\right)} \pmod{22}$

I understand why they switched to mod 10 (since $\varphi(22)=10$). What I don't understand are the following:

1) How can they drop the base 2?
2) Where did the mod 23 go? (these might be the same property)

Clearly, there is some property of modular arithmetic that I don't understand in applying these theorems. Can anyone explain this to me? Thank you!

• July 12th 2012, 08:41 PM joatmon

Re: Help with modular arithmetic

(Doh) Never mind, I figured it out. They weren't saying that all of these are equivalent. In the second step they are just isolating the exponent for the purpose of simplifying it by way of Fermat's theorem. As the solution goes on, once they get the exponent simplified all the way, they build back up sequentially to the original problem, which is then easily solvable. Thanks anyway.

All times are GMT -8. The time now is 04:38 PM. Copyright © 2005-2013 Math Help Forum. All rights reserved.
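The manual reduction described in the thread can be automated. The sketch below (my own illustration, not from the thread) reduces a power tower modulo $m$ by recursively reducing the exponent modulo $\varphi(m)$; this step is only valid when each base is coprime to the modulus it is reduced against, which holds here since $\gcd(110001,22)=\gcd(11,10)=1$.

```python
def phi(n):
    # Euler's totient via trial division
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def tower_mod(bases, m):
    # bases[0] ** (bases[1] ** (bases[2] ** ...)) mod m, assuming each base
    # is coprime to the modulus it is reduced against (Euler's theorem).
    if m == 1:
        return 0
    if len(bases) == 1:
        return bases[0] % m
    e = tower_mod(bases[1:], phi(m))
    return pow(bases[0], e, m)

# The thread's tower: phi(23) = 22, phi(22) = 10, and 11 ≡ 1 (mod 10),
# so the whole exponent collapses to 1 and the answer is 2^1 mod 23 = 2.
answer = tower_mod([2, 110001, 11, 1100001], 23)
```

For a small tower the result can be checked against direct computation, e.g. `tower_mod([2, 3, 4], 23)` agrees with `pow(2, 3**4, 23)`.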
http://mathoverflow.net/questions/109339?sort=oldest
## Commuting Linear Operators In Hilbert Spaces

Let $V$ be a finite dimensional vector space over the complex field $\mathbb C$. Let $L:V\rightarrow V$ be a linear operator. Using the matrix of $L$ and the Jordan canonical form it is easy to find all the linear operators that commute with $L$. Now suppose that $H$ is a Hilbert space and let $L:H\rightarrow H$ be a continuous linear operator. Is there some method to determine all the continuous linear operators that commute with $L$?

- The general case seems to be hard, but there would be specific types of operators where you can get answers. – Amritanshu Prasad Oct 11 at 5:21

## 2 Answers

Well ... yes if $L$ is normal (meaning $LL^* = L^*L$; in particular, if $L$ is self-adjoint). Assuming that $H$ is separable, we have a structure theorem which says that $H$ is isomorphic to the $L^2$ sections of a bundle over $[-\|L\|, \|L\|]$ whose fibers are Hilbert spaces, in such a way that $L$ goes to multiplication by $x$. The operators that commute with $L$ are then morally just the operators which preserve each fiber, though one has to be a little careful with measurability issues when making this precise. If $L$ is not normal then at least you can say any weak operator limit of polynomials in $L$ commutes with $L$. But I don't know if you can say much more than that in general.

-

I am far from being an expert, but there is a list of results for special cases in a book by Radjavi and Rosenthal, especially in Chapter 9.

-
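In the finite-dimensional case mentioned in the question, the commutant can be computed directly: $X$ commutes with $L$ exactly when $\operatorname{vec}(LX-XL) = (I\otimes L - L^{T}\otimes I)\operatorname{vec}(X) = 0$. A small numerical sketch (my own illustration, not part of the thread):

```python
import numpy as np

def commutant_dimension(L):
    # X commutes with L iff vec(LX - XL) = 0, i.e. vec(X) lies in the
    # null space of I (x) L - L^T (x) I (column-major vectorization).
    n = L.shape[0]
    M = np.kron(np.eye(n), L) - np.kron(L.T, np.eye(n))
    return n * n - np.linalg.matrix_rank(M)

# A single 3x3 Jordan block: everything commuting with it is a
# polynomial in it, so the commutant has dimension 3.
J = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 2.0]])
```

As sanity checks: a diagonal matrix with distinct eigenvalues has commutant of dimension $n$ (the diagonal matrices), while the identity commutes with everything (dimension $n^2$); both drop out of the same rank computation.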
http://math.stackexchange.com/questions/tagged/pigeonhole-principle?sort=unanswered&pagesize=15
# Tagged Questions

Questions involving the pigeonhole principle in Combinatorial Analysis.

### Consider a set A of 100,000 arbitrary integers. Prove that there is some subset of 22 integers that end in the same last three digits. (1 answer, 52 views)

Consider a set A of 100,000 arbitrary integers. Prove that there is some subset of 22 integers that end in the same last three digits. I'm new to this principle and need help on this problem.

### Another version of PP (1 answer, 52 views)

Prove the following version of the pigeonhole principle. Let $m$ and $n$ be positive integers. If $m$ objects are distributed in some way among $n$ containers, then at least one container must hold at ...

### How small parallelograms are we guaranteed to get, when we select the two sides from different plane lattices? (0 answers, 111 views)

This is a shortened version (motivation from telecommunications stripped away) of a question I asked in MO in late May (no answers). I am mostly checking, if somebody has seen this or a related question ...

### PigeonHole Principle how to apply this? (0 answers, 241 views)

This problem was suggested to me by one of the students. Imagine you are one of four players. Each player gets two cards from a regular deck of cards. Your hand is 10 10. You lose only if some other ...
http://physics.stackexchange.com/questions/24468/rotational-kinetic-energy-during-vertical-circular-motion-of-a-particle/24476
# Rotational kinetic energy during vertical circular motion of a particle

Why is it not necessary to take into account rotational kinetic energy when using the Law of Conservation of Mechanical Energy to solve vertical circular motion problems? After all, the particle is rotating about the centre of the circle and does have rotational KE, doesn't it? All the examples I have seen just use $KE = \frac{1}{2}mv^2$, e.g. here: http://www.physicsforums.com/showpost.php?p=2312566&postcount=4

- I think I figured it out. $\frac{1}{2}mv^2$ in the circular motion examples clearly does refer to the rotational kinetic energy of the particle, not its translational kinetic energy, since $v$ here is the particle's tangential speed (tangential to the circle). The confusion arose due to $\frac{1}{2}mv^2$ being expressed in linear form instead of angular form $\frac{1}{2}mr^2\omega^2$. In short, while any quantity (e.g. velocity, momentum, KE) expressed in angular form necessarily describes rotational motion, when expressed in linear form, the quantity may be describing either translational or rotational motion. – Ryan Apr 27 '12 at 5:09

## 2 Answers

For a point particle, translational KE is rotational KE: $$\frac12I\omega^2=\frac12mr^2\omega^2=\frac12mv^2$$ The formula for rotational KE ($\frac12I\omega^2$) is derived by adding up the KE of each particle in a rigid body in pure rotation. When a body has both rotation and translation, we can derive that: $$KE=KE_R+KE_T=\frac12I_{com}\omega_{com}^2+\frac12mv_{com}^2$$ In this case, the point particle has no $\omega$ about its center of mass, so no problem. Though we can still apply the pure rotation formula. Just that we can't say it has both rotational and translational motion.

- Thanks, Manishearth. – Ryan Apr 27 '12 at 5:11

When dealing with point particles, rather than bulk (extended) matter, there is no need for the concept of angular kinetic energy; (regular) $\frac{1}{2}mv^2$ kinetic energy (in addition to potential energy) is the relevant conserved quantity.
More accurately, when we deal with bulk matter in classical kinematics problems, we ignore the internal forces between the particles that constitute the matter, and consider their orientation fixed. This lets us separate the energy deriving from the motion of the center of mass from the rotation of that mass about its center. In this case you have two terms for the energy because you have two parameters for its "speed": the velocity of the center of mass, and the rate of angular rotation.

- Thanks for your answer! BTW, after having answered my own question (see my comment in the main question), I now dislike the term "angular kinetic energy" because of its analogue "linear kinetic energy", which refers to non-rotational motion. But if one were careless like me, one would be quick to conclude that $\frac{1}{2}mv^2$ (where $v$ = linear velocity) also necessarily refers to non-rotational motion, which would be a mistake and a source of momentary confusion. – Ryan Apr 27 '12 at 5:29
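The identification of the two expressions for a point particle is easy to check numerically (arbitrary illustrative numbers, not from the thread):

```python
# For a point mass on a circle, I = m r^2 and v = r * omega, so
# (1/2) I omega^2 and (1/2) m v^2 are literally the same number.
m, r, omega = 2.0, 0.5, 3.0
I = m * r ** 2          # moment of inertia about the circle's centre
v = r * omega           # tangential speed
ke_rot = 0.5 * I * omega ** 2
ke_trans = 0.5 * m * v ** 2
# both evaluate to 2.25 J with these numbers
```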
http://physics.stackexchange.com/questions/54569/question-about-proper-time-in-general-relativity?answertab=oldest
# Question about proper time in general relativity

I think I may have some fundamental misunderstanding about what $dt, dx$ are in general relativity. As I understand it, in special relativity, $ds^2=dt^2-dx^2$, we call this the length because it is a quantity that is invariant under Lorentz boosts. If a ball is moving in space and I want to calculate the $ds$ for the ball to travel from point A to point B, then $d\tau=ds$ (where $\tau$ is proper time) because according to the ball, $dx=0$, since point A and point B are at the same place in its own frame. Assuming this is correct, this is all fine with me. Now fast forward to general relativity. $ds^2=g_{\mu\nu}dx^{\mu}dx^{\nu}$... so if a ball is travelling in spacetime under whatever metric, then to me it would seem like, in the ball's frame, we should set $dx^i=0$, and then we get $ds^2=g_{00}d\tau^2$... however, from what I am told and from what I have read, $ds^2=d\tau^2$. When I look at the Schwarzschild metric, for example, $g_{00}$ doesn't appear to be 1 in a travelling ball's frame. What have I misunderstood about the interval $ds$? Is it that $ds$ only purely has to do with geometry of spacetime, and doesn't quite represent the distance between events?

- I'm not sure what "only purely has to do with geometry of spacetime, and doesn't quite represent the distance between events?" is supposed to mean, but quite apart from that: the time component of the metric is generally written $\mathrm{d}s^2 = g_{00} \mathrm{d}t^2+\cdots$, reserving the symbol $\tau$ for the proper time, which is given by $\mathrm{d}s^2=\mathrm{d}\tau^2$ (in your metric sign convention). In a frame comoving with an object the time coordinate is the proper time and you will have $g_{00}=1$. – Michael Brown Feb 20 at 23:05

If you have coordinates such that $\mathrm{d}s^2 = g_{00} \mathrm{d}t^2 + \cdots$ then the time experienced by an observer whose $x,y,z$ are constant is $\tau = \int \sqrt{g_{00}} \mathrm{d}t$.
– Michael Brown Feb 20 at 23:08 @MichaelBrown that could be the basis of an answer ;-) – David Zaslavsky♦ Feb 20 at 23:17 ## 2 Answers The problem arose when you wrote $ds^2 = g_{00} d\tau^2$. Generally one of your coordinates $x^\mu$ will be timelike, and the others spacelike, but the timelike one is not in general the proper time of someone whose spatial coordinates are not changing. That is, $t \neq \tau$. Using your sign convention,1 $d\tau^2 = ds^2$, so the (arbitrarily large) lapse in proper time between two events A and B is $$\Delta\tau = \int\limits_\text{path} \sqrt{ds^2} = \int_{\sigma_\mathrm{A}}^{\sigma_\mathrm{B}} \sqrt{g_{\mu\nu} \frac{\mathrm{d}x^\mu}{\mathrm{d}\sigma} \frac{\mathrm{d}x^\nu}{\mathrm{d}\sigma}} \ \mathrm{d}\sigma.$$ Here $\sigma$ is any parameter that parametrizes the specific path (required to be timelike) taken from A to B, and your location along the path at parameter $\sigma$ has coordinates $x^\mu(\sigma)$. Suppose $x^i \equiv 0$ along the path. Also call $x^0$ by the name $t$. Then most of the terms in the sum vanish and we have $$\Delta\tau = \int_{t_\mathrm{A}}^{t_\mathrm{B}} \sqrt{g_{00}} \ \mathrm{d}t.$$ In fact we could have gotten this just by examining $$d\tau^2 = g_{\mu\nu} \mathrm{d}x^\mu \mathrm{d}x^\nu.$$ Now often in GR we do call that timelike coordinate $t$, as I have done. Usually when this is done, as in the Schwarzschild metric $$ds^2 = \left(1 - \frac{2M}{r}\right) \mathrm{d}t^2 - \left(1 - \frac{2M}{r}\right)^{-1} \mathrm{d}r^2 - r^2 \left(\mathrm{d}\theta^2 + \sin^2(\theta) \ \mathrm{d}\phi^2\right),$$ that $t$ becomes arbitrarily close to the $\tau$ of a local observer "at rest" in these coordinates as you move into the flat regime. In this case, as you move away from the mass at small $r$, $t \to \tau$. In other words, you can only neglect the $g_{00}$ term in some asymptotic cases (which you'll note correspond to $g_{00} \to 1$). 
In your SR example, you happened to use coordinates that matched those of the observer whose proper time you cared to measure, but you could have boosted to a new frame, in which case the $\mathrm{d}t^2$ part of the metric would have had some prefactor involving $\gamma$.

1 I will point out that most often in GR proper, $ds^2$ is negative for timelike separations, with $d\tau^2 = -ds^2$. The convention you adopted is more common in particle physics.

- What's going on with the index mismatch in the equation $\Delta\tau = \int\sqrt{g_{\mu\nu}} dt$? – joshphysics Feb 21 at 0:32

@joshphysics oops – Chris White Feb 21 at 1:20

I think I get it. The $dt, dr, d\theta, d\phi$ in the Schwarzschild metric refer to $(t,r,\theta,\phi)$, which are the coordinate functions of your chart. These coordinates don't cover the whole spacetime, but at every point in spacetime there is a neighbourhood homeomorphic to $\mathbb{R}^4$, and in each neighbourhood we can use those coordinates so that the metric takes that form. But in general $(t,x,y,z)$ don't have to represent time and space, so unless you happen to be using a coordinate system where $t$ is the time of an observer, you can't set $dx=0$ to find $d\tau$. Is that right? – JLA Feb 21 at 6:49

@JLA Right. And $t$ in these coordinates is the proper time only for an "observer at infinity" ($r\to\infty$), at which point $r$ also behaves more like proper distance to the center. – Chris White Feb 21 at 6:55

One last thing... will an observer necessarily be at rest in his own frame, if he uses coordinates (to describe the spacetime around him) that make the metric take that form? By at rest, I mean his spacelike coordinates will be zero. – JLA Feb 21 at 16:47

Expanding on Michael Brown's comments, you seem to have confused $dt$ with $d\tau$. They are not the same. It is true that $ds^2 = d\tau^2$, but also true that $ds^2 = g_{tt} dt^2 + \ldots$ (other terms omitted).
What this means physically is that, even for two events that take place at the same location but different coordinate times (according to a given frame), there is a difference between elapsed coordinate time and actual (or proper) elapsed time. This can happen in special relativity, too. Choose a time coordinate $\bar t$ such that $dt = \alpha \, d\bar t$, and the metric would look like $ds^2 = \alpha^2 \, d\bar t^2 = d\tau^2$. What this should emphasize is that one's choice of time coordinate can end up being quite poor, not corresponding well (if at all!) to the physically meaningful quantity, proper time. While it may be confusing for the time coordinate to not correspond exactly to proper time, it nevertheless gives an enormous amount of freedom to recast the metric in a way that is convenient--you do not have to have proper time be your time coordinate if you don't want it to be, and it will always have a well-defined expression regardless of your choice of coordinates, as all physical quantities must have. -
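As a concrete illustration of the coordinate-time vs. proper-time distinction (my own example, not from the thread): for an observer at fixed $r,\theta,\phi$ in the Schwarzschild metric, $\mathrm{d}\tau = \sqrt{1-2M/r}\,\mathrm{d}t$ in geometric units, and the factor approaches 1 far from the mass.

```python
import math

def proper_time(dt, M, r):
    # proper time elapsed for a static Schwarzschild observer at radius r
    # during coordinate time dt (geometric units G = c = 1, valid for r > 2M)
    return math.sqrt(1.0 - 2.0 * M / r) * dt

# at r = 4M the static clock runs at sqrt(1/2) of coordinate time;
# far from the mass, t -> tau, matching the "observer at infinity" remark
```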
http://mathoverflow.net/questions/14866?sort=votes
## slice=ribbon generalization to higher genus + potential counterexamples to slice=ribbon.

I have two questions about the slice=ribbon conjecture. (1) If a knot $K \hookrightarrow S^3$ has smooth slice genus $g$, you can ask if it bounds a smooth genus $g$ surface in $S^3 \times [0, -\infty)$, with the function defined by restriction to $[0, -\infty)$ being Morse on the surface without index=0 critical points (maximal points). When $g=0$ this is just asking if the slice knot $K$ has a ribbon disc. I was wondering if there are any knots known with $g \geq 1$ for which such a surface cannot exist. If there are none such known, is there a topological reason why the truth of the slice=ribbon conjecture would also imply the existence of such surfaces? (2) Are there any potential counterexamples to slice=ribbon (in the same way that there are potential counterexamples to smooth 4-d Poincare [until Akbulut kills them])? Thanks, Andrew.

- I added a few more tags. Regarding (2) hopefully someone like Ruberman will enter the conversation. I don't know any interesting ways of generating knots that I know to be slice but for which I have reason to suspect they maybe aren't ribbon. IMO the smooth Poincare conjecture is in the same situation. We appear to have a deficit of good ways to identify the standard smooth $S^4$. – Ryan Budney Feb 10 2010 at 6:53

1 I believe (based on a conversation with Sylvain Cappell) that the answer to (2) is no. – Daniel Moskovich Feb 10 2010 at 14:45

## 2 Answers

There is a paper by Gompf and Scharlemann: Fibered knots and Property 2R, II, which gives an infinite family of two component links which are smoothly slice but not obviously ribbon.

- See also msp.warwick.ac.uk/gt/2010/14-04/b050.xhtml – Daniel Moskovich Dec 2 2010 at 15:50

Looks like they combined two papers together.
– Jim Conant Dec 2 2010 at 16:07

This paper by Gompf exhibits a potential counter-example. Has it been established that the candidate given by Gompf is not slice? -
http://mathhelpforum.com/calculus/127986-limit-problem.html
# Thread: A Limit Problem

1. ## A Limit Problem

Dear friends, I need help in showing the following. $\lim_{\substack{\lambda\in\mathbb{R}\\ \lambda\to0}}\bigg(\frac{1}{\lambda}\log|1+z\lambda|\bigg)=\mathrm{Re}(z)$ Thanks! bkarpuz

2. Originally Posted by bkarpuz
Dear friends, I need help in showing the following. $\lim_{\lambda\to0}\bigg(\frac{1}{\lambda}\log|1+z\lambda|\bigg)=\mathrm{Re}(z)$ Thanks! bkarpuz
(you should specify that $\lambda\in\mathbb{R}$) After expanding you get $|1+z\lambda|^2=1+2{\rm Re}(z)\lambda+o(\lambda)$ when $\lambda\to 0$, $\lambda\in\mathbb{R}$, from which the result follows quickly using $\log (1+u)=u+o(u)$ when $u\to 0$, and $\log u=\frac{1}{2}\log u^2$.

3. Originally Posted by Laurent
(you should specify that $\lambda\in\mathbb{R}$) After expanding you get $|1+z\lambda|^2=1+2{\rm Re}(z)\lambda+o(\lambda)$ when $\lambda\to 0$, $\lambda\in\mathbb{R}$, from which the result follows quickly using $\log (1+u)=u+o(u)$ when $u\to 0$, and $\log u=\frac{1}{2}\log u^2$.
How did I miss this?! :S Thanks for your reply Laurent, it has been a long time not heard from you. :] Actually I am working with the quotient when $\lambda\in\mathbb{C}$ and it's making me confused! :S

4. Originally Posted by bkarpuz
How did I miss this?! :S Thanks for your reply Laurent, it has been a long time not heard from you. :] Actually I am working with the quotient when $\lambda\in\mathbb{C}$ and it's making me confused! :S
Actually, the limit can be computed when $\mathrm{Re}(\lambda)\neq0$. In this case, we have $\lambda=r\mathrm{e}^{i\theta}$ with $\theta\neq\pm\pi/2$. So that, for the function $f(z):=|z|^{-1}\log|1+z|$ for $z\in\mathbb{C}\backslash\{-1\}$, we have $\lim_{r\to0^{+}}f(r\mathrm{e}^{i\theta})=\lim_{r\to0^{+}}\frac{1}{2r}\log\big(1+2r\cos(\theta)+r^{2}\big)=\cos(\theta)$ by using the fact mentioned previously by Laurent ($\lim\nolimits_{\lambda\in\mathbb{R},\ \lambda\to0^{+}}f(\lambda)=1$).
On the other hand if $\mathrm{Re}(\lambda)=0$, it can be easily computed as $\lim_{r\to0^{+}}f(r\mathrm{e}^{\pm i\pi/2})=\lim_{r\to0^{+}}\frac{1}{2r}\log\big(1+r^{2}\big)=0.$
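Both limits in the thread are easy to check numerically (a quick sanity check, not a proof):

```python
import cmath
import math

def f_real(z, lam):
    # (1/lambda) * log|1 + z*lambda| for small real lambda -> Re(z)
    return math.log(abs(1.0 + z * lam)) / lam

def g(w):
    # |w|^{-1} * log|1 + w|; along the ray w = r*e^{i*theta}, r -> 0+,
    # this tends to cos(theta), as computed in post 4
    return math.log(abs(1.0 + w)) / abs(w)

z = 2.0 + 3.0j
theta = math.pi / 3
w = 1e-6 * cmath.exp(1j * theta)
```

With $\lambda = \pm 10^{-6}$ the quotient `f_real(z, lam)` agrees with $\mathrm{Re}(z) = 2$ to within $O(\lambda)$, and `g(w)` agrees with $\cos(\pi/3) = 0.5$; along the imaginary axis `g` tends to 0.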
http://mathoverflow.net/revisions/80323/list
## Logarithm of a hypergeometric series

I am sorry if the answer to my question is well-known. I am quite new in this topic, so it will also be nice to have a reference, if it exists.

I was wondering whether there exists a nice closed formula for the logarithm of an arbitrary hypergeometric series in terms of, say, a linear combination of some other hypergeometric series. The reason that makes me believe in the existence of such a formula is the following. It is an easy exercise to show that the derivative of a hypergeometric series can be expressed as follows: $\frac{d}{dx} {}_nF_m (a_1,\ldots, a_n;b_1,\ldots, b_m; x) = \frac {a_1\cdots a_n}{b_1\cdots b_m} {}_nF_m (a_1+1,\ldots, a_n+1;b_1+1,\ldots, b_m+1; x)$. On the other hand, for an arbitrary function $G(x)$ we have $(\log G(x))' = \frac {G'(x)}{G(x)}$. It follows that it suffices to find the ratio of two hypergeometric series to find the logarithmic derivative. In some cases this ratio is known to be a hypergeometric series again. So after the integration we'll obtain the desired result. Thank you in advance for any help.
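The differentiation formula quoted in the question can be verified numerically with a truncated series (an illustration with arbitrary parameters, not part of the original post):

```python
import math

def hyp(a_list, b_list, x, terms=60):
    # truncated generalized hypergeometric series
    # pFq(a; b; x) = sum_k [prod (a_i)_k / prod (b_j)_k] * x^k / k!
    total, coeff = 0.0, 1.0
    for k in range(terms):
        total += coeff * x ** k / math.factorial(k)
        for a in a_list:
            coeff *= a + k       # build the Pochhammer products iteratively
        for b in b_list:
            coeff /= b + k
    return total

# check d/dx 2F1(a,b;c;x) = (a*b/c) * 2F1(a+1,b+1;c+1;x) at one point
a, b, c, x, h = 0.5, 1.5, 2.0, 0.2, 1e-5
lhs = (hyp([a, b], [c], x + h) - hyp([a, b], [c], x - h)) / (2.0 * h)
rhs = (a * b / c) * hyp([a + 1, b + 1], [c + 1], x)
```

The central difference `lhs` and the shifted-parameter series `rhs` agree to well below the $O(h^2)$ discretization error; as a further sanity check, ${}_0F_0(;;x)=e^x$ falls out of the same routine.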
http://math.stackexchange.com/questions/159410/purpose-of-linear-algebra/159427
# Purpose of Linear Algebra

How much emphasis should be placed on proofs in a first course in Linear Algebra? I sometimes feel that they (the proofs) crowd out a coherent vision for linear algebra. However, I also think a central theme of a Linear Algebra course is to learn reasoning, even though it does not always succeed. The audience is first-year undergraduate students studying mathematics and physics, but maybe extended to engineers. They generally struggle with the idea of proof.

- 7 Have you ever noticed how a baby struggles with its first steps? It is inevitable. We do not require freshmen to run a marathon and come up with new proofs to new concepts and new ideas. We feed them carefully chewed proofs of carefully chewed theorems. This is a good first step. – Asaf Karagila Jun 17 '12 at 11:07

## 5 Answers

Proofs will let you find out if the students understand mathematics on a deeper (or higher?) level. I think this insight is valuable in any student and that proofs need some emphasis in any linear algebra course. I don't think I have to tell anybody that this insight is more vital in mathematics students than in others, and that therefore proofs need to have more weight in their grading. I just mention this in order to make myself clearer to the downvoter.

-

The question can be modified: what is the purpose of teaching any proof at all, in any mathematical course? The answer of course is heavily dependent on the institute, the crowd, and the point behind the course.

My experience is that non-mathematics students often see mathematics as a tool and that's it. In their minds you just need to know some basic facts and then use that for the sake of engineering or physics or other mission to accomplish in a mathematical fashion. Those students will mostly misunderstand proofs, misunderstand definitions, and will generally be unable to see the full depth of theorems (due to lack of proper definitions, due to the way they study, or due to the fact they simply don't care).
However some of the students will be very receptive and will understand the proofs and their inherent beauty. In those courses one can say that there is little to no point in teaching proofs. However the idea is that you teach reasoning, and you give these proofs as an example of proper mathematical reasoning. This reasoning is very important because it allows you later on to examine things others will tell you. Of course, if a student cares little about this reasoning and only wants to learn the names for the black boxes which solve problems - it will not stick.

On the other hand, if the course is for mathematics undergrad students then they have to see the proofs and they have to learn the reasoning. Often, too, they will have other courses in which proofs are presented and reasoning is discussed and this will help to engrave these processes deeper into their minds.

Another very important reason to teach proofs is to get students used to the fact that in mathematics you don't usually rely on others in this aspect; you have to understand the proof given to you in order to truly understand something. You don't accept things, you find out why they are true on your own.

For these reasons in my university engineering students take only one course in linear algebra but mathematics students have to take two.

-

It's useless to explain the proof of a theorem to engineers, who hardly know how to define $\mathbb{N}$ - or do any axiomatic mathematics at all, for that matter. Either give examples to show that it's true, or start from scratch entirely (which I suppose you do not have time for). Mathematicians obviously need to see the truth. What physicists need to see depends on what kind of physicist you want them to be. If they'll become string theorists, a strong mathematical background is useful, but you don't need to know any specific proofs of theorems for applied classical mechanics.
- 3 I'd wager plenty of successful mathematicians don't know how to define $\mathbb{N}$, either. – user31559 Jun 17 '12 at 11:17 I do not agree that it is useless to explain proofs to engineers. I am an engineering student myself, and where I study, all engineering students go through a large set of proofs in linear algebra. – utdiscant Jun 17 '12 at 11:20 @utdiscant: I'm not claiming that it is useless to know, but unless they have a strong mathematical background, engineers won't really understand mathematical proofs anyway. – akkkk Jun 17 '12 at 11:21 1 @utdiscant: then either you are at a very good university, or ignorant of the actual level of understanding of your fellow engineers. Understanding what the professor said is one thing, but understanding what constitutes a proof (and more importantly, what not) takes a serious understanding of mathematics. – akkkk Jun 17 '12 at 11:48 1 I claim having a good understanding of approximation is rather important to engineers.... – Hurkyl Jun 17 '12 at 14:37 In my first-year engineering math we did the $\epsilon-\delta$ definition of a limit, but only in multivariable calculus. We went over the proper definition of the Riemann integral as well. Yeah, it was weird. The idea seems to be that proofs are presented when they fit nicely into an understanding of how something works, but if the concept can be intuitively worked with and 'understood' without proof (like in the single variable case), they don't bother. Our linear algebra exposure was just some matrix mechanics, and mushed into the second half of our multivariable calculus course. Not well planned. So I would use that as a guiding principle. How much do these proofs help you understand how to use the concepts, and how much are they simply for rigor? I think that proofs can have a place in such a course, but that their use should be more carefully justified when the course isn't for math students. 
- "I would say that all engineering students here have the background for understanding mathematical proofs." Really? In my experience, many students have trouble with induction, so you will forgive me for not quite buying that first-year undergraduate students at your institution have the background for understanding mathematical proofs. I think an introductory linear algebra course doesn't need much in the way of proofs - the basic stuff is really quite easy to prove. In my opinion, a course specifically targeting proofs should be offered if the department considers proofs to be important enough (and they certainly are, especially if you're offering a degree in mathematics). Now, what constitutes a decent proof for such a course is up for debate. - Hey Nik, this was not my comment, but utdiscant's. I fully agree with your lack of faith in engineers' understanding of math. – akkkk Jun 17 '12 at 14:37 Oops - my bad. Corrected. – Nik Bougalis Jun 18 '12 at 2:12
http://math.stackexchange.com/questions/218553/extension-of-a-smooth-function-on-a-set-of-manifold
# Extension of a smooth function on a subset of a manifold I encountered the following proposition: If a function is smooth on an arbitrary set $S\subseteq M$, where $M$ is a smooth manifold, then it has a smooth extension to an open set containing $S$. It seems the proof needs a partition of unity, but I don't think this proposition is correct. First of all, I don't know what smooth on an arbitrary set means. Does it mean $f$ is smooth at every point $p$ in $S$, which by definition means there exists a chart $(U, \varphi)$ with $p\in U$ such that $f\circ\varphi^{-1}$ is smooth? If so, does the proposition imply that $S$ is an open set, because each point in $S$ should have a neighborhood on which $f$ is defined and smooth? If so, why emphasize an arbitrary set $S$? Consider the function $f:\mathbb{Q}\rightarrow\mathbb{R}$ defined as $f(x)=x$ when $x\geq 0$ and $f(x)=-x$ when $x<0$; is it smooth on $\mathbb{Q}$? - I know this as the definition of smoothness on an arbitrary $S \subseteq M$: A function $f \colon S \to N$ is called smooth, if there is an open $U \supseteq S$ and a smooth $g \colon U \to N$, extending $f$. – martini Oct 22 '12 at 7:23 ...but it appears as an exercise in Lee's book Manifolds and Differential Geometry, in the chapter on partitions of unity, so it should be provable. – hxhxhx88 Oct 22 '12 at 7:25 ... how does Lee define smoothness on an arbitrary $S \subseteq M$ then? – martini Oct 22 '12 at 7:26 1 Ok, on page 25, Lee defines: A continuous map $f \colon S \to N$, $S \subseteq M$ arbitrary, is called smooth, if each point $s \in S$ has an open neighbourhood $U_s$ and a smooth $g_s \colon U_s \to N$ such that $g_s|_{U_s \cap S} = f|_{U_s \cap S}$. For a function $f \colon S \to \mathbb R$, you can take these local extensions and glue them with a partition of unity. – martini Oct 22 '12 at 7:32 Oh yes... I didn't notice it.. Thank you! – hxhxhx88 Oct 22 '12 at 10:16
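martini's last comment contains the whole proof; written out, the gluing step looks as follows (a sketch, for real-valued $f$ only):

```latex
% For each s \in S choose an open U_s \ni s and a smooth g_s : U_s \to \mathbb{R}
% with g_s|_{U_s \cap S} = f|_{U_s \cap S}.  Put U = \bigcup_{s \in S} U_s and let
% \{\psi_s\} be a smooth partition of unity subordinate to the cover \{U_s\} of U.
g \;=\; \sum_{s \in S} \psi_s\, g_s \colon U \longrightarrow \mathbb{R}
\qquad \text{(a locally finite sum of smooth functions, hence smooth).}

% For p \in S we have \psi_s(p) \neq 0 \implies p \in U_s \cap S \implies g_s(p) = f(p), so
g(p) \;=\; \sum_{s} \psi_s(p)\, g_s(p)
     \;=\; \Big(\sum_{s} \psi_s(p)\Big) f(p) \;=\; f(p).
```

The linear structure of $\mathbb R$ is what makes the sum meaningful; for maps into a general manifold $N$ this gluing fails, which is why the exercise is stated for functions rather than for arbitrary smooth maps.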
http://physics.stackexchange.com/questions/53861/would-a-laser-with-four-possible-energy-levels-be-better-than-three?answertab=votes
# Would a laser with four possible energy levels be better than three? I'm wondering about achieving population inversion for a laser. I learned that without an active medium, it's not possible to create a laser with only two energy levels, but it would be possible with three. What is the advantage of having four levels then? - ## 2 Answers When you have a three-level system, the laser transition is between the ground and first excited levels (see figure). In this scheme, it is rather challenging to get population inversion because all atoms tend to stay on the lowest level. With a four-level scheme, you have an extra level so that the laser transition does not end in the ground state. Thus, if the bottom level gets depopulated faster than the top level of the laser transition, the population inversion will be guaranteed independent of how fast or efficiently you pump the system. Many lasers, such as Nd:YAG or HeNe lasers to name a few, actually use a four-level scheme exactly for this reason. - I don't understand what happens after the "laser" from E3 -> E2 in the second picture. What happens at the transition from the second level to the ground state? – Chris Harris Feb 13 at 23:30 That is just some non-radiative transition (e.g. through collisions) that, if faster than the decay from E$_3$ to E$_2$, guarantees that the population inversion will be achieved. – Ondřej Černotík Feb 14 at 8:14 If you have a fast non-radiative transition to the ground state from the bottom laser level, that would help with population inversion because you're emptying the bottom laser level. - What you mentioned is correct for the three-level scheme. What is the advantage of the fourth level? – Misha Feb 14 at 10:29
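Misha's closing question ("what is the advantage of the fourth level?") can be made quantitative with a toy rate-equation model. The lifetimes and pump rate below are illustrative assumptions, not values from the answers; the point is only that with a fast-emptying lower laser level the inversion on the laser transition appears for an arbitrarily weak pump, which a three-level scheme (whose lower laser level is the heavily populated ground state) cannot achieve.

```python
# Toy four-level rate equations (normalized populations, stimulated
# emission neglected): pump 0 -> 3, fast decay 3 -> 2, slow decay 2 -> 1
# (the laser transition), fast decay 1 -> 0.  All numbers are made up.
R = 0.01                                # deliberately weak pump rate
tau32, tau21, tau10 = 0.1, 10.0, 0.1    # level lifetimes; level 2 is long-lived

N = [1.0, 0.0, 0.0, 0.0]                # populations of levels 0..3
dt, steps = 0.01, 200_000               # simple forward-Euler integration
for _ in range(steps):
    pump, d32, d21, d10 = R * N[0], N[3] / tau32, N[2] / tau21, N[1] / tau10
    N[0] += dt * (d10 - pump)
    N[3] += dt * (pump - d32)
    N[2] += dt * (d32 - d21)
    N[1] += dt * (d21 - d10)

# Near steady state N2/N1 -> tau21/tau10 = 100: levels 2 and 1 are inverted
# even though N2 never exceeds the ground-state population N0.
print(N[2] > N[1], N[2] > N[0])   # True False
```

That second printed value is the whole story: the four-level scheme gives inversion between levels 2 and 1 without ever having to depopulate the ground state.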
http://physics.stackexchange.com/questions/tagged/kinematics+suvat-equations
# Tagged Questions 1answer 45 views ### Projectile's angle in midflight For a missile travelling from (0,0) at angle $\theta$ (to the horizontal) and initial velocity $u$, the y (vertical) position at time t is given by $s_{y} = u\sin (\theta) t - 0.5gt^{2}$ and the x ... 2answers 90 views ### Doubt in Kinematics I know that this isn't the place for such basic questions, but I didn't find the answer to this anywhere else. It's pretty simple: some particle moves in a straight line under constant acceleration from ... 1answer 138 views ### Calculate displacement in position from knowing constant acceleration I have recently started studying physics at school, and my teacher went over the following equation without explaining it too much: $$d~=~vt+\frac{1}{2}a t^2.$$ I have wondered, why would this ... 2answers 139 views ### Equation of motion for average acceleration I am trying to solve the following: A man of mass 83 kg jumps down to a concrete patio from a window ledge only 0.48 m above the ground. He neglects to bend his knees on landing, so that his ... 0answers 54 views ### At an acceleration of 2ft/s^2, how fast could I reach 9.8MPH? [closed] Question title is self explanatory. If I'm accelerating at $2ft/s^{2}$, how long does it take to reach $9.8MPH$? 1answer 96 views ### What's the difference between these two formulas and what are they called? I just want to know the differences between these two formulas: $h = h_0 + v_0 t \pm \frac{1}{2} g t^2$ and $y = y_0 + v_{0y} t + \frac{1}{2} g t^2$ Also, what are these called in English? 1answer 140 views ### Projectile motion in two and three dimensions question? So I bought this book in the library and physics fascinates me and I found this quote in the book "Galileo has proved that when any effects due to air resistance are ignored, the ranges for ... 
0answers 54 views ### Accelerated motions of a car and a truck [closed] A car and a truck start from rest at the same instant with the car initially at some distance behind the truck. The truck has a constant acceleration of 3.4 m/s^2. The car overtakes the truck within the ... 2answers 165 views ### Calculating vertical velocity component of a particle with mass, given the hit point of parabolic motion Consider the following situation: I have a particle with a given mass that at a given instant of time (let's say $t_{0}$) is placed at the system origin. The particle has a constant velocity ... 2answers 386 views ### How do I find minimum constant deceleration so that object does not pass distance d in time t? I'm working on a problem for an online judge site. I've boiled down the problem to this calculation: given a vehicle with an initial velocity $v$, how can one calculate the minimum constant ... 1answer 300 views ### How do I find the initial velocity in this problem? An X-ray tube gives electrons constant acceleration over a distance of $20\text{ cm}$. If their final speed is $2.0\times 10^7\text{ m/s}$, what is the electrons' acceleration? I know this ... 0answers 226 views ### Calculating the time given initial position, initial velocity, current position and constant acceleration [closed] I'm trying to write a simple physics engine (a bit different than what is already out there) and I've got the following problem; (For simplicity let's imagine this problem in one dimension;) I have ... 1answer 404 views ### Calculate acceleration and time given initial speed, final speed, and travelling distance? [closed] A motorcycle is known to accelerate from rest to 190km/h in 402m. Considering the rate of acceleration is constant, how should I go about calculating the acceleration rate and the time it took the ... 
2answers 88 views ### Proof of $T=\sqrt{2y/a}$ for a uniformly accelerating object [closed] Suppose that there is an object that does a y-axis-only free fall to the ground. The initial distance from the ground is defined as $H$. How does one prove that the time the object takes to reach the ground ... 0answers 149 views ### Basic Kinematics Question [closed] I have a pretty basic problem regarding kinematics, but I'm new to physics. So, I'm having trouble with it. No, it is not homework. These are review problems, but this particular one is troubling me. ... 1answer 126 views ### Proving $t=(1+\sqrt{1+2hg/v^2})(v/g)$ for a thrown ball If we throw a ball from a height $h$ above the earth, with initial velocity $v'$, how to prove that the time it takes the ball to reach the earth is given by: ... 0answers 74 views ### Jumping on a landing pad [closed] I'm trying to make a character jump on a landing pad that stays above him. Here is the formula I've used (everything is pretty much self-explainable, maybe except character_MaxForce that is the total ... 0answers 200 views ### Projectile Motion I know the angle at which a projectile is launched, how far it needs to go, and also the maximum height. How can I find the initial velocity needed (disregarding air resistance)? Currently, I am ... 2answers 820 views ### How do you calculate angle of projection? At what angle should the projectile be thrown with initial velocity v in order to reach distance d? Disregard air resistance; only gravitation acts. So far I got the equations for horizontal and ... 1answer 1k views ### Question on Projectile Motion equation [closed] A golf ball is shot into the air from the ground. If the initial horizontal velocity is 20m/s and the initial vertical velocity is 30m/s, what is the horizontal distance the ball will travel ... 
3answers 361 views ### A freefalling body problem, only partial distance and time known Well, I've been trying to figure out a problem which I imposed on myself, so no literal values included. Unfortunately, my brain is not cooperating. The problem states: What is the height from ... 1answer 369 views ### The acceleration of a particle moving only on a horizontal plane is given by a = 3ti + 4tj [closed] The acceleration of a particle moving only on a horizontal plane is given by a = 3ti + 4tj, where a is in meters per second-squared and t is in seconds. At t = 0s, the position vector r = (20.0 m)i + ... 2answers 74 views ### Acceleration: Value Disparity? If we consider a ball moving at an acceleration of $5ms^{-2}$, over a time of 4 seconds, the distance covered by the ball in the first second is $5m$. In the 2nd second it will be $5 + 5 = 10m$. In the ... 2answers 775 views ### What do I need to do to find the stopping time of a decelerating car? [closed] The question is: A car can be stopped from initial velocity 84 km/h to rest in 55 meters. Assuming constant acceleration, find the stopping time. Sorry for my ignorance, but I need to review ... 2answers 177 views ### Why wouldn't this system of equations determine where two balls meet? A ball is thrown vertically upwards at $5\text{ m/s}$ from a roof top of $100\text{ m}$. The ball B is thrown down from the same point $2\text{ s}$ later at $20\text{ m/s}$. Where and when will ... 1answer 122 views ### I think I disprove this with kinematics, but energy says it is right! Here is my kinematics argument. For now I am only going to look at ball 2 and ball 3. Make note of the following data. $|v_0| = 10m/s$, $y_0 = 10m$, $\theta_2^0 = 30^0$, $\theta_3^0 = -45^0$, $g =$ ... 2answers 213 views ### Using acceleration to plot position Sorry if this question is dumb, and I know it's physics 101, but I'm not that good with physics. 
I'm writing an iPhone program that by collecting the acceleration data of the device tries to replicate ... 3answers 581 views ### Very basic question: When to use $s=vt$, $s=1/2vt$, $s=at$ and $s=a/t^2$? Very basic question: When to use $s=vt$, $s=\frac{1}{2}vt$, $s=at$ and $s=\frac{a}{t^2}$? What is the difference between those?
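The last question in the list has a one-line answer: $s=vt$ holds only for constant velocity; under constant acceleration the SUVAT relations apply, $v=u+at$, $s=ut+\frac{1}{2}at^2$, and the time-free form $v^2=u^2+2as$. A small sketch (the first set of numbers is made up) cross-checks these relations and then solves the stopping-time question from the list (84 km/h to rest in 55 m):

```python
# Constant-acceleration (SUVAT) relations, cross-checked numerically.
u, a, t = 5.0, 2.0, 3.0                          # initial velocity, acceleration, time

v = u + a * t                                    # final velocity
s = u * t + 0.5 * a * t * t                      # displacement
assert abs(s - 0.5 * (u + v) * t) < 1e-12        # s = (average velocity) * t
assert abs(v * v - (u * u + 2 * a * s)) < 1e-9   # time-free form v^2 = u^2 + 2as

# "A car can be stopped from 84 km/h to rest in 55 m; find the stopping time."
v0 = 84.0 / 3.6                                  # 84 km/h in m/s
s_stop = 55.0
a_stop = -v0 * v0 / (2 * s_stop)                 # from 0 = v0^2 + 2 a s
t_stop = -v0 / a_stop                            # from 0 = v0 + a t
print(round(t_stop, 2))                          # 4.71 (equivalently t = 2 s / v0)
```

Notice that no constant-acceleration problem ever needs more than these three relations; each one just eliminates a different variable.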
http://en.wikipedia.org/wiki/Unrestricted_grammar
# Unrestricted grammar In formal language theory, an unrestricted grammar is a formal grammar in which no restrictions are made on the left and right sides of the grammar's productions. This is the most general class of grammars in the Chomsky–Schützenberger hierarchy, and can generate arbitrary recursively enumerable languages. ## Formal definition An unrestricted grammar is a formal grammar $G = (N, \Sigma, P, S)$, where $N$ is a set of nonterminal symbols, $\Sigma$ is a set of terminal symbols, $N$ and $\Sigma$ are disjoint (actually, this is not strictly necessary, because unrestricted grammars make no real distinction between nonterminal and terminal symbols; the designation exists purely so that one knows when to stop when trying to generate sentential forms of the grammar), $P$ is a set of production rules of the form $\alpha \to \beta$ where $\alpha$ and $\beta$ are strings of symbols in $N \cup \Sigma$ and $\alpha$ is not the empty string, and $S \in N$ is a specially designated start symbol. As the name implies, there are no real restrictions on the types of production rules that unrestricted grammars can have. ## Unrestricted grammars and Turing machines It may be shown that unrestricted grammars characterize the recursively enumerable languages. This is the same as saying that for every unrestricted grammar $G$ there exists some Turing machine capable of recognizing $L(G)$ and vice-versa. Given an unrestricted grammar, such a Turing machine is simple enough to construct, as a two-tape nondeterministic Turing machine. The first tape contains the input word $w$ to be tested, and the second tape is used by the machine to generate sentential forms from $G$. The Turing machine then does the following: 1. Start at the left of the second tape and repeatedly choose to move right or select the current position on the tape. 2. Nondeterministically choose a production $\beta \to \gamma$ from the productions in $G$. 3. 
If $\beta$ appears at some position on the second tape, replace $\beta$ by $\gamma$ at that point, possibly shifting the symbols on the tape left or right depending on the relative lengths of $\beta$ and $\gamma$ (e.g. if $\beta$ is longer than $\gamma$, shift the tape symbols left). 4. Compare the resulting sentential form on tape 2 to the word on tape 1. If they match, then the Turing machine accepts the word. If they don't, go back to step 1. It is easy to see that this Turing machine will generate all and only the sentential forms of $G$ on its second tape after the last step is executed an arbitrary number of times; thus the language $L(G)$ must be recursively enumerable. The reverse construction is also possible. Given some Turing machine, it is possible to create an unrestricted grammar. ## Computational properties The decision problem of whether a given string $s$ can be generated by a given unrestricted grammar is equivalent to the problem of whether it can be accepted by the Turing machine equivalent to the grammar. The latter problem is undecidable, like the halting problem. The equivalence of unrestricted grammars to Turing machines implies the existence of a universal unrestricted grammar, a grammar capable of accepting any other unrestricted grammar's language given a description of the language. For this reason, it is theoretically possible to build a programming language based on unrestricted grammars (e.g. Thue). ## References • Hopcroft, John; Ullman, Jeffrey D. (1979). Introduction to Automata Theory, Languages, and Computation (1st ed.). Addison-Wesley. ISBN 0-201-44124-1.
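The nondeterministic rewriting procedure above can be simulated directly: the sketch below does a breadth-first search over sentential forms for a small unrestricted grammar generating $\{a^nb^nc^n : n \ge 1\}$ (a textbook example; this particular grammar is an assumption, not taken from the article). Pruning by length is sound here only because every production shown is non-contracting; for a general unrestricted grammar the search may never terminate, mirroring the fact that membership is only semi-decidable.

```python
from collections import deque

# An unrestricted grammar for { a^n b^n c^n : n >= 1 }.
RULES = [
    ("S", "aSBC"), ("S", "aBC"),
    ("CB", "BC"),                     # the genuinely non-context-free step
    ("aB", "ab"), ("bB", "bb"),
    ("bC", "bc"), ("cC", "cc"),
]

def derives(target, rules=RULES, start="S"):
    """Breadth-first search over sentential forms, as in the two-tape
    machine above: pick a production, pick an occurrence, rewrite."""
    seen, queue = {start}, deque([start])
    while queue:
        form = queue.popleft()
        if form == target:
            return True
        for lhs, rhs in rules:
            i = form.find(lhs)
            while i != -1:
                new = form[:i] + rhs + form[i + len(lhs):]
                # Length pruning is valid because len(lhs) <= len(rhs)
                # for every rule here (the grammar is non-contracting).
                if len(new) <= len(target) and new not in seen:
                    seen.add(new)
                    queue.append(new)
                i = form.find(lhs, i + 1)
    return False

print([derives(w) for w in ["abc", "aabbcc", "aaabbbccc", "aabc"]])
# [True, True, True, False]
```

For instance, `aabbcc` is reached via S → aSBC → aaBCBC → aaBBCC → aabBCC → aabbCC → aabbcC → aabbcc, exactly the kind of derivation the machine's second tape would hold.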
http://mathoverflow.net/questions/55824/is-there-any-efficient-way-to-compute-the-follow-matrix-equations-easily/55827
## Is there any efficient way to compute the following matrix expression? Let $A$ and $D$ be $n\times n$ diagonal matrices, and let $B$ be an $n\times n$ orthogonal matrix. Is there any efficient way to compute the following matrix expression? $\sum_{i=0}^{k} A^i \cdot B^T \cdot D \cdot B \cdot A^i$ - ## 3 Answers OK, here's my take on this calculation: first of all, observe that if $E$ is any $n \times n$ square matrix, and $\Lambda$ is the diagonal matrix $diag(\lambda_{1}, \lambda_{2}, \ldots, \lambda_{n})$, then $E \Lambda$ simply multiplies the $j$-th column of $E$ by $\lambda_{j}$ for each $j$, $1 \le j \le n$. Likewise, $\Lambda E$ multiplies the $l$-th row of $E$ by $\lambda_{l}$. Taking $A = diag(a_{1}, a_{2}, \ldots, a_{n})$, we have $A^{k} = diag(a_{s}^{k})$, where I have made the (hopefully) obvious abbreviation of notation. So $A^{m}EA^{m}$ multiplies the $lj$ entry of $E$, call it $e_{lj}$ (so that $E = [e_{lj}]$), by $a_{l}^{m}a_{j}^{m} = (a_{l}a_{j})^m$: $A^{m}EA^{m} = [e_{lj}(a_{l}a_{j})^{m}]$, so $\sum_{i=0}^{i=k}A^{i}EA^{i} = [e_{lj}\sum_{i=0}^{i=k}(a_{l}a_{j})^i]$. A further simplification may be obtained by observing that $\sum_{i=0}^{i=k}(a_{l}a_{j})^{i} = ((a_{l}a_{j})^{k+1} -1)/(a_{l}a_{j} - 1)$ if $a_{l}a_{j} \ne 1$, and $\sum_{i=0}^{i=k}(a_{l}a_{j})^{i} = k + 1$ if $a_{l}a_{j} = 1$. Now consider $B^{T}DB$; since $D$ is diagonal, we can group it with either $B^{T}$ or $B$ and use the same trick applied above to $\Lambda E$, $E \Lambda$ to simplify one of the matrix multiplications. But it seems like our luck ends there; you'll just have to do plain old-fashioned matrix multiplication on either $(B^{T}D)B$ or $B^{T}(DB)$ (your choice). Then take $E = B^{T}DB$ and proceed as described above to get the sum. 
Not sure about the complexity of this method: there's at least one full matrix multiply involved in calculating $B^{T}DB$, which is $O(n^{3})$; the rest of it looks like it might be $O(kn^{2})$, since we have eliminated the need for complete matrix multiplication in favor of multiplying each element by one value. Just guessing here, but it's pushing 3AM and I've had to wrestle with MathJax tonight, so this took longer to type than anticipated. Too sleepy to think much more . . . but, complexity issues aside, there's another way to look at it. BTW, it seems that Gerry Myerson's idea can be patched up by setting $C = A^{k+1}B^{T}DBA^{k+1} - B^{T}DB$, i.e., subtracting off the $i = 0$ term. @Peter: $AXA - X = C$ is a linear system in the entries of $X$; the solution is standard. The technique I described avoids some of the problems which might arise in linear system solution, i.e. ill-conditioning of the coefficient matrices. There's a ton of literature on this; I suggest googling around a bit. - Let $S(j)$ denote the partial sum $S(j) = \sum_{i=0}^{j} A^i B^T D B A^i$. Then $S(2j-1) = S(j-1) + A^j S(j-1) A^j$. So you can arrange the work so that it requires $O(\log k)$ multiplications. - I'm going to assume that the problem is that $k$ might be large (and that "effective" was supposed to be "efficient", and "digonal" was supposed to be "diagonal"). Let's call the sum $X$. Then $AXA-X=A^{k+1}B^TDBA^{k+1}=C$, say, is easy to calculate. Furthermore, it's easy to relate the entries of $AXA$ to those of $X$, so it's easy to solve $AXA-X=C$ for $X$. - 1 I think you forgot to account for the $i=0$ term. To see this, note that the equation you write down has no solution when $A$ is the identity matrix. – alex Feb 18 2011 at 6:42 Is it easy to solve $AXA-X=C$? Could you give materials related to this formula? 
– Peter Feb 18 2011 at 9:11 @alex, absolutely right, I should have written $A^{k+1}B^TDBA^{k+1}-B^TDB$. @Peter, this site caters for mathematicians doing research. I expect any research mathematician to be able to work out how $AXA$ relates to $X$ when $A$ is diagonal and go from there. If you can't, you've come to the wrong site. Have a look at the faq, and the alternative websites mentioned therein. – Gerry Myerson Feb 18 2011 at 11:25
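The first answer's elementwise formula is easy to sanity-check numerically. The sketch below (sizes, seed, and value ranges are arbitrary choices) compares the naive $O(kn^3)$ evaluation with the closed form: one matrix product for $E = B^TDB$, followed by an elementwise geometric-series scaling whose cost does not depend on $k$ at all.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 12
a = rng.uniform(0.5, 1.5, n)                    # diagonal of A
d = rng.normal(size=n)                          # diagonal of D
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))    # a random orthogonal B
A, D = np.diag(a), np.diag(d)

# Naive evaluation: k+1 explicit terms, each with full matrix products.
X_naive = sum(
    np.linalg.matrix_power(A, i) @ Q.T @ D @ Q @ np.linalg.matrix_power(A, i)
    for i in range(k + 1)
)

# Closed form: E[l,j] times the geometric sum of (a_l * a_j)^i, i = 0..k.
E = Q.T @ D @ Q
P = np.outer(a, a)                              # P[l, j] = a_l * a_j
with np.errstate(divide="ignore", invalid="ignore"):
    G = np.where(np.isclose(P, 1.0), k + 1, (P ** (k + 1) - 1) / (P - 1))
X_fast = E * G

print(np.allclose(X_naive, X_fast))             # True
```

The `np.where` branch handles the $a_l a_j = 1$ case from the answer; `errstate` just silences the harmless division warning from the branch that gets discarded.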
http://mathoverflow.net/questions/500/finite-groups-with-the-same-character-table/502
## Finite groups with the same character table Say I have two finite groups G and H which aren't isomorphic but have the same character table (for example, the quaternion group and the symmetries of the square). Does this mean that the corresponding categories of finite-dimensional complex representations are isomorphic (ignoring the forgetful functor to vector spaces), or just that the corresponding representation rings are? - 1 Isomorphic categories would be surprising. Maybe what you want to ask is whether they have equivalent categories of representations? – Mikael Vejdemo-Johansson Oct 14 2009 at 17:58 In the particular case of $D_8$ and $Q_8$, if you look at the values of the Adams $\psi^k$ operations on the characters you get different results. So, as mentioned below, the character table is a shadow of what is going on. Note that those operations do tell you something about the category of finite-dimensional representations, which is obvious from their definition. – Sean Tilson Nov 18 2010 at 0:47 ## 5 Answers In the particular case of the non-abelian groups of order 8, their categories of modules are not equivalent as monoidal categories. That they're not equivalent as pivotal categories can be proved by looking at the Frobenius-Schur indicator (I learned this from a paper of Susan Montgomery). That they're not equivalent even as monoidal categories can be proved by counting the fiber functors to vector spaces and seeing that there are more in one case (I can't remember which paper I saw this in, but almost surely Pavel Etingof was one of the coauthors). - 2 "(I can't remember which paper I saw this in, but almost surely Pavel Etingof was one of the coauthors)." Right, because that narrows it down a lot. – Ben Webster♦ Oct 14 2009 at 18:46 I'm also willing to bet that the coauthors on the paper are a nonempty strict subset of Gelaki, Nikshych, and Ostrick. 
– Noah Snyder Oct 14 2009 at 19:41 This is a great question, and the answer leads to one of the best arguments for why category theory should be studied at all! Every undergraduate mathematician should discover for themselves that character tables alone don't determine finite groups --- and then, just as their faith in the beauty of mathematics is about to shatter, they should be reassured that character tables are just a 'shadow' of the group's compact monoidal category of representations, and that DOES determine the group (or in general, groupoid). The procedure for reconstructing a groupoid, up to equivalence, from its category of unitary complex representations, is stunningly beautiful: if G is our groupoid and Rep(G) is its representation category, then construct the groupoid which has objects given by symmetric monoidal functors Rep(G)-->Rep(1), and morphisms given by monoidal natural transformations between them. Here, Rep(1) is just the category of representations of the trivial group --- in other words, just the category of finite-dimensional Hilbert spaces, with monoidal structure given by tensor product. This is known as "Doplicher-Roberts style" reconstruction, and the best reference is Muger's appendix to this paper. It's more elegant than "Tannakian" reconstruction, as there's no need to start with a given fiber functor (i.e., a specified functor Rep(G)-->Rep(1)). This should remind you strongly of the way you recover a compact topological space from the commutative C*-algebra of functions from that space into the complex numbers ... and there are indeed deep connections! - They're not necessarily equivalent as tensor categories. 
However, there are examples of finite groups (smallest of order 64) with representation categories which are equivalent as tensor categories but not as symmetric tensor categories (see e.g. http://arxiv.org/abs/math/0007196). In other words, in some cases the same abstract tensor category might be endowed with inequivalent symmetric structures (you can think of these as the pullback of the standard symmetry of the category of vector spaces through inequivalent embedding functors). - What structure do you want to remember on the categories? If you just remember they're abelian categories, any groups with the same number of conjugacy classes will have equivalent categories. On the other hand, if you remember the forgetful functor to vector spaces, you can get the group back: it's the automorphism group of the forgetful functor itself. By the way, you get more than just that the representation rings are isomorphic (if you tensor with Q, the isomorphism type of the representation ring also only depends on the number of conjugacy classes), but with the same basis, which is much stronger. - As far as I know, if you consider the corresponding categories as Tannakian categories, they are not isomorphic, for you can rediscover the group from the Tannakian category (as its fundamental group?) -
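Noah Snyder's Frobenius-Schur point is concrete enough to compute directly: for the two-dimensional irreducible representation, the indicator $\nu(\chi)=\frac{1}{|G|}\sum_g \chi(g^2)$ equals $+1$ for the symmetries of the square and $-1$ for the quaternion group, even though the two character tables agree. A small sketch (the matrix generators are standard choices made here for illustration):

```python
import numpy as np

def group_closure(gens):
    """Generate a finite matrix group from a list of generators."""
    elems = [np.eye(gens[0].shape[0], dtype=complex)]
    frontier = list(elems)
    while frontier:
        new = []
        for g in frontier:
            for h in gens:
                gh = g @ h
                if not any(np.allclose(gh, e) for e in elems):
                    elems.append(gh)
                    new.append(gh)
        frontier = new
    return elems

def fs_indicator(elems):
    """Frobenius-Schur indicator of the representation the matrices
    themselves give: (1/|G|) * sum over g of chi(g^2)."""
    return sum(np.trace(g @ g) for g in elems).real / len(elems)

# Symmetries of the square (order 8), via the standard 2-d real representation.
r = np.array([[0, -1], [1, 0]], dtype=complex)   # 90-degree rotation
s = np.array([[1, 0], [0, -1]], dtype=complex)   # reflection
# Quaternion group Q8, via 2x2 complex matrices for the units i and j.
qi = np.array([[1j, 0], [0, -1j]])
qj = np.array([[0, 1], [-1, 0]], dtype=complex)

D4, Q8 = group_closure([r, s]), group_closure([qi, qj])
print(len(D4), len(Q8))                      # 8 8
print(fs_indicator(D4), fs_indicator(Q8))    # 1.0 -1.0
```

The indicator only needs the values $\chi(g^2)$, which the character table by itself does not record --- that is exactly the "shadow" phenomenon the comments describe.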
http://math.stackexchange.com/questions/3122/showing-frac1x-left-frac1x-right-is-riemann-integrable
# Showing $\frac{1}{x}-\left[\frac{1}{x}\right]$ is Riemann Integrable

Some days ago I had asked how to test the Riemann integrability of a function. Now I was recently given this question about proving that the given function is Riemann integrable. How can I show whether this function is Riemann integrable on the interval $[0,1]$? I tried using partitions, but it didn't work. I don't want to use the Riemann-Lebesgue lemma here, as I want to understand the methodology behind selecting the partitions.

- 2 @Chandru1: The Riemann-Lebesgue lemma has nothing to do with this. – AD. Aug 23 '10 at 14:12

9 @Chandru1: Here's a hint. First, choose $\epsilon>0$ small. Then partition $[0,1]$ into $[0, \epsilon] \cup [\epsilon, 1]$. The difference of the lower and upper sums on the first interval is at most $\epsilon$ because this is a function taking values in the unit interval. On $[\epsilon, 1]$, you have finitely many jump discontinuities, so one may construct a fine partition of this interval on which the upper and lower sums are close. – Akhil Mathew Aug 23 '10 at 14:30

4 @damiano, usually Riemann integrals are defined w.r.t. finite partitions, and the function in the question has infinitely many discontinuities. – Mariano Suárez-Alvarez♦ Aug 24 '10 at 0:29

## 3 Answers

The function is Riemann-integrable because it is bounded (it takes values in $[0,1)$) and has only countably many discontinuities, namely the points of the form $\frac{1}{n}$ and $0$. This uses Lebesgue's criterion for Riemann integrability, which is probably what you meant by the Riemann-Lebesgue lemma and hence, unfortunately, didn't want to use. As for doing it by hand with partitions, try Akhil Mathew's hint above.

- To show that the function is Riemann-integrable, you have to show that it is bounded and has at most a countable set of discontinuities in the interval over which you are integrating.
But this is the case of an improper integral: since $\frac{1}{x}$ is not defined at $x = 0$, you have to follow the criteria for improper integrals, which require the existence and finiteness of the limit $\lim_{t\rightarrow 0^+} \int_t^1 \frac{1}{x}\, dx$. Assuming that by $\left[\frac{1}{x}\right]$ you mean the integer part of $\frac{1}{x}$ (the closest integer less than or equal to it), we can always assume that $\left[\frac{1}{x}\right]$ is zero, because whether $x$ is less than or greater than zero, you always have a number less than one, the integer part of which is always zero. So your integral is the same as $\int_0^1 \frac{1}{x}\, dx - \int_0^1 \left[\frac{1}{x}\right] dx = \int_0^1 \frac{1}{x}\, dx$, which does not converge on that interval.

EDIT: prompt comments showed that I'm in way over my head... Sorry for the wrong answer.

- If $x$ is in $(0,1]$ then $\frac{1}{x}$ is in $[1,\infty)$ and $\left[\frac{1}{x}\right]$ could be anything. – Rasmus Aug 23 '10 at 14:23

You're right, I'm an idiot... I'll correct. – Andy Aug 23 '10 at 14:27

Let $\epsilon > 0$ be given. There is a positive integer $n_{0}$ such that $\frac{1}{n} < \frac{\epsilon}{2}$ for $n > n_{0}$. Choose a partition $P$ determined by
$$0 = x_{0} < x_{1}= \frac{1}{n_{0}+1} < x_{2} < \cdots < x_{n'_{0}}= \frac{1}{n_{0}} < x_{n'_{0}+1} < \cdots < x_{n'_{1}}= \frac{1}{n_{0}-1} < \cdots < x_{n'_{n_{0}-1}}=1$$
such that
$$x_{i} - x_{i-1} < \frac{\epsilon}{4n_{0}} \quad \text{for } i \geq 2.$$
Then
$$U(P,f) - L(P,f) = \frac{1}{n_{0}+1} + \sum\limits_{i=2}^{n'_{0}} (M_{i}-m_{i})(x_{i}-x_{i-1}) + \sum\limits_{k=0}^{n_{0}-2} \sum\limits_{i=n'_{k}+1}^{n'_{k+1}}(M_{i}-m_{i})(x_{i}-x_{i-1}) < \frac{\epsilon}{2} + 2n_{0} \cdot \frac{\epsilon}{4n_{0}} = \epsilon.$$

-
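Akhil Mathew's hint from the comments can also be sanity-checked numerically. The sketch below is my own illustration, not from the thread (the function and variable names are invented): it estimates $U(P,f) - L(P,f)$ for the partition consisting of the single cell $[0,\epsilon]$ followed by $n$ equal cells on $[\epsilon,1]$, approximating the sup and inf on each cell by sampling rather than using the piecewise-monotone structure of $f$ exactly.

```python
import math

def f(x):
    """The integrand: f(x) = 1/x - floor(1/x), i.e. the fractional
    part of 1/x; the value at x = 0 is irrelevant, so set it to 0."""
    if x == 0.0:
        return 0.0
    y = 1.0 / x
    return y - math.floor(y)

def darboux_gap(eps, n, samples=50):
    """Approximate U(P,f) - L(P,f) for the partition [0, eps] plus
    n equal cells on [eps, 1]. The sup/inf on each cell are estimated
    by sampling, so this is an approximation, not an exact Darboux sum."""
    gap = eps  # on [0, eps]: 0 <= f < 1, so that cell contributes at most eps
    h = (1.0 - eps) / n
    for i in range(n):
        a = eps + i * h
        vals = [f(a + j * h / (samples - 1)) for j in range(samples)]
        gap += (max(vals) - min(vals)) * h
    return gap

if __name__ == "__main__":
    for n in (100, 1000, 10000):
        print("n =", n, " approximate U - L =", round(darboux_gap(0.01, n), 4))
```

The printed gap shrinks as the mesh on $[\epsilon,1]$ is refined, and the residual $\epsilon$ term can be made as small as desired by shrinking $\epsilon$ --- exactly the behaviour that the partition argument in the accepted-style answer formalizes.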