http://mathoverflow.net/questions/13660?sort=votes
## A comprehensive functor of points approach for manifolds

This seems unrealistic, because the topology on a manifold doesn't have anything to do with the properties of its structure sheaf, but I figured I might as well ask. This wouldn't be the first time I was pleasantly surprised about something like this. If not, is there any sort of way to attack differential geometry with abstract nonsense? Even though schemes have singularities, "it's better to work with a nice category of bad objects than a bad category of nice objects". Manifolds seem to be a perfect illustration of this fact.

Edit: Apparently my question wasn't clear enough. The actual question here is whether we can define manifolds entirely as "functors of points", as we can with schemes (sheaves on the affine Zariski site). There is no fully categorical and algebraic description of the category of smooth manifolds. When I say a "comprehensive functor of points approach", I mean a fully categorical description of the category of smooth manifolds.

- 6 What is the precise question? – Martin Brandenburg Feb 1 2010 at 11:46
- 6 Harry, I don't know what you mean by "doesn't have anything to do with properties of its structure sheaf"... a few reasons: one, to be a manifold means that the structure sheaf is locally the same as on $\mathbb{R}^n$, and two, the fact that for manifolds all structure sheaves are fine (admit partitions of unity) is extremely important to their study. Additionally, Yoneda doesn't care what category you're in, so there is a functor of points and it uniquely determines the manifold, so your question (what I can interpret from it...) seems to have the answer: yes. – Charles Siegel Feb 1 2010 at 12:46
- The global Zariski topology is directly constructed from the data in CRing, then flipped around in CRing^op. Obviously, you can't build up "the Hausdorff topology" on CRing^op. At least I don't see how you could do it while still using ring data. – Harry Gindi Feb 1 2010 at 13:27
- 1 You can build the Hausdorff topology on R^n from ring data. If A is the ring $C^{\infty}(\mathbb{R}^n)$, then R^n is Hom(A, R). A subset K of R^n is closed if and only if there is some f in A such that K = { h in Hom(A,R) such that h(f)=0 }. The analysis lemma I'm using is that, for every closed subset of R^n, there is a $C^{\infty}$ function which vanishes on precisely that set. – David Speyer Feb 1 2010 at 14:37
- 2 I'm still confused by your question. My best reading now is: "Is the category of manifolds = Sheaves(some site)", in which case my answer is "No, but so what?". – Andrew Stacey Feb 1 2010 at 15:03

## 4 Answers

Here are two things that I think are relevant to the question. First, I want to support Andrew's suggestion #5: synthetic differential geometry. This definitely constitutes a "yes" to your question

> is there any sort of way to attack differential geometry with abstract nonsense?

--- assuming the usual interpretation of "abstract nonsense". It's also a "yes" to your question

> Can we describe it as some subcategory of some nice Grothendieck topos?

--- assuming that "it" is the category of manifolds and smooth maps. Indeed, you can make it a full subcategory. Anders Kock has two nice books on synthetic differential geometry. There's also "A Primer of Infinitesimal Analysis" by John Bell, written for a much less sophisticated audience.
And there's a brief chapter about it in Colin McLarty's book "Elementary Categories, Elementary Toposes", section 23.3 of which contains an outline of how to embed the category of manifolds into a Grothendieck topos.

Second, it's almost a categorical triviality that there is a full embedding of Mfd into the category $\mathbf{Set}^{U^{\mathrm{op}}}$, where $U$ is the category of open subsets of Euclidean space and smooth embeddings between them. The point is this: $U$ can be regarded as a subcategory of Mfd, and then every object of Mfd is a colimit of objects of $U$. This says, in casual language, that $U$ is a dense subcategory of Mfd. But by a standard result about density, this is equivalent to the statement that the canonical functor Mfd $\to \mathbf{Set}^{U^{\mathrm{op}}}$ is full and faithful. So, Mfd is equivalent to a full subcategory of $\mathbf{Set}^{U^{\mathrm{op}}}$. There's a more relaxed explanation of that in section 10.2 of my book Higher Operads, Higher Categories, though I'm sure the observation isn't original to me.

- I've been reading your book, but I'm not that far yet. – Harry Gindi Feb 1 2010 at 23:41
- For what it's worth, the bit I was referring to (10.2.1 to 10.2.6) doesn't require anything earlier in the book. – Tom Leinster Feb 2 2010 at 21:52

I'm having a hard time understanding what the actual question is here, but the little I do get suggests that you start reading as follows:

1. generalised smooth space on the nLab
2. If you are particularly inclined towards sheaves, then read about Chen spaces and diffeological spaces (and be sure to take in Convenient Categories of Smooth Spaces while you are at it).
3. If you are a little more ambivalent about sheaves and just want to embed manifolds in a "nice" category, be sure to take in my personal favourite.
4. If you really want a comparison of the lot, consider wading through the extremely murky Comparative Smootheology.
5. Going further afield, there's synthetic differential geometry.

Added later: (Edit inserted here so that the last line of the original post remains the last line of the edited post.) Two (minor) thoughts after having read Tom's answer:

1. The "site" Tom uses is bigger than necessary. I'm no expert on the categorical side of things, but subject to checking a few details, you can work with just the monoid $C^\infty(\mathbb{R},\mathbb{R})$ viewed as a one-object category. The point is that although it seems as though you need open sets of all dimensions, actually manifolds are determined completely by their smooth curves. So if you want a topos in which manifolds sit, sheaves on $C^\infty(\mathbb{R},\mathbb{R})$ will do. Of course, if you want your category to have other things as well, then other sites may be more appropriate. See the extensive discussions on this on the nLab and nCafe.
2. Whenever you embed manifolds in a topos, you are going to break something. There is no way to embed manifolds in a topos and have the subcategory of manifolds behave exactly as the category of manifolds does. In brief, if you want to have a locally cartesian closed category then your embedding cannot preserve colimits. That is, there will be some diagrams in manifolds that have colimits in manifolds but have different colimits in your topos. For more details, see the nLab page on Froelicher spaces.
Finally, I disagree with the sentiment behind: "it's better to work with a nice category of bad objects than a bad category of nice objects". Manifolds seem to be perfect illustration of this fact. I much prefer: Manifolds are fantastic spaces. It’s a pity that there aren’t more of them. - I hadn't heard the second one. It's a good one though. – Harry Gindi Feb 1 2010 at 13:12 2 I'm going to remember that last line, if nothing else! +1 – Charles Siegel Feb 1 2010 at 16:33 Just so you know, Andrew, I'm still following this, so don't think that your posts are going unread. =) – Harry Gindi Feb 2 2010 at 8:56 I certainly wouldn't call it an "attack with abstract nonsense", but the "functor of points" language can also be used in the context of differentiable manifolds. A very good place to start would be Weil's article from 1953: "Théorie des points proches sur les variétés différentiables" While I imagine it doesn't specifically contain the phrase "functor of points", the idea is precisely the same. Weil's article also sheds some light (for algebraically oriented people like myself) on the "prolongations" studied much earlier by Cartan. - As far as abstract nonsense goes there is Joyce's approach to manifolds, yielding $C^\infty$-schemes and $C^\infty$-stacks. http://arxiv.org/abs/1001.0023 I don't suppose it has much to do with the functor of points, but it enlarges the category of manifolds in order to gain fibre products, and thus some intersection theory on manifolds. -
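For readers who want the embedding described in Tom Leinster's answer above written out in symbols, here is one compact and slightly informal restatement (an editorial sketch, using his category $U$ of open subsets of Euclidean spaces and smooth embeddings). The canonical functor is

$$ \mathrm{Mfd} \longrightarrow \mathbf{Set}^{U^{\mathrm{op}}}, \qquad M \longmapsto \mathrm{Mfd}(-, M)\big|_{U^{\mathrm{op}}}, $$

and the density of $U$ in Mfd, i.e. the fact that every manifold is canonically a colimit of the Euclidean open sets mapping into it,

$$ M \;\cong\; \operatorname{colim}_{(V \to M)\,\in\, U/M} V, $$

is exactly the condition under which this functor is full and faithful, which is the standard density result that answer appeals to.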
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9529683589935303, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/46267/the-four-clock-special-relativity-conundrum/48456
# The Four-Clock Special Relativity Conundrum

Two open-car trains approach each other at fixed velocities. Each has a radar to see how quickly the other train is approaching, but apart from that the trains have no a priori knowledge of each other. Each train has engineers on its first and last cars. Each engineer has an atomic clock and a laser for communicating with their partner on the same train.

Long before the trains meet, the engineers on each train use their lasers to adjust their relative separations to exactly one kilometer. Using the synchronization procedure first defined by Einstein, the engineers also exchange time data and use it to adjust their atomic clocks until they are precisely synchronized within their shared frame.

The trains meet. The clocks are positioned so that they almost touch as they pass. At the moment of nearest contact, they exchange and record each other's values (timestamps). Some time later, the clocks on the trailing cars meet and perform the same procedure.

The delay between the leading and trailing data events can now be measured in two ways. If the two timestamps taken from the first train are compared, the result is the time between the events as measured from the first train. This value is meaningful because the engineers on that train previously synchronized their clocks and stayed within the same frame of reference at all times. If the timestamps from the second train are used, a similar but distinct measurement of the time between the two events can be obtained.

Now, three questions:

1. Is there anything wrong or impossible with this experimental setup? If so, what is it?
2. If you accept the experiment as realistic and meaningful, will the delays calculated from the perspective of each of the two trains be the same, or different?
3. If you answered "different," what is the correct procedure for predicting in advance what the ratio of the two delays will be?

2013-01-10 - The Answer(s!)

I awarded the bounty to @FrankH for pages of excellent and educational work worth reading by anyone who wants to understand special relativity better. However, I've also taken the unusual step of re-allocating the answer designation to @MatthewMcIrvin. There are two reasons: (1) Matthew McIrvin was the first one to spot the importance of the symmetric-velocities frame, which others ended up gravitating (heh!) to; and (2) Matthew's answer is short, which is great for readers in a hurry. FrankH, sorry about the switch, but I didn't realize I could split them before.

So: If you are not overly familiar with special relativity and want to understand the full range of issues, I definitely recommend FrankH's answer. But if you already know SR pretty well and want the key insights quickly, please look at Matthew McIrvin's answer.

I will have more "proximate data exchange" SR questions sometime in the near future. Two frames turns out to be a very special case, but I had to start somewhere.

For folks with strong minds, stout hearts, and plenty of time to kill, a much more precise version of the above question is provided below. It requires new notations, alas. To reduce the growing size of this question, I have deleted all earlier versions of it. You can still find them in the change history.

Precise version of the 4-clock conundrum

Please see the first three figures below.
They define two new operators, the frame view and clock synchronization operators, that make the problem easier to state precisely:

Note that the four timestamps $\{T_{A1}, T_{B1}, T_{A2}, T_{B2}\}:>\{A,B\}$. That is, the four timestamps are shared and agreed to (with an accuracy of about 1 nanosecond in this case) by observers from both of the interacting frames A or B. Since T1 and T2 are historically recorded local events, this assertion can be further generalized to $\{T1,T2\}:>*$, where $*$ represents all possible frames. (And yes, $:>$ is two eyes looking left, while $<:$ is two eyes looking to the right.)

Using the four shared time stamps, define:

$T_{\Delta{A}} = T_{A2} - T_{A1}$

$T_{\Delta{B}} = T_{B2} - T_{B1}$

My main question is this: Assuming that $T_{\Delta{A}} = f(T_{\Delta{B}})$ exists, what is $f(x)$?

Analysis (why this problem is difficult)

Based purely on symmetry in the setup, the most obvious answer is $f(x)=x$, that is:

$T_{\Delta{A}} = T_{\Delta{B}}$

There are some interesting reasons to be troubled by that seemingly straightforward and even obvious conclusion, not the least of which is that it violates the whole concept of special relativity (the Dingle heresy). That is, if you assume $f(x)=x$ and follow that line of logic through to its logical conclusion, you quickly end up with time that flows the same for all frames -- that is, no relativity. Such a conclusion is in flat violation of over a century of very detailed experimental evidence, and so is just not supportable.

The following four figures show why it's so hard to assert that $T_{\Delta{A}} = T_{\Delta{B}}$ without violating special relativity. While the above figures accurately capture the reality of relativistic contraction of both distance and time, the problem in this case is simple: How do you decide which frame to select? The experiment as described can only give one outcome. Which one will it be?

- Can't you remove the complexity of having a y separation by assuming infinitely small clocks that can pass through each other? Then if you draw a spacetime diagram you see that if A1 and B1 coincide at some spacetime point, then A2 and B2 cannot coincide at a spacetime point. – twistor59 Dec 8 '12 at 9:15
- Yes on complexity. What I'm actually trying to do is show that the point-like model you just described can be approached infinitely closely, as a limit, by making the two clock pairs miss each other by shorter and shorter lateral distances. Using a limit instead of assuming points makes the experiment more "real" and avoids fallacies that could pop up from using idealized points. So, yes, it's more complex, but my goal as an experimental limit is very much what you just describe. (I'm not quite sure what you mean by your last "cannot coincide" phrase.) – Terry Bollinger Dec 8 '12 at 14:29
- Yes, ignore the last part of my comment, I was talking gibberish! – twistor59 Dec 8 '12 at 15:04
- 1 This question is about 2000 words long. Is there a way to ask it more concisely? – Mark Eichenlaub Dec 11 '12 at 5:29
- I was thinking your update time is a part of the question because you mention clock in your title... – hwlau Dec 11 '12 at 6:29

## 8 Answers

I'm not sure, but I think your objection here is that the times measured by these observers for the interval between the "A1 meets B1" and "A2 meets B2" events are the same, even though they're in frames that are moving relative to one another. So shouldn't there be some kind of time dilation? This is not a problem, though.
The familiar relativistic time dilation formula has to do with the time t' you'll measure between two events, relative to the time t between the events in the frame where they occur at the same location--that is, their rest frame, the frame in which a single stationary clock could mark time for the two events. $t'/t = \gamma$, the time dilation factor.

What is the rest frame defined by these two events? It's not the frame moving with either train. It's a third frame: the frame in which the two trains are moving in opposite directions with the same speed! (Call it the "center of velocity" frame.) In this frame, A1 passes B1 and A2 passes B2 at exactly the same place.

Relative to this center-of-velocity frame, both train frames are moving at the same speed, just in opposite directions. So the elapsed time between the two events gets time-dilated to exactly the same extent to observers on the two trains. The measured time won't be the same in every frame; but it will be the same in those two particular frames!

- Matthew, first, thanks for keeping your answer and logic nicely focused on the main question, and not getting into specific examples of math too quickly. Yes: $\forall\{A,B \mid A\neq B, A\parallel B, A\in\Phi, B\in\Phi\}$, $\exists O \mid v_{A:>O}=-v_{B:>O}$, where $\parallel$ indicates that the centers of mass of the objects of concern within the two frames are approaching towards or diverging from the same point in space (parallel paths), and $\Phi$ is the set of all possible frames. The difficulty is the one you mentioned: Only one such frame exists. Is it always selected? How? – Terry Bollinger Jan 6 at 4:58
- ALL: So far, Matthew McIrvin is the only one who has addressed the relevance of the center-of-velocity frame (what I've been calling $O$) for identifying the frame pairs. For those symmetric frame pairs, undoubtedly and without any equivocation, $T_{\Delta{A}}=T_{\Delta{B}}$. So, this seems to me a very nifty insight by Matthew, one that would be very good to address explicitly within any answer that asserts $T_{\Delta{A}}=T_{\Delta{B}}$. For myself, I'm currently trying very hard to understand if (or why) that is not the only solution set. Or alternatively: How do you avoid Dingling? – Terry Bollinger Jan 6 at 19:21
- Dr. Matt McIrvin has a good reputation for explanations. – Retarded Potential Jan 6 at 20:57
- Matthew McIrvin, from the odd way your name works (actually, how it does not work), I gather you are not a regular member of Physics SE. I'm not entirely sure how that works for points issues. I assume you are most likely the same Dr Matt McIrvin who did the black hole FAQ for sci.physics. Nice work, that! – Terry Bollinger Jan 9 at 3:59
- And if you get a chance, please take a look at my very last comment (for now) to FrankH. I intentionally skipped all of that -- the more-than-two frame cases -- for this question, since two frames is quite enough to start looking at the causal resolution oddities of unaccelerated proximate large-frame time stamp exchanges. But the bottom line is that the pure-time observer frames proliferate combinatorially with the number of occupied, unaccelerated, proximately-interacting frame states involved, so the $O$ set doesn't really solve anything. Which is to say: What an intriguing little mess!... – Terry Bollinger Jan 9 at 4:12

The problem is that although A1 and A2 think the A clocks are perfectly synchronized, and B1 and B2 think that the B clocks are perfectly synchronized, neither pair will agree that the other pair's clocks are synchronized at all. This is the relativity of simultaneity of Special Relativity. *In particular, the diagram in the original question (that diagram has now been removed) where all four clocks are showing 00:00 is wrong - no observer will agree that all four clocks are synchronized like that.*

It is almost always confusing to think about the time dilation and length contraction. It is always clearer and less ambiguous to draw a space-time diagram of the situation. In the following picture, which is drawn in the B1/B2 rest frame, I drew all the B world lines and clock readings in blue, all the A world lines and clock readings in red, and light signals in black dashed lines. This is for a relative speed of approximately 0.866c between the rest frames, to the best of my humble drawing abilities:

When B1 and A1 were at the same location (to within a foot) their clocks both said 00:00. The dotted lines are the light signals that always travel at speed "c" for all observers. The speed "c" is assumed to be drawn by lines at 45 degrees in this diagram. You can see that from the B1/B2 rest frame point of view, the A2 clock turns to 00:01 shortly after both A1 & B1 clocks turn to 00:00. However, from the B1/B2 point of view, it takes a long time for A2's clock to turn to 00:02. That is why B1/B2 thinks that the A1/A2 clocks are not synchronized - it's because A1 is running away from the light signal while A2 is running towards the light signal.

From the diagram you can also see that B1 thinks A1's clock is running at half speed, since when B1's clock reads 00:04, he predicts that A1's clock will read 00:02 in B1's rest frame, based on the timing of the light signals that A1 and A2 are passing back and forth. However, B1 will NOT think that A1 and A2's clocks are synchronized - A2 clicks over to the next second significantly sooner than A1's clock goes to the next second. Also, to B1 and B2 it looks like A1 and A2 are a lot closer together than 1 light second - in particular, A1 and A2 are about a half light second apart as measured by B's clocks.

There are 4 events where a red line crosses a blue line. The experimental results are the values of the two clocks that are crossing each other. Here are the approximate results as read out from the imperfectly drawn diagram and after interpolating the clock readings:

• A1 meets B1: A1=00:00.0, B1=00:00.0
• A2 meets B1: A2~00:01.2, B1~00:00.8
• A1 meets B2: A1~00:00.8, B2~00:01.2
• A2 meets B2: A2~00:01.7, B2~00:01.7

The counterintuitive thing about Special Relativity is that if you just swap all the A and B labels, you would get exactly the same figure from the point of view of A1 and A2's rest frame. So A1 and A2 will think that B's clocks are not synchronized, that B's clocks are running slow, and that B1 and B2 are a half light second apart. From this A <=> B switched diagram, which is the A1/A2 rest frame, we would then get the following experimental results:

• B1 meets A1: B1=00:00.0, A1=00:00.0
• B2 meets A1: B2~00:01.2, A1~00:00.8
• B1 meets A2: B1~00:00.8, A2~00:01.2
• B2 meets A2: B2~00:01.7, A2~00:01.7

These are the same results as in the B1/B2 rest frame diagram.
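The values in these two lists are read off a hand-drawn diagram (hence the "~" signs), and as FrankH notes in the comments below, they could also be calculated. As a cross-check, here is a short Python sketch of my own, not part of the original answer, that computes the readings by Lorentz transformation. It assumes exactly $v = 0.866c$ (so $\gamma \approx 2$), a rest length of one light-second per train, $c = 1$, and coordinates taken in A's rest frame with A1 at $x = 0$ and A2 at $x = -1$. The first and last crossings come out at 1.732 on both trains' clocks, while the exact middle-crossing values (about 1.15 and 0.58) differ somewhat from the ballpark diagram-read values above.

```python
import math

# Numeric cross-check of the clock readings listed above.
# Assumptions (mine): c = 1, v = 0.866 (gamma ~ 2), rest length L = 1
# light-second per train, coordinates in train A's rest frame, A1 at x = 0,
# A2 at x = -1, train B moving in the -x direction.
v = 0.866
gamma = 1.0 / math.sqrt(1.0 - v * v)   # ~2.0
L = 1.0

def b_clock(t, x):
    """Reading of a B clock present at the A-frame event (t, x).

    B's clocks are Einstein-synchronized in B's frame and zeroed so that B1
    reads 0 when it meets A1 at (0, 0), so the reading is just the B-frame
    time coordinate of the event: t' = gamma * (t - v_B * x) with v_B = -v.
    """
    return gamma * (t + v * x)

# The four crossing events, located in A's frame as (t, x):
events = {
    "A1 meets B1": (0.0, 0.0),              # by construction
    "A2 meets B1": (L / v, -L),             # B1 travels from 0 to -L
    "A1 meets B2": ((L / gamma) / v, 0.0),  # B2 starts at +L/gamma (contracted)
    "A2 meets B2": ((L + L / gamma) / v, -L),
}

for name, (t, x) in events.items():
    # A's clocks simply show A-frame coordinate time.
    print(f"{name}: A clock = {t:.3f}, B clock = {b_clock(t, x):.3f}")

# Both trains measure the same delay between the first and last crossings:
t_first = events["A1 meets B1"][0]
t_last = events["A2 meets B2"][0]
print("T_delta_A =", round(t_last - t_first, 3))
print("T_delta_B =", round(b_clock(*events["A2 meets B2"]) -
                           b_clock(*events["A1 meets B1"]), 3))
```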
Note that in both of these lists of four events, the 1st and 4th events are at timelike separations, so all observers will agree on the order of these events. However, the 2nd and 3rd events on both lists are at spacelike separations, so different observers will disagree about the ordering of those events. In fact, there are observers for whom these events would be seen as simultaneous. However, for all observers the values shown on the clocks that are passing each other at all four events will be the same consistent sets of values shown in these lists. That is about all there is to it. This is the theoretically predicted outcome of the experiment.

Update on 1/4/2012 for the updated question: Reading from the values shown above and using the new notation of the question: $T_{A1} = 0, T_{B1} = 0, T_{A2} = 1.7, T_{B2} = 1.7,$ and therefore: $T_{\Delta A} = T_{\Delta B}$

Basically, in a given observer's rest frame, he thinks his clock went from 0 to 1.7, but he thinks the other guy's trailing clock went from 0.85 to 1.7, so it is running at half speed. The 0.85 comes from the relativity of simultaneity of special relativity. In the observer's rest frame he will admit that the first of the other clocks started at 0, but that the second of the other clocks was already at 0.85 at that point, and thus the other guy's clocks are not synchronized.

- FrankH, thanks! I was with you and going "yeah!" right up until your experimental prediction at the end. Perhaps I am not interpreting your prediction correctly? You seem to be saying that A2 and B2 will record the same compact spacetime event -- a two-way exchange of data that takes place within 1 ft and 1 ns -- as having different outcomes depending on your frame of reference. So: Are you saying A1 will record the exchange as showing that B2 lags behind A1, while B2 will record the same event as showing that A1 lags behind B1? Am I understanding you rightly? – Terry Bollinger Dec 8 '12 at 14:20
- No, the outcomes are the same when recorded in the two rest frames of A1/A2 and B1/B2. I edited the answer to make this more explicit. Using the word "lags" is confusing because of time dilation. What can be measured is the values of the clocks. I also edited the answer to point out that your picture with all 4 clocks showing 00:00 is incorrect and misleading, which I think causes your confusion. – FrankH Dec 8 '12 at 15:35
- FrankH, thanks. Are you sure about those table entries? In any case, yes of course; Einstein simultaneity never looks simultaneous when it is viewed from another frame. That was Einstein's main reason for elaborating the concept precisely at the beginning of his first SR paper. Most SR diagrams tend simply to neglect Einstein simultaneity, e.g. by allowing solid objects to exhibit time slopes along their lengths -- a very odd concept, that. Adding continuous status data exchanges entangles Einstein simultaneity with long-term causal consistency, making SR problems more interesting. – Terry Bollinger Dec 8 '12 at 22:33
- I am sure that the 4 event clock value entries are wrong, but they are in the ball park of the right number. I just did approximately 0.866c slope for the A1/A2 velocity - it could easily be 0.8 or 0.9 as I drew it. Similarly, I am not absolutely sure about the spacing between the two red A1/A2 lines in the drawing. I moved A2 till I got approximately a $\gamma$=2 value where A1=00:02 when B1=00:04 and finally approximated the values of the clock for those events by just visually estimating off the drawing.
That is why I used "~" instead of "=". These numbers could be calculated...... – FrankH Dec 8 '12 at 23:02
- FrankH, again thanks, I really appreciate the effort and will look at it more carefully soon (can't right now). My concern was that, as best I've been able to tell, the two middle interactions are just as label-symmetric as the leading edge and trailing edge interactions are. So, from that I'm having trouble understanding where the (apparent?) asymmetry is creeping into your middle two entries. It's surprisingly easy to introduce unintended privileged-frame asymmetries into Minkowski diagrams, for the simple reason that we think (and draw) hyperbolic space from a very Euclidean viewpoint. – Terry Bollinger Dec 9 '12 at 3:01

By symmetry, the rear clocks read the same time as each other when they pass. This is possible because simultaneity is relative. For example, from frame A, the rear clock B is running slowly, but was also ahead of the front clock (even though the B clocks are synchronous in their own frame). Running slowly but with a head start, the clocks sync up when they cross.

By the way, your graphics make incorrect assumptions. First, you show the clocks being synchronized in a "neutral" frame where they all have the same speed. Next, you show clocks A still synchronized in their rest frame. This isn't correct; they can only be synchronized in one of these frames. See http://en.wikipedia.org/wiki/Relativity_of_simultaneity

Here is a full blow-by-blow. Let the event A1 = B1 be the mutual origin for all three reference frames. Let clocks A1 and B1 read zero at that event. Let the distance from A1 to A2 be 1 in its own frame, and likewise for B. Let the velocity of B in frame A be -0.866, so that $\gamma = 2$. We'll find the coordinates of all the clocks in all the frames. (x,t) denotes the space and time coordinates in that frame. "tA1 = 0" means that the time read on clock A1 is zero. The procedure is that we first find the coordinates in the frame we're working in. Then, if we want to know what a clock reads, that's equivalent to asking what the time coordinate of that event is in its own frame, so we Lorentz-transform into the clock's own frame to find what time it reads.

Frame A

In this frame, clocks A1 and A2 are stationary. B1 and B2 are moving at -0.866 and are separated by a distance 1/2 because of Lorentz contraction.

Event A1 = B1 and simultaneously with it:

A1 = (0,0)
B1 = (0,0)
A2 = (-1,0)
B2 = (1/2,0)

tA1 = 0
tB1 = 0
tA2 = 0 (synchronized with A1 in this frame)
tB2 = 0.866 (by Lorentz-transforming its coordinates)

Event A2 = B2:

A1 = (0, 1.732) (the time comes from the distance B2 had to move, 3/2, divided by its speed, 0.866)
B1 = (-1.5, 1.732) (travels the same 3/2 distance as B2)
A2 = (-1, 1.732) (same time as A1 by synchronization)
B2 = (-1, 1.732) (same event as A2)

tA1 = 1.732
tB1 = .866 (by Lorentz-transforming its coordinates)
tA2 = 1.732
tB2 = 1.732 (by Lorentz-transforming its coordinates)

As you can see, in this frame, B's clocks are both running slowly. However, B's clocks are not synchronized to each other in this frame. They are synchronized only in their own frame. In this frame, clock B2 runs ahead of clock B1. Since it's ahead but running slowly, it reads exactly the same time as A2 when they pass each other.

Frame B

In this frame clocks B1 and B2 are stationary. A1 and A2 are moving at 0.866 and are separated by a distance 1/2 because of Lorentz contraction.

Event A1 = B1 and simultaneously with it:

A1 = (0,0)
B1 = (0,0)
A2 = (-1/2,0)
B2 = (1,0)

tA1 = 0
tB1 = 0
tA2 = 0.866 (by Lorentz-transforming its coordinates)
tB2 = 0 (synchronized with B1 in this frame)

Event A2 = B2:

A1 = (1.5, 1.732)
B1 = (0, 1.732)
A2 = (-1, 1.732)
B2 = (-1, 1.732)

tA1 = .866
tB1 = 1.732
tA2 = 1.732
tB2 = 1.732

This frame is the same as frame A, but with the labels switched. Again, the clocks A, which were synchronized in their own frame, are not synchronized here. When we Lorentz-transform all the coordinates involved, we find that time dilation works exactly as expected.

Frame C

This is the neutral frame where each pair of clocks is coming in from the side. Their velocities work out to $\pm 0.577$ and $\gamma = 1.23$.

Event A1 = B1:

A1 = (0,0)
B1 = (0,0)
A2 = (-.816,0)
B2 = (.816,0)

tA1 = 0
tB1 = 0
tA2 = .577 (by Lorentz-transforming its coordinates to frame A)
tB2 = .577 (by Lorentz-transforming its coordinates to frame B)

Event A2 = B2:

A1 = (.816, 1.41) (x comes from A1-A2 distance. Time comes from distance moved/velocity)
B1 = (-.816, 1.41)
A2 = (0, 1.41)
B2 = (0, 1.41)

tA1 = 1.15
tB1 = 1.15
tA2 = 1.73
tB2 = 1.73 (all found by Lorentz-transforming into the clock's frame)

As we can see, all the clocks show the appropriate time dilation in this frame. In this frame, clocks A1 and B1 always show the same time, but run slowly. A2 and B2 similarly show the same time, run slowly by the same amount, but lag the other two by a constant amount.

- First your second paragraph: Yes, of course. Please see the comment I just added below my question. I am not aware of a standard convention for expressing Einstein simultaneity unambiguously within a Minkowski space. My only intent is that each pair independently establishes Einstein simultaneity within their own spatially large frames. You of course cannot compare such regions meaningfully for simultaneity; that is the whole point of SR. However, you can compare time meaningfully for two compact subsets of those regions if they pass closely enough to qualify as one spacetime event. – Terry Bollinger Dec 16 '12 at 3:54
- Now, your answer: I also cannot easily see how the lagging clocks can have anything but identical times due to the symmetry of A and B. So, if that is your answer, can you also explain how time dilation survives? During that nanosecond of the leading-edge close encounter, each can observe a slower time rate in the other in a very direct fashion; their faces just happen to read the same at that moment. So: What is the mechanism by which this directly observed slower rate of time passage of other frame prove irrelevant over the non-zero time gap (for both frames) until the trailing clocks meet? – Terry Bollinger Dec 16 '12 at 4:10
- I've updated with full gory details. I don't completely understand what you're confused about. To me the problem is quite simple and I don't get what you mean by "What is the mechanism by which this directly observed slower rate of time passage of other frame prove irrelevant over the non-zero time gap (for both frames) until the trailing clocks meet?" Nothing proves irrelevant. It's just that clocks synchronized in one frame are not synchronized in the other, and that's all there is to it. – Mark Eichenlaub Dec 16 '12 at 8:25
- Mark Eichenlaub, thanks, I appreciate your efforts on Lorentz transformations. Forget my figures for a moment. Two open trains A and B have engineers at their leading and trailing edges. On A, the leading engineer A1 and the trailing engineer A2 use lasers to Einstein synchronize their clocks. B1 and B2 do the same for B (only). My intent thus was two separate Einstein synchronizations, not the impossible task of synchronizing across frames. Next, both frames use their internally synchronized clocks to record the same pair of well-localized, frame-independent spacetime events. More later... – Terry Bollinger Dec 16 '12 at 17:42
- @TerryBollinger Yes, I understand the clocks are separately synchronized. If you read my response, it is clear that the clocks are synchronized in their own frames. – Mark Eichenlaub Dec 16 '12 at 17:45

The boost-along-$x$ Lorentz transformation is ideally suited for settling the conundrum quickly and easily. For an event recorded at (x,t) in the lab, the boosted frame will record a $t'$ for the same event as

$$t' =\gamma(t - vx/c^2)$$

We use this to find the time shown on $A2$ and $B2$ at the instant $A1$ and $B1$ pass one another at t=0:

$$t'_{A2} =\gamma(0 - vx/c^2)\quad t''_{B2} =\gamma(0 - vx/c^2)$$

They therefore show the same time, and will continue to do so since they're travelling at the same speed.

- Hmm, nice brevity! (All: more later.) – Terry Bollinger Dec 25 '12 at 0:52
- @TerryBollinger Thanks. Unfortunately there was a typo in the labelling of the times which no one mentioned, that I've now fixed. – John McVirgo Dec 25 '12 at 1:00
- John, if you have a chance, please look over the comments I just made for Spacelike Cadet and Mark Eichenlaub about why I have a very hard time visualizing how the invariant spacetime interval between events $T1$ and $T2$ can be seen giving the same pure-time readings within two distinct frames that are observing that interval. I'm pretty sure you are in the $T_{\Delta{A}}=T_{\Delta{B}}$ category, if I'm reading your answer rightly. – Terry Bollinger Jan 6 at 4:33
- Also: From your comment "they're traveling at the same speed" in particular I'm quite sure you are also looking at the symmetric branches of the Minkowski diagram (wish I had more time tonight to diagram some of this). – Terry Bollinger Jan 9 at 3:51

Is there anything wrong or impossible with this experimental setup? If so, what is it?

I don't see any issues with your setup.

If you accept the experiment as realistic and meaningful, will the delays calculated from the perspective of each of the two trains be the same, or different?

(Yet again) the 2 delays are the same.

I. Kinematic analysis:

• Both trains have length $L$ (measured in the rest frame of the train under consideration).
• In A's reference frame, B approaches at speed $v$ and has length $L/\gamma$, with $\gamma = 1/\sqrt{1-(v/c)^2}$.
• If A1 meets B1 at time 0, A1 meets B2 at time $L/(\gamma v)$, A2 meets B1 at time $L/v$, and A2 meets B2 at time $T_{\Delta A}=(L/v)(1+1/\gamma)$.
• In B's reference frame, the analysis is identical, giving the same result.

II. Alternatively, considered as space-time events (addressing your comment to Spacelike Cadet (love that name!)), it is certainly true that there is an invariant interval $ds$ between the passings of the leading and of the ending cars.

• In A's reference frame, that interval is $ds^2 = (cT_{\Delta A})^2 - L^2$.
• In B's reference frame, that interval is $ds^2 = (cT_{\Delta B})^2 - L^2$.

The crucial point is that both trains have "rest length" $L$, so it follows immediately that $T_{\Delta A}=T_{\Delta B}$. The time-like separations are the same only because the train lengths are identical.
• The invariant interval is $ds= \frac{L}{v} \sqrt{\frac{2}{\gamma} + \frac{2}{\gamma^2}}$. If you answered "different," what is the correct procedure for predicting in advance what the ratio of the two delays will be? N/A. Update: As a check, here's an analysis in the "Center of Velocity" (aka CV) frame. • In the CV frame, both trains approach with velocity $\pm v_{CV}$ $$v_{CV} = \frac{v}{1+\sqrt{1-(v/c)^2}} = \frac{v}{1+\frac{1}{\gamma}}$$ (Note it's not just $v/2$! You can check this result by applying the addition-of-velocities formula to $v_{CV}$ and the relative train velocity $-v$, with the result being just $-v_{CV}$, the equal-and-opposite velocity of the other train in the CV frame.) • In the CV frame, both the first and last car crossings occur at the origin, and both trains are Lorentz-contracted to length $L/\gamma_{CV}$ with $\gamma_{CV}=1/\sqrt{1-(v_{CV}/c)^2}$. • The time duration between first and last car crossings is then $$T_{\Delta CV} = \frac{L}{\gamma_{CV}} \frac{1}{v_{CV}}$$ Plugging and chugging through the algebra, one finds: $$T_{\Delta CV} = \frac{L}{v} \sqrt{\frac{2}{\gamma}+\frac{2}{\gamma^2}}$$ This result is different from $T_{\Delta A}=T_{\Delta B}$. • The invariant interval between the two end-crossing events in the CV frame is just: $$ds^2 = (c T_{\Delta CV})^2$$ since both crossings happen at the CV origin, and $ds$ works out to be exactly the same as calculated in the A and B frames (as it must). - Art Brown, thanks! That's a nicely focused answer, and I really appreciate your addressing my bafflement about multiple interpretations of the invariant, which so far in my own poor head is something I can only get to work for the fully symmetric ${\pm}v$-around-a-third-frame-$O$ set of frame pairs. I'm hoping a careful reading of your answer will help me with that. – Terry Bollinger Jan 6 at 19:12 @TerryBollinger, you're welcome. Your comment about the fully symmetric "CM" frame was intriguing, so I worked through that case and added it to the answer. The challenge for me was recognizing that the trains are not approaching at $\pm v/2$ in the CM frame. – Art Brown Jan 6 at 20:51 (Playing catchup before allocating points.) Hmm... if I said "center of mass" somewhere, I didn't mean it, at least not that way. The pure-time $O$ (observer) frame is the one from which both $A$ and $B$ diverge at equal velocities, and that is not normally the center of mass. The one exception is when (a) the objects of interest in $A$ and $B$ are exactly equal in mass, and (b) they separated by pushing off of each other without any other masses involved. On the other hand, the center-of-mass frame is vital to system-wide energy conservation, but that's outside the scope of this question. – Terry Bollinger Jan 9 at 3:44 @TerryBollinger: No, you didn't say "center of mass". I was using "CM" as an abbreviation for the totally symmetric case of equal trains, but I think the same analysis applies to the frame where the two trains are approaching (and later diverging) at equal velocities (since it's a purely kinematic analysis), which I believe is your $O$ frame. Happy choosing... – Art Brown Jan 9 at 4:18 For the record, I corrected "center of mass" to "center of velocity"... – Art Brown Jan 11 at 5:33 What follows here is an answer entirely uninfluenced by other answers. I have created it entirely from the initial problem statement. Let there be four worldlines: $A_f$ for the front of the first train, $A_r$ for the rear of the first train. 
$B_f$ for the front of the second train, $B_r$ for the rear of the second train. These worldlines can be parameterized as follows. $$\begin{align*} A_f(\tau) &= u_A \tau \\ B_f(\tau) &= u_B \tau \end{align*}$$ $u_A$ and $u_B$ are four-velocities. Here, we assume that at $\tau = 0$, the fronts of the two trains are coincident at the origin. What we want to do now is figure out where the proper locations for the rears of the train should be. Without loss of generality, we can set these locations to be some distance $d$. We can, through an orthonormalization procedure, find the spacelike vectors that go along the trains' lengths. My background is in the clifford spacetime algebra, where we would represent this quantity as $iu$. Hence, the other two worldlines are: $$\begin{align*} A_r(\tau) &= (\tau - di) u_A \\ B_r(\tau) &= (\tau + di) u_B \end{align*}$$ Choosing minus for the A train ensures that the rear of the train is further down the -x-axis than the front. It should be noted that then the $A_r, A_f$ worldlines are described by $\tau$, the proper time of the A train. Similarly for the B worldlines; these proper times are different between the trains, of course. Now, we should be able to compute the intersection by saying $A_r(\tau_A) = B_r(\tau_B)$ for two different proper time intervals $\tau_A, \tau_B$. There are two vector components, so the system is well-described. Now, for simplicity, we choose $u_A = e_t$, the time basis vector. Thus, we choose a frame where the A train stays still and B moves past, from right to left. $u_B = \gamma(e_t - \beta e_x)$ then, and $i u_B = \gamma(e_x - \beta e_t)$. The equations look like $$\begin{align*}\tau_A &= \gamma \tau_B - \gamma \beta d \\ - d &= -\gamma \beta \tau_B + \gamma d \end{align*}$$ These equations are easily solved. At first glance, the solution appears to be $$\begin{align*} \tau_B &= \frac{(\gamma + 1) d}{\gamma \beta} \\ \tau_A &= \frac{(\gamma + 1) d}{\beta} - \gamma \beta d \end{align*}$$ But a little mathematical manipulation (in particular, using $\gamma = (1-\beta^2)^{-1/2}$), proves them to be the same. $$\begin{align*} \tau_A &= \frac{d(1+\gamma) - \beta^2 d \gamma}{\beta} \\ &= \frac{d(1 + \gamma[1-\beta^2])}{\beta} \\ &= \frac{d(1+1/\gamma)}{\beta}\\ &= \frac{d(\gamma + 1)}{\gamma \beta}\end{align*}$$ In short, then, 1) There is nothing wrong with the experimental setup. 2) As expected by symmetry of the problem (each train should measure the relative velocity of the other to be the same), the time delays measured by each train are the same, and they can be calculated according to the above calculation. - Murphid, very nice! I want to spend some time looking at this one, and also actually diagramming it... – Terry Bollinger Jan 18 at 2:52 Muphrid (sorry about earlier misspellings), while I very much like your concise answer and use of traditional SR worldlines and variables, I have a question: You seem to have kept $d$ invariant across frames, rather than making it frame dependent ($d_A$ and $d_B$) as you did with $\tau$ ($\tau_A$ and $\tau_B$). Perhaps you've taken care of it via $\gamma$, but it doesn't feel right. Is it possible that by making $d$ universal you may have preselected the only set of frames for which $d_A=d_B$, that being the symmetric velocity frames? Please let me know if I'm misreading something. 
– Terry Bollinger Jan 20 at 13:41
- @TerryBollinger As written, the trains need not have the same length according to an arbitrary frame, but each considers its own length to be $d$ according to its own rest frame. I later realized that this was not what you meant in the problem, and it would be better to consider two trains of arbitrary lengths, but I don't believe this affects the overall result. – Muphrid Jan 20 at 17:33
- Muphrid, sorry, I was not clear. It was indeed my intent in the 4CC question that the trains use identical equipment to record identical front-to-back lengths $d$ within their respective frames. In explicit-observer notation that is: $d_{A:>A}=d_{B:>B}$, where $\phi:>\phi$ means "a measurement of something at rest within frame $\phi$ (the left $\phi$), as observed and recorded by an observer within frame $\phi$ (the $:>\phi$ part)". But across distinct frames with $d{\not\perp}v$, SR requires $d_{A:>A}>d_{B:>A}$, always. My concern was that your $d$ variables may have diverse observer frames. – Terry Bollinger Jan 20 at 18:53
- Okay, I see now. By construction, no, there should be no mixing of frames. $iu_A$ is a vector orthogonal to the four-velocity of train A, and as such, train A considers it to be an entirely spatial direction in its frame. Same for train B. Given any particular frame, we can find the trains' lengths according to that frame, and the results will show the expected length contraction. The calculation is a bit onerous, as the spatial vector used to do it intersects the four worldlines at different proper times, but it's not difficult, just tedious. – Muphrid Jan 20 at 18:56

The effort you are putting into this question is admirable. You most certainly deserve a satisfactory answer. Your question is: as a matter of principle, special relativity asserts total symmetry; how is that accommodated? The diagrams that you are creating are at best snapshots of the ongoing process. I believe you need to create an animation. I believe you need to create an animation to explain it to yourself. I say that because I did that too: I created an animation. The process of working out how things proceed over time, so that the animation was correct, helped me absorb the most counter-intuitive aspects of SR.

Let me recount a memory. As a teenager I would read books for kids about science/physics. I would read about special relativity too, I looked at the diagrams, and I was aware that I didn't understand it, at least not to my satisfaction. At some point there was a series of educational television programs about relativity. With the usual trains. But being television, the creators of that series had taken the opportunity to present the spacetime physics with an animation! I remember it vividly: Einstein on one train and Poincaré on the other, clocks in the front, the middle, and the rear, the trains passing each other. Most importantly: the shift of reference frame from one train to the other was also represented in animated form! At the start you had the spatial axis (horizontal) and the time axis (vertical) at right angles to each other. When that coordinate system is subjected to a Lorentz transformation, the axes move relative to each other in a scissor-like manner. And I saw the complete symmetry. You can use a frame co-moving with the Einstein train or a frame co-moving with the Poincaré train, and you can transform symmetrically between them.
Years later I wanted to experience that vividness again, and I created an animation like the animation in that television program I remembered. The animation I created depicts pulses of light shuttling between clocks, etc., etc. (Incidentally; that animation isn't just lying around, I added it to my website.) - Cleonis, it is indeed fun to either see or figure out how some of these things look in action, both in 3D and in the unit cells -- the "scissors-like manner" -- of 4D transformations Minkowski space (usually subsetted down to t and x, granted, but the other axes behave well; adding one orthogonal y or z axis is also very instructive and not that hard). The Lorentz shortening effect is especially interesting due to its surprising complexity in 4D. It must be interpreted as a projection from a 4D cell down into a 3D hyperplane that is an a angle to the axis of projection (time for that frame). – Terry Bollinger Jan 6 at 18:35 The answer, as others have said, is that they mark the same time delay. So the task is to give an explanation that addresses the root of the confusion. I think it comes down to this: at the four events where two moving observers meet, both observers directly observe that the other clock is moving more slowly, right? Well, except time dilation is not a direct observation. Put it this way: on the one hand you seem to be trying to consider signals sent and received, and coincident (or nearly coincident) events, like the (near) meeting of observers. But the principle that moving clocks run slow has no direct meaning in those terms, it is understood in terms of simultaneous spaces. In terms of signals sent and received, the principle is that oncoming clocks run fast and outgoing clocks run slow. Maybe it will be instructive to look at this purely from the signal framework. In what follows, distances will be radar distances, and times will be reception times. That is, if you now receive the echo of a radar pulse that you sent some time $\Delta t$ ago, then we say that now, the distance to the object it bounced off of is $\frac{c}{2}\Delta t$. (Your point of view is what you see, your past light cone is now.) Formally, the metric is: $$d\tau^2 = dt^2 - \frac{2}{c}dr\cdot dt - \frac{r^2}{c^2} d\Omega^2$$ from which certain relevant facts can be derived: • The time expansion factor for a radially moving clock is: $$k = \sqrt{1 - \frac{2v}{c}}$$ $v$ is the velocity, the rate of change of (radar) distance with respect to (reception) time. For outgoing clocks, $v>0$, so $k<1$, they run slow. For incoming clocks, $v<0$, so $k>1$, they run fast. (In the extreme case, the speed of outgoing light is $\frac{c}{2}$, and the speed of oncoming light is infinite.) • $k$ is also the length expansion factor (in the radial direction). • When an oncoming objects meets and passes us, its expansion factor goes from $k$ to $k^{-1}$, and its velocity goes from $v$ to: $$v^\star = -\frac{cv}{c-2v}$$ For comparison, the "frame" velocity, the usual way of reckoning speeds, is $\frac{cv}{c-v}$. But this frame velocity is not directly observed. We can now address the sequence of events from the point of view of the front (F) and back (B) of the train. The clocks at the front and back of the train are synchronised in the sense that they each see the other running at the same rate, but at a time $\epsilon$ behind. The distance between the two clocks is a constant, $\epsilon c$, which is the length of the train. Let us say the oncoming train has speed $\frac{3}{2}c$, so that $k=2$. 
It is twice as long as ours, $2\epsilon c$. Its clocks (F',B') are running at twice the rate of ours.

Front:

• We meet F', say at time $T$, which we record. F' records its time, say, $T'$. Now F' is outgoing, and its speed becomes $\frac{3}{8}c$, and its clock starts running at half the rate of ours. Meanwhile B' is still oncoming at $\frac{3}{2}c$, its clock is showing $T' - \epsilon$ and still running at twice the rate of ours, and it is $2\epsilon c$ away.
• At $T + \frac{4}{3}\epsilon$, we meet B'. Its clock is showing $T' + \frac{5}{3}\epsilon$, and starts running at half our rate. In which time F' has travelled $\frac{1}{2}\epsilon c$, so the length of the other train has shrunk to half the length of ours. B' is now also outgoing at $\frac{3}{8}c$, so the other train stops shrinking. F' is still $\frac{1}{2}\epsilon c$ away from B, and its clock is showing $T' + \frac{2}{3}\epsilon$.
• At $T + \frac{8}{3}\epsilon$, F' meets B. B' shows $T' + \frac{7}{3}\epsilon$, while F' shows $T' + \frac{4}{3}\epsilon$.
• At $T + 4\epsilon$, B' meets B. The clock at B is $\epsilon$ behind ours, so it is showing $T + 3\epsilon$, which it records. B' is showing $T' + 3\epsilon$, which it records, while F' is showing $T' + 2\epsilon$.

Back:

• At $T + \epsilon$, F meets F'. F, whose clock is $\epsilon$ behind ours, records $T$. F' records $T'$. B' shows $T' - \epsilon$. Both F' and B' are still oncoming at $\frac{3}{2}c$, and their clocks are still running at double rate. F' is $\epsilon c$ away.
• At $T + \frac{5}{3}\epsilon$, we meet F'. The other train is still twice as long as ours, so B' is still $\epsilon c$ away from F, and $2\epsilon c$ away from us. F' shows $T' + \frac{4}{3}\epsilon$, and its clock starts running at half rate, while B' shows $T' + \frac{1}{3}\epsilon$ and is still running at double rate.
• At $T + \frac{7}{3}\epsilon$, F meets B'. F' shows $T' + \frac{5}{3}\epsilon$, and so does B'.
• At $T + 3\epsilon$, we meet B' and record our time. F' shows $T' + 2\epsilon$, while B' shows $T' + 3\epsilon$, which it records.

[Addendum] I'll add the "neutral observer" O to this, as suggested by Dr. Matt McIrvin. O is the one who sees F meeting F' as happening in the same place as B meeting B'. From either train's perspective, O's expansion factor will be $\sqrt{2}$ approaching and $\frac{1}{\sqrt{2}}$ receding -- in general, its expansion factor will be the square root. So, O is oncoming at speed $v = -\frac{c}{2}$ and its clock is running $\sqrt{2}$ times fast.

Now from F's perspective, F meets both O and F' at time $T$, and O records time, say, $T_O$. O is now outgoing at speed $v = +\frac{c}{4}$ and its clock starts running $\sqrt{2}$ times slow. So B, which is $\epsilon c$ away, will meet O at $T + 4\epsilon$, which is also when it will meet B', and O will record time $T_O + 2\sqrt{2}\epsilon$.

From B's perspective, F meets both O and F' at time $T+\epsilon$. O is still oncoming at speed $v = -\frac{c}{2}$ and its clock is still running $\sqrt{2}$ times fast. So O arrives at time $T + 3\epsilon$, which is also when B' arrives, and O records time $T_O + 2\sqrt{2}\epsilon$.

From O's perspective, either train is initially oncoming at speed $\frac{c}{2}$, its clocks are running $\sqrt{2}$ times fast, and its length is $\sqrt{2}\epsilon c$. At time $T_O$, O meets F, which marks time $T$. Now F is outgoing at speed $\frac{c}{4}$ and its clock is running $\sqrt{2}$ times slow. Meanwhile B is still oncoming at $\frac{c}{2}$.
Its clock shows $T - \epsilon$, and it is still running $\sqrt{2}$ times fast, and it is $\sqrt{2}\epsilon c$ away. So it arrives at time $T_O + 2\sqrt{2}\epsilon$, at which point its clock shows $T + 3\epsilon$, which it records. Meanwhile F has travelled a distance $\frac{1}{\sqrt{2}}\epsilon c$, so the train has shrunk to that length. And O sees exactly the same thing for the other train in the other direction.

- Nice math, but let's get back to the actual questions for a moment. If I read through your answer rightly, your answers are: (1) The experimental setup is valid; (2) $T_{\Delta{A}}$ = $T_{\Delta{B}}$, always; and (3) Not applicable. Is that correct? Also, am I correct that you are completely fine with asserting that the delay between $T1$ and $T2$ is independent of the frame from which it is observed? – Terry Bollinger Jan 5 at 1:15
- Please verify your math notations before I comment any further. Early in your analysis you invoke a velocity of "$\frac{3}{2}c$", which is decidedly uncommon in this universe. – Terry Bollinger Jan 5 at 1:41
- Correct. And as far as I can see the math is fine. Remember that I am talking exclusively about what each observer actually sees, I am never using simultaneous spaces. If it helps, insert "apparent" before all occurrences of speeds, rates, and distances. An oncoming object certainly can have an apparent speed of $\frac{3}{2}c$. This is still less than the apparent speed of oncoming light, which is infinite. – Retarded Potential Jan 5 at 17:35
- The "frame" velocity (i.e. in Minkowski co-ordinates) can be calculated from the formula I gave, $w=\frac{cv}{c-v}$. So $v=-\frac{3}{2}c$ and $v=+\frac{3}{8}c$ become $w=\pm\frac{3}{5}c$. But as I say, none of the observers directly measure these frame velocities, they measure the "apparent" velocities. – Retarded Potential Jan 5 at 17:43
- Ouch. Keeping the discussion on apparent velocities was my original intent. That's why I had radars at the front of each train, to give velocities less than c. You are asserting $T_{\Delta{A}}=T_{\Delta{B}}$, yes? Even though $T1$ and $T2$ define two localized events in spacetime? Where my brain keeps hiccuping on that answer is that any two such events should be separated by an invariant interval. So how in the heck do two quite different frames observe that interval, yet still arrive at the same pure-time separation figures within their frames? Am I the only one deeply troubled by that? – Terry Bollinger Jan 6 at 4:09
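The question raised in the last comment above (how two quite different frames can observe the same invariant interval yet report identical pure-time delays) can also be checked numerically. The short Python sketch below is my own addition, not part of any answer; it simply re-evaluates the formulas given in Art Brown's and Matthew McIrvin's answers with $c = 1$, rest length $L = 1$, and $v = 0.866$ (so $\gamma \approx 2$). The two train frames report the same delay because each sees the two crossing events separated by the same rest length $L$; the center-of-velocity frame reports a shorter delay because it sees them at the same place; and all three frames assign the same interval to the pair of events.

```python
import math

# Interval bookkeeping for the front-ends-meet and rear-ends-meet events.
# Assumptions (mine): c = 1, rest length L = 1, relative speed v = 0.866.
c = 1.0
L = 1.0
v = 0.866
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# Frame A (and, by symmetry, frame B): delay from Art Brown's kinematic
# formula; the two events are separated by the rest length L in space.
T_A = (L / v) * (1.0 + 1.0 / gamma)
ds2_A = (c * T_A) ** 2 - L ** 2

# Center-of-velocity frame: both trains close at v_cv, both are contracted,
# and the two crossings happen at the same place (zero spatial separation).
v_cv = v / (1.0 + 1.0 / gamma)
gamma_cv = 1.0 / math.sqrt(1.0 - (v_cv / c) ** 2)
T_CV = (L / gamma_cv) / v_cv
ds2_CV = (c * T_CV) ** 2

print("T_delta in A (= T_delta in B):", round(T_A, 4))    # about 1.732
print("T_delta in CV frame:          ", round(T_CV, 4))   # about 1.414, different
print("interval^2 from A/B:          ", round(ds2_A, 4))  # about 2.0
print("interval^2 from CV frame:     ", round(ds2_CV, 4)) # about 2.0, the same
```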
http://mathhelpforum.com/advanced-algebra/173580-vector-problem-involving-orthogonal.html
# Thread: 1. ## Vector problem involving orthogonal

Hi, I'm struggling with this problem:

For $\vec{u} = [-4, 1, 10]^T$ and $\vec{v} = [-12, -6, 8]^T$ find the vectors $\vec{u}_1$ and $\vec{u}_2$ such that: (i) $\vec{u}_1$ is parallel to $\vec{v}$ (ii) $\vec{u}_2$ is orthogonal to $\vec{v}$ (iii) $\vec{u} = \vec{u}_1 + \vec{u}_2$

I figured I should first try to find $\vec{u}_2$ using (ii), and then I would be able to use (iii) to get $\vec{u}_1$. This approach didn't work out too well for me, heh. Basically I tried to set it up with the dot product and solve for $\vec{u}_2$: $\vec{v} \cdot \vec{u}_2 = 0$. That didn't work out; I just ended up with something like $-12a - 6b + 8c = 0$, where $a, b, c$ are the components of $\vec{u}_2$. From there I couldn't see what more I could do to find $a, b, c$. Basically, I don't really know how to approach this problem. Anyone mind helping out a math newbie? Thanks in advance!

2. Originally Posted by cb220 [question quoted above]

Let $\vec{u}_1=\operatorname{Proj}_{\vec{v}}\vec{u}=\frac{\vec{u}\cdot \vec{v}}{\|\vec{v}\|^2}\vec{v}$. Then $\vec{u}_2=\vec{u}-\vec{u}_1$.

3. Originally Posted by cb220 [question quoted above]

That's not a bad start! Yes, to satisfy (ii) you want $-12a - 6b + 8c = 0$. And to satisfy (iii) you want $\vec{u}_1 = \langle -4-a,\ 1-b,\ 10-c \rangle$. Then, to satisfy (i) you want $\vec{u}_1$ to be a multiple of $\vec{v}$: $\vec{u}_1 = \langle -4-a,\ 1-b,\ 10-c \rangle = d\langle -12, -6, 8 \rangle$. That is, you have four equations, $-12a - 6b + 8c = 0$, $-4-a = -12d$, $1-b = -6d$, and $10-c = 8d$, for the four numbers $a$, $b$, $c$, and $d$.
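A quick numerical check of the projection formula from the replies (an editorial sketch using NumPy, not part of the original thread). With these numbers the projection coefficient works out to exactly $\tfrac{1}{2}$, giving $\vec{u}_1 = (-6, -3, 4)$ and $\vec{u}_2 = (2, 4, 6)$.

```python
import numpy as np

u = np.array([-4.0, 1.0, 10.0])
v = np.array([-12.0, -6.0, 8.0])

# u1 is the projection of u onto v; u2 is the orthogonal remainder
u1 = (u @ v) / (v @ v) * v
u2 = u - u1

print(u1)                        # [-6. -3.  4.], parallel to v
print(u2)                        # [ 2.  4.  6.]
print(u2 @ v)                    # 0.0, confirming orthogonality
print(np.allclose(u, u1 + u2))   # True
```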
http://math.stackexchange.com/questions/243259/proof-that-certain-operators-are-compact
# Proof that certain operators are compact

I want to examine which of the following operators $T \colon C[0,1] \to C[0,1]$ are compact. For some I think I have the argument, but for others I have no idea.

a) $Tx(t) = x(t^2)$ I guess it is compact, but I have no idea how to prove this.

b) $Tx(t) = x(0) + tx(1)$ Here the range of $T$ consists of lines, i.e. the set $\{ n + m \cdot x : n,m \in \mathbb{R} \}$; this set is finite-dimensional because $\{ \mathbb{1}, \operatorname{id} \}$ is a basis ($\mathbb{1}$ denotes the constant function $\mathbb{1}(x) = 1$ for all $x$).

c) $Tx(t) = \int_0^1 e^{st} x(s) \mathrm{d}s$ This is compact according to example A.2 from Appendix A: Compact Operators.

d) $Tx(t) = \sum_{k=1}^{\infty} x(\frac{1}{k}) \frac{t^k}{k!}$ I guess here I could use arguments similar to those in "How to prove that an operator is compact?": the operator is compact because $x(\frac{1}{k})$ is bounded on $[0,1]$ and the series $\sum_{k=1}^{\infty} \frac{t^k}{k!}$ converges to $e^t - 1$.

e) $Tx(t) = \sum_{k=0}^{\infty} \frac{x(t^k)}{k!}$. Here I have no idea how to prove or disprove compactness of $T$.

f) $Tx(t) = \int_0^t x(s) \mathrm{d} s$ Here I have no clue either.

- I'm not sure this works and if it does, how to write it better, but here is a thought: – Matt N. Nov 23 '12 at 16:27

a) You could show that the image of a bounded set is totally bounded. Let $X \subset C[0,1]$ be a bounded set, that is: $\sup_{f \in X} \|f\|_\infty = K < \infty$. The map $t \mapsto t^2$ is continuous and its image is $[0,1]$. Hence if $X$ is bounded by $K$ then $TX$ is still bounded by $K$ since the domain stays the same. I'm not sure how to finish. One has to give a finite cover of $TX$ of sets of fixed size. – Matt N. Nov 23 '12 at 16:30

Small MathJax/LaTeX tip: I find that integrals look better when you put a thin space (`\,` in math mode) in front of the 'd'. Compare these two: $$\int f d\mu \quad \text{versus} \quad \int f\,d\mu.$$ – kahen Nov 23 '12 at 16:38

## 2 Answers

a) is not compact: it is actually onto, since for any $f$ we have $f=Tf_0$, where $f_0(t)=f(\sqrt t)$. b) you are right. c) you are right. d) and e): as Davide says, look at the dimension of the range of $S_n$. f) you can write $\int_0^tx(s)ds=\int_0^1x(s)\,1_{[0,t]}(s)ds$ and see that $T$ is compact using A.3 in Appendix A: compact operators.

-

a) $T$ is bijective, and $C[0,1]$ is infinite dimensional, so $T$ is not a compact operator. d) After having shown that $T$ is well-defined, define $S_n(x)(t):=\sum_{k=1}^n x(k^{-1})\frac{t^k}{k!}$. What about the dimension of the range of $S_n$? e) The set $\{x_p\colon t\mapsto t^p,p\in\Bbb N\}\subset C[0,1]$ is bounded, and $T(x_p)(t)=e^{t^p}$. The task is to show that no subsequence of $\{T(x_p)\}$ forms an equicontinuous set. f) Arzelà-Ascoli's theorem is useful.

- In d) a finite basis for the range of $S_n$ would be $\{ t, t^2, \ldots, t^n \}$. But for e), for example with $n = 2$ we get $x(t) + x(t^2)/2$, which I guess is in some sense an extension of a) and so could not be compact? – Stefan Nov 23 '12 at 17:45

@Stefan I've corrected, as it's actually not the same approach for d) and e). – Davide Giraudo Nov 23 '12 at 20:48
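To see concretely why the family in e) fails to be equicontinuous, here is a small numerical illustration (my own sketch, not from the thread), using the functions $T(x_p)(t) = e^{t^p}$ named in the second answer: for a fixed small gap $\delta$, the difference $|f_p(1) - f_p(1-\delta)|$ approaches $e - 1$ as $p$ grows, so no single $\delta$ works for every $p$.

```python
import math

delta = 0.01
for p in [1, 10, 100, 1000]:
    f_at_1 = math.e                          # f_p(1) = e for every p
    f_near_1 = math.exp((1 - delta) ** p)    # f_p(1 - delta) = exp((1 - delta)^p)
    print(p, abs(f_at_1 - f_near_1))
# The gap tends to e - 1 ~ 1.718, so {f_p} is not equicontinuous at t = 1,
# and by Arzela-Ascoli no subsequence of {T(x_p)} converges uniformly.
```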
http://mathoverflow.net/questions/76015?sort=newest
## How much can a diagonal matrix change the eigenvalues of a symmetric matrix?

Suppose that we have a symmetric matrix ${\bf S}$ with eigenvalue decomposition ${\bf S} = {\bf Q}{\bf \Lambda}{\bf Q}^T$. Assume that we have two diagonal matrices ${\bf D}_1$ and ${\bf D}_2$ that are multiplying ${\bf S}$ from the left and right, i.e. ${\bf A} = {\bf D}_1{\bf S}{\bf D}_2$. Can we relate the eigenvalues of ${\bf S}$ to the ones of ${\bf A}$? How about the case where ${\bf S}$ is not symmetric?

- 1 Did you mean ${\bf A} = {\bf D}_1{\bf S}{\bf D}_2$? If not, what is ${\bf V}$? – Joseph O'Rourke Sep 21 2011 at 1:37

Yes, let me correct it, thank you :) – Anadim Sep 21 2011 at 1:53

## 1 Answer

In general there is no relation: for example, consider the simplest case where $S$ itself is diagonal and invertible. Letting $D_1=S^{-1}$, $A$ can then be any diagonal matrix $D_2$. The only considerations you can make are related to the presence of zero eigenvalues, using the Binet formula for determinants. Notice also that in general $A$ itself can be nonsymmetric, and its eigenvalues can be complex. However, small perturbations, i.e. $D_1$ and $D_2$ close to the identity, result in a small perturbation of the eigenvalues of $S$ in the complex plane.

- 1 Another case in which you can tell at least something is: when $D_1=D_2$ has positive entries, then the inertia of $S$ is preserved. – Federico Poloni Sep 23 2011 at 11:06

Thank you. I agree that if ${\bf S}$ is diagonal then the eigenvalues can be arbitrary, based on the ${\bf D}_i$s. But then again, you were able to exactly describe the eigenvalues of ${\bf A}$ as a function of the eigenvalues of ${\bf S}$ and ${\bf D}$. Is there a way to do that in general? i.e., can I pick the diagonals ${\bf D}_i$ so that I can spectrally shape ${\bf S}$ (nondiagonal) in any way that I like? – Anadim Sep 24 2011 at 0:38
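A concrete illustration of the answer (an editorial sketch, not from the thread): with $S$ diagonal and invertible and $D_1 = S^{-1}$, the product $D_1 S D_2$ simply equals $D_2$, so its eigenvalues are whatever you choose; and a symmetric $S$ scaled by generic diagonals can acquire complex eigenvalues.

```python
import numpy as np

# S diagonal and invertible: choosing D1 = S^{-1} makes D1 @ S @ D2 equal to D2,
# so the eigenvalues of the product are completely arbitrary.
S = np.diag([1.0, 2.0, 3.0])
D1 = np.linalg.inv(S)
D2 = np.diag([-7.0, 0.5, 42.0])
print(np.linalg.eigvals(D1 @ S @ D2))   # -7, 0.5, 42

# A symmetric S need not stay symmetric after diagonal scaling,
# and its eigenvalues can become complex.
S = np.array([[0.0, 1.0],
              [1.0, 0.0]])
D1 = np.diag([1.0, -4.0])
D2 = np.eye(2)
A = D1 @ S @ D2                          # [[0, 1], [-4, 0]]
print(np.linalg.eigvals(A))              # +2j, -2j
```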
http://math.stackexchange.com/questions/191243/is-this-a-typo-in-my-math-book
# Is this a typo in my math book?

I am doing homework which is submitted online. I came across a question asking if two functions are equal: $f(x)=3x+4$ and $g(x)=14+(8/x)+b(x-4)$. I set the two expressions equal to each other and got 7 for an answer, but the book says the answer is $\frac {7}{3}$. Here is an image of the solution in the book:

- The homework tag should never be used alone. Please add at least another one explaining the topic(s) covered by this homework – M Turgeon Sep 5 '12 at 2:46

5 Perhaps you should edit your query here to state the question that was asked clearly. The two functions are not equal as functions. The solution seems to suggest that what was asked is something like "Find all values of $b$ such that $f(-3) = g(-3)$ and $f(4) = g(4)$", that is, the functions have equal values for two specific values of $x$. This is quite different from saying that the functions are equal. – Dilip Sarwate Sep 5 '12 at 2:47

## 4 Answers

If you set $f(-3) = g(-3)$, you end up with: $$-5 = 14 -(8/3) -7b$$ If you multiply by 3 to remove fractions, you are left with $$-15 = 42 - 8 -21b$$ Collecting like terms, we are left with $$21b = 49$$ reducing yields: $$b = 7/3$$

-

The book is correct. Setting $f(-3)=g(-3)$ gives us $-5=14-\frac{8}{3}-7b \Rightarrow -19=-\frac{8}{3}-7b \Rightarrow 49=21b \Rightarrow b=\frac{7}{3}$.

-

Well, at least to your specific question "Is this a typo?", the answer is "No." You are making an arithmetic error somewhere, because: $$\frac{1}{7}\bigg(\frac{34}{3} + \frac{15}{3}\bigg) = \frac{1}{7}\cdot\frac{49}{3} = \frac{7}{3}.$$

-

$$-5 = \left(\frac{34}{3}\right) - 7B$$ Multiply both sides by 3: $$-15= 3\cdot\left(\frac{34}{3}\right) - 3\cdot(7B)$$ that is: $$-15 = 34 - 21B$$ so $$-15-34 = -21B$$ $$B=\frac{49}{21}=\frac{7}{3}$$ The book is correct on this one.

- @Joe, appreciate the Tex work. – Emmad Kareem Sep 5 '12 at 4:09

No problem - was just minor while I was reading this problem. :) – Joe Sep 5 '12 at 4:13
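A quick symbolic check of the answers above (an editorial sketch with SymPy; it imposes the condition $f(-3) = g(-3)$ used in the book's solution):

```python
from sympy import symbols, solve

x, b = symbols('x b')
f = 3*x + 4
g = 14 + 8/x + b*(x - 4)

# Solve for the b that makes f and g agree at x = -3.
print(solve((f - g).subs(x, -3), b))   # [7/3]
```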
http://math.stackexchange.com/questions/196160/why-these-line-integrals-have-the-same-value
# Why do these line integrals have the same value?

I'm posting with my phone so I cannot use LaTeX. I would be thankful if somebody corrected my post. I want to integrate $x^2-iy^2$ in the complex plane over (a) the closed unit circle and (b) the closed unit square with vertices $(+1+i, 1-i, -1+i, -1-i)$. By my calculation, the answers are the same: zero. But the integrand is not analytic, so I cannot use the Cauchy integral theorem. Then how can I explain this situation?

- 1 If you cannot apply the theorem, this does not mean that the zero cannot arise as a result. Simply you cannot predict the result. – enzotib Sep 15 '12 at 14:22

In fact, the standard parameterization of the circle gives integral 0. But parameterizing each side of the square by x or y, as appropriate, gives sum of integrals on the sides to be 4 i. – murray Sep 15 '12 at 14:26

## 1 Answer

Morera's theorem says that a function $f$ is analytic iff all integrals over ALL closed piecewise differentiable curves vanish. There isn't anything inherently interesting about this particular function you have, as you can find plenty of closed contours which will not give you zero.

-
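A crude numerical check (an editorial sketch, not from the thread) supports the questioner's computation that both contour integrals of $x^2 - iy^2$ vanish for these two particular curves:

```python
import numpy as np

def f(z):
    return z.real**2 - 1j * z.imag**2

def contour_integral(path, n=40_000):
    # midpoint-rule approximation of the contour integral of f along a parametrized path
    t = np.linspace(0.0, 1.0, n + 1)
    z = path(t)
    mid = f((z[:-1] + z[1:]) / 2)
    return np.sum(mid * np.diff(z))

circle = lambda t: np.exp(2j * np.pi * t)

def square(t):
    # boundary of the square with corners 1+1j, -1+1j, -1-1j, 1-1j, traversed once
    corners = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j, 1 + 1j])
    seg = np.minimum((t * 4).astype(int), 3)
    frac = t * 4 - seg
    return corners[seg] * (1 - frac) + corners[seg + 1] * frac

print(abs(contour_integral(circle)))   # ~0
print(abs(contour_integral(square)))   # ~0
```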
http://catalog.flatworldknowledge.com/bookhub/reader/2992?e=coopermicro-ch09_s01
# Microeconomics: Theory Through Applications, v. 1.0

by Russell Cooper and A. Andrew John

## 9.1 A Walk Down Wall Street

### Learning Objectives

1. What are the different types of assets traded in financial markets?
2. What can you earn by owning an asset?
3. What risks do you face?

Wall Street in New York City is the financial capital of the United States. There are other key financial centers around the globe: Shanghai, London, Paris, Hong Kong, and many other cities. These financial centers are places where traders come together to buy and sell assets. Beyond these physical locations, opportunities for trading assets abound on the Internet as well. We begin the chapter by describing and explaining some of the most commonly traded assets. Ownership of an asset gives you the right to some future benefit or a stream of benefits. Very often, these benefits come in the form of monetary payments; for example, ownership of a stock gives you the right to a share of a firm’s profits. Sometimes, these benefits come in the form of a flow of services: ownership of a house gives you the right to enjoy the benefits of living in it.

## Stocks

One of the first doors you find on Wall Street is called the stock exchange. The stock exchange is a place where—as the name suggests—stocks are bought and sold. A stock (or share) is an asset that comes in the form of (partial) ownership of a firm. The owners of a firm’s stock are called the shareholders of that firm because the stock gives them the right to a share of the firm’s profits. More precisely, shareholders receive payments whenever the board of directors of the firm decides to pay out some of the firm’s profits in the form of dividends. Some firms—for example, a small family firm like a corner grocery store—are privately owned. This means that the shares of the firm are not available for others to purchase. Other firms are publicly traded, which means that anyone is free to buy or sell their stocks. 
In many cases, particularly for large firms such as Microsoft Corporation or Nike, stocks are bought and sold on a minute-by-minute basis. You can find information on the prices of publicly traded stocks in newspapers or on the Internet. ## Stock Market Indices Most often, however, we hear not about individual stock prices but about baskets of stocks. The most famous basket of stocks is called the Dow Jones Industrial Average (DJIA). Each night of the week, news reports on the radio and television and newspaper stories tell whether the value of the DJIA increased or decreased that day. The DJIA is more than a century old—it started in 1896—and is a bundle of 30 stocks representing some of the most significant firms in the US economy. Its value reflects the prices of these stocks. Very occasionally, one firm will be dropped from the index and replaced with another, reflecting changes in the economy. Figure 9.3 The DJIA: October 1928 to July 2007 This figure shows the closing prices for the DJIA between 1928 and 2010. Source: The chart is generated from http://finance.yahoo.com/q?s=^DJI. Figure 9.3 "The DJIA: October 1928 to July 2007" shows the Dow Jones Industrial Average from 1928 to 2011. Over that period, the index rose from about 300 to about 12,500, which is an average growth rate of about 4.5 percent per year. You can see that this growth was not smooth, however. There was a big decrease at the very beginning, known as the stock market crash of 1929. There was another very significant drop in October 1987. Even though the 1929 crash looks smaller than the 1987 decrease, the 1929 crash was much more severe. In 1929, the stock market lost about half its value and took many years to recover. In 1987, the market lost only about 25 percent of its value and recovered quite quickly. One striking feature of Figure 9.3 "The DJIA: October 1928 to July 2007" is the very rapid growth in the DJIA in the 1990s and the subsequent decrease around the turn of the millennium. The 1990s saw the so-called Internet boom, when there was a lot of excitement about new companies taking advantage of new technologies. Some of these companies, such as Amazon, went on to be successful, but most others failed. As investors came to recognize that most of these new companies would not make money, the market fell in value. There was another rise in the market during the 2000s, followed by a substantial fall during the global financial crisis that began around 2008. Very recently, the market has recovered again. If these ups and downs in the DJIA were predictable, it would be easy to make money on Wall Street. Suppose you knew the DJIA would increase 10 percent next month. You would buy the stocks in the average now, hold them for a month, and sell them for an easy 10 percent profit. If you knew the DJIA would decrease next month, you could still make money. If you currently owned DJIA stocks, you could sell them and then buy them back after the price decreased. Even if you don’t own these stocks right now, there is still a way of selling first and buying later. You can sell (at today’s high price) a promise to deliver the stocks in a month’s time. Then you buy the stocks after the price has decreased. This is called a forward sale. If this sounds as if it is too easy a way to make money, that’s because it is. The ups and downs in the DJIA are not perfectly predictable, so there are no easy profit opportunities of the kind we just described. We have more to say about this later in the chapter. 
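A quick arithmetic check of the average growth rate quoted earlier in this section (an editorial sketch, not part of the textbook; the approximate endpoints of 300 and 12,500 and the 1928 to 2011 span are taken from the text):

```python
start_value = 300.0      # DJIA, late 1928 (approximate, from the text)
end_value = 12_500.0     # DJIA, 2011 (approximate, from the text)
years = 2011 - 1928

annual_growth = (end_value / start_value) ** (1 / years) - 1
print(f"{annual_growth:.1%}")   # ~4.6%, in line with the "about 4.5 percent per year" in the text
```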
Although the DJIA is the most closely watched stock market index, many others are also commonly reported. The Standard and Poor’s 500 (S&P 500) is another important index. As the name suggests, it includes 500 firms, so it is more representative than the DJIA. If you want to understand what is happening to stock prices in general, you are better off looking at the S&P 500 than at the DJIA. The Nasdaq is another index, consisting of the stocks traded in an exchange that specializes in technology-based firms. We mentioned earlier that the DJIA has increased by almost 5 percent per year on average since 1928. On the face of it, this seems like a fairly respectable level of growth. Yet we must be careful. The DJIA and other indices are averages of stock prices, which are measured in dollar terms. To understand what has happened to the stock market in real terms, we need to adjust for inflation. Between 1928 and 2007, the price level rose by 2.7 percent per year on average. The average growth in the DJIA, adjusted for inflation, was thus 4.8 percent − 2.7 percent = 2.1 percent.

## The Price of a Stock

As a shareholder, there are two ways in which you can earn income from your stock. First, as we have explained, firms sometimes choose to pay out some of their income in the form of dividends. If you own some shares and the company declares it will pay a dividend, either you will receive a check in the mail or the company will automatically reinvest your dividend and give you extra shares. But there is no guarantee that a company will pay a dividend in any given year. The second way you can earn income is through capital gains. Suppose you own a stock whose price has gone up. If that happens, you can—if you want—sell your stock and make a profit on the difference between the price you paid for the stock and the higher price you sold it for. Capital gains are the income you obtain from the increase in the price of an asset. (If the asset decreases in value, you instead incur a capital loss.) To see how this works, suppose you buy, for $100, a single share of a company whose stock is trading on an exchange. In exchange for $100, you now have a piece of paper indicating that you own a share of a firm. After a year has gone by, imagine that the firm declares it will pay out dividends of $6.00 per share. Also, at the end of the year, suppose the price of the stock has increased to $105.00. You decide to sell at that price. So with your $100.00, you received $111.00 at the end of the year, for an annual return of 11 percent: $$\frac{\$105.00 + \$6.00 - \$100.00}{\$100.00} = 0.11 = 11\%.$$ (We have used the term return a few times. We will give a more precise definition of this term later. At present, you just need to know that it is the amount you obtain, in percentage terms, from holding an asset for a year.) Suppose that a firm makes some profits but chooses not to pay out a dividend. What does it do with those funds? They are called retained earnings and are normally used to finance business operations. For example, a firm may take some of its profits to build a new factory or buy new machines. If a firm is being managed well, then those expenditures should allow a firm to make higher profits in the future and thus be able to pay out more dividends at a later date. Presuming once again that the firm is well managed, retained earnings should translate into extra dividends that will be paid in the future. 
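A minimal sketch of the return calculation just described (an editorial addition, using the text's numbers: a $100 purchase price, a $6.00 dividend, and a $105.00 sale price):

```python
purchase_price = 100.00
dividend = 6.00
sale_price = 105.00

total_return = (sale_price + dividend - purchase_price) / purchase_price
print(f"{total_return:.0%}")   # 11%
```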
Furthermore, if people expect that a firm will pay higher dividends in the future, then they should be willing to pay more for shares in that firm today. This increase in demand for a firm’s shares will cause the share price to increase. So if a firm earns profits but does not pay a dividend, you should expect to get some capital gain instead. We come back to this idea later in the chapter and explain more carefully the connection between a firm’s dividend payments and the price of its stock.

## The Riskiness of Stocks

Figure 9.3 "The DJIA: October 1928 to July 2007" reminds us that stock prices decrease as well as increase. If you choose to buy a stock, it is always possible its price will fall, in which case you suffer a capital loss rather than obtain a capital gain. The riskiness of stocks comes from the fact that the underlying fortunes of a firm are uncertain. Some firms are successful and earn high profits, which means that they are able to pay out large dividends—either now or in the future. Other firms are unsuccessful through either bad luck or bad management, and do not pay dividends. Particularly unsuccessful firms go bankrupt; shares in such a firm become close to worthless. When you buy a share in a firm, you have the chance to make money, but you might lose money as well.

## Bonds

Wall Street is also home to many famous financial institutions, such as Morgan Stanley, Merrill Lynch, and many others. These firms act as the financial intermediaries that link borrowers and lenders. If desired, you could use one of these firms to help you buy and sell shares on the stock exchange. You can also go to one of these firms to buy and sell bonds. A bond is a promise to make cash payments (the coupon) to a bondholder at predetermined dates (such as every year) until the maturity date. At the maturity date, a final payment is made to a bondholder. Firms and governments that are raising funds issue bonds. A firm may wish to buy some new machinery or build a new plant, so it needs to borrow to finance this investment. Or a government might issue bonds to finance the construction of a road or a school. The easiest way to think of a bond is that it is the asset associated with a loan. Here is a simple example. Suppose you loan a friend $100 for a year at a 6 percent interest rate. This means that the friend has agreed to pay you $106 a year from now. Another way to think of this agreement is that you have bought, for a price of $100, an asset that entitles you to $106 in a year’s time. More generally (as the definition makes clear), a bond may entitle you to an entire schedule of repayments.

## The Riskiness of Bonds

Bonds, like stocks, are risky.

• The coupon payments of a bond are almost always specified in dollar terms. This means that the real value of these payments depends on the inflation rate in an economy. Higher inflation means that the value of a bond has less worth in real terms.
• Bonds, like stocks, are also risky because of the possibility of bankruptcy. If a firm borrows money but then goes bankrupt, bondholders may end up not being repaid. The extent of this risk depends on who issues the bond. Government bonds usually carry a low risk of bankruptcy. 
It is unlikely that a government will default on its debt obligations, although it is not impossible: Iceland, Ireland, Greece, and Portugal, for example, have recently been at risk of default. In the case of bonds issued by firms, the riskiness obviously depends on the firm. An Internet start-up firm operated from your neighbor’s garage is more likely to default on its loans than a company like the Microsoft Corporation. There are companies that evaluate the riskiness of firms; the ratings provided by these companies have a tremendous impact on the cost that firms incur when they borrow. Inflation does not have the same effect on stocks as it does on bonds. If prices increase, then the fixed nominal payments of a bond unambiguously become less valuable. But if prices increase, firms will typically set higher nominal prices for their products, earn higher nominal profits, and pay higher nominal dividends. So inflation does not, in and of itself, make stocks less valuable. Toolkit: Section 17.8 "Correcting for Inflation" You can review the meaning and calculation of the inflation rate in the toolkit. One way to see the differences in the riskiness of bonds is to look at the cost of issuing bonds for different groups of borrowers. Generally, the rate at which the US federal government can borrow is much lower than the rate at which corporations borrow. As the riskiness of corporations increases, so does the return they must offer to compensate investors for this risk. ## Real Estate and Cars As you continue to walk down the street, you are somewhat surprised to see a real estate office and a car dealership on Wall Street. (But this is a fictionalized Wall Street, so why not?) Real estate is another kind of asset. Suppose, for example, that you purchase a home and then rent it out. The rental payments you receive are analogous to the dividends from a stock or the coupon payments on a bond: they are a flow of money you receive from ownership of the asset. Real estate, like other assets, is risky. The rent you can obtain may increase or decrease, and the price of the home can also change over time. The fact that housing is a significant—and risky—financial asset became apparent in the global financial crisis that began in 2007. There were many aspects of that crisis, but an early trigger of the crisis was the fact that housing prices decreased in the United States and around the world. If you buy a home and live in it yourself, then you still receive a flow of services from your asset. You don’t receive money directly, but you receive money indirectly because you don’t have to pay rent to live elsewhere. You can think about measuring the value of the flow of services as rent you are paying to yourself. Our fictional Wall Street also has a car dealership—not only because all the financial traders need somewhere convenient to buy their BMWs but also because cars, like houses, are an asset. They yield a flow of services, and their value is linked to that service flow. ## The Foreign Exchange Market Further down the street, you see a small store listing a large number of different three-letter symbols: BOB, JPY, CND, EUR, NZD, SEK, RUB, SOS, ADF, and many others. Stepping inside to inquire, you learn that that, in this store, they buy and sell foreign currencies. (These three-letter symbols are the currency codes established by the International Organization for Standardization (http://www.iso.org/iso/home.htm). 
Most of the time, the first two letters refer to the country, and the third letter is the initial letter of the currency unit. Thus, in international dealings, the US dollar is referenced by the symbol USD.) Foreign currencies are another asset—a simple one to understand. The return on foreign currency depends on how the exchange rate changes over the course of a year. The (nominal) exchange rate is the price of one currency in terms of another. For example, if it costs US$2 to purchase €1, then the exchange rate for these two currencies is 2. An exchange rate can be looked at in two directions. If the dollar-price of a euro is 2, then the euro price of a dollar is 0.5: with €0.5, you can buy US$1. Suppose that the exchange rate this year is US$2 to the euro, and suppose you have US$100. You buy €50 and wait a year. Now suppose that next year the exchange rate is US$2.15 to the euro. With your €50, you can purchase US$107.50 (because US$(50 × 2.15) = US$107.50). Your return on this asset is 7.5 percent. Holding euros was a good investment because the dollar became less valuable relative to the euro. Of course, the dollar might increase in value instead. Holding foreign currency is risky, just like holding all the other assets we have considered. The foreign exchange market brings together suppliers and demanders of different currencies in the world. In these markets, one currency is bought using another. The law of demand holds: as the price of a foreign currency increases, the quantity demanded of that currency decreases. Likewise, as the price of a foreign currency increases, the quantity supplied of that currency increases. Exchange rates are determined just like other prices, by the interaction of supply and demand. At the equilibrium exchange rate, the quantity of the currency supplied equals the quantity demanded. Shifts in the supply or demand for a currency lead to changes in the exchange rate.

Toolkit: Section 17.20 "Foreign Exchange Market" You can review the foreign exchange market and the exchange rate in the toolkit.

## Foreign Assets

Having recently read about the large returns on the Shanghai stock exchange and having seen that you can buy Chinese currency (the yuan, which has the international code CNY), you might wonder whether you can buy shares on the Shanghai stock exchange. In general, you are not restricted to buying assets in your home country. After all, there are companies and governments around the world who need to finance projects of various forms. Financial markets span the globe, so the bonds issued by these companies and governments can be purchased almost anywhere. You can buy shares in Australian firms, Japanese government bonds, or real estate in Italy. (Some countries have restrictions on asset purchases by noncitizens—for example, it is not always possible for foreigners to buy real estate.) But such restrictions notwithstanding, the menu of assets from which you can choose is immense. Indeed, television, newspapers, and the Internet report on the behavior of both US stock markets and those worldwide, such as the FTSE 100 on the London stock exchange, the Hang Seng index on the Hong Kong stock exchange, the Nikkei 225 index on the Tokyo stock exchange, and many others. You could buy foreign assets from one of the big financial firms that you visited earlier. It will be happy to buy foreign stocks or bonds on your behalf. 
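A minimal sketch of the exchange-rate return example above (an editorial addition, using the text's numbers: buy euros at US$2.00 per euro, convert back a year later at US$2.15 per euro):

```python
usd_start = 100.00
rate_now = 2.00      # US dollars per euro today
rate_later = 2.15    # US dollars per euro a year from now

euros = usd_start / rate_now
usd_end = euros * rate_later
print(f"{(usd_end - usd_start) / usd_start:.1%}")   # 7.5%
```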
Of course, if you choose to buy stocks or bonds associated with foreign companies or governments, you face all the risks associated with buying domestic stocks and bonds. The dividends are uncertain, there might be inflation in the foreign country, the price of the asset might change, and so on. In addition, you face exchange rate risk. If you purchase a bond issued in Mexico, you don’t know what exchange rate you will face in the future for converting pesos to your home currency. You may feel hesitant about investing in other countries. You are not alone in this. Economists have detected something they call home bias. All else being equal, investors are more likely to buy assets issued by corporations and governments in their own country rather than abroad.

## A Casino

Toward the end of your walk, you are particularly surprised to see a casino. Stepping inside, you see a casino floor, such as you might find in Las Vegas, Monaco, or Macau near Hong Kong. You are confronted with a vast array of betting opportunities. The first one you come across is a roulette wheel. The rules are simple enough. You place your chip on a number. After the wheel is spun, you win if—and only if—you guessed the number that is called. There is no skill—only luck. Nearby are the blackjack tables where a version of 21 is played. In contrast to roulette, blackjack requires some skill. As a gambler in blackjack, you have to make choices about taking cards or not. The objective is to get cards whose sum is as high as possible without going over 21. If you do go over 21, you lose. If the dealer goes over 21 and you don’t, you win. If neither of you goes over 21, then the winner is the one with the highest total. There is skill involved in deciding whether or not to take a card. There is also a lot of luck involved through the draw of the cards. You always thought of stocks and bonds as serious business. Yet, as you watch the players on the casino floor, you come to realize that it might not be so peculiar to see a casino on Wall Street. Perhaps there are some similarities between risking money at a gambling table and investing in stocks, bonds, or other assets. As this chapter progresses, you will see that there are some similarities between trading in financial assets and gambling in a casino. But you will learn that there are important differences as well.

### Key Takeaways

• Many different types of assets, such as stocks, bonds, real estate, and foreign currency, are traded in financial markets.
• Your earnings from owning an asset depend on the type of asset. If you own a stock, then you are paid dividends and also receive a capital gain or incur a capital loss from selling the asset. If you own real estate, then you have a flow of rental payments from the property and also receive a capital gain or incur a capital loss from selling the asset.
• Risks also depend on the type of asset. If you own a bond issued by a company, then you bear the risk of that company going bankrupt and being unable to pay off its debt.

### Checking Your Understanding

1. If you live in a house rather than rent it, do you still get some benefits from ownership? How would these benefits compare with the income you could receive if you rented out the house?
2. What assets are subject to the risk of bankruptcy?
http://math.stackexchange.com/questions/34687/two-elementary-games-in-number-theory
# Two elementary games in number theory

I solved these two problems from a programming challenge website: numgame and numgame2. These two problems are very similar. In the first one, the position is a number $n$ and each player can subtract from $n$ a divisor $d$ of $n$ with $1 \leq d < n$. Players Alice and Bob alternate with Alice going first, and the first person unable to move loses. The second problem is similar, except each player can subtract a prime number $p$, or 1, from the current position $n$, with $p < n$ ($p$ is not necessarily a divisor of $n$). We assume that the players play optimally and, as usual, ask who the winner is given the initial value of $n$. The claim is that in the first game Alice wins if $n \equiv 0 \pmod{2}$ and Bob wins if $n \equiv 1 \pmod{2}$, while in the second game Alice wins if $n \equiv 1 \pmod{4}$ and Bob wins otherwise. I'm looking for someone to give me a proof of these answers, or some hints to start.

- 2 If possible, please add into the question the rules of the games, and not just a link and a jumbled explanation. – Asaf Karagila Apr 23 '11 at 10:50

In particular the rules include that two players Alice and Bob alternate. While the first game (numgame) has Alice play first, the second game (numgame2) requires Bob to play first. In both games a player loses if no valid move is available at their turn, i.e. if the remaining number is 1. – hardmath Apr 23 '11 at 11:14

In the first problem the player who starts with a prime number loses, not necessarily 1. – Vicfred Apr 23 '11 at 11:23

Because the rules of the first game allow for a proper divisor, if the player Alice starts with n = 2 (a prime), she has a valid move in taking away 1, leaving Bob with a losing position. Apart from that, of course, primes are odd and thus illustrate the claimed classification of outcomes for the first game (n odd gives Bob a win). – hardmath Apr 24 '11 at 10:39

## 4 Answers

Hint for the first problem: If the current position $n$ is even and it's Alice's turn, can she always make a move into an odd position? If the current position $n>1$ is odd and it's Bob's turn, what can you prove about the parity of the position he must move into?

-

Second problem: Bob can never change a number of the form $4k+1$ into a number of the same form (neither 1 nor any prime is divisible by 4, so no legal move subtracts a multiple of 4). Alice can always change a number not of this form into a number of this form by subtracting 1, 2, or 3 as needed.

-

Hint for the first problem: Who wins if $n=2$? Who wins if $n=1$? If $n$ is even, can Alice ensure that after 2 turns (one turn by Alice and one by Bob) the number is still even?

-

1st Game

1) Let n = value of the current position.
2) n = 1 x n, so n is always divisible by 1.
3) If n is even, Alice can always subtract 1, leaving Bob with an odd value of n.
4) An odd number has no even divisors, since if n had an even divisor 2a, say n = 2ab, then n would be even.
5) If n is odd, Bob must always leave Alice with an even value of n, since all divisors are odd and he must subtract an odd number from n, while the difference between two odd numbers must always be even ((2x+1)-(2y+1) = 2(x-y)).
6) Thus, by 3) and 5), with each subsequent pair of turns, Alice can always leave Bob with a smaller odd position n.
7) This strategy leads to a descent that eventually must leave Bob with the odd position n = 1, and he loses. The descent could be hastened by Alice if she chose the largest odd divisor to subtract instead of 1 (which is the smallest) when it was her turn.

-
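A brute-force check of both claimed patterns (an editorial sketch, not from the thread; it recomputes the win/lose classification by memoized game search rather than relying on the parity and mod-4 arguments). Recall from the comments that Bob moves first in the second game, so "Alice wins when $n \equiv 1 \pmod 4$" is the same as "the player to move loses when $n \equiv 1 \pmod 4$".

```python
from functools import lru_cache

def is_prime(k):
    if k < 2:
        return False
    return all(k % d for d in range(2, int(k**0.5) + 1))

@lru_cache(maxsize=None)
def win1(n):
    # Game 1: subtract a divisor d of n with 1 <= d < n; stuck (i.e. losing) at n = 1.
    return any(not win1(n - d) for d in range(1, n) if n % d == 0)

@lru_cache(maxsize=None)
def win2(n):
    # Game 2: subtract 1 or any prime p < n; stuck (i.e. losing) at n = 1.
    moves = [1] + [p for p in range(2, n) if is_prime(p)]
    return any(not win2(n - m) for m in moves if n - m >= 1)

for n in range(1, 60):
    assert win1(n) == (n % 2 == 0)   # player to move wins iff n is even
    assert win2(n) == (n % 4 != 1)   # player to move wins iff n is not 1 mod 4
print("both patterns confirmed for n up to 59")
```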
http://math.stackexchange.com/questions/193897/how-can-i-get-sequence-4-4-2-4-4-2-4-4-2-ldots-into-equation/193907
# How can I get the sequence $4,4,2,4,4,2,4,4,2\ldots$ into an equation? How can I write an equation that expresses the nth term of the sequence: $$4, 4, 2, 4, 4, 2, 4, 4, 2, 4, 4, 2,\ldots$$

- 31 nth term of decimal expansion 442/999 ;) – wim Sep 11 '12 at 13:25

Why isn't this posted to tex stackexchange? – g33kz0r Sep 12 '12 at 3:09

3 @g33kz0r Why should it be? I don't think the question was how to literally write (in TeX) the equation. – Austin Mohr Sep 12 '12 at 12:04

## 16 Answers

$$\frac{14}{3} - \frac{8}{3}\cos^2 (\frac{2 \pi n}{3})$$

-- Added: The original formula was typed late at night, and suffered from a couple of computational blunders; hopefully the present formula is correct. Of course, the square on the cosine is unnecessary (I only put it there because I thought, due to miscalculation, that it simplified the coefficients). In some sense the more natural formula is the one without the squared cosine, namely $$\frac{10}{3} - \frac{4}{3}\cos(\frac{2 \pi n}{3})$$ (as noted by the OP below). Note that the existence of such a formula is not accidental or without interest. It is an illustration of finite Fourier theory (or, if you prefer, character theory of the finite abelian group $\mathbb Z/3\mathbb Z$). In general, any function of $n$ that depends only on $n \bmod N$ can be written as a linear combination of the functions $e^{2 \pi i n /N}$. The most familiar example is probably the formula $(-1)^n$ for the sequence $-1,1,-1,1,\ldots$. Whether such a formula is ever computationally useful is outside my area of expertise, but there is no doubt about the theoretical utility of finite Fourier theory. [See Lubin's answer for an answer more explicitly in keeping with this remark.]

- 3 @MattE: Exactly what I need, BUT, this seems to work better: $$4(\frac{5}{6}-\frac{1}{3}\cos(\frac{2\pi n}{3} ))$$ – ben Sep 11 '12 at 5:14

1 – Deebster Sep 11 '12 at 14:04

5 Amazing how the most computationally inefficient formula is the most up voted. – asmeurer Sep 11 '12 at 18:38

2 @asmeurer: Dear asmeurer, You might find the formula dubious, but in fact it is an example of finite Fourier theory: a function depending on the congruence class of $n$ mod $N$ admits a Fourier expansion. It is analogous to writing $-1,1,-1,1, \ldots$ as $(-1)^n$, and is useful in some contexts (at least theoretical ones, which is what I am more familiar with). Regards, – Matt E Sep 12 '12 at 0:30

1 It certainly is mathematically interesting. But if the user wanted it for a program (possible, but it wasn't stated), then @AlexBecker's answer would be better. I've seen very computationally inefficient answers to sequence problems on math SE for problems where the question specifically stated it was for a computer program. See for example math.stackexchange.com/questions/162495/…. – asmeurer Sep 13 '12 at 17:11

How about $$x_n=\begin{cases} 4 &\text{if }n\equiv 0,1\:(\bmod 3)\\ 2 &\text{if }n\equiv 2\:(\bmod 3)\\ \end{cases}$$ assuming you start indexing from $0$.

- Assuming you only want to determine the nth entry of a sequence, this would be the most efficient way of calculating it. – dstibbe Sep 11 '12 at 9:18

20 Seriously? My highest upvoted non CW answer is this? – Alex Becker Sep 11 '12 at 17:41

2 ROFL. I'm guessing this is merely caused by people wanting to top the "approved" answer, since that answer is computationally very inefficient. – dstibbe Sep 11 '12 at 20:37

3 At least it is a better answer than the one offered by that lunkhead Austin Mohr. 
– MJD Sep 11 '12 at 22:29

1 @Matt E I'm quite familiar with the Fourier theory. However, if someone is not going to use the formula for anything else besides determining the value of the nth position, then it is overly complex (and thus inefficient) when the above formula provides the same. – dstibbe Sep 12 '12 at 8:39

$$f(n) = \begin{cases} 4 \text{ if } n \equiv 0 \text{ or } 1 \text{ (mod 3)}\\ 2 \text{ if } n \equiv 2 \text{ (mod 3)} \end{cases}$$

-

$$4-2\cdot\mathbf 1_{3\mid n}\qquad\text{or}\qquad 2+2\cdot\mathbf 1_{\gcd(3,n)=1}$$

- 8 Which has exactly enough characters to be accepted. – Did Sep 11 '12 at 5:52

2 And even a downvote... Hallelujah! – Did Sep 11 '12 at 18:47

Expanded version with many more characters. – Did Sep 15 '12 at 4:35

$$a_n:=\left\{\begin{array}{}4\,,&\text{if}\,\;\;n\neq 0\pmod 3\\2\,,&\text{if}\,\;\;n=0\pmod 3\end{array}\right....?$$

- Or 2+ 2*(n %% 3 != 0) . (Not sure of the local convention for modulo remainder so used the R operator) – DWin Sep 11 '12 at 16:43

The "quotients" $a_j$ of the simple continued fraction for $$\frac{17 + \sqrt {442}}{9}.$$ See PURELY PERIODIC

-

$a_{n+2} = |a_{n+1} - a_n| + 2$, where $a_1 = a_2 = 4$.

-

$$\Large 2^{2-0^{(n \text{ mod } 3)}}$$

-

Try $a_n=2^{1+\lceil n/3 \rceil - \lfloor n/3 \rfloor}$, where $\lceil n/3 \rceil$ is the least integer $\geq n/3$ and where $\lfloor n/3 \rfloor$ is the greatest integer $\leq n/3$. Then, if 3 divides $n$, you get $2^1$; if it doesn't, you get $2^2$.

-

$$x_n= 2+ 3 \left\{ \frac{n}{3} \right\} + 3\left\{ \frac{n}{3}\right\}\left(2- 3\left\{ \frac{n}{3} \right\}\right) \,.$$ where $\{ \}$ denotes the fractional part.

-

Mimicking the cosine answer of @Matt E, I suggest setting $\omega=(-1+\sqrt{-3})/2$ (a primitive cube root of unity) and taking $a_n=2+2|\omega^n-\omega^{2n}|/\sqrt3$.

- Dear Lubin, Indeed this is what I had in mind (as I indicated in an edit to my answer, which also seemed to be rife with typos; hopefully they are now fixed and agree with your answer). Regards, – Matt E Sep 12 '12 at 1:05

The On-Line Encyclopedia of Integer Sequences is always a good place to start looking for sequences like this. One often needs to search excluding a constant multiplicative factor, drop a few initial terms, and/or add a constant (as Théophile points out below). For this one we can use 1,2,2,1,2,2,1,2,2,... (http://oeis.org/A130196), drop the initial "1" and multiply by 2.

- 1 Alternatively, $0,1,1,0,1,1, ...$ . – Théophile Sep 12 '12 at 13:39

$$x_n = 3 + (-1)^{((n+2) \bmod 3)}$$

-

$x_n = 4(n^2 \bmod 3)+2(1-(n^2 \bmod 3))=2+2(n^2 \bmod 3)$, assuming you start indexing from $1$.

-

$2n^2+4n+4 \pmod 6$ for $n \geq 0$.

-

$x_n= \begin{cases} 4,&n=0,1\\ (x_{n-2} + x_{n-1})\,\bmod 4 + 2,&n\ge2 \end{cases}$

- One can avoid using 'mod' by writing the second clause as $10-x_{n-2}-x_{n-1}$ – Marc van Leeuwen Oct 7 '12 at 12:36
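A short script checking several of the proposed formulas against the target pattern (an editorial sketch, not from the thread; the indexing conventions differ between answers, so each formula is tested with the offset its author states):

```python
import math

def target(i):
    # i = 0, 1, 2, ...; pattern 4, 4, 2, 4, 4, 2, ...
    return 2 if i % 3 == 2 else 4

formulas = {
    # cosine formula, indexed from n = 1: 10/3 - (4/3) cos(2 pi n / 3)
    "cosine": lambda i: round(10/3 - 4/3 * math.cos(2 * math.pi * (i + 1) / 3)),
    # indicator formula, indexed from n = 1: 4 - 2 * [3 divides n]
    "indicator": lambda i: 4 - 2 * ((i + 1) % 3 == 0),
    # power formula, indexed from n = 1: 2^(2 - 0^(n mod 3))
    "power": lambda i: 2 ** (2 - 0 ** ((i + 1) % 3)),
    # quadratic formula, indexed from n = 0: 2n^2 + 4n + 4 (mod 6)
    "quadratic": lambda i: (2 * i * i + 4 * i + 4) % 6,
}

for name, f in formulas.items():
    assert all(f(i) == target(i) for i in range(30)), name
print("all tested formulas reproduce 4, 4, 2, 4, 4, 2, ...")
```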
http://mathoverflow.net/questions/65460/coefficients-of-lacunary-series-on-quasiconformally-transformed-unit-disk/67523
Coefficients of lacunary series on quasiconformally transformed unit disk

Say I have a lacunary $q$ series $s(q)=\sum_{n=0}^{\infty} a_{n}q^{n}$, and I have a quasiconformal transformation $\xi$ which preserves the boundary of the unit disk in $\mathbb{C}$, such that if $|q|=1$ then $|\xi(q)|=1$. Is there a method from Teichmüller theory that allows us to explicitly write down the coefficients $b_{n}$ of $s(\xi(q)) = \sum_{n=0}^{\infty} a_{n}\xi(q)^{n} = \sum_{n=0}^{\infty} b_{n}q^{n}$ given some explicit $\xi$?

- 1 Answer

The answer is no: pick $s(q)=q$ to be simply the identity; then you would obtain automatically a power series expansion for any quasiconformal $\xi$ preserving the circle, which can't be true since there are some $\xi$ that are not analytic near the origin...

-
http://www.physicsforums.com/showthread.php?s=b13b5dbbbecd3e16f10b79c26b5cc69c&p=4275475
Physics Forums

## Trying to determine initial thrust on an object from angular velocity?

Hi all, I built an electric paper airplane launcher and I'm trying to figure out how much force or thrust is being applied to my paper airplane when it's launched. The setup looks like this: The discs are 124mm in diameter, are spinning at approximately 5800 RPM each, and are about 1mm apart (far enough not to touch but close enough to grab the airplane). The airplane is 7 inches long. I'm trying to figure out how much force is transferred to the airplane. My initial guess was to convert the angular velocity of the discs into linear velocity, and multiply that by the mass of the plane... but I don't think that's accurate. First, I'm not sure that two discs spinning in opposite directions at 5800 RPM each equals an angular velocity of 11600 RPM. I'm not sure that I can combine them that way. Second, assuming that I figure out the combined linear velocity of both discs, I am not sure how that is transferred to the airplane. The discs are applying force to the airplane for the total distance of its length, but I'm not sure how that factors into the airplane's initial acceleration. I'm sure there is a lot I'm missing here... just looking for a point in the right direction. Thank you!

Hi, unfortunately you can't add the angular velocities of the wheels like that. Just like your car has four wheels going 60 mph each, but the car doesn't go 240 mph, in your launcher the second wheel just allows there to be good grip and provide more power with a second motor. You are correct that you take the angular velocity of the discs and convert it into the linear velocity of the plane. For a spinning disc, the rim speed is given by $v = \omega r = \left(5800\ \text{rpm} \cdot \tfrac{2\pi}{60}\right) \cdot 62\ \text{mm} \approx 607\ \text{rad/s} \cdot 62\ \text{mm} \approx 37{,}700\ \text{mm/s} \approx 38\ \text{m/s}$. So roughly 38 m/s will be the maximum speed of your plane. Thrust is a concept that applies to jets and rockets, so here we're just considering force. The trick is that the plane isn't actually accelerating during the whole time it passes through your launcher. I (of course) haven't seen your launcher, but I'm guessing it is like a baseball shooter? You put the plane in, and pretty much instantly it reaches full speed and shoots itself out. Force is proportional to acceleration, so if something isn't accelerating, there isn't a net force being applied. So in your case, the plane very quickly reaches its maximum speed, then just keeps passing out from the launcher. So to get the force, you need to have an idea of how long it takes the plane to reach its maximum speed. This is a topic called impulse-momentum theory, which you can read more about on Wikipedia. The idea, though, is that if the acceleration takes place in a sufficiently small amount of time, then you can approximate Newton's law. Usually you will see it as F=ma, but this is the same as saying F=dp/dt, where p is momentum. In our small-time approximation, we can say that $F=\Delta p/\Delta t$, where $\Delta p$ is the change in momentum during the time interval and $\Delta t$ is the change in time. For your plane (if I assume the mass is about 5 g, which is roughly standard for a sheet of printer paper), the change in momentum is $\Delta p = m\,\Delta v \approx 0.005\ \text{kg} \times 38\ \text{m/s} \approx 0.19\ \text{kg·m/s}$. $\Delta t$ is something you would have to measure or estimate: 
perhaps, as an idea, if you had a camera, film the launch with a ruler behind, look at it frame by frame, and estimate how long it takes the plane to accelerate. But say, as an example, it took 50ms to accelerate.. Then the force is Δp/Δt = 1500/50 g m/s^2 = 30 g m/s^2 = 0.03 kg m/s^2 = 0.03 Newtons = 0.00674 pounds force. A good time estimate is crucial, and this is still an estimation, but this will at least give you an idea. Hope this helps.
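For anyone who wants to redo the numbers, here is a small back-of-envelope script. It is only a sketch: the 50 ms spin-up time and the plane mass are assumptions (a full sheet of 80 gsm printer paper is closer to 5 g than 250 mg), and the RPM-to-rim-speed conversion below includes the factor of 2π (v = 2πR·RPM/60), which gives a noticeably higher top speed than the rough figure quoted earlier in the thread.

```python
import math

# Assumed inputs -- adjust to the actual launcher and plane:
rpm = 5800            # disc speed from the thread
radius = 0.062        # m, half of the 124 mm disc diameter
mass = 0.005          # kg, assumed mass of the folded plane (~ one sheet of 80 gsm paper)
spinup_time = 0.050   # s, assumed time for the plane to reach full speed

# Rim speed: v = omega * r with omega in rad/s, i.e. v = 2*pi*(rpm/60)*r
omega = rpm * 2 * math.pi / 60      # rad/s
v_max = omega * radius              # m/s, upper bound on the launch speed

# Impulse-momentum estimate: F ~ delta p / delta t
momentum = mass * v_max             # kg*m/s
force = momentum / spinup_time      # N

print(f"rim speed ~ {v_max:.1f} m/s")
print(f"momentum  ~ {momentum:.3f} kg*m/s")
print(f"avg force ~ {force:.2f} N  ({force / 4.448:.2f} lbf)")
```

Filming the launch, as suggested above, is still the best way to replace the assumed spin-up time with a measured one.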
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9593822360038757, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2008/09/08/cauchys-condition-for-uniform-convergence/?like=1&source=post_flair&_wpnonce=2c354f29b9
# The Unapologetic Mathematician ## Cauchy’s Condition for Uniform Convergence As I said at the end of the last post, uniform convergence has some things in common with convergence of numbers. And, in particular, Cauchy’s condition comes over. Specifically, a sequence $f_n$ converges uniformly to a function $f$ if and only if for every $\epsilon>0$ there exists an $N$ so that $m>N$ and $n>N$ imply that $|f_m(x)-f_n(x)|<\epsilon$. One direction is straightforward. Assume that $f_n$ converges uniformly to $f$. Given $\epsilon$ we can pick $N$ so that $n>N$ implies that $|f_n(x)-f(x)|<\frac{\epsilon}{2}$ for all $x$. Then if $m>N$ and $n>N$ we have $|f_m(x)-f_n(x)|<|f_m(x)-f(x)|+|f(x)-f_n(x)|<\frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon$ In the other direction, if the Cauchy condition holds for the sequence of functions, then the Cauchy condition holds for the sequence of numbers we get by evaluating at each point $x$. So at least we know that the sequence of functions must converge pointwise. We set $f(x)=\lim\limits_{n\rightarrow\infty}f_n(x)$ to be this limit, and we’re left to show that the convergence is uniform. Given an $\epsilon$ the Cauchy condition tells us that we have an $N$ so that $n>N$ implies that $|f_n(x)-f_{n+k}(x)|<\frac{\epsilon}{2}$ for every natural number $k$. Then taking the limit over $k$ we find $|f_n(x)-f(x)|=\lim\limits_{k\rightarrow\infty}|f_n(x)-f_{n+k}(x)|\leq\frac{\epsilon}{2}<\epsilon$ Thus the convergence is uniform. ## 3 Comments » 1. [...] we’ve got Cauchy’s condition: a series converges uniformly if for every there is an so that and both greater than zero [...] Pingback by | September 9, 2008 | Reply 2. In the Cauchy –> Uniform convergence direction, could you clarify the step “taking the limit over k”? until this point you have only established pointwise convergence. But in this step it feels like you’ve presupposed uniform convergence in saying f_n+k –> f(x). I know you’re not doing anything wrong, because Rudin has the same proof. I’m just confused and would appreciate the help Comment by Anirudh | February 21, 2011 | Reply • I believe they forgot to mention that the conditions for the Cauchy criterion here also include that for each epsilon the N chosen must hold for all x in the domain of the sequence of functions; from there you can see that is indeed true. Comment by Armitage | January 6, 2013 | Reply
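As a quick numerical companion to the criterion (a sketch only: the supremum is approximated on a finite grid, so this illustrates rather than proves anything), one can compare a uniformly convergent sequence with one that converges only pointwise:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 10001)

def sup_diff(f, m, n):
    """Approximate sup_x |f_m(x) - f_n(x)| on [0, 1] via a fine grid."""
    return np.max(np.abs(f(m, x) - f(n, x)))

f = lambda n, x: x / n        # converges uniformly to 0 on [0, 1]
g = lambda n, x: x ** n       # converges only pointwise on [0, 1]

for n in (10, 100, 1000):
    print(n, sup_diff(f, 2 * n, n), sup_diff(g, 2 * n, n))
```

The sup-differences for $x/n$ shrink to zero, matching the uniform Cauchy condition, while those for $x^n$ on $[0,1]$ stay near $1/4$, so that sequence is not uniformly Cauchy.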
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 26, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9228771924972534, "perplexity_flag": "head"}
http://stats.stackexchange.com/questions/tagged/validation+regression
# Tagged Questions 1answer 55 views ### validating a regression model I want to build a regression model to predict daily income from customers. I have 2 problems: 1. Choosing data for the training set - do I use daily income from 1 month ago, 6 month ago etc. 2. How do ... 0answers 87 views ### Question about the validation step for a multinomial logit model I've been skimming through a couple of books (all german ones, hence I do not cite them here) at what residual plots one should look at if the usual model assumptions in the context of a multinomial ... 2answers 168 views ### Computing c-index for an external validation of a Cox PH model with R First off, I'll state that I'm aware many questions get asked about the c-index. I've searched this site and others, and I haven't found an answer for my situation. I can successfully use ... 2answers 216 views ### In logistic regression, does the lack of significance of the parameter estimates in a test sample indicate overfitting? I am trying to build a logistic regression model where I have a dependent variable $y$ and independent variables $x_1$, $x_2$... $x_n$. $y$ can take only two values - 0 or 1. My original modelling ... 0answers 187 views ### Logistic Regression Cost Function issue in Matlab I'm trying to implement a logistic regression function in matlab. I calculated the theta values, linear regression cost function is converging and then I use those parameters in logistic regression ... 1answer 68 views ### Is a different CV arrangement the same as a validation set? I have a smallish dataset ~ 1500 rows X 500 columns. I've been using a standard 5 fold CV setup where row 1 = CV set1, row2 = CV set2, ... row 6 = CV set1,etc. I'm at the point where I'm trying to ...
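Several of these questions come down to the mechanics of hold-out or k-fold validation of a regression fit. As a neutral reference point (purely illustrative: synthetic data, ordinary least squares, nothing tied to the datasets described above), a minimal k-fold loop looks like the sketch below; for time-ordered targets such as daily income, splitting chronologically rather than at random is usually the safer choice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data standing in for the regression problem (purely illustrative)
n, p = 1500, 5
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + rng.normal(scale=0.5, size=n)

def kfold_rmse(X, y, k=5):
    """Plain k-fold cross-validation of an ordinary least-squares fit."""
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    scores = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        resid = y[test] - X[test] @ beta
        scores.append(np.sqrt(np.mean(resid ** 2)))
    return np.mean(scores), np.std(scores)

print(kfold_rmse(X, y))   # mean and spread of the out-of-fold RMSE
```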
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8820997476577759, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/6384/why-does-dilation-invariance-often-imply-proper-conformal-invariance?answertab=active
# Why does dilation invariance often imply proper conformal invariance? Why does a quantum field theory invariant under dilations almost always also have to be invariant under proper conformal transformations? To show your favorite dilatation invariant theory is also invariant under proper conformal transformations is seldom straightforward. Integration by parts, introducing Weyl connections and so on and so forth are needed, but yet at the end of the day, it can almost always be done. Why is that? - 6 You can look for Polchinski's paper "Scale and Conformal Invariance in QFT" which discusses the issue in detail. Maybe there have been some developments since then, but that is a good starting point. – user566 Mar 5 '11 at 18:56 2 For 4d field theory it's still an open question whether scale invariance implies conformal invariance. There's been some recent work on this topic by Slava Rychkov and collaborators, see e.g. 1101.5385. – Matt Reece Mar 6 '11 at 23:06 2 By the way, given that scale invariance does not imply conformal invariance, maybe the question can be rephrased. – user566 Mar 9 '11 at 18:14 ## 3 Answers As commented in previous answers, conformal invariance implies scale invariance but the converse is not true in general. In fact, you can have a look at Scale Vs. Conformal Invariance in the AdS/CFT Correspondence. In that paper, authors explicitly construct two non trivial field theories which are scale invariant but not conformally invariant. They proceed by placing some conformal field theories in flat space onto curved backgrounds by means of the AdS/CFT correspondence. - 2 Thanks for that, somehow I missed this one, it is really interesting. – user566 Mar 9 '11 at 18:14 I knew about this paper but reading this question, made it came to my mind again (fortunately, because it's very interesting) – xavimol Mar 9 '11 at 19:23 The rule-of-thumb is that 'conformal ⇒ scale', but the converse is not necessarily true (some condition(s) needs to be satisfied) — but, of course, this varies with the dimensionality of the problem you're dealing. PS: Polchinski's article: Scale and conformal invariance in quantum field theory. - It's not a rule-of-thumb that conformal implies scale, it's just a fact. The conditions are mostly locality and low-order of derivatives, which is sometimes imposed by unitarity and renormalizability. – Ron Maimon May 4 '12 at 19:43 @RonMaimon: Conformal invariance requires scale invariance in a Poincare invariant theory simply because of the commutator $[K_\mu,P_\nu]=2i(\eta_{\mu\nu}D-M_{\mu\nu})$. The notation should be obvious. – AndyS Jun 15 '12 at 23:10 @AndyS: The very existence of D in the conformal group is enough to show conformal implies scale--- it's not a rule of thumb, it's an obvious implication, that's what the comment above was trying to say. You don't need the commutator business to show this, the dilatation is a conformal transformation all by itself. – Ron Maimon Jun 16 '12 at 6:51 @RonMaimon: What you're saying is not true; you need the commutator in order to prove what you call "an obvious implication". Also, there is a clear distinction between dilatations and special conformal transformations. – AndyS Jun 19 '12 at 3:23 @AndyS: What I am saying is true, and you are saying nonsense. Dilatations are to conformal invariance as rotations about the z-axis are to rotations. They are a special case. If you have rotational invariance, you have rotational invariance around the z-axis. If you have conformal invariance, you have dilatation invariance. 
This is not an arguable point, it is not a difficult point, and I don't know why you make the comment above. – Ron Maimon Jun 19 '12 at 15:48 Maybe this does it: $\begin{array}{rccl} \textrm{Translation:}&P_\mu&=&-i\partial_\mu\\ \textrm{Rotation:}&M_{\mu\nu}&=&i(x_\mu\partial_\nu-x_\nu\partial_\mu)\\ \textrm{Dilation:}&D&=&ix^\mu\partial_\mu\\ \textrm{Special Conformal:}&C_\mu&=&-i(\vec{x}\cdot\vec{x}\,\partial_\mu-2x_\mu\,\vec{x}\cdot\partial) \end{array}$ Then the commutation relation gives: $[D,C_\mu] = -iC_\mu$ so $C^\mu$ acts as raising and lowering operators for the eigenvectors of the dilation operator $D$. That is, suppose: $D|d\rangle = d|d\rangle$ By the commutation relation: $DC_\mu - C_\mu D = -iC_\mu$ so $DC_\mu|d\rangle = (C_\mu D -iC_\mu)|d\rangle$ and $D(C_\mu|d\rangle) = (d-i)(C_\mu|d\rangle)$ But given the dilational eigenvectors, it's possible to define the raising and lowering operators from them alone. And so that defines the $C_\mu$. P.S. I cribbed this from: http://web.mit.edu/~mcgreevy/www/fall08/handouts/lecture09.pdf - Let's say you cited Mr. McGreevy. – mbq♦ Mar 6 '11 at 13:04 The issue is that it just isn't true that scale invariance implies conformal invariance. The simplest counterexample is some self-interacting Levy field theory. – Ron Maimon May 4 '12 at 19:44
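For readers who want to check the algebra in the last answer, here is a small sympy sketch. It strips the factors of $i$ from the generators and uses a Euclidean metric, so it only verifies the convention-independent part of the statement, namely that the special conformal generator shifts the eigenvalue of $x\cdot\partial$ by exactly one unit; the sign of the $i$ in $[D,C_\mu]=-iC_\mu$ depends on the conventions used.

```python
import sympy as sp

# Three Euclidean coordinates and a generic test function.
x = sp.symbols('x0:3', real=True)
f = sp.Function('f')(*x)

r2 = sum(xi**2 for xi in x)
xdel = lambda g: sum(xi * sp.diff(g, xi) for xi in x)   # x . del  (dilatation, up to i)

def K(mu, g):
    """Special conformal generator in the standard form x^2 d_mu - 2 x_mu (x . del), up to i."""
    return r2 * sp.diff(g, x[mu]) - 2 * x[mu] * xdel(g)

for mu in range(3):
    comm = xdel(K(mu, f)) - K(mu, xdel(f))        # [x.del, K_mu] applied to f
    assert sp.simplify(comm - K(mu, f)) == 0      # equals +1 * K_mu f

print("[x.del, K_mu] = K_mu verified: K_mu shifts the dilatation eigenvalue by one unit")
```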
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9239389896392822, "perplexity_flag": "middle"}
http://mathhelpforum.com/trigonometry/41988-complex-number.html
Thread: 1. Complex Number Find the complex number $z$ which satisfies both $|z-3i|=3$ and $arg(z-3i)=\frac{3\pi}{4}$. Thanks in advance. 2. Hi ! Originally Posted by Air Find the complex number $z$ which satisfies both $|z-3i|=3$ and $arg(z-3i)=\frac{3\pi}{4}$. Thanks in advance. $z-3i=|z-3i|e^{i 3 \pi/4}$ $z-3i=3*(\cos 3 \pi /4+i \sin 3 \pi /4)=3*\left(-\frac{\sqrt{2}}{2}+i \frac{\sqrt{2}}{2}\right)$ 3. Originally Posted by Moo Hi ! $z-3i=|z-3i|e^{i 3 \pi/4}$ $z-3i=3*(\cos 3 \pi /4+i \sin 3 \pi /4)=3*\left(-\frac{\sqrt{2}}{2}+i \frac{\sqrt{2}}{2}\right)$ Is there an alternative method which would use substitution? 4. Originally Posted by Air Is there an alternative method which would use substitution? Hmmm you mean substituting $z-3i=a+ib$ ? $3=|z-3i|=\sqrt{a^2+b^2}$ $3 \pi /4=arg(z-3i)=arctan \frac ba$ ---> $\frac ba=\tan (3 \pi /4)=-1$ $a=-b$ , $a \neq 0$ Substituting in the modulus : $3=\sqrt{a^2+a^2}$ $9=2a^2$ $a=\pm \frac{3}{\sqrt{2}}=\pm 3 \cdot \frac{\sqrt{2}}{2}$ But now, how to get if it's + or -, I still have to think about it 5. Originally Posted by Moo But now, how to get if it's + or -, I still have to think about it A friend of mine explained it to me. While dealing with squared numbers, it's not bijective over all real numbers. So you can't talk in terms of equivalence. That is to say that when you do the substitution here, you will have to check back if the results you've got satisfy the conditions. If $a=3 \cdot \frac{\sqrt{2}}{2}$, then $b=-3 \cdot \frac{\sqrt{2}}{2}$ $arg(z-3i)=arg \left(3 \cdot \frac{\sqrt{2}}{2}-i \cdot 3 \cdot \frac{\sqrt{2}}{2}\right)=arg \left(\frac{\sqrt{2}}{2}-i \cdot \frac{\sqrt{2}}{2}\right)= -\frac{\pi}{4} \neq \frac{3 \pi}{4} \quad \square$ Then, try out $a=-3 \cdot \frac{\sqrt{2}}2 \dots \dots \dots \dots \dots \blacksquare$ 6. Originally Posted by Air Find the complex number $z$ which satisfies both $|z-3i|=3$ and $arg(z-3i)=\frac{3\pi}{4}$. Thanks in advance. A geometrical approach would be to note that: 1. $|z-3i|=3$ defines a circle of radius 3 and centre C at z = 3i. 2. $\text{arg} \, (z-3i)=\frac{3\pi}{4}$ defines a ray with terminus at z = 3i and making an angle $\frac{3\pi}{4}$ with the positive direction of the horizontal. It's then easy to see that the required value of z is the point A of intersection of the circle and the ray. There's an obvious isosceles triangle AOC (OC and AC have length 3 and angle ACO = $\frac{\pi}{2} + \frac{\pi}{4} = \frac{3 \pi}{4}$. (O is the origin). A small bit of geometry and trigonometry, a gentle caress and the triangle gives it all to you: $\text{arg} \, (z) = \frac{\pi}{2} + \frac{\pi}{8} = \frac{5 \pi}{8}$ $|z| = OA = 6 \cos \frac{\pi}{8} = 3 \sqrt{2 + \sqrt{2}}$.
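A quick numerical check (nothing more than arithmetic) confirms that the algebraic and geometric answers above agree:

```python
import cmath, math

# z - 3i written in modulus-argument form, as in the first reply.
z = 3j + 3 * cmath.exp(1j * 3 * math.pi / 4)

print(abs(z - 3j))                                # ~ 3, so |z - 3i| = 3
print(cmath.phase(z - 3j), 3 * math.pi / 4)       # both ~ 2.356, so arg(z - 3i) = 3*pi/4
print(z)                                          # ~ -2.121 + 5.121j, i.e. -3*sqrt(2)/2 + (3 + 3*sqrt(2)/2) i

# Cross-check of the geometric answer for z itself:
print(abs(z), 3 * math.sqrt(2 + math.sqrt(2)))    # both ~ 5.543
print(cmath.phase(z), 5 * math.pi / 8)            # both ~ 1.963
```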
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 32, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9404507875442505, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/104704?sort=newest
## Azuma’s Inequality when the conditions hold with high probability? In Azuma's Inequality, is the statement true when $|X_k - X_{k-1}| < c_k$ almost surely rather than with probability 1? If not, is there another result which gives strong concentration when the above inequality (for each $k$) holds with high probability? - Do you mean almost surely vs. surely? Almost surely implies probability 1... – fuzzytron Aug 14 at 16:26 Yes, I mean almost surely instead of surely. – Patt Geffrey Aug 14 at 16:31 What do you mean by high probability? Do you just want $|X_k - X_{k-1}|< c_{k}$ to hold with probability tending to $1$, as $k \to \infty$? I guess, that when this convergence is fast enough, then some sort of Azuma's inequality still holds, but only for tails distant enough and, of course, with worse constants. I haven't thought about it too long, so I may be wrong. – Mateusz Wasilewski Aug 14 at 16:38 If the martingales considered is $(X_k)_{k=1}^{N}$, what I mean is that $\mathbb{P}(\forall k, |X_k - X_{k-1}| < c_k) \to 1$ as $N$ goes to $\infty$. – Patt Geffrey Aug 14 at 16:54 1 The absolute values $|d_k|$ of the martingale differences can have any joint distribution subject to the boundedness constraint, so you can make the exceptional sets disjoint for a long time and $|d_k|$ large on the $k$-th exceptional set $A_k$. Piecing things together, you get a counterexample if $\Bbb{P}(A_k)\to 0$ but is not summable. – Bill Johnson Aug 14 at 21:02 ## 1 Answer There is a large literature on variations of Azuma's inequality. One lemma that is similar to what you ask is Lemma 3.1 of this old paper of Wormald and myself. It considers the case where $|X_k-X_{k-1}|$ is within one bound with very high probability and within some wider bound always. There are lots of such results.
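As a concrete illustration of the bounded-difference case the question starts from (a standalone toy simulation, not related to the Wormald lemma cited in the answer): for a $\pm 1$ random walk one has $|X_k-X_{k-1}|\le 1$ surely, and the Azuma–Hoeffding bound $\mathbb{P}(X_N - X_0 \ge t)\le \exp\left(-t^2/2\sum_k c_k^2\right)$ can be compared with the empirical tail.

```python
import numpy as np

rng = np.random.default_rng(0)
N, trials, t = 400, 50_000, 40

# Simple +/-1 random walk: a martingale with |X_k - X_{k-1}| <= 1 surely.
steps = 2 * rng.integers(0, 2, size=(trials, N), dtype=np.int8) - 1
S = steps.sum(axis=1)                  # X_N - X_0 for each trial

empirical = np.mean(S >= t)
azuma = np.exp(-t**2 / (2 * N))        # Azuma-Hoeffding bound with c_k = 1

print(f"P(S_N >= {t}): empirical ~ {empirical:.4f}, Azuma bound = {azuma:.4f}")
```

The empirical tail (about 0.03 here) sits comfortably below the bound exp(-2) ≈ 0.135, as it should.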
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9268019795417786, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/31754/quadripolar-moment-in-curved-space
# Quadrupole moment in curved space So, I'm going over Thorne's derivation of the quadrupole radiation term, and they write the core term as: $$\frac{3 r_i r_j - 2 r^2 \delta_{ij}}{4 r^5}$$ But if I try to obtain this term by covariantly differentiating the dipole term, $$\nabla_{ij}{ \frac{1}{r}} = - \nabla_{j}{ \frac{r_i}{2 r^3}}$$ I am left with: $$\frac{3 r_i r_j - 2 r^2 \delta_{ij}}{4 r^5} - \frac{\Gamma^i_{kj}r_k}{2r^3}$$ where the last term seems to be dismissed in the text with no good reason. Is there a reason why this term is being ignored? - The quadrupolar form is for flat space. – Ron Maimon Jul 10 '12 at 20:11 @RonMaimon, with that I agree. But in the linear approximation, terms linear in $\Gamma$ are not ignored in the neighbourhood of a stellar object; only quadratic terms should be dismissed – lurscher Jul 10 '12 at 20:15 I see. You're asking how the Schwarzschild term deforms the outgoing quadrupole radiation; the question of outgoing radiation is usually to expand the power in each outgoing multipole at infinity. The outgoing power also has an $\epsilon$ in it, it's weak presumably in the approximation you are using, so the geometrical correction from $\Gamma$ to the outgoing radiation profile looks second order to me, maybe you are thinking of a strong gravitational radiation case? What's the system? – Ron Maimon Jul 10 '12 at 20:32 I think your point is that this $\Gamma$ factor will multiply with the retarded source that already has linear terms, so it is second order as you point out – lurscher Jul 10 '12 at 20:44 yes, that's exactly it, but this might be an interesting and universal second order suppression of outgoing radiation, just by the ratio of the sphere area in Schwarzschild at the scale of the wavelength of the outgoing radiation to the same area in flat space (using coordinates where r is radial length, not sphere area). – Ron Maimon Jul 10 '12 at 21:56
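For reference, the flat-space part of this computation can be checked symbolically. The sketch below verifies the textbook identity $\partial_i\partial_j\frac{1}{r} = \frac{3x_ix_j - r^2\delta_{ij}}{r^5}$ with unit normalization (the coefficients quoted from Thorne above follow a different normalization, so only the tensor structure is being checked); the $\Gamma$ term in the question is exactly what this flat computation leaves out.

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)
X = (x, y, z)

for i in range(3):
    for j in range(3):
        lhs = sp.diff(1 / r, X[i], X[j])
        rhs = (3 * X[i] * X[j] - r**2 * sp.KroneckerDelta(i, j)) / r**5
        assert sp.simplify(lhs - rhs) == 0

print("d_i d_j (1/r) = (3 x_i x_j - r^2 delta_ij) / r^5 verified in flat space")
```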
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9391304850578308, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?s=ab7cfe3ab27334e5a4b9c6993334df0d&p=3975514
Physics Forums ## Variable substitution in Langevin equation and Fokker-Planck equation Dear all, I have a question about the variable substitution in Langevin equation and Fokker-Planck equation and this has bothered me a lot. The general Langevin equation is: $$\frac{dx}{dt}=u(x)+\sqrt{2 D(x)}\eta(t)$$ and the corresponding Fokker-Planck equation is thus: $$\frac{\partial \rho(x)}{\partial t}=-\frac{\partial}{\partial x}\left[u(x)\rho(x)\right]+\frac{\partial^2}{\partial x^2}\left[D(x)\rho(x)\right]$$ which means the stationary distribution of x should satisfy $$u(x)\rho(x)=\frac{\partial}{\partial x}\left[D(x)\rho(x)\right]$$ However, problem emerges when I want to use a variable substitution y(x), since the Langevin equation becomes $$\frac{dy}{dt}=u(x)y'(x)+\sqrt{2 D(x)}y'(x)\eta(t)$$ which the corresponding F-P equation $$\frac{\partial \rho(y)}{\partial t}=-\frac{\partial}{\partial y}\left[u(x)y'\rho(y)\right]+\frac{\partial^2}{\partial y^2}\left[D(x)y'^2\rho(y)\right]$$ and the stationary distribution of y is thus $$u(x)y'\rho(y)=\frac{\partial}{\partial y}\left[D(x)y'^2\rho(y)\right]$$ Considering $$\rho(x)dx=\rho(y)dy \Rightarrow \rho(x)=\rho(y)y'$$ we can rewrite the stationary ρ(y) equation before as $$u(x)\rho(x)=\frac{\partial}{\partial y}\left[D(x)y'\rho(x)\right]=\frac{\partial}{\partial x}\left[D(x)y'\rho(x)\right]x'(y)=\frac{1}{y'}\frac{\partial}{\partial x}\left[D(x)y'\rho(x)\right]$$ which is not equal to the stationary ρ(x) derived before. Is there anything wrong with my derivation? Can anyone help me to figure this out? I have posted the same question in classical physics forum a few days ago but not one replies me. I hope people here can show me where I am going wrong or why this result happens. Thanks so much! PhysOrg.com science news on PhysOrg.com >> Front-row seats to climate change>> Attacking MRSA with metals from antibacterial clays>> New formula invented for microscope viewing, substitutes for federally controlled drug Recognitions: Homework Help You have to be careful when doing changes of variables in a stochastic differential equation. There are two main interpretations of stochastic calculus that physicists like to use: the Stratonovich interpretation and the Ito interpretation. The nature of your problem dictates which one to use, and things like changing variables are not the same in both (I'm afraid I don't remember the guidelines for which one you should use in which case). In Stratonovich, the change of variables follows the usual chain rule, while in Ito an additional term is produced. See http://en.wikipedia.org/wiki/Ito_Cal...for_physicists I believe the distinction is very important when you have multiplicative noise, as you do in your equation. (Also, I have been told that the distinction is only important when you have delta-function or singular noise correlations. If the noise correlations are given by some smoothly decaying function C(t-t'), then apparently the distinction between the two interpretations is unimportant). So, I think the problem may be that you should be using the Ito calculus and the additional term produced by the change of variables. Quote by Mute You have to be careful when doing changes of variables in a stochastic differential equation. There are two main interpretations of stochastic calculus that physicists like to use: the Stratonovich interpretation and the Ito interpretation. 
The nature of your problem dictates which one to use, and things like changing variables are not the same in both (I'm afraid I don't remember the guidelines for which one you should use in which case). In Stratonovich, the change of variables follows the usual chain rule, while in Ito an additional term is produced. See http://en.wikipedia.org/wiki/Ito_Cal...for_physicists I believe the distinction is very important when you have multiplicative noise, as you do in your equation. (Also, I have been told that the distinction is only important when you have delta-function or singular noise correlations. If the noise correlations are given by some smoothly decaying function C(t-t'), then apparently the distinction between the two interpretations is unimportant). So, I think the problem may be that you should be using the Ito calculus and the additional term produced by the change of variables. Yes I think you get the point. It seems that Ito calculus uses a different method for changing the variables. I am really unfamiliar with this area before so your link really helps. Thanks a lot!
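For reference, the "additional term" mentioned above is the one produced by Itô's lemma (this is the standard formula, stated here rather than derived in the thread). Reading the Langevin equation in the Itô convention as $dx = u(x)\,dt + \sqrt{2D(x)}\,dW_t$, a substitution $y=y(x)$ gives $$dy = \left[u(x)\,y'(x) + D(x)\,y''(x)\right]dt + \sqrt{2D(x)}\,y'(x)\,dW_t,$$ i.e. an extra drift $D(x)\,y''(x)$ on top of the naive chain rule. Carrying that term through to the Fokker-Planck equation for $y$ is what restores consistency between the two stationary distributions; in the Stratonovich reading the chain rule is the ordinary one, but the Fokker-Planck equation then takes a different form instead.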
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 8, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9252644777297974, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/8518/is-there-something-similar-to-noethers-theorem-for-discrete-symmetries/8519
# Is there something similar to Noether's theorem for discrete symmetries? Noether's theorem states that, for every continuous symmetry of a system, there exists a conserved quantity, e.g. energy conservation for time invariance, charge conservation for $U(1)$. Is there any similar statement for discrete symmetries? - discrete symmetries of a lagrangian or just anything? – anon Aug 24 '10 at 8:53 I guess $t \mapsto -t$ is an interesting discrete symmetry. – anon Aug 24 '10 at 9:02 1 – Tobias Kienzler Aug 24 '10 at 12:17 1 1 I am curious if there is a conservation law associated with symmetries of the form psi(x)=psi(x+2*Pi*R) ( a 4-sphere) or psi(x,y)=psi(x+k,y-q) (klein bottle?) – Holowitz Apr 13 '11 at 5:33 show 1 more comment ## 7 Answers For continuous global symmetries, Noether theorem gives you a locally conserved charge density (and an associates current), whose integral over all of space is conserved (time independent). For global discrete symmetries you have to distinguish between the cases where the conserved charge is continuous or discrete. For infinite symmetries like lattice translations the conserved quantity is continuous, albeit a periodic one. So in such case momentum is conserved modulo vectors in the reciprocal lattice. The conservation is local just as in the case of continuous symmetries. In the case of finite group of symmetries the conserved quantity is itself discrete. You then don't have local conservation laws because the conserved quantity cannot vary continuously in space. Nevertheless for such symmetries you still have a conserved charge which gives constraints (selection rules) on allowed processes. For example, for parity invariant theories you can give each state of a particle a "parity charge" which is simply a sign, and the total charge has to be conserved for any process, otherwise the amplitude for it is zero. - Isn't this called Pontryagin duality or something? – Keenan Pepper Apr 13 '11 at 20:24 – Tobias Kienzler Apr 14 '11 at 9:58 3 can you provide references on this? – Tobias Kienzler Apr 18 '11 at 9:10 Put into one sentence, Noether's first Theorem states that a continuous, global, off-shell symmetry of an action $S$ implies a local on-shell conservation law. By the words on-shell and off-shell are meant whether Euler-Lagrange equations of motion are satisfied or not. Now the question asks if continuous can be replace by discrete? It should immediately be stressed that Noether Theorem is a machine that for each input in form of an appropriate symmetry produces an output in form of a conservation law. To claim that a Noether Theorem is behind, it is not enough to just list a couple of pairs (symmetry, conservation law). Now, where could a discrete version of Noether's Theorem live? A good bet is in a discrete lattice world, if one uses finite differences instead of differentiation. Let us investigate the situation. Our intuitive idea is that finite symmetries, e.g., time reversal symmetry, etc, can not be used in a Noether Theorem in a lattice world because they don't work in a continuous world. Instead we pin our hopes to that discrete infinite symmetries that become continuous symmetries when the lattice spacings go to zero, can be used. Imagine for simplicity a 1D point particle that can only be at discrete positions $q_t\in\mathbb{Z}a$ on a 1D lattice $\mathbb{Z}a$ with lattice spacing $a$, and that time $t\in\mathbb{Z}$ is discrete as well. (This was, e.g., studied in J.C. Baez and J.M. Gilliam, Lett. Math. Phys. 
31 (1994) 205; hat tip: Edward.) The velocity is the finite difference $$v_{t+\frac{1}{2}}:=q_{t+1}-q_t\in\mathbb{Z}a,$$ and is discrete as well. The action $S$ is $$S[q]=\sum_t L_t$$ with Lagrangian $L_t$ on the form $$L_t=L_t(q_t,v_{t+\frac{1}{2}}).$$ Define momentum $p_{t+\frac{1}{2}}$ as $$p_{t+\frac{1}{2}} := \frac{\partial L_t}{\partial v_{t+\frac{1}{2}}}.$$ Naively, the action $S$ should be extremized wrt. neighboring virtual discrete paths $q:\mathbb{Z} \to\mathbb{Z}a$ to find the equation of motion. However, it does not seem feasible to extract a discrete Euler-Lagrange equation in this way, basically because it is not enough to Taylor expand to the first order in the variation $\Delta q$ when the variation $\Delta q\in\mathbb{Z}a$ is not infinitesimal. At this point, we throw our hands in the air, and declare that the virtual path $q+\Delta q$ (as opposed to the stationary path $q$) does not have to lie in the lattice, but that it is free to take continuous values in $\mathbb{R}$. We can now perform an infinitesimal variation without worrying about higher order contributions, $$0 =\delta S := S[q+\delta q] - S[q] = \sum_t \left[\frac{\partial L_t}{\partial q_t} \delta q_t + p_{t+\frac{1}{2}}\delta v_{t+\frac{1}{2}} \right]$$ $$=\sum_t \left[\frac{\partial L_t}{\partial q_t} \delta q_{t} + p_{t+\frac{1}{2}}(\delta q_{t+1}- \delta q_t)\right]$$ $$=\sum_t \left[\frac{\partial L_t}{\partial q_t} - p_{t+\frac{1}{2}} + p_{t-\frac{1}{2}}\right]\delta q_t + \sum_t \left[p_{t+\frac{1}{2}}\delta q_{t+1}-p_{t-\frac{1}{2}}\delta q_t \right].$$ Note that the last sum is telescopic. This implies (with suitable boundary conditions) the discrete Euler-Lagrange equation $$\frac{\partial L_t}{\partial q_t} = p_{t+\frac{1}{2}}-p_{t-\frac{1}{2}}.$$ This is the evolution equation. At this point it is not clear whether a solution for $q:\mathbb{Z}\to\mathbb{R}$ will remain on the lattice $\mathbb{Z}a$ if we specify two initial values on the lattice. We shall from now on restrict our considerations to such systems for consistency. As an example, one may imagine that $q_t$ is a cyclic variable, i.e., that $L_t$ does not depend on $q_t$. We therefore have a discrete global translation symmetry $\Delta q_t=a$. The Noether current is the momentum $p_{t+\frac{1}{2}}$, and the Noether conservation law is that momentum $p_{t+\frac{1}{2}}$ is conserved. This is certainly a nice observation. But this does not necessarily mean that a Noether Theorem is behind. Imagine that the enemy has given us a global vertical symmetry $\Delta q_t = Y(q_t)\in\mathbb{Z}a$, where $Y$ is an arbitrary function. (The words vertical and horizontal refer to translation in the $q$ direction and the $t$ direction, respectively. We will for simplicity not discuss symmetries with horizontal components.) The obvious candidate for the bare Noether current is $$j_t = p_{t-\frac{1}{2}}Y(q_t).$$ But it is unlikely that we would be able to prove that $j_t$ is conserved merely from the symmetry $0=S[q+\Delta q] - S[q]$, which would now unavoidably involve higher order contributions. So while we stop short of declaring a no-go theorem, it certainly does not look promising. Perhaps, we would be more successful if we only discretize time, and leave the coordinate space continuous? I might return with an update about this in the future. 
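As a quick sanity check of the discrete Euler-Lagrange equation above, here is a toy numerical sketch (it takes $L_t=\frac{1}{2}v_{t+\frac{1}{2}}^2-V(q_t)$ and lets $q_t$ range over the reals, so it deliberately ignores the lattice restriction that the answer is careful about): when $q_t$ is cyclic, the discrete momentum $p_{t+\frac{1}{2}}=q_{t+1}-q_t$ is conserved exactly, step after step, while switching on a potential breaks the conservation.

```python
import numpy as np

def evolve(q0, q1, steps, dV=lambda q: 0.0):
    """Discrete Euler-Lagrange evolution for L_t = (1/2) v_{t+1/2}^2 - V(q_t):
    dL_t/dq_t = p_{t+1/2} - p_{t-1/2}  with  p_{t+1/2} = q_{t+1} - q_t."""
    q = [q0, q1]
    for _ in range(steps):
        q.append(2 * q[-1] - q[-2] - dV(q[-1]))
    return np.array(q)

# Cyclic coordinate (V = 0): the discrete momentum should be exactly constant.
q = evolve(q0=0.0, q1=0.3, steps=20)
p = np.diff(q)
print(np.allclose(p, p[0]))           # True: the Noether charge of the shift symmetry

# With a potential the same update still satisfies the discrete EL equation,
# but the momentum is no longer conserved:
q = evolve(0.0, 0.3, 20, dV=lambda q: 0.1 * q)
print(np.ptp(np.diff(q)) > 0)         # momentum now changes from step to step
```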
An example from the continuous world that may be good to keep in mind: Consider a simple gravity pendulum with Lagrangian $$L(\varphi,\dot{\varphi}) = \frac{m}{2}\ell^2 \dot{\varphi}^2 + mg\ell\cos(\varphi).$$ It has a global discrete periodic symmetry $\varphi\to\varphi+2\pi$, but the (angular) momentum $p_{\varphi}:=\frac{\partial L}{\partial\dot{\varphi}}= m\ell^2\dot{\varphi}$ is not conserved if $g\neq 0$. - 1 This paper may be useful for the discrete action ideas you suggest: arxiv.org/abs/nlin.CG/0611058 A "No-Go" Theorem for the Existence of an Action Principle for Discrete Invertible Dynamical Systems. I haven't read through it yet, but it sounds interesting. – Edward Apr 13 '11 at 10:14 If you solve the simple gravity pendulum problem, you can construct two independent conserved quantities. They can be combined in a quantity known as the total energy in this case. – Vladimir Kalitvianski Apr 13 '11 at 22:09 You mentioned crystal symmetries. Crystals have a discrete translation invariance: It is not invariant under an infinitesimal translation, but invariant under translation by a lattice vector. The result of this is conservation of momentum up to a reciprocal lattice vector. There is an additional result: Suppose the Hamiltonian itself is time independent, and suppose the symmetry is related to an operator $\hat S$. An example would be the parity operator $\hat P|x\rangle = |-x\rangle$. If this operator is a symmetry, then $[H,P] = 0$. But since the commutator of an operator with the Hamiltonian also gives you the derivative, you have $\dot P = 0$. - No, because discrete symmetries have no infinitesimal form which would give rise to the (characteristic of) conservation law. See also this article for a more detailed discussion. - – Tobias Kienzler Aug 26 '10 at 8:32 5 Who says that conservation laws can arise onyly from infinitesimal forms? – Lagerbaer Apr 12 '11 at 19:37 Maybe, http://www.technologyreview.com/blog/arxiv/26580/ I am by no means an expert, but I read this a few weeks ago. In that paper they consider a 2d lattice and construct an energy analogue. They show it behaves as energy should, and then conclude that for this energy to be conserved space-time would need to be invariant. - As was said before, this depends on what kind of 'discrete' symmetry you have: if you have a bona fide discrete symmetry, as e.g. $\mathbb{Z}_n$, then the answer is in the negative in the context of Nöther's theorem(s) — even though there are conclusions that you can draw, as Moshe R. explained. However, if you're talking about a discretized symmetry, i.e. a continuous symmetry (global or local) that has been somehow discretized, then you do have an analogue to Nöther's theorem(s) à la Regge calculus. A good talk introducing some of these concepts is Discrete Differential Forms, Gauge Theory, and Regge Calculus (PDF): the bottom line is that you have to find a Finite Difference Scheme that preserves your differential (and/or gauge) structure. There's a big literature on Finite Difference Schemes for Differential Equations (ordinary and partial). - Sobering thoughts: Conservation laws are not related to any symmetry, to tell the truth. For a mechanical system with N degrees of freedom there always are N conserved quantities. They are complicated combinations of the dynamical variables. Their existence is provided with existence of the problem solutions. When there is a symmetry, the conserved quantities get just a simpler look. 
EDIT: I do not know how they teach you but the conservation laws are not related to Noether theorem. The latter just shows how to construct some of conserved quantities from the problem Lagrangian and the problem solutions. Any combination of conserved quantities is also a conserved quantity. So what Noether gives is not unique at all. - 2 – anna v Apr 14 '11 at 6:31 2 General comment: there must be something missing in the current generation's education. The past three yeasr I have been following scientific blogs, I find that most difficulties and misunderstandings arise because people cannot understand or see the difference between necessary and sufficient conditions. How is mathematics taught at present bemuses me. – anna v Apr 14 '11 at 6:34 3 – kakaz Apr 14 '11 at 7:53 2 Cont. Then we have theory that for Hamiltonian systems if N integrals of motion exists - system is "integrable" So Vladimir statement in a case of Hamiltonian dynamics is wrong. Of course that there exists constants of motion not related to symmetry. But they are not related to structure of phase space and there is no foliation so in certain meaning they are particular, accidental one. And the may be represented ( after mathematical transformation) as initial conditions of well defined system. – kakaz Apr 14 '11 at 7:55 2 – Tobias Kienzler Apr 14 '11 at 8:36 show 6 more comments
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 39, "mathjax_display_tex": 10, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9178914427757263, "perplexity_flag": "head"}
http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.em/1109106437
On the Number of Perfect Binary Quadratic Forms Francesca Aicardi Source: Experiment. Math. Volume 13, Issue 4 (2004), 451-457. Abstract A perfect form is a form $f=mx^2+ny^2+kxy$ with integral coefficients $(m,n,k)$ such that $f(\mathbb{Z}^2)$ is a multiplicative semigroup. The growth rate of the number of perfect forms in cubes of increasing side $L$ in the space of the coefficients is known for small cubes, where all perfect forms are known. A form is perfect if its coefficients belong to the image of a map, $Q$, from $\mathbb{Z}^4$ to $\mathbb{Z}^3$. This property of perfect forms allows us to estimate from below the growth rate of their number for larger values of $L$. The conjecture that all perfect forms are generated by $Q$ allows us to reformulate results and conjectures on the numbers of the images $Q(\mathbb{Z}^4)$ in cubes of side $L$ in terms of the numbers of perfect forms. In particular, the proportion of perfect elliptic forms in a ball of radius $R$ should decrease faster than $R^{-3/4}$ and the proportion of all perfect forms in a ball of radius $R$ should decrease faster than $2/\sqrt{R}$. Related Works: Supplemental material: On the Number of Perfect Binary Quadratic Forms - Appendices. Primary Subjects: 11E12 Secondary Subjects: 11N99 Keywords: Quadratic forms; multiplicative semigroups Full-text: Open access
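The map $Q$ from $\mathbb{Z}^4$ to $\mathbb{Z}^3$ is not reproduced here, but the defining semigroup property itself is easy to probe numerically for positive-definite (elliptic) forms, where the search for representations of a given value is finite. The sketch below is only a necessary-condition screen on finitely many products, not a test of perfectness, and the two example forms are chosen purely for illustration.

```python
from itertools import product
import math

def values(m, n, k, bound):
    """All values of f(x,y) = m x^2 + n y^2 + k x y with |x|, |y| <= bound."""
    return {m*x*x + n*y*y + k*x*y for x, y in product(range(-bound, bound + 1), repeat=2)}

def attained(m, n, k, t, lam_min):
    """Decide whether a positive-definite form attains the value t (finite search)."""
    B = math.isqrt(int(t / lam_min)) + 1
    return t in values(m, n, k, B)

def screen(m, n, k, sample_bound=4):
    """Check that every product of two small positive values is again a value.
    Assumes m > 0 and 4*m*n - k*k > 0 (an elliptic form)."""
    lam_min = (m + n - math.sqrt((m - n)**2 + k*k)) / 2   # smaller eigenvalue of the Gram matrix
    small = sorted(v for v in values(m, n, k, sample_bound) if v > 0)
    return all(attained(m, n, k, a * b, lam_min) for a in small for b in small)

print(screen(1, 1, 0))   # x^2 + y^2: True (products of sums of two squares are sums of two squares)
print(screen(2, 3, 0))   # 2x^2 + 3y^2: False (products like 4 = 2*2 are not represented)
```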
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8605188131332397, "perplexity_flag": "head"}
http://mathoverflow.net/questions/26676/incompleteness-and-nonstandard-models-of-arithmetic/27243
## Incompleteness and nonstandard models of arithmetic ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) The following are a collection of doubts, some of which shall have concrete answers while others may have not. Any kind of help will be welcome. Reading Peter Smith's "Gödel Without (Too Many) Tears", particularly where he gives a nonstandard model of Q, I began wondering if the reason for the existence of nonstandard models of arithmetic has anything to do with incompleteness theorems. I do not know if categoricity implies completeness (in the sense of every sentence being decidable by proof), but anyway, it seems reasonable, when one is formalizing a given (informal) theory, to try to "force" somehow the formal theory to talk "almost exclusively" about the intended interpretation. So I started thinking if some axiom (or axiom schema) could be added to PA in order to forbid its most obvious nonstandard models. The first idea in this line was: ok, we have our class of terms 0, S0, SS0, etc. So, if we found a way to tell that for every x there is some term to which it is equal, we would be done. But then I realized that our terms are defined inductively and that we are making implicitly the assumption: “and nothing else is a term”, very similar to the desired “and nothing else is a number” we would like to add to PA. This thought sort of worried me: every metatheoretic concept (terms, formulas, and even proofs!) is based on assumptions like these! (I have not still found a way out of these worries). Leaving that apart. What if we move on to a stronger theory (with different axioms, but with an extension by definitions that proves every axiom of PA), for example ZFC? Natural numbers become then 0 (the empty set) plus the von Neumann ordinals (obtained by Pair and Union) that contain no limit ordinal. The set of natural numbers is obtained from Infinity, just selecting them by Comprehension. Kunen says in page 23 of his “The Foundations of Mathematics” that the circularity in the informal definition of natural number is broken “by formalizing the properties of the order relation on omega”. Could nonstandard models survive this formalization? Well, I think I've read somewhere that being omega is absolute, so forcing would not be a way to obtain such nonstandard models. Also, I am not sure if (the extension by definitions from) ZFC set theory is a conservative extension of PA, but then it would not be able to prove anything about natural numbers (expressible in the language of arithmetic) that PA alone cannot prove. So somehow it looks like nonstandard models must manage to survive! Maybe due to the notion of being a subset of a given set not being particularly clear (although it looks like it should not be problematic with hereditarily finite sets). Thank you in advance. - 1 (I added some relevant tags.) – François G. Dorais♦ Jun 1 2010 at 12:27 1 As François mentions, the non-standard models are there as a consequence of Lowenheim-Skolem. However, note that the first incompleteness theorem actually produces a 'witness' sentence for the existence of models which are not \emph{elementarily equivalent} to the standard one. Even complete theories have models of all cardinalities by Lowenheim-Skolem, but these models could all be elementarily equivalent. The incompleteness theorem guarantees this isn't the case. – Brendan Cordy Jun 1 2010 at 15:34 ## 4 Answers Unfortunately, nonstandard models will survive any such attempt. 
This is guaranteed by the Löwenheim-Skolem Theorem which says that if a countable first-order theory T has an infinite model then it has one of every infinite cardinality. Since an uncountable model necessarily has nonstandard elements, this guarantees that there is a nonstandard model of T (and even countable ones). Actually, in your case you need a "two-cardinal" version of Löwenheim-Skolem. In your ZFC example, you move to a theory which interprets arithmetic inside a definable substructure (the set ω). The definable substructure of such a model which might still be countable even if the model itself is uncountable. Nevertheless, one can still blow up the size of the natural number substructure via the ultrapower construction, for example. To evade the Löwenheim-Skolem Theorem, one has to move beyond first-order logic. For example, in infinitary logic one allows infinite disjunctions such as $$\forall x(x = 0 \lor x = S0 \lor x = SS0 \lor \cdots)$$ which ensures that the model is standard. Also, second-order allows quantification over arbitrary sets under the standard interpretation, which again prohibits non-standard models. (See this related question.) This is the characterization of N most commonly used by working mathematicians. - Although not stated clearly, the idea is not to get rid of every nonstandard model (nor of every countable one), which, as you mention, is impossible. I would be happy if one could get rid of one of them (which could be impossible too, certainly I don't know). Restating the second part of the question: would moving to a stronger theory such as ZFC remove any of the nonstandard models of the weaker one (PA)? Moving from Q to PA certainly does! (See Smith's notes for an example). And what's more important. Does this have any relation with Incompleteness? – Marc Alcobé García Jun 1 2010 at 14:48 No, moving to ZFC doesn't help. This is what the second paragraph is about: no matter what theory you decide to interpret arithmetic in, if there is a model at all then there must be one with nonstandard integers. Perhaps surprisingly, this is not related to incompleteness per se, those are just properties of first-order logic. – François G. Dorais♦ Jun 1 2010 at 14:56 2 To clarify, moving to ZFC does reject some nonstandard models, but not all such models. For example, since ZFC proves Con(PA), no model of PA + ¬Con(PA) can be interpreted as the omega of a model of ZFC. – François G. Dorais♦ Jun 1 2010 at 16:23 Another question would be then if ZFC proves any mathematically interesting arithmetic statement (other than Con(PA) or some Gödel sentence for PA) that PA cannot prove. – Marc Alcobé García Jun 2 2010 at 14:07 2 For example, the Paris-Harrington Theorem - en.wikipedia.org/wiki/… - and Goodstein's Theorem - en.wikipedia.org/wiki/Goodstein%27s_theorem – François G. Dorais♦ Jun 2 2010 at 14:28 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. I think Francois Dorais has done a good job of answering your questions as stated, but let me add some comments that may get at the issues that may be worrying you under the surface. Many people I've met seem to view nonstandard models as demonstrating some kind of "flaw" in the set of axioms. 
They seem to have some kind of tacit expectation that the purpose of writing down the first-order axioms of Peano Arithmetic is to single out the natural numbers from among all other mathematical structures. But this is not the purpose of writing down the first-order axioms of Peano Arithmetic. If you want to single out the natural numbers, then you should proceed in the normal mathematical manner: Say what it means for two structures to be isomorphic, and prove that the natural numbers are unique up to isomorphism. First-order logic is weak. We say that two structures are elementarily equivalent if they satisfy exactly the same set of first-order sentences. It is the norm, rather than the exception, for there to exist non-isomorphic structures that are elementarily equivalent. In the case of the natural numbers, the set of first-order sentences satisfied by the natural numbers is usually denoted $Th(\mathbb{N})$. This is an extremely rich set of statements about the natural numbers. (Incompleteness is tangentially relevant here, because for any sentence $S$ in the first-order language of arithmetic, either $S$ or its negation is in $Th(\mathbb{N})$, so incompleteness tells us that we can never hope to capture $Th(\mathbb{N})$ with a recursive set of axioms.) Nevertheless, there are plenty of nonstandard models that are elementarily equivalent to $\mathbb{N}$ (i.e., satisfy all the sentences in $Th(\mathbb{N})$. There are lots of reasons to study first-order languages, but hoping to use elementary equivalence to capture isomorphism is not one of them. The existence of non-isomorphic elementarily equivalent structures does not demonstrate any kind of "flaw" with first-order logic, any more than the existence of non-diffeomorphic but topologically homeomorphic manifolds demonstrates any kind of flaw with the axioms for a topological space. - I realize I have mixed two things. I actually have no problem with Löwenheim-Skolem and the existence of elementarily equivalent models of PA with any cardinality. This is one reason for the existence of nonstandard models of arithmetic. But not the only reason. In the case of omega in ZFC, I didn't know about the ultrapower construction, but, as Bertrand Cody said, the first incompleteness theorem gives us an undecidable arithmetic statement, and hence a different reason: we then have different models that cannot be elementary equivalent. – Marc Alcobé García Jun 2 2010 at 14:00 Lowenheim-Skolem holds for first-order languages. If you replace the induction axiom schema of first-order PA with a sentence quantifying over properties you get second-order PA. Second-order PA is categorical but incomplete. - 1 With second-order semantics, we have to distinguish between syntactic completeness and semantic completeness, because these are no longer the same. Being categorical is stronger than being semantically complete, and PA with second-order induction is semantically complete. No effective theory with equality in 2nd-order logic with an infinite model is syntactically complete, so the fact that PA with second-order semantics is not syntactically complete is not really its fault. That is: it's not usually interesting to ask whether a 2nd-order theory is syntactically complete, because it isn't. – Carl Mummert Jun 15 2010 at 11:35 That's the blue pill. Forgive me if I can't resist mentioning the red pill. One way out is (constructive) Church's Thesis, from which it follows there are no nonstandard models of first-order arithmetic. 
http://www.jstor.org/pss/2274603 http://en.wikipedia.org/wiki/Extended_Church%27s_thesis - It's worth pointing out explicitly that this requires assuming ECT in the metatheory. There are certainly nonstandard models of ECT in usual first-order logic. – Carl Mummert Jun 15 2010 at 11:44 Yeah, it follows from ECT that there are no nonstandard models of HA, it does NOT follow from HA that there are no nonstandard models of ECT. But it does not follow that there are, either. It requires assuming classical logic in the meta-theory to show that. – Daniel Mehkeri Jun 15 2010 at 22:48 Mightn't it be enough just to add axioms that contradict ECT and also permit the desired construction? For example, the compactness theorem of logic might be provable from the fan theorem; I don't know if it is or not. But since ECT implies the negation of the fan theorem intuitionistically, and since the compactness theorem for countable theories is equivalent to the fan theorem classically, this is at least plausible. I know there is some existing work on intuitionistic model theory (e.g. jstor.org/pss/2271944) but I'm not familiar enough to skim for this result. – Carl Mummert Jun 16 2010 at 11:29 ECT can be weakened too. The paper I cited uses a weaker form, and the fan theorem is consistent with that form, so your example doesn't quite work. But you probably were thinking of WKL. Classically WLK is equivalent to the fan theorem, but constructively WKL implies a restricted form of the law of the excluded middle called LLPO. So it might work. – Daniel Mehkeri Jun 19 2010 at 15:37
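To make the Löwenheim-Skolem point in the first answer fully concrete, here is the standard compactness argument (textbook material, not specific to any of the answers above) showing why no consistent first-order theory extending PA can rule out nonstandard elements: add a fresh constant $c$ to the language together with the axioms $$c\neq 0,\qquad c\neq S0,\qquad c\neq SS0,\qquad\ldots$$ Any finite subset of the enlarged theory is satisfied in the standard model by interpreting $c$ as a large enough numeral, so by compactness the whole theory has a model, and in that model $c$ denotes an element different from every standard numeral, i.e. a nonstandard number.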
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9325903654098511, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2009/02/24/kernels-of-polynomials-of-transformations/?like=1&source=post_flair&_wpnonce=3fdfac1636
# The Unapologetic Mathematician

## Kernels of Polynomials of Transformations

When we considered the representation theory of the algebra of polynomials, we saw that all it takes to specify such a representation is choosing a single endomorphism $T:V\rightarrow V$. That is, once we pick a transformation $T$ we get a whole algebra of transformations $p(T)$, corresponding to polynomials $p$ in one variable over the base field $\mathbb{F}$. Today, I want to outline one useful fact about these: their kernels are invariant subspaces under the action of $T$.

First, let's remember what it means for a subspace $U\subseteq V$ to be invariant. This means that if we take a vector $u\in U$ then its image $T(u)$ is again in $U$. This generalizes the nice situation for eigenspaces: we have some control (if not as complete) over the image of a vector.

So, we need to show that if $\left[p(T)\right](u)=0$ then $\left[p(T)\right]\left(T(u)\right)=0$, too. But since this is a representation, we can use the fact that $p(X)X=Xp(X)$, because the polynomial algebra is commutative. Then we calculate

$\displaystyle\begin{aligned}\left[p(T)\right]\left(T(u)\right)&=T\left(\left[p(T)\right](u)\right)\\&=T(0)=0\end{aligned}$

Thus if $p(T)$ is a linear transformation built by evaluating a polynomial at $T$, then its kernel is an invariant subspace for $T$.

Posted by John Armstrong | Algebra, Linear Algebra
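To see the invariance numerically, here is a small sketch of my own (not part of the original post). It takes a symmetric matrix $T$, a hypothetical polynomial $p(x) = (x - \lambda_1)(x - \lambda_2)$ built from two of its eigenvalues, computes a basis of $\ker p(T)$, and checks that $T$ maps that kernel back into itself.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
T = A + A.T                        # symmetric, so its eigenvalues are real

# A hypothetical polynomial p(x) = (x - l1)(x - l2) built from two eigenvalues of T,
# so ker p(T) is the sum of the corresponding eigenspaces.
l1, l2 = np.linalg.eigvalsh(T)[:2]
I = np.eye(5)
pT = (T - l1 * I) @ (T - l2 * I)   # the transformation p(T)

# Basis of ker p(T): right singular vectors whose singular value is (numerically) zero.
_, s, Vt = np.linalg.svd(pT)
kernel = Vt[s < 1e-8].T            # columns span ker p(T)

# Invariance check: p(T) annihilates T(u) for every u in the kernel.
print(kernel.shape[1])                                # dimension of ker p(T)
print(np.allclose(pT @ (T @ kernel), 0, atol=1e-8))   # expect True
```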
http://unapologetic.wordpress.com/2009/11/04/parallelepipeds-and-volumes-iii/?like=1&source=post_flair&_wpnonce=8cb43e0c56
# The Unapologetic Mathematician

## Parallelepipeds and Volumes III

So, why bother with this orientation stuff, anyway? We've got an inner product on spaces of antisymmetric tensors, and that should give us a concept of length. Why can't we just calculate the size of a parallelepiped by sticking it into this bilinear form twice? Well, let's see what happens.

Given a $k$-dimensional parallelepiped with sides $v_1$ through $v_k$, we represent the parallelepiped by the wedge $\omega=v_1\wedge\dots\wedge v_k$. Then we might try defining the volume by using the renormalized inner product

$\displaystyle\mathrm{vol}(\omega)^2=k!\langle\omega,\omega\rangle$

Let's expand one copy of the wedge $\omega$ out in terms of our basis of wedges of basis vectors

$\displaystyle k!\langle\omega,\omega\rangle=k!\langle\omega,\omega^Ie_I\rangle=k!\langle\omega,e_I\rangle\omega^I$

where the multi-index $I$ runs over all increasing $k$-tuples of indices $1\leq i_1<\dots<i_k\leq n$. But we already know that $\omega^I=k!\langle\omega,e_I\rangle$, and so this squared volume is the sum of the squares of these components, just like we're familiar with. Then we can define the $k$-volume of the parallelepiped as the square root of this sum.

Let's look specifically at what happens for top-dimensional parallelepipeds, where $k=n$. Then we only have one possible multi-index $I=(1,\dots,n)$, with coefficient

$\displaystyle\omega^{1\dots n}=n!\langle e_1\wedge\dots\wedge e_n,v_1\wedge\dots\wedge v_n\rangle=\det\left(v_j^i\right)$

and so our formula reads

$\displaystyle\mathrm{vol}(\omega)=\sqrt{\left(\det\left(v_j^i\right)\right)^2}=\left\lvert\det\left(v_j^i\right)\right\rvert$

So we get the magnitude of the volume without having to worry about choosing an orientation. Why even bother?

Because we already do care about orientation. Let's go all the way back to one-dimensional parallelepipeds, which are just described by vectors. A vector doesn't just describe a certain length, it describes a length along a certain line in space. And it doesn't just describe a length along that line, it describes a length in a certain direction along that line. A vector picks out three things:

• A one-dimensional subspace $L$ of the ambient space $V$.
• An orientation of the subspace $L$.
• A volume (length) of this oriented subspace.

And just like vectors, nondegenerate $k$-dimensional parallelepipeds pick out three things:

• A $k$-dimensional subspace $L$ of the ambient space $V$.
• An orientation of the subspace $L$.
• A $k$-dimensional volume of this oriented subspace.

The difference is that when we get up to the top dimension the space itself can have its own orientation, which may or may not agree with the orientation induced by the parallelepiped. We don't always care about this disagreement, and we can just take the absolute value to get rid of a sign if we don't care, but it might come in handy.

Posted by John Armstrong | Analytic Geometry, Geometry

## Comments

1. When might we need to know the volume of an n-dimensional parallelepiped? I'm trying to figure out what the applications are. Something in Quantum Mechanics, maybe to do with Hilbert space? Crystallography? Comment by Ellie | September 23, 2010 | Reply

2. Take a step back, Ellie: the more fundamental question is "when does $n$-dimensional space become meaningful in an application?" And to that I'd point out that the configuration space of two particles floating around in three-dimensional space is six-dimensional — three to describe the position of each particle. If spatial orientation matters, then even one object has a six-dimensional configuration space, as video game designers are fond of hyping. It should be clear how to get higher and higher dimensional spaces in meaningful applications. So, if we have a function that we need to integrate over such a space, then we do it by sampling the function at a bunch of points in the space and weighting each sample by the $n$-dimensional volume of a small piece of the space near the sample point. And we calculate those volumes by building them up from $n$-dimensional parallelepipeds! Comment by | September 23, 2010 | Reply
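As a numerical illustration of the formulas above (my own sketch, not part of the post): the squared $k$-volume is the sum of the squares of the $k\times k$ minors of the matrix of side vectors, which equals the Gram determinant $\det(V^T V)$ by Cauchy-Binet, and in the top-dimensional case it collapses to $|\det V|$.

```python
import numpy as np
from itertools import combinations

def k_volume(V):
    """k-volume of the parallelepiped spanned by the columns of the n-by-k matrix V,
    computed as sqrt(det(V^T V)) (the Gram determinant)."""
    return np.sqrt(np.linalg.det(V.T @ V))

def k_volume_via_minors(V):
    """Same quantity via the sum of squares of all k-by-k minors (Cauchy-Binet)."""
    n, k = V.shape
    total = sum(np.linalg.det(V[list(rows), :]) ** 2
                for rows in combinations(range(n), k))
    return np.sqrt(total)

rng = np.random.default_rng(1)
V = rng.standard_normal((4, 2))            # a 2-dimensional parallelepiped in R^4
print(np.isclose(k_volume(V), k_volume_via_minors(V)))   # expect True

W = rng.standard_normal((3, 3))            # top-dimensional case in R^3
print(np.isclose(k_volume(W), abs(np.linalg.det(W))))    # volume = |det|, sign dropped
```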
http://unapologetic.wordpress.com/2007/11/09/
# The Unapologetic Mathematician

## Topologies as Categories

Okay, so we've defined a topology on a set $X$. But we also love categories, so we want to see this in terms of categories. And, indeed, every topology is a category!

First, remember that the collection of subsets of $X$, like the collection of subobjects on an object in any category, is partially ordered by inclusion. And since every partially ordered set is a category, so is the collection of subsets of $X$. In fact, it's a lattice, since we can use union and intersection as our join and meet, respectively. When we say that a poset has pairwise least upper bounds it's the same as saying that when we consider it as a category it has finite coproducts, and similarly pairwise greatest lower bounds are the same as finite products. But here we can actually take the union or intersection of any collection of subsets and get a subset, so we have all products and coproducts. In the language of posets, we have a "complete lattice".

So now we want to talk about topologies. A topology is just a collection of the subsets that's closed under finite intersections and arbitrary unions. We can use the same order (inclusion of subsets) to make a topology into a partially-ordered set. In the language of posets, the requirements are that we have a sublattice (finite meets and joins, along with the same top and bottom element) with arbitrary joins — the topology contains the least upper bound of any collection of its elements.

And now we translate the partial order language into category theory. A topology is a subcategory of the category of subsets of $X$ with finite products and all coproducts. That is, we have an arrow from the object $U$ to the object $V$ if and only if $U\subseteq V$ as subsets of $X$. Given any finite collection $\{U_i\}_{i=1}^n$ of objects we have their product $\bigcap\limits_{i=1}^nU_i$, and given any collection $\{U_\alpha\}_{\alpha\in A}$ of objects we have their coproduct $\bigcup\limits_{\alpha\in A}U_\alpha$. In particular we have the empty product — the terminal object $X$ — and we have the empty coproduct — the initial object $\varnothing$. And all the arrows in our category just tell us how various open sets sit inside other open sets. Neat!
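Concretely, the two closure conditions are easy to test for a finite family of subsets; the snippet below is my own illustration (not from the post), where "arbitrary" unions reduce to unions of subfamilies because everything is finite.

```python
from itertools import combinations

def is_topology(X, opens):
    """Check the topology axioms for a finite family `opens` of subsets of X."""
    opens = {frozenset(U) for U in opens}
    if frozenset() not in opens or frozenset(X) not in opens:
        return False
    # closure under pairwise (hence finite) intersections
    if any(U & V not in opens for U in opens for V in opens):
        return False
    # closure under unions of arbitrary subfamilies (finitely many here)
    for r in range(2, len(opens) + 1):
        for family in combinations(opens, r):
            if frozenset().union(*family) not in opens:
                return False
    return True

X = {1, 2, 3}
print(is_topology(X, [set(), {1}, {1, 2}, X]))   # True: a chain of open sets
print(is_topology(X, [set(), {1}, {2}, X]))      # False: {1} | {2} = {1, 2} is missing
```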
http://www.timgittos.com/learning/mit-single-variable-calculus/week-1/
# MIT OCW Single Variable Calculus - Week 1

### Differentiation

#### What is a derivative? Geometric interpretation

Consider the problem of finding the tangent line to some function $$y=f(x)$$ at $$P=(x_0,y_0)$$.

The tangent line has an equation of the form $$y - y_0 = m(x - x_0)$$, where m is the slope or gradient of the line. Finding $$y_0$$ is easy: $$y_0 = f(x_0)$$. Finding m involves calculus: $$m = f'(x_0)$$. That is, $$f'(x_0)$$, the derivative of $$f$$ at $$x_0$$, is the slope of the tangent line to $$y = f(x)$$ at P.

Consider the following graph: how do we know which of these two lines is the tangent, given that both pass through the same point? Let's define the point where the secant and the tangent intersect as P, and the other point on the secant as Q. As the point Q gets closer to the point P, the slope of the secant line starts to match the slope of the tangent line. Therefore: the tangent is the limit of the secant line $$PQ$$ as the point $$Q \rightarrow P$$, assuming P is fixed.

Again consider the secant line $$PQ$$. Calculating the gradient of $$PQ$$: the difference between Q's x value and P's x value is known as $$\Delta x$$ (delta x), or the change in x. The corresponding change in the function value is called $$\Delta f$$ (or $$\Delta y$$). The gradient between $$P$$ and $$Q$$ is the ratio $$\frac{\Delta f}{\Delta x}$$. As the point Q gets closer to P, $$\Delta x$$ gets smaller. Thus

$$m = \lim_{\Delta x \to 0} \frac{\Delta f}{\Delta x}$$

If P is the point $$(x_0, f(x_0))$$, then we can represent Q as $$(x_0 + \Delta x, f(x_0 + \Delta x))$$. Therefore

$$m = \lim_{\Delta x \to 0} \frac{f(x_0 + \Delta x) - f(x_0)}{\Delta x}$$

This is the heart of the derivative.

Example: Consider the graph of $$\frac{1}{x}$$. Find the tangent at the point $$x_0$$:

$$m = \lim_{\Delta x \to 0} \frac{ \frac{1}{x_0 + \Delta x} - \frac{1}{x_0} }{\Delta x}$$

$$m = \lim_{\Delta x \to 0} \frac{1}{\Delta x}\left( \frac{ x_0 - (x_0 + \Delta x) }{ (x_0 + \Delta x)x_0 } \right)$$

$$m = \lim_{\Delta x \to 0} \frac{1}{\Delta x}\left( \frac{ -\Delta x }{ (x_0 + \Delta x)x_0 } \right)$$

$$m = \lim_{\Delta x \to 0} -\frac{1}{ (x_0 + \Delta x)x_0 }$$

$$m = -\frac{1}{ x_0^2 }$$

Example 2: Find the area of the triangle formed by the axes and the tangent line to the curve $$y = \frac{1}{x}$$ at $$x_0$$.

This is a word problem with only one part of calculus; the rest is regular math. Graph the problem: the only part of the problem that involves calculus is finding the tangent. We found the gradient $$m$$ above, so let's use that.

$$y - y_0 = -\frac{1}{x_0^2}(x - x_0)$$

To calculate the area of the triangle, we need the base length (along the $$x$$ axis) and the height (along the $$y$$ axis). We can find those by calculating the $$x$$ and $$y$$ intercepts.

The $$x$$ intercept is where $$y = 0$$, and we can write $$y_0 = \frac{1}{x_0}$$:

$$- \frac{1}{x_0} = -\frac{1}{x_0^2}(x - x_0)$$

$$- \frac{1}{x_0} = -\frac{x}{x_0^2} + \frac{1}{x_0}$$

$$\implies \frac{x}{x_0^2} = \frac{2}{x_0}$$

$$\implies x = 2x_0$$

Using symmetry, we can find the $$y$$ intercept quickly:

$$y = \frac{1}{x} \implies xy = 1 \implies x = \frac{1}{y}$$

$$\therefore y = 2y_0$$

The area of a triangle is $$\frac{1}{2}$$ times base times perpendicular height.

$$\therefore \text{area} = \frac{1}{2}(2x_0)(2y_0) = 2x_0y_0 = 2$$

The area is 2 because $$y = \frac{1}{x}$$, and $$xy$$ will always equal 1.
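A quick numerical sanity check of the slope just computed, added as my own illustration: as $$\Delta x$$ shrinks, the secant slope of $$f(x) = 1/x$$ at $$x_0 = 2$$ approaches $$-1/x_0^2 = -0.25$$.

```python
def f(x):
    return 1.0 / x

x0 = 2.0
exact = -1.0 / x0**2            # the limit computed above: -1/x0^2 = -0.25

for dx in [1e-1, 1e-3, 1e-5, 1e-7]:
    secant_slope = (f(x0 + dx) - f(x0)) / dx
    print(f"dx={dx:.0e}  slope={secant_slope:.8f}  error={secant_slope - exact:.2e}")
# The printed slopes approach -0.25 as dx shrinks.
```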
Notation:

$$y = f(x), \quad \Delta y = \Delta f$$

$$f' = \frac{df}{dx} = \frac{dy}{dx} = \frac{d}{dx}f = \frac{d}{dx}y$$

Example 3: find $$\frac{d}{dx}x^n$$.

$$\frac{\Delta f}{\Delta x} = \frac{(x + \Delta x)^n - x^n}{\Delta x}$$

In order to solve this, we need to revisit the binomial theorem (algebra). The binomial theorem gives:

$$(x + \Delta x)^n = (x + \Delta x) \cdots (x + \Delta x) = x^n + nx^{n-1}\Delta x + O((\Delta x)^2)$$

Continuing:

$$\frac{\Delta f}{\Delta x} = \frac{1}{\Delta x}\big((x + \Delta x)^n - x^n \big)$$

$$\frac{\Delta f}{\Delta x} = \frac{1}{\Delta x}\big(x^n + nx^{n-1}\Delta x + O((\Delta x)^2) - x^n \big)$$

$$\frac{\Delta f}{\Delta x} = \frac{1}{\Delta x}\big(nx^{n-1}\Delta x + O((\Delta x)^2) \big)$$

$$\frac{\Delta f}{\Delta x} = nx^{n-1} + O(\Delta x)$$

$$\lim_{\Delta x \to 0} \frac{\Delta f}{\Delta x} = \lim_{\Delta x \to 0} \big( nx^{n-1} + O(\Delta x) \big) = nx^{n-1}$$

$$\therefore \frac{d}{dx}x^n = nx^{n-1}$$

### Rate of Change

Geometrically, a derivative can be thought of as the slope of a tangent line to a curve. It can also be considered from the point of view of rate of change: that is, how $$y$$ changes relative to $$x$$ for a given function $$f(x)$$.

$$\frac{\Delta y}{\Delta x}$$ is the average rate of change, i.e. the entire change over the entire interval.

$$\frac{dy}{dx}$$ is the limit of the average rate of change as the interval shrinks, and is the instantaneous rate of change.

Examples:

1. q = charge, $$\frac{dq}{dt}$$ = current. The rate of change of charge over time is current.

2. s = distance, $$\frac{ds}{dt}$$ = speed. The rate of change of distance over time is speed.

Consider the MIT pumpkin drop: on Halloween, students drop pumpkins from the top of one of the buildings at MIT to the ground. Assume the building is 80 m high, and approximate the acceleration due to gravity as 10 m/s^2 (rather than 9.8 m/s^2), so that

$$h = 80 - 5t^2$$

At t = 0, h = 80; at t = 4, h = 0.

The average speed is:

$$\frac{\Delta h}{\Delta t} = \frac{0 - 80}{4 - 0} = -20 \text{ m/s}$$

The instantaneous speed is found by differentiating the function for height:

$$\frac{dh}{dt} = 0 - 10t = -10t$$

At t = 4, as the pumpkin hits the ground, the instantaneous speed is -40 m/s.

Examples that don't involve time:

3. T = temperature, $$\frac{dT}{dx}$$ = temperature gradient (used in weather forecasting & airflow)

4. Sensitivity of measurements

## Limits & Continuity

Left & right hand limits.

$$\lim_{x \to x_0^+}$$ - right-hand limit: $$x$$ approaches $$x_0$$ from above, so $$x > x_0$$ throughout.

$$\lim_{x \to x_0^-}$$ - left-hand limit: $$x$$ approaches $$x_0$$ from below, so $$x < x_0$$ throughout.

Example:

$$f(x) = \left\{ \begin{array}{l l} x + 1 & \quad x > 0\\ -x + 2 & \quad x < 0 \end{array} \right.$$

$$\lim_{x \to 0^+} f(x) = \lim_{x \to 0} (x + 1) = 1$$

$$\lim_{x \to 0^-} f(x) = \lim_{x \to 0} (-x + 2) = 2$$

Note: we didn't need the value of f at x = 0.

Continuity: $$f(x)$$ is continuous at $$x_0$$ when $$\lim_{x \to x_0} f(x) = f(x_0)$$.

Requirements:
- $$\lim_{x \to x_0} f(x)$$ exists (the left and right limits exist and are equal)
- $$f(x_0)$$ is defined
- Both of the above are equal to each other

Continuous functions are easier to calculate limits for because you can just insert the $$x_0$$ value into the function to determine the limit.

Discontinuity: if $$f(x)$$ is not continuous at a point, then it is discontinuous there. There are several types:
Jump discontinuity - the limits from the left and right both exist but are not equal.

Removable discontinuity - the limits from the left and right exist and are equal, but the function is undefined there or takes a different value. The discontinuity can be removed by redefining the function at the single point of discontinuity.

Example:

$$g(x) = \frac{\sin x}{x}, \qquad h(x) = \frac{1 - \cos x}{x}$$

g(0) and h(0) are undefined, but:

$$\lim_{x \to 0} \frac{\sin x}{x} = 1, \qquad \lim_{x \to 0} \frac{1 - \cos x}{x} = 0$$

These functions have a removable discontinuity at x = 0.

Infinite discontinuity - consider $$\frac{1}{x}$$:

$$\lim_{x \to 0^+} \frac{1}{x} = \infty, \qquad \lim_{x \to 0^-} \frac{1}{x} = -\infty$$

Other (ugly) discontinuities - for example,

$$y = \sin \frac{1}{x} \quad \text{as} \; x \to 0$$

has no left or right limit.

### Differentiable implies continuous theorem

If $$f$$ is differentiable at $$x_0$$, then $$f$$ is continuous at $$x_0$$.

Proof:

$$\lim_{x \to x_0} \big( f(x) - f(x_0) \big) = \lim_{x \to x_0} \frac{f(x) - f(x_0)}{x - x_0} \cdot (x - x_0) = f'(x_0) \cdot 0 = 0$$
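The limits quoted above are easy to sanity-check numerically; this small sketch of mine (not part of the course notes) evaluates the two removable-discontinuity examples near 0 and the one-sided limits of the piecewise example.

```python
import math

def g(x): return math.sin(x) / x          # removable discontinuity at 0, limit 1
def h(x): return (1 - math.cos(x)) / x    # removable discontinuity at 0, limit 0

for x in [1e-1, 1e-3, 1e-5]:
    print(f"x={x:.0e}  sin(x)/x={g(x):.8f}  (1-cos x)/x={h(x):.2e}")

# One-sided limits of the piecewise example: f(x) = x+1 for x > 0, -x+2 for x < 0.
def f(x): return x + 1 if x > 0 else -x + 2
print(f(1e-9), f(-1e-9))   # approaches 1 from the right, 2 from the left
```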
http://mathhelpforum.com/new-users/212618-using-sine-cosine.html
# Thread: 1. ## using sine and cosine A scalene triangle has side a = 12 m, side b = 5 m, and the angle opposite side a is A = 2.1 rad. Help with finding the remaining angles in degrees, and also the third side, please. 2. ## Re: using sine and cosine Use the sine rule to find angle B. Use the fact that the angles add to 180 degrees to find angle C. Use the sine rule to find side c. 3. ## Re: using sine and cosine Or you could use the cosine law: $a^2 = b^2 + c^2 - 2bc\cos(A)$ with a = 12, b = 5, and A = 2.1 to get a quadratic equation to solve for c. Note that in general there may be two different values for c, and so two different triangles.
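Here is a quick numerical check of both suggestions, written by me as an illustration rather than taken from the thread: the sine rule gives angles B and C, and the cosine rule's quadratic in c gives the same third side. Since A is obtuse here, only one positive root survives.

```python
import math

a, b, A = 12.0, 5.0, 2.1          # sides a, b and the angle A opposite side a (radians)

# Sine rule: sin B / b = sin A / a
B = math.asin(b * math.sin(A) / a)
C = math.pi - A - B
c_sine = a * math.sin(C) / math.sin(A)

# Cosine rule: a^2 = b^2 + c^2 - 2 b c cos A  =>  c^2 - 2 b cos(A) c + (b^2 - a^2) = 0
disc = (2 * b * math.cos(A)) ** 2 - 4 * (b**2 - a**2)
roots = [(2 * b * math.cos(A) + s * math.sqrt(disc)) / 2 for s in (+1, -1)]
c_cos = max(r for r in roots if r > 0)    # only the positive root is a valid side here

print(f"B = {math.degrees(B):.2f} deg, C = {math.degrees(C):.2f} deg")
print(f"c from sine rule   = {c_sine:.3f} m")
print(f"c from cosine rule = {c_cos:.3f} m")
```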
http://www.mathplanet.com/education/pre-algebra/discover-fractions-and-factors/factorization-and-prime-numbers
# Factorization and prime numbers

A factor of a number is a number that divides it exactly, so that the quotient is an integer.

Example

$\\ \frac{18}{2}=9\Rightarrow \text{factor}\\\\ \frac{18}{4}=4.5\Rightarrow \text{not a factor} \\$

2 is a factor of 18 because the answer is an integer (9). 4 is not a factor of 18 because the answer is 4 with a remainder of 2, i.e. 4.5 is not an integer.

There are some rules of thumb to easily see if a number has a given factor:

2 - an even number is always divisible by two
3 - if the sum of its digits is divisible by 3, the number is divisible by 3
5 - if the last digit of the number is a 5 or 0, the number is divisible by 5
6 - if the number is divisible by both 2 and 3, it is divisible by 6
10 - if the last digit of the number is 0, the number is divisible by 10

Example

Determine whether 256 is divisible by 2, 3, 4, 5, 6, or 10:

$\\\begin{matrix} 256\div 2=128 & ({\color{green} \text{yes}})\\ 256\div 3\approx 85.33 & ({\color{red} \text{no}})\\ 256\div 4=64 & ({\color{green} \text{yes}})\\ 256\div 5=51.2 & ({\color{red} \text{no}})\\ 256\div 6\approx 42.67 & ({\color{red} \text{no}})\\ 256\div 10=25.6 & ({\color{red} \text{no}}) \end{matrix}\\$

A prime number is an integer greater than one that has only two factors (numbers it is divisible by): itself and 1, e.g. 2, 3, 5, 7, 11…

A composite number is an integer greater than one that has more than two factors, e.g. the number 4 has the factors 1, 2, 4 and the number 6 has the factors 1, 2, 3, 6.

You can express a composite number as a product of prime factors, e.g. 4 = 2 · 2.

The numbers 0 and 1 are considered neither composite numbers nor prime numbers.

Video lesson: factorize the expression $12x^{3}y^{2}$
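The divisibility test and the prime factorization of a composite number are easy to do directly; the snippet below is my own illustration of the ideas on this page.

```python
def is_factor(d, n):
    """d is a factor of n when the division leaves no remainder."""
    return n % d == 0

print(is_factor(2, 18), is_factor(4, 18))    # True False  (18 = 2*9, but 18 = 4*4 + 2)

def prime_factorization(n):
    """Express a composite number as a product of prime factors by trial division."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print([d for d in (2, 3, 4, 5, 6, 10) if is_factor(d, 256)])   # [2, 4]
print(prime_factorization(4))     # [2, 2]
print(prime_factorization(256))   # [2, 2, 2, 2, 2, 2, 2, 2]
```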
http://terrytao.wordpress.com/category/mathematics/mathgn/page/2/
What’s new Updates on my research and expository papers, discussion of open problems, and other maths-related topics. By Terence Tao # Category Archive You are currently browsing the category archive for the ‘math.GN’ category. ## Covering a non-closed interval by disjoint closed intervals 4 October, 2010 in 245A - Real analysis, math.GN | Tags: intervals | by Terence Tao | 47 comments The following question came up in my 245A class today: Is it possible to express a non-closed interval in the real line, such as [0,1), as a countable union of disjoint closed intervals? I was not able to answer the question immediately, but by the end of the class some of the students had come up with an answer.  It is actually a nice little test of one’s basic knowledge of real analysis, so I am posing it here as well for anyone else who is interested.  Below the fold is the answer to the question (whited out; one has to highlight the text in order to read it). Read the rest of this entry » ## Is there a countable certificate for connectedness? 13 April, 2010 in math.GN, math.LO, question | Tags: certificates, connectedness, path-connectedness | by Terence Tao | 19 comments In topology, a non-empty set ${E}$ is said to be connected if it cannot be decomposed into two nontrivial subsets that are both closed and open relative to ${E}$, and path connected if any two points ${x,y}$ in ${E}$ can be connected by a path (i.e. there exists a continuous map ${\gamma: [0,1] \rightarrow E}$ with ${\gamma(0)=x}$ and ${\gamma(1)=y}$). Path-connected sets are always connected, but the converse is not true, even in the model case of compact subsets of a Euclidean space. The classic counterexample is the set $\displaystyle E := \{ (0,y): -1 \leq y \leq 1 \} \cup \{ (x, \sin(1/x)): 0 < x \leq 1 \}, \ \ \ \ \ (1)$ which is connected but not path-connected (there is no continuous path from ${(0,1)}$ to ${(1,\sin(1))}$). Looking at the definitions of the two concepts, one notices a difference: the notion of path-connectedness is somehow a “positive” one, in the sense that a path-connected set can produce the existence of something (a path connecting two points ${x}$ and ${y}$) for a given type of input (in this case, a pair of points ${x, y}$). On the other hand, the notion of connectedness is a “negative” one, in that it asserts the non-existence of something (a non-trivial partition into clopen sets). To put it another way, it is relatively easy to convince someone that a set is path-connected (by providing a connecting path for every pair of points) or is disconnected (by providing a non-trivial partition into clopen sets), but if a set is not path-connected, or is connected, how can one easily convince someone of this fact? To put it yet another way: is there a reasonable certificate for connectedness (or for path-disconnectedness)? In the case of connectedness for compact subsets ${E}$ of Euclidean space, there is an answer as follows. If ${\epsilon > 0}$, let us call two points ${x, y}$ in ${E}$ ${\epsilon}$-connected if one can find a finite sequence ${x_0 = x, x_1, \ldots, x_N = y}$ of points in ${E}$, such that ${|x_{i+1}-x_i| < \epsilon}$ for all ${0 \leq i < N}$; informally, one can jump from ${x}$ to ${y}$ in ${E}$ using jumps of length at most ${\epsilon}$. Let us call ${x_0,\ldots,x_N}$ an ${\epsilon}$-discrete path. Proposition 1 (Connectedness certificate for compact subsets of Euclidean space) Let ${E \subset {\bf R}^d}$ be compact and non-empty. 
Then ${E}$ is connected if and only if every pair of points in ${E}$ is ${\epsilon}$-connected for every ${\epsilon > 0}$. Proof: Suppose first that ${E}$ is disconnected, then ${E}$ can be partitioned into two non-empty closed subsets ${F, G}$. Since ${E}$ is compact, ${F, G}$ are compact also, and so they are separated by some non-zero distance ${\epsilon > 0}$. But then it is clear that points in ${F}$ cannot be ${\epsilon}$-connected to points in ${G}$, and the claim follows. Conversely, suppose that there is a pair of points ${x,y}$ in ${E}$ and an ${\epsilon > 0}$ such that ${x,y}$ are not ${\epsilon}$-connected. Let ${F}$ be the set of all points in ${E}$ that are ${\epsilon}$-connected to ${x}$. It is easy to check that ${F}$ is open, closed, and a proper subset of ${E}$; thus ${E}$ is disconnected. $\Box$ We remark that the above proposition in fact works for any compact metric space. It is instructive to see how the points ${(1,\sin(1))}$ and ${(0,1)}$ are ${\epsilon}$-connected in the set (1); the ${\epsilon}$-discrete path follows the graph of ${\sin(1/x)}$ backwards until one gets sufficiently close to the ${y}$-axis, at which point one “jumps” across to the ${y}$-axis to eventually reach ${(0,1)}$. It is also interesting to contrast the above proposition with path connectedness. Clearly, if two points ${x, y}$ are connected by a path, then they are ${\epsilon}$-connected for every ${\epsilon > 0}$ (because every continuous map ${\gamma: [0,1] \rightarrow E}$ is uniformly continuous); but from the analysis of the example (1) we see that the converse is not true. Roughly speaking, the various ${\epsilon}$-discrete paths from ${x}$ to ${y}$ have to be “compatible” with each other in some sense in order to synthesise a continuous path from ${x}$ to ${y}$ in the limit (we will not make this precise here). But this leaves two (somewhat imprecise) questions, which I do not know how to satisfactorily answer: Question 1: Is there a good certificate for path disconnectedness, say for compact subsets ${E}$ of ${{\bf R}^d}$? One can construct lousy certificates, for instance one could look at all continuous paths in ${{\bf R}^d}$ joining two particular points ${x, y}$ in ${E}$, and verify that each one of them leaves ${E}$ at some point. But this is an “uncountable” certificate – it requires one to check an uncountable number of paths. In contrast, the certificate in Proposition 1 is basically a countable one (if one describes a compact set ${E}$ by describing a family of ${\epsilon}$-nets for a countable sequence of ${\epsilon}$ tending to zero). (Very roughly speaking, I would like a certificate that can somehow be “verified in countable time” in a suitable oracle model, as discussed in my previous post, though I have not tried to make this vague specification more rigorous.) It is tempting to look at the equivalence classes of ${E}$ given by the relation of being connected by a path, but these classes need not be closed (as one can see with the example (1)) and it is not obvious to me how to certify that two such classes are not path-connected to each other. Question 2: Is there a good certificate for connectedness for closed but unbounded closed subsets of ${{\bf R}^d}$? Proposition 1 fails in this case; consider for instance the set $\displaystyle E := \{ (x,0): x \in {\bf R} \} \cup \{ (x,\frac{1}{x}): x > 0 \}. \ \ \ \ \ (2)$ Any pair of points ${x,y \in E}$ is ${\epsilon}$-connected for every ${\epsilon > 0}$, and yet this set is disconnected. 
The problem here is that as ${\epsilon}$ gets smaller, the ${\epsilon}$-discrete paths connecting a pair of points such as ${(1,0)}$ and ${(1,1)}$ have diameter going to infinity. One natural guess is then to require a uniform bound on the diameter, i.e. that for any pair of points ${x, y}$, there exists an ${R>0}$ such that there is an ${\epsilon}$-discrete path from ${x}$ to ${y}$ of diameter at most ${R}$ for every ${\epsilon > 0}$. This does indeed force connectedness, but unfortunately not all connected sets have this property. Consider for instance the set $\displaystyle E := \{ (x,y,0): x \in {\bf R}, y \in \pm 1 \} \cup \bigcup_{n=1}^\infty (E_n \cup F_n) \ \ \ \ \ (3)$ in ${{\bf R}^3}$, where $\displaystyle E_n := \{ (x,y,0): \frac{x^2}{n^2} + \frac{y^2}{(1-1/n)^2} = 1\}$ is a rectangular ellipse centered at the origin with minor diameter endpoints ${(0,1/n-1), (0,1-1/n)}$ and major diameter endpoints ${(-n,0), (n,0)}$, and $\displaystyle F_n := \{ (n,y,z): (y-1/2)^2+z^2=1/4 \}$ is a circle that connects the ${(n,0)}$ endpoint of ${E_n}$ to the point ${(n,1)}$ in ${E}$. One can check that ${E}$ is a closed connected set, but the ${\epsilon}$-discrete paths connecting ${(0,-1)}$ with ${(0,+1)}$ have unbounded diameter as ${\epsilon \rightarrow 0}$. Currently, I do not have any real progress on Question 1. For Question 2, I can only obtain the following strange “second-order” criterion for connectedness, that involves an unspecified gauge function ${\delta}$: Proposition 2 (Second-order connectedness certificate) Let ${E}$ be a closed non-empty subset of ${{\bf R}^d}$. Then the following are equivalent: • ${E}$ is connected. • For every monotone decreasing, strictly positive function ${\delta: {\bf R}^+ \rightarrow {\bf R}^+}$ and every ${x,y \in E}$, there exists a discrete path ${x_0=x,x_1,\ldots,x_N=y}$ in ${E}$ such that ${|x_{i+1}-x_i| < \delta(|x_i|)}$. Proof: This is proven in almost the same way as Proposition 1. If ${E}$ can be disconnected into two non-trivial sets ${F, G}$, then one can find a monotone decreasing gauge function ${\delta: {\bf R}^+ \rightarrow {\bf R}^+}$ such that for each ball ${B_R := \{ x \in {\bf R}^d: |x| \leq R \}}$, ${F \cap B_R}$ and ${G}$ are separated by at least ${\delta(R)}$, and then there is no discrete path from ${F}$ to ${G}$ in ${E}$ obeying the condition ${|x_{i+1}-x_i| < \delta(|x_i|)}$. Conversely, if there exists a gauge function ${\delta}$ and two points ${x,y \in E}$ which cannot be connected by a discrete path in ${E}$ that obeys the condition ${|x_{i+1}-x_i| < \delta(|x_i|)}$, then if one sets ${F}$ to be all the points that can be reached from ${x}$ in this manner, one easily verifies that ${F}$ and ${E \backslash F}$ disconnect ${E}$. $\Box$ It may be that this is somehow the “best” one can do, but I am not sure how to quantify this formally. Anyway, I was curious if any of the readers here (particularly those with expertise in point-set topology or descriptive set theory) might be able to shed more light on these questions. (I also considered crossposting this to Math Overflow, but I think the question may be a bit too long (and vague) for that.) (The original motivation for this question, by the way, stems from an attempt to use methods of topological group theory to attack questions in additive combinatorics, in the spirit of the paper of Hrushovski studied previously on this blog. The connection is rather indirect, though; I may discuss this more in a future post.) 
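Proposition 1 suggests a simple computational test on finite samples: check whether every pair of sample points can be joined by jumps of length at most $\epsilon$. The sketch below is my own illustration (not from the post); a genuine certificate would run this on $\epsilon$-nets for a sequence of $\epsilon$ tending to zero, as discussed above.

```python
from collections import deque
import math

def eps_connected(points, eps):
    """True if every pair of sample points can be joined by a chain of jumps of
    length <= eps (the certificate of Proposition 1, applied to a finite sample)."""
    if not points:
        return True
    seen, queue = {0}, deque([0])
    while queue:
        i = queue.popleft()
        for j, q in enumerate(points):
            if j not in seen and math.dist(points[i], q) <= eps:
                seen.add(j)
                queue.append(j)
    return len(seen) == len(points)

# A disconnected example: two horizontal segments separated by a vertical gap of 0.5.
seg1 = [(i / 100, 0.0) for i in range(101)]
seg2 = [(i / 100, 0.5) for i in range(101)]
sample = seg1 + seg2
print(eps_connected(sample, 0.02))   # False: no jump of length 0.02 crosses the gap
print(eps_connected(sample, 0.60))   # True:  jumps of length 0.6 can cross it
```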
## 245B, Notes 13: Compactification and metrisation (optional) 18 March, 2009 in 245B - Real analysis, math.GN, math.OA | Tags: compactification, Stone-Cech compactification, ultrafilters, Urysohn metrisation theorem | by Terence Tao | 25 comments One way to study a general class of mathematical objects is to embed them into a more structured class of mathematical objects; for instance, one could study manifolds by embedding them into Euclidean spaces. In these (optional) notes we study two (related) embedding theorems for topological spaces: • The Stone-Čech compactification, which embeds locally compact Hausdorff spaces into compact Hausdorff spaces in a “universal” fashion; and • The Urysohn metrization theorem, that shows that every second-countable normal Hausdorff space is metrizable. Read the rest of this entry » ## 245B, Notes 12: Continuous functions on locally compact Hausdorff spaces 2 March, 2009 in 245B - Real analysis, math.CA, math.FA, math.GN, math.OA | Tags: Gelfand-Naimark theorem, Prokhorov's theorem, Radon measure, Riesz representation theorem, Stone-Weierstrass theorem, Tietze extension theorem, Urysohn's lemma | by Terence Tao | 55 comments A key theme in real analysis is that of studying general functions ${f: X \rightarrow {\bf R}}$ or ${f: X \rightarrow {\bf C}}$ by first approximating them by “simpler” or “nicer” functions. But the precise class of “simple” or “nice” functions may vary from context to context. In measure theory, for instance, it is common to approximate measurable functions by indicator functions or simple functions. But in other parts of analysis, it is often more convenient to approximate rough functions by continuous or smooth functions (perhaps with compact support, or some other decay condition), or by functions in some algebraic class, such as the class of polynomials or trigonometric polynomials. In order to approximate rough functions by more continuous ones, one of course needs tools that can generate continuous functions with some specified behaviour. The two basic tools for this are Urysohn’s lemma, which approximates indicator functions by continuous functions, and the Tietze extension theorem, which extends continuous functions on a subdomain to continuous functions on a larger domain. An important consequence of these theorems is the Riesz representation theorem for linear functionals on the space of compactly supported continuous functions, which describes such functionals in terms of Radon measures. Sometimes, approximation by continuous functions is not enough; one must approximate continuous functions in turn by an even smoother class of functions. A useful tool in this regard is the Stone-Weierstrass theorem, that generalises the classical Weierstrass approximation theorem to more general algebras of functions. As an application of this theory (and of many of the results accumulated in previous lecture notes), we will present (in an optional section) the commutative Gelfand-Neimark theorem classifying all commutative unital ${C^*}$-algebras. Read the rest of this entry » ## 245B, Notes 11: The strong and weak topologies 21 February, 2009 in 245B - Real analysis, math.FA, math.GN | Tags: Banach-Alaoglu theorem, strong operator topology, strong topology, weak operator topology, weak topology | by Terence Tao | 46 comments A normed vector space ${(X, \| \|_X)}$ automatically generates a topology, known as the norm topology or strong topology on ${X}$, generated by the open balls ${B(x,r) := \{ y \in X: \|y-x\|_X < r \}}$. 
A sequence ${x_n}$ in such a space converges strongly (or converges in norm) to a limit ${x}$ if and only if ${\|x_n-x\|_X \rightarrow 0}$ as ${n \rightarrow \infty}$. This is the topology we have implicitly been using in our previous discussion of normed vector spaces. However, in some cases it is useful to work in topologies on vector spaces that are weaker than a norm topology. One reason for this is that many important modes of convergence, such as pointwise convergence, convergence in measure, smooth convergence, or convergence on compact subsets, are not captured by a norm topology, and so it is useful to have a more general theory of topological vector spaces that contains these modes. Another reason (of particular importance in PDE) is that the norm topology on infinite-dimensional spaces is so strong that very few sets are compact or pre-compact in these topologies, making it difficult to apply compactness methods in these topologies. Instead, one often first works in a weaker topology, in which compactness is easier to establish, and then somehow upgrades any weakly convergent sequences obtained via compactness to stronger modes of convergence (or alternatively, one abandons strong convergence and exploits the weak convergence directly). Two basic weak topologies for this purpose are the weak topology on a normed vector space ${X}$, and the weak* topology on a dual vector space ${X^*}$. Compactness in the latter topology is usually obtained from the Banach-Alaoglu theorem (and its sequential counterpart), which will be a quick consequence of the Tychonoff theorem (and its sequential counterpart) from the previous lecture. The strong and weak topologies on normed vector spaces also have analogues for the space ${B(X \rightarrow Y)}$ of bounded linear operators from ${X}$ to ${Y}$, thus supplementing the operator norm topology on that space with two weaker topologies, which (somewhat confusingly) are named the strong operator topology and the weak operator topology. Read the rest of this entry » ## 245B, Notes 10: Compactness in topological spaces 9 February, 2009 in 245B - Real analysis, math.FA, math.GN, math.LO | Tags: Alexander sub-base theorem, Arzela-Ascoli theorem, bases, compactness, precompactness, product spaces, sequential compactness, sub-bases, Tychonoff's theorem | by Terence Tao | 42 comments One of the most useful concepts for analysis that arise from topology and metric spaces is the concept of compactness; recall that a space ${X}$ is compact if every open cover of ${X}$ has a finite subcover, or equivalently if any collection of closed sets with the finite intersection property (i.e. every finite subcollection of these sets has non-empty intersection) has non-empty intersection. In these notes, we explore how compactness interacts with other key topological concepts: the Hausdorff property, bases and sub-bases, product spaces, and equicontinuity, in particular establishing the useful Tychonoff and Arzelá-Ascoli theorems that give criteria for compactness (or precompactness). Exercise 1 (Basic properties of compact sets) • Show that any finite set is compact. • Show that any finite union of compact subsets of a topological space is still compact. • Show that any image of a compact space under a continuous map is still compact. Show that these three statements continue to hold if “compact” is replaced by “sequentially compact”. 
Read the rest of this entry » ## 245B, Notes 9: The Baire category theorem and its Banach space consequences 1 February, 2009 in 245B - Real analysis, math.FA, math.GN, math.MG | Tags: Baire category theorem, closed graph theorem, non-complemented subspace, open mapping theorem, uniform boundedness principle | by Terence Tao | 38 comments The notion of what it means for a subset E of a space X to be “small” varies from context to context.  For instance, in measure theory, when $X = (X, {\mathcal X}, \mu)$ is a measure space, one useful notion of a “small” set is that of a null set: a set E of measure zero (or at least contained in a set of measure zero).  By countable additivity, countable unions of null sets are null.  Taking contrapositives, we obtain Lemma 1. (Pigeonhole principle for measure spaces) Let $E_1, E_2, \ldots$ be an at most countable sequence of measurable subsets of a measure space X.  If $\bigcup_n E_n$ has positive measure, then at least one of the $E_n$ has positive measure. Now suppose that X was a Euclidean space ${\Bbb R}^d$ with Lebesgue measure m.  The Lebesgue differentiation theorem easily implies that having positive measure is equivalent to being “dense” in certain balls: Proposition 1. Let $E$ be a measurable subset of ${\Bbb R}^d$.  Then the following are equivalent: 1. E has positive measure. 2. For any $\varepsilon > 0$, there exists a ball B such that $m( E \cap B ) \geq (1-\varepsilon) m(B)$. Thus one can think of a null set as a set which is “nowhere dense” in some measure-theoretic sense. It turns out that there are analogues of these results when the measure space $X = (X, {\mathcal X}, \mu)$  is replaced instead by a complete metric space $X = (X,d)$.  Here, the appropriate notion of a “small” set is not a null set, but rather that of a nowhere dense set: a set E which is not dense in any ball, or equivalently a set whose closure has empty interior.  (A good example of a nowhere dense set would be a proper subspace, or smooth submanifold, of ${\Bbb R}^d$, or a Cantor set; on the other hand, the rationals are a dense subset of ${\Bbb R}$ and thus clearly not nowhere dense.)   We then have the following important result: Theorem 1. (Baire category theorem). Let $E_1, E_2, \ldots$ be an at most countable sequence of subsets of a complete metric space X.  If $\bigcup_n E_n$ contains a ball B, then at least one of the $E_n$ is dense in a sub-ball B’ of B (and in particular is not nowhere dense).  To put it in the contrapositive: the countable union of nowhere dense sets cannot contain a ball. Exercise 1. Show that the Baire category theorem is equivalent to the claim that in a complete metric space, the countable intersection of open dense sets remain dense.  $\diamond$ Exercise 2. Using the Baire category theorem, show that any non-empty complete metric space without isolated points is uncountable.  (In particular, this shows that Baire category theorem can fail for incomplete metric spaces such as the rationals ${\Bbb Q}$.)  $\diamond$ To quickly illustrate an application of the Baire category theorem, observe that it implies that one cannot cover a finite-dimensional real or complex vector space ${\Bbb R}^n, {\Bbb C}^n$ by a countable number of proper subspaces.  One can of course also establish this fact by using Lebesgue measure on this space.  However, the advantage of the Baire category approach is that it also works well in infinite dimensional complete normed vector spaces, i.e. 
Banach spaces, whereas the measure-theoretic approach runs into significant difficulties in infinite dimensions.  This leads to three fundamental equivalences between the qualitative theory of continuous linear operators on Banach spaces (e.g. finiteness, surjectivity, etc.) and the quantitative theory (i.e. estimates): 1. The uniform boundedness principle, that equates the qualitative boundedness (or convergence) of a family of continuous operators with their quantitative boundedness. 2. The open mapping theorem, that equates the qualitative solvability of a linear problem Lu = f with the quantitative solvability. 3. The closed graph theorem, that equates the qualitative regularity of a (weakly continuous) operator T with the quantitative regularity of that operator. Strictly speaking, these theorems are not used much directly in practice, because one usually works in the reverse direction (i.e. first proving quantitative bounds, and then deriving qualitative corollaries); but the above three theorems help explain why we usually approach qualitative problems in functional analysis via their quantitative counterparts. Read the rest of this entry » ## 245B, Notes 8: A quick review of point set topology 30 January, 2009 in 245B - Real analysis, math.GN, math.MG | Tags: compactness, continuity, Hausdorff space, metric spaces, nets, point-set topology | by Terence Tao | 31 comments To progress further in our study of function spaces, we will need to develop the standard theory of metric spaces, and of the closely related theory of topological spaces (i.e. point-set topology).  I will be assuming that students in my class will already have encountered these concepts in an undergraduate topology or real analysis course, but for sake of completeness I will briefly review the basics of both spaces here. Read the rest of this entry » ## 245B notes 4: The Stone and Loomis-Sikorski representation theorems (optional) 12 January, 2009 in 245B - Real analysis, math.GN, math.RA | Tags: ultrafilters, stone representation theorem, boolean algebra, sigma-algebra, measure space | by Terence Tao | 24 comments A (concrete) Boolean algebra is a pair $(X, {\mathcal B})$, where X is a set, and ${\mathcal B}$ is a collection of subsets of X which contains the empty set $\emptyset$, and which is closed under unions $A, B \mapsto A \cup B$, intersections $A, B \mapsto A \cap B$, and complements $A \mapsto A^c := X \backslash A$. The subset relation $\subset$ also gives a relation on ${\mathcal B}$. Because ${\mathcal B}$ is concretely represented as subsets of a space X, these relations automatically obey various axioms; in particular, for any $A,B,C \in {\mathcal B}$, we have: 1. $\subset$ is a partial ordering on ${\mathcal B}$, and A and B have join $A \cup B$ and meet $A \cap B$. 2. We have the distributive laws $A \cup (B \cap C) = (A \cup B) \cap (A \cup C)$ and $A \cap (B \cup C) = (A \cap B) \cup (A \cap C)$. 3. $\emptyset$ is the minimal element of the partial ordering $\subset$, and $\emptyset^c$ is the maximal element. 4. $A \cap A^c = \emptyset$ and $A \cup A^c = \emptyset^c$. (More succinctly: ${\mathcal B}$ is a lattice which is distributive, bounded, and complemented.) We can then define an abstract Boolean algebra ${\mathcal B} = ({\mathcal B}, \emptyset, \cdot^c, \cup, \cap, \subset)$ to be an abstract set ${\mathcal B}$ with the specified objects, operations, and relations that obey the axioms 1-4. 
[Of course, some of these operations are redundant; for instance, intersection can be defined in terms of complement and union by de Morgan's laws. In the literature, different authors select different initial operations and axioms when defining an abstract Boolean algebra, but they are all easily seen to be equivalent to each other. To emphasise the abstract nature of these algebras, the symbols $\emptyset, \cdot^c, \cup, \cap, \subset$ are often replaced with other symbols such as $0, \overline{\cdot}, \vee, \wedge, <$.] Clearly, every concrete Boolean algebra is an abstract Boolean algebra. In the converse direction, we have Stone’s representation theorem (see below), which asserts (among other things) that every abstract Boolean algebra is isomorphic to a concrete one (and even constructs this concrete representation of the abstract Boolean algebra canonically). So, up to (abstract) isomorphism, there is really no difference between a concrete Boolean algebra and an abstract one. Now let us turn from Boolean algebras to $\sigma$-algebras. A concrete $\sigma$-algebra (also known as a measurable space) is a pair $(X,{\mathcal B})$, where X is a set, and ${\mathcal B}$ is a collection of subsets of X which contains $\emptyset$ and are closed under countable unions, countable intersections, and complements; thus every concrete $\sigma$-algebra is a concrete Boolean algebra, but not conversely. As before, concrete $\sigma$-algebras come equipped with the structures $\emptyset, \cdot^c, \cup, \cap, \subset$ which obey axioms 1-4, but they also come with the operations of countable union $(A_n)_{n=1}^\infty \mapsto \bigcup_{n=1}^\infty A_n$ and countable intersection $(A_n)_{n=1}^\infty \mapsto \bigcap_{n=1}^\infty A_n$, which obey an additional axiom: 5. Any countable family $A_1, A_2, \ldots$ of elements of ${\mathcal B}$ has supremum $\bigcup_{n=1}^\infty A_n$ and infimum $\bigcap_{n=1}^\infty A_n$. As with Boolean algebras, one can now define an abstract $\sigma$-algebra to be a set ${\mathcal B} = ({\mathcal B}, \emptyset, \cdot^c, \cup, \cap, \subset, \bigcup_{n=1}^\infty, \bigcap_{n=1}^\infty )$ with the indicated objects, operations, and relations, which obeys axioms 1-5. Again, every concrete $\sigma$-algebra is an abstract one; but is it still true that every abstract $\sigma$-algebra is representable as a concrete one? The answer turns out to be no, but the obstruction can be described precisely (namely, one needs to quotient out an ideal of “null sets” from the concrete $\sigma$-algebra), and there is a satisfactory representation theorem, namely the Loomis-Sikorski representation theorem (see below). As a corollary of this representation theorem, one can also represent abstract measure spaces $({\mathcal B},\mu)$ (also known as measure algebras) by concrete measure spaces, $(X, {\mathcal B}, \mu)$, after quotienting out by null sets. In the rest of this post, I will state and prove these representation theorems. They are not actually used directly in the rest of the course (and they will also require some results that we haven’t proven yet, most notably Tychonoff’s theorem), and so these notes are optional reading; but these theorems do help explain why it is “safe” to focus attention primarily on concrete $\sigma$-algebras and measure spaces when doing measure theory, since the abstract analogues of these mathematical concepts are largely equivalent to their concrete counterparts. 
(The situation is quite different for non-commutative measure theories, such as quantum probability, in which there is basically no good representation theorem available to equate the abstract with the classically concrete, but I will not discuss these theories here.) Read the rest of this entry » ## 254A, Lecture 3: Minimal dynamical systems, recurrence, and the Stone-Čech compactification 13 January, 2008 in 254A - ergodic theory, math.DS, math.GN, math.LO | Tags: almost periodicity, lamplighter group, recurrence, Stone-Cech compactification, syndetic sets, ultrafilter | by Terence Tao | 53 comments We now begin the study of recurrence in topological dynamical systems $(X, {\mathcal F}, T)$ – how often a non-empty open set U in X returns to intersect itself, or how often a point x in X returns to be close to itself. Not every set or point needs to return to itself; consider for instance what happens to the shift $x \mapsto x+1$ on the compactified integers $\{-\infty\} \cup {\Bbb Z} \cup \{+\infty\}$. Nevertheless, we can always show that at least one set (from any open cover) returns to itself: Theorem 1. (Simple recurrence in open covers) Let $(X,{\mathcal F},T)$ be a topological dynamical system, and let $(U_\alpha)_{\alpha \in A}$ be an open cover of X. Then there exists an open set $U_\alpha$ in this cover such that $U_\alpha \cap T^n U_\alpha \neq \emptyset$ for infinitely many n. Proof. By compactness of X, we can refine the open cover to a finite subcover. Now consider an orbit $T^{\Bbb Z} x = \{ T^n x: n \in {\Bbb Z} \}$ of some arbitrarily chosen point $x \in X$. By the infinite pigeonhole principle, one of the sets $U_\alpha$ must contain an infinite number of the points $T^n x$ counting multiplicity; in other words, the recurrence set $S := \{ n: T^n x \in U_\alpha \}$ is infinite. Letting $n_0$ be an arbitrary element of S, we thus conclude that $U_\alpha \cap T^{n_0-n} U_\alpha$ contains $T^{n_0} x$ for every $n \in S$, and the claim follows. $\Box$ Exercise 1. Conversely, use Theorem 1 to deduce the infinite pigeonhole principle (i.e. that whenever ${\Bbb Z}$ is coloured into finitely many colours, one of the colour classes is infinite). (Hint: look at the orbit closure of c inside $A^{\Bbb Z}$, where A is the set of colours and $c: {\Bbb Z} \to A$ is the colouring function.) $\diamond$ Now we turn from recurrence of sets to recurrence of individual points, which is somewhat more difficult, and highlights the role of minimal dynamical systems (as introduced in the previous lecture) in the theory. We will approach the subject from two (largely equivalent) approaches, the first one being the more traditional “epsilon and delta” approach, and the second using the Stone-Čech compactification $\beta {\Bbb Z}$ of the integers (i.e. ultrafilters). Read the rest of this entry »
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 250, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9194965362548828, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/23034/question-with-einstein-notation
# Question with Einstein notation

Let's consider this equation for a scalar quantity $f$ as a function of a 3D vector $a$ as: $$f(\vec a) = S_{ijkk} a_i a_j$$ where $S$ is a tensor of rank 4. Now, I'm not sure what to make of the index $k$ in the expression, as it doesn't appear on the left-hand side. Is it a typo, meaning there is a $k$ missing somewhere (like $f_k$), or does it mean that it should be summed over $k$ like so: $$f(\vec a) = \sum_i \sum_j \sum_k S_{ijkk} a_i a_j$$ - 1 I suppose so, but I'm no great shakes at Einstein notation. Anyway, if $S$ is of rank 4, you have to have those extra indices, so it can't be a typo. All this means is add all elements in a 'row' of the tensor, except in higher dimensions. I guess. – Manishearth♦ Mar 30 '12 at 13:21 1 Note: I am voting to close, since this seems a tad too localised. Feel free to ask this in chat if nobody has answered it in the comments by the time it gets closed :\ – Manishearth♦ Mar 30 '12 at 13:23 @Manishearth forget about what this equation represents, it's a general question about notation. If an index is repeated inside the same variable, does it imply summation? – F'x Mar 30 '12 at 13:24 1 I think one first has to calculate the "trace" of S indicated by the implicit Einstein sum over k, which leaves S as a second rank tensor with indices i and j. Summing over i and j as explicitly written in the second equation then gives the scalar corresponding to f on the l.h.s. – Dilaton Mar 30 '12 at 13:33 1 @Manishearth I'm pretty sure this is not the sort of thing "too localized" is for. (I think it's a fine question, actually) – David Zaslavsky♦ Mar 30 '12 at 18:50

## 2 Answers

In the Einstein convention, pairs of equal indices to be summed over may appear in the same tensor. For example, the formula ${A_k}^k=tr~A$ is perfectly legitimate. But your formula looks strange, as one usually sums over a lower index and an upper index, whereas you sum over lower indices only, which doesn't make sense in differential geometry unless your metric is flat and Euclidean (and then higher order tensors are very unlikely to occur). - The tensor in question is the elastic stiffness tensor… – F'x Mar 30 '12 at 13:48 then it makes sense. – Arnold Neumaier Mar 30 '12 at 13:50 1 In particle physics as well, people are frequently sloppy with the notation and use upper and lower indices interchangeably. – David Zaslavsky♦ Mar 30 '12 at 18:48 @DavidZaslavsky: If the two summed indices are lower, it means contract with g, so there is no ambiguity and it might not be called sloppy. – Ron Maimon Mar 31 '12 at 2:52

You could rewrite your equation as $$f(\vec a) = S_{ijkl} a_i a_j \delta_{kl}$$ where $\delta_{kl}$ is the Kronecker delta, if that helps. The last equation you've written is the right idea. I would stress, though, that Einstein notation usually uses one upper and one lower index. This is partially so you can quickly see if your summations and indices are correct. -
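To see the convention the answers describe in executable form, here is a small numpy sketch (an added illustration; S and a are random placeholder arrays, not the elastic stiffness tensor mentioned in the comments). numpy's einsum follows exactly this rule: the repeated k inside the single operand S picks out its diagonal and is summed, while i and j are contracted against a.

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((3, 3, 3, 3))   # hypothetical rank-4 tensor
a = rng.standard_normal(3)              # hypothetical 3-vector

# Interpretation discussed above: the repeated k in S_{ijkk} is also summed,
# i.e. f = sum_{i,j,k} S[i, j, k, k] * a[i] * a[j].
f_einsum = np.einsum('ijkk,i,j->', S, a, a)

# The same thing as an explicit triple sum, for comparison.
f_loops = sum(S[i, j, k, k] * a[i] * a[j]
              for i in range(3) for j in range(3) for k in range(3))

print(np.isclose(f_einsum, f_loops))    # True
```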
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9412943720817566, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/248/list
## Return to Answer

1. Well, there are stupid examples like the fact that $\mathbb{P}^n$ has Kähler structures where any rational multiple of the hyperplane class is the Kähler class which are compatible with the standard complex structure (you just rescale the symplectic structure and metric). I think you should get similar examples with multi-parameter families on things like toric varieties with higher dimensional $H^2$.

2. I know some non-compact examples where you can deform the complex structure without changing the symplectic one. I don't know any compact examples, but they probably exist. The thing is, the only thing you can deform about a symplectic structure on a compact thing is its cohomology class (by the Moser trick), so anything with a big enough family of Kähler metrics will work.

3. This probably follows from GAGA, but you'd have to ask someone more expert than me to be sure.

Edit: David's answer made me realize I forgot to say projective here. That's important.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9459412693977356, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/81296/every-equalizer-is-monic
# Every equalizer is monic Theorem I of section 3.10 of Goldblatt's Topoi states that every equalizer is monic. I don't understand the proof given. For reference, it is: Suppose $i : e \rightarrow a$ equalizes $f,g : a \rightarrow b$. Suppose $i \circ j = i \circ l$ where $j,l : c \rightarrow e$. Since $$f \circ (i \circ j) = (f \circ i) \circ j = (g \circ i) \circ j = g \circ (i \circ j)$$ there exists a unique $k : c \rightarrow e$ such that $i \circ k = i \circ j$. Hence, $k = j$. Since $i \circ l = i \circ j$, $k = l$. Hence, $j = l$ and $i$ is monic. I follow Goldblatt up to the derivation of the identity $k = j$. Since $f \circ g = f \circ h$ implies $g = h$ whenever $f$ is monic, but we don't know that $i$ is monic in this case, the identity must be derived in some other way. However, I don't understand what this way is. - 3 You're applying the universal property of $i$ to see that $k$ is the unique morphism $k: c \to e$ such that $i \circ k = i \circ j$ but $j$ is already a morphism $j: c \to e$ such that $i \circ j = i \circ j$, so $j = k$. – t.b. Nov 12 '11 at 5:26 I had read the definition and proof several times, but completely missed that $k$ is unique, from which the identities can be derived. Thank you for pointing out this now completely obvious fact. – danportin Nov 12 '11 at 5:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.96957927942276, "perplexity_flag": "head"}
http://www.chegg.com/homework-help/questions-and-answers/identical-balls-mass-42-g-suspended-threads-length-116-m-carry-equal-charges-shown-figure--q1368628
## Hanging Charges

Two identical balls of mass 4.2 g are suspended from threads of length 1.16 m and carry equal charges as shown in the figure. Each ball is 1.10 cm from the centerline.

1) Assume that the angle $\Theta$ is so small that its tangent can be replaced by its sine and find the magnitude of charge on one of the balls.

2) Now, assume the two balls are losing charge to the air very slowly. That means they'll be slowly approaching each other. If a ball is moving at an instantaneous speed of 4.20E-5 m/s, at what rate is the ball losing charge? Start by writing the speed of the ball and the rate of change of the charge as symbolic derivatives, and then relate those derivatives. Give your answer in Coulombs per second (C/s). Note that because the balls are losing charge so slowly, we can still use our results from the previous part, as the system is almost in equilibrium.

# Answers (0)

There are no answers to this question yet.
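With no answers posted, here is one way the calculation could be set up; this is a hedged sketch of the usual small-angle force balance, and the numerical values of g and of the Coulomb constant k are assumptions supplied here, not data from the problem statement.

```python
import numpy as np

m = 4.2e-3      # kg, mass of each ball
L = 1.16        # m, thread length
x = 1.10e-2     # m, distance of each ball from the centerline
v = 4.20e-5     # m/s, instantaneous speed toward the centerline (part 2)
g = 9.81        # m/s^2 (assumed)
k = 8.99e9      # N m^2 / C^2, Coulomb constant (assumed)

# Part 1: small-angle force balance.  Horizontally k q^2 / d^2 = m g tan(theta),
# and with tan(theta) ~ sin(theta) = x / L and separation d = 2 x:
d = 2 * x
q = d * np.sqrt(m * g * x / (k * L))
print(f"q ~ {q:.2e} C")                 # a few nanocoulombs

# Part 2: from the same balance, q^2 = (4 m g / (k L)) x^3, i.e. q is proportional to x^(3/2).
# Differentiating: dq/dt = (3/2) (q / x) dx/dt, with |dx/dt| = v.
dq_dt = 1.5 * q / x * v
print(f"|dq/dt| ~ {dq_dt:.1e} C/s")     # rate at which charge is lost
```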
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 1, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.950095534324646, "perplexity_flag": "middle"}
http://quant.stackexchange.com/questions/96/what-is-a-coherent-risk-measure/2045
What is a “coherent” risk measure?

What is a coherent risk measure, and why do we care? Can you give a simple example of a coherent risk measure as opposed to a non-coherent one, and the problems that a coherent measure addresses in portfolio choice? -

4 Answers

I'm just providing a global answer to the question, as I think it can be interesting for some beginners in quant finance. The properties given by TheBridge:

Normalization: $\rho (\emptyset)=0$. This means you have no risk in taking no position.

Sub-additivity: $\rho(A_1+A_2) \leq \rho(A_1)+\rho(A_2)$. Having a position in two different assets can only decrease the risk of the portfolio (diversification).

Positive homogeneity: $\rho(\lambda A) = \lambda \rho(A)$. Doubling a position in an asset A doubles your risk.

And finally, translation invariance: $\rho(A + x) = \rho(A)-x$. That is, adding cash to a portfolio only diminishes the risk.

So a risk measure is said to be coherent if and only if it has all these properties. Note that this is just a convention, but it is motivated by the fact that all these properties are the ones an investor expects to hold for a risk measure. Finally, notice that neither VaR nor the variance is a coherent risk measure, whereas the Expected Shortfall is. -

There are 4 defining properties of coherent risk measures; you can find them here, as well as examples of coherent risk measures and counterexamples of those kinds of risk measures. Regards - Thanks. But I still don't get it: who defines these properties, where do they come from, is there any theoretical basis for imposing them, and why? – Dimitris Feb 1 '11 at 15:57 2 I think these are just formal ways of describing informal "common-sense" ideas about risk. In the Wikipedia article, each axiom has a short sentence that describes the motivation -- such as "the risk of two portfolios together cannot get any worse than adding the two risks separately". – Curt Hagenlocher Feb 2 '11 at 15:06

Coherent risk measures were created to address the problem that extant risk measures, like VaR, did not: namely that a risk measure should reward diversification. -

I don't think that we should care if a risk measure is coherent. The reason that VaR is not coherent is because it need not be sub-additive. I'm willing to stand corrected, but I doubt that VaR is very far from sub-additive in practical situations. And I don't see a great deal of harm if it were. I have several problems with VaR but non-coherence is not among them. The homogeneity condition is wrong. I call this the Amaranth condition -- it turns out that owning all of one side of a market is risky. - I'm not sure I get what you don't "like" in the homogeneity. Could you please explain a bit further? – SRKX♦ Sep 27 '11 at 16:36 The homogeneity condition claims that it is only 100 times more risky to own all of one side of a market than to share it equally with 99 others. I find that hard to believe. – Patrick Burns Sep 27 '11 at 20:06 I agree that homogeneity for large relative positions is not sensible, but it is worth noting that VaR doesn't address that, at least in implementations I have seen. – Brian B Oct 25 '11 at 18:56
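Since the discussion keeps circling back to sub-additivity, here is a self-contained numerical illustration (my own sketch, not taken from any of the answers) of the standard way VaR can fail it while Expected Shortfall does not: two independent "bonds" that each default with 4% probability.

```python
import numpy as np

# Each bond loses 100 with probability 4% and nothing otherwise.  At the 95% level the VaR
# of each bond alone is 0, but the chance the two-bond portfolio loses at least 100 is
# 1 - 0.96**2 = 7.84% > 5%, so the portfolio VaR is 100: sub-additivity fails.
rng = np.random.default_rng(0)
n = 1_000_000
loss_a = np.where(rng.random(n) < 0.04, 100.0, 0.0)
loss_b = np.where(rng.random(n) < 0.04, 100.0, 0.0)

def var(losses, level=0.95):
    """Historical VaR: the `level` quantile of the loss distribution."""
    return np.quantile(losses, level)

def expected_shortfall(losses, level=0.95):
    """Average of the worst (1 - level) fraction of losses (a simple historical estimator)."""
    k = int(np.ceil((1 - level) * len(losses)))
    return np.sort(losses)[-k:].mean()

print(var(loss_a), var(loss_b), var(loss_a + loss_b))      # ~0.0, 0.0, 100.0: VaR not sub-additive
print(expected_shortfall(loss_a) + expected_shortfall(loss_b)
      >= expected_shortfall(loss_a + loss_b))              # True: ES is sub-additive here
```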
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9525346755981445, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Fermi_level
# Fermi level Not to be confused with Fermi energy. In the context of solid state physics, the total chemical potential for electrons (or electrochemical potential) is known as the Fermi level, usually denoted by µ or EF. Identical in meaning to the electrochemical potential, the Fermi level of a body is the thermodynamic work required to add one electron to it (not counting the work required to remove the electron from wherever it came from). A precise understanding of the Fermi level---how it relates to electronic band structure in determining electronic properties, how it relates to the voltage and flow of charge in an electronic circuit---is essential to an understanding of solid state physics. In a band structure picture, the Fermi level can be considered to be a hypothetical energy level of an electron, such that at thermodynamic equilibrium this energy level would have a 50% probability of being occupied at any given time. The Fermi level does not necessarily correspond to an actual energy level (in an insulator the Fermi level lies in the band gap), nor does it even require the existence of a band structure. Nonetheless, the Fermi level is a precisely defined thermodynamic quantity, and differences in Fermi level can be measured simply with a voltmeter. ## The Fermi level and voltage In oversimplified descriptions of electric circuits it is said that electric currents are driven by differences in electrostatic potential (Galvani potential), but this is not exactly true.[1] As a counterexample, multi-material devices such as p-n junctions contain internal electrostatic potential differences at equilibrium, without any accompanying current. Also, if a voltmeter is attached to the junction, one simply measures zero volts. Clearly, the electrostatic potential is not the only factor influencing the flow of charge in a material. In fact, the quantity called "voltage" as measured in an electric circuit is more closely related to the chemical potential for electrons (Fermi level). When the leads of a voltmeter are attached to two points in a circuit, the displayed voltage is a measure of the work that can be obtained by allowing a tiny unit of charge to flow from one point to the other. If a simple wire is connected between two points of differing voltage (forming a short circuit), current will flow from positive to negative voltage, converting the available work into heat. The electrochemical potential (Fermi level) of a body expresses precisely the work required to add an electron to it, or equally the work obtained by removing an electron. Therefore, the observed difference (VA-VB) in voltage between two points "A" and "B" in an electronic circuit is exactly related to the corresponding difference (µA-µB) in electrochemical potential by the formula $(V_{\mathrm{A}}-V_{\mathrm{B}}) = -(\mu_{\mathrm{A}}-\mu_{\mathrm{B}})/e$ where -e is the electron charge. From the above discussion it can be seen that electrons will move from a body of high µ (low voltage) to low µ (high voltage) if a simple path is provided. This flow of electrons will cause the lower µ to increase (due to charging or other repulsion effects) and likewise cause the higher µ to decrease. Eventually, µ will settle down to the same value in both bodies. This leads to an important fact regarding the equilibrium (off) state of an electronic circuit: An electronic circuit in thermodynamic equilibrium will have a constant Fermi level throughout its connected parts. No current flows in this circuit. 
This also means that the voltage (measured with a voltmeter) between any two points will be zero, at equilibrium. Note that thermodynamic equilibrium here requires that the circuit should be internally connected and not contain any batteries or other power sources, nor any variations in temperature. ## Fermi level referencing and the location of zero Fermi level Much like the choice of origin in a coordinate system, the zero point of energy can be defined arbitrarily, since observable phenomena only depend on energy differences. When comparing distinct bodies, however, it is important that they are all consistent in their choice of the location of zero energy, or else nonsensical results will be obtained. It can therefore be helpful to explicitly name a common point to ensure that different components are in agreement. On the other hand, if a reference point is chosen ambiguously (such as "the vacuum", see below) it will instead cause more problems. A practical and well-justified choice of common point is a bulky, physical conductor, such as the electrical ground or earth. Such a conductor can be considered to be in a good thermodynamic equilibrium and so its µ is well defined. It provides a reservoir of charge, so that large numbers of electrons may be added or removed without incurring charging effects. It also has the advantage of being accessible, so that the Fermi level of any other object can be measured simply with a voltmeter. ### Why it is not advisable to use "the energy in vacuum" as a reference zero In principle, one might consider using the state of a stationary electron in the vacuum as a reference point for electrochemical potential. This approach is not advisable unless one is careful to define exactly where "the vacuum" is.[2] The problem is that not all points in the vacuum are equivalent. At thermodynamic equilibrium, it is typical for electrical potential differences of order 1 V to exist in the vacuum (Volta potentials). The source of this vacuum potential variation is the variation in work function between the different conducting materials exposed to vacuum. Just outside a conductor, the electrostatic potential depends sensitively on the material, as well as which surface is selected (its crystal orientation, contamination, and other details). The parameter that gives the best approximation to universality is the "Earth-referenced electrochemical potential" used earlier. This also has the advantage that it can be measured with a voltmeter. ## The Fermi level and band structure Simplified diagram of the filling of electronic band structure in various types of material, relative to the Fermi level EF (materials are shown in equilibrium with each other). In metals and semimetals the Fermi level lies inside at least one band, with semimetals containing far fewer charge carriers. In insulators the Fermi level is deep inside a forbidden gap, while in semiconductors the bands near the Fermi level are populated by thermally activated electrons and holes. In the band theory of solids, electrons are considered to occupy a series of bands composed of single-particle energy eigenstates each labelled by ϵ. Although this single particle picture is an approximation, it greatly simplifies the understanding of electronic behaviour and it generally provides correct results when applied correctly. The Fermi-Dirac distribution $f(\epsilon)$ gives the probability that (at thermodynamic equilibrium) an electron will occupy a state having energy ϵ. 
Alternatively, it gives the average number of electrons that will occupy that state given the restriction imposed by the Pauli exclusion principle:[3] $f(\epsilon) = \frac{1}{e^{(\epsilon-\mu) / (k T)} + 1}$ Here, T is the absolute temperature and k is Boltzmann's constant. If there is a state at the Fermi level (ϵ = µ), then this level will have a 50% chance of being occupied at any given time.

The location of µ within a material's band structure is important in determining the electrical behaviour of the material.

• In an insulator µ lies within a large band gap, far away from any states that are able to carry current.
• In a metal, semimetal or degenerate semiconductor, µ lies within a delocalized band. A large number of states nearby µ are thermally active and readily carry current.
• In an intrinsic or lightly doped semiconductor, µ is close enough to a band edge that there are a dilute number of thermally excited carriers residing near that band edge.

In semiconductors and semimetals the position of µ relative to the band structure can usually be controlled to a significant degree by doping or gating. These controls do not change µ which is fixed by the electrodes, but rather they cause the entire band structure to shift up and down (sometimes also changing the band structure's shape). For further information about the Fermi levels of semiconductors, see (for example) Sze.[4]

### Local conduction band referencing, internal chemical potential, and the parameter ζ

Simple band diagram showing the vacuum energy EVAC, conduction band edge EC, Fermi level EF, valence band edge EV, electron affinity Eea, work function Φ and band gap Eg

If the symbol ℰ is used to denote an electron energy level measured relative to the energy of the bottom of its enclosing band, ϵC, then in general we have ℰ = ϵ – ϵC, and in particular we can define the parameter ζ [5] by referencing the Fermi level to the band edge: $\zeta = \mu - \epsilon_{\rm C}.$ It follows that the Fermi-Dirac distribution function can also be written $f(\mathcal{E}) = \frac{1}{1 + \mathrm{exp}[(\mathcal{E}-\zeta)/k_{\mathrm{B}} T]}.$

The band theory of metals was initially developed by Sommerfeld, from 1927 onwards, who paid great attention to the underlying thermodynamics and statistical mechanics. He describes ζ as the "free enthalpy of an electron", but this name is not now in common use. Confusingly, in some contexts ζ may be called the "Fermi level", "chemical potential" or "electrochemical potential", leading to ambiguity with the globally-referenced quantity µ. In this article the terms "conduction-band referenced Fermi level" or "internal chemical potential" are used to refer to ζ.

ζ is directly related to the number of active charge carriers as well as their typical kinetic energy, and hence it is directly involved in determining the local properties of the material (such as electrical conductivity). For this reason it is common to focus on the value of ζ when concentrating on the properties of electrons in a single, homogeneous conductive material. By analogy to the energy states of a free electron, the ℰ of a state is the kinetic energy of that state and ϵC is its potential energy. With this in mind, the parameter ζ could also be labelled the "Fermi kinetic energy". Unlike µ, the parameter ζ is not a constant at equilibrium, taking on multiple values due to variations in ϵC. ζ usually varies from location to location in a material, depending on factors such as material quality and impurities/dopants.
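As a quick numerical companion to the Fermi-Dirac occupation function written above (this snippet is an added illustration, not part of the article), the following evaluates f for energies measured from the Fermi level in eV, showing the occupation is exactly 1/2 at the Fermi level and falls off over a few kT:

```python
import numpy as np

K_B = 8.617333262e-5   # Boltzmann constant in eV/K

def fermi_dirac(energy_ev, mu_ev=0.0, temperature_k=300.0):
    """Occupation f(eps) = 1 / (exp((eps - mu) / (k T)) + 1)."""
    return 1.0 / (np.exp((energy_ev - mu_ev) / (K_B * temperature_k)) + 1.0)

print(fermi_dirac(0.0))    # 0.5  : a state right at the Fermi level is half-occupied
print(fermi_dirac(0.1))    # ~0.02: a state 0.1 eV above mu at room temperature is rarely occupied
print(fermi_dirac(-0.1))   # ~0.98: a state 0.1 eV below mu is almost always occupied
```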
Near the surface of a semiconductor or semimetal, ζ can be strongly controlled by externally applied electric fields, as is done in a field effect transistor. ζ in a multi-band material may even take on multiple values in a single location. For example, in a piece of aluminum metal there are two conduction bands crossing the Fermi level (even more bands in other materials);[6] each band has a different edge energy ϵC and a different value of ζ.

The value of ζ at zero temperature is widely known as the Fermi energy, sometimes written ζ0. Confusingly (again), the name "Fermi energy" sometimes is used to refer to ζ at nonzero temperature.

## The Fermi level and temperature out of equilibrium

See also: Quasi-Fermi level

The Fermi level μ and temperature T are well defined constants for a solid state device in a thermodynamic equilibrium situation, such as when it is sitting on the shelf doing nothing. When the device is brought out of equilibrium and put into use, then strictly speaking the Fermi level and temperature are no longer well defined. Fortunately, it is often possible to define a quasi-Fermi level and quasi-temperature for a given location that accurately describe the occupation of states in terms of a thermal distribution. The device is said to be in 'quasi-equilibrium' when such a description is possible.

The quasi-equilibrium approach allows one to build a simple picture of some non-equilibrium effects, such as the electrical conductivity of a piece of metal (as resulting from a gradient in μ) or its thermal conductivity (as resulting from a gradient in T). The quasi-μ and quasi-T can vary (or not exist at all) in any non-equilibrium situation, such as:

• If the system contains a chemical imbalance (as in a battery).
• If the system is exposed to changing electromagnetic fields (as in capacitors, inductors, and transformers).
• Under illumination from a light-source with a different temperature, such as the sun (as in solar cells).
• When the temperature is not constant within the device (as in thermocouples).
• When the device has been altered, but has not had enough time to re-equilibrate (as in piezoelectric or pyroelectric substances).

In some situations, such as immediately after a material experiences a high-energy laser pulse, the electron distribution cannot be described by any thermal distribution. One cannot define the quasi-Fermi level or quasi-temperature in this case; the electrons are simply said to be "non-thermalized". In less dramatic situations, such as in a solar cell under constant illumination, a quasi-equilibrium description may be possible but requires the assignment of distinct values of μ and T to different bands (conduction band vs. valence band). Even then, the values of μ and T may jump discontinuously across a material interface (e.g., p-n junction) when a current is being driven, and be ill-defined at the interface itself.

## Terminology problems

Unfortunately, the definitions of the terms "Fermi level", "Fermi energy", "chemical potential", and "electrochemical potential" are by no means universal. This can lead to some confusion when comparing scientific or engineering literature between different authors.

• Chemical potential and Electrochemical potential: In some parts of the literature the term "chemical potential" is used instead of "electrochemical potential". In the past there has been no consensus as to whether these two terms should mean the same thing.
Some textbooks continue to make a distinction (and, worse, there are alternative conventions as to what each term means). The more modern view is that "chemical potential" should mean the same thing as "electrochemical potential" – but that in some contexts there is a separate concept – called here the "internal chemical potential" – that is the energy left when the "purely electrostatic component of electrochemical potential" is subtracted out. (In other contexts it may not be possible to make a division into components in any sensible way.) In any case, it is usually only the total combined thermodynamic potential that can be measured. As already noted, it is thought less confusing here to use the name "electrochemical potential" for the total thermodynamic potential.

• Alternative uses of the name "Fermi energy". It is normal in solid-state physics to use the term "Fermi energy" as a name for ζ0, as done here.[7] However, particularly in semiconductor physics and engineering, the term "Fermi energy" is sometimes used as a synonym for "Fermi level".[8]

## Discrete charging effects

In cases where the "charging effects" due to a single electron are non-negligible, the above definitions should be clarified. For example, consider a capacitor made of two identical parallel plates. If the capacitor is uncharged, the Fermi level is the same on both sides, so one might think that it should take no energy to move an electron from one plate to the other. But when the electron has been moved, the capacitor has become (slightly) charged, so this does take a slight amount of energy. In a normal capacitor, this is negligible, but in a nano-scale capacitor it can be more important. In this case one must be precise about the thermodynamic definition of the electrochemical potential as well as the state of the device (is it electrically isolated, or is it connected to an electrode?):

• If the charge on a body is fixed and known, but the body is thermally connected to a reservoir, then it is in the canonical ensemble. We can define a "chemical potential" in this case as the work required to add one electron to a body that already has exactly $N$ electrons,[9] $\mu(N,T) = F(N+1,T) - F(N,T),$ where $F(N+1,T)$ is the free energy with $N+1$ electrons, and $F(N,T)$ is the free energy with $N$ electrons. The "chemical potential" here has a slightly different meaning than the Fermi level; the occupation of electron energy levels in the canonical ensemble is not described by the Fermi-Dirac distribution, as that distribution implies that $N$ can fluctuate.

• When the body is also able to exchange charge with the reservoir (electrode), it enters the grand canonical ensemble. The value of the chemical potential $\mu$ is fixed by the electrode, and the charge $N$ on the body may fluctuate. In this case $\mu$ corresponds to the notion of Fermi level in this article, as it is constant in the device at equilibrium, and the electron statistics are described by the Fermi-Dirac distribution. This $\mu$ is not determined by a discrete charging event; rather, it gives the infinitesimal amount of work needed to increase the average number of electrons ($\langle N\rangle$) by an infinitesimal amount: $\mu(\langle N\rangle,T) = \left(\frac{\partial F}{\partial \langle N\rangle}\right)_{T}$

In the example of the nano-scale capacitor we can therefore consider two distinct situations of charging.
Let us label the two plates A and B, and note that the chemical potential of each plate will have some interdependence on the status of the other plate:

• Electrically isolated plates (canonical ensemble): The work to move one electron from A to B will be determined by the process of removing then adding the electron (or adding then removing). This work is the difference $\begin{align} W & = \mu_{\rm B}(N_{\rm A}-1,N_{\rm B},T) - \mu_{\rm A}(N_{\rm A}-1,N_{\rm B},T) \\ & = F(N_{\rm A}-1,N_{\rm B}+1,T) - F(N_{\rm A},N_{\rm B},T). \end{align}$

• Reservoir-connected plates (grand canonical ensemble): We do not directly move the charge, but we may instead apply a voltage to each plate and change the average number of electrons $\langle N\rangle$ by one. For each plate, $\mu$ is a continuous function of $\langle N\rangle$ and the work performed is determined by integrals of $\mu$, or $W = F(\langle N_{\rm A}\rangle-1,\langle N_{\rm B}\rangle+1,T) - F(\langle N_{\rm A}\rangle,\langle N_{\rm B}\rangle,T).$

## Footnotes and references

1. I. Reiss, What does a voltmeter measure? Solid State Ionics 95, 327 (1997).
2. Technically, it is possible to consider the vacuum to be an insulator and in fact its Fermi level is defined if its surroundings are in equilibrium. Typically however the electrochemical potential is two to five electron volts below the vacuum electrostatic potential energy, depending on the work function of the nearby vacuum wall material. Only at high temperatures will the equilibrium vacuum be populated with a significant number of electrons (this is the basis of thermionic emission).
3. Kittel, Charles; Herbert Kroemer (1980). Thermal Physics (2nd edition). W. H. Freeman. p. 357. ISBN 978-0-7167-1088-2.
4. Sze, S. M. (1964). Physics of Semiconductor Devices. Wiley. ISBN 0-471-05661-8.
5. Sommerfeld, Arnold (1964). Thermodynamics and Statistical Mechanics. Academic Press.
6. "3D Fermi Surface Site". Phys.ufl.edu. 1998-05-27. Retrieved 2013-04-22.
7. See, for example, Ashcroft and Mermin. Solid State Physics. ISBN 0-03-049346-3.
8. For example: D. Chattopadhyay (2006). Electronics (Fundamentals and Applications). ISBN 978-81-224-1780-7; and Balkanski and Wallis (2000). Semiconductor Physics and Applications. ISBN 978-0-19-851740-5.
9. Shegelski, Mark R. A. (2004). "The chemical potential of an ideal intrinsic semiconductor". American Journal of Physics 72 (5): 676–678. doi:10.1119/1.1629090.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 24, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.908301055431366, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/111982/intuitive-interpretation-of-these-differential-forms
# Intuitive interpretation of these differential forms

Let $\pi: S^2-\{N\}\to \mathbb R^2$ be the stereographic projection map. Let $\sigma:\mathbb R^2\to S^2-\{N\}$ be its inverse. Let $p\in S^2-\{N\}$ and let $x_1,x_2$ be elements of the tangent space of $S^2$ at $p$. Would someone be nice enough to explain to me then what the following mean intuitively? And hopefully also a way to visualize them/gain some sort of physical intuition on them?

1) $(d\sigma)_{\pi(p)}$

2) $d\pi_p$

3) Why $x_1\cdot x_2=(d\pi_p(x_1),d\pi_p(x_2))_{\pi(p)}$

4) $d\pi_p\circ (d\sigma)_{\pi(p)}=$ identity

I have read the differential forms article on Wikipedia in hope to learn more, but I still don't quite get the idea. I know for example that (1) is the differential of $\sigma$ at the point $\pi(p)$ but I don't understand what that means. I hope that someone could give me a geometric picture of some kind. And if there should be such a saint out there, I would like to thank you very much (in advance). -

## 1 Answer

I will explain the answer only geometrically, as required.

What $\pi$ does: $\pi$ takes a great circle (minus N) to a line passing through the origin, and $\sigma$ takes a line passing through the origin to a great circle (minus N), in a smooth manner. Now for $p\in S^2-\{N\}$ and $v\in T_p (S^2-\{N\})$, there is a great circle $\gamma$ which passes through $p$, with $\gamma(0)= p$ and $\gamma'(0)= v$. Via the map $\pi$, $\gamma$ is mapped to some line $l$ passing through the origin. $d\pi_p$ maps the vector $v$ to the speed of $l$ at $\pi(p)$. Conversely, $d\sigma_{\pi(p)}$ maps the speed vector of a particular line passing through the origin to the speed of the corresponding great circle at $p$.

Now, in your 3rd question you are defining a metric on $S^2$; this is the induced metric. Actually this expression just says that the map $\pi$ is a conformal map: the angle between two lines passing through the origin is the same as the angle between their corresponding images, that is, the great circles.

As $\pi$ and $\sigma$ are inverses of each other, we have $\pi \circ\sigma = Id$. Now take the derivative and use the chain rule, and you will get your 4th one. Geometrically it says that the tangent space at $p$ is isomorphic to $\mathbb R^2$. -

Thank you very much, Pradip. May I ask what you mean by the speed of a line or of a great circle? – small potato Feb 22 '12 at 14:18 If $\gamma(t)$ is any curve, either a line or a great circle, then the speed at $t_0$ is given by $\gamma'(t_0)=\frac{d}{dt}\gamma(t)|_{t=t_0}$ – zapkm Feb 22 '12 at 15:06 @smallpotato, take $\gamma(t)= 2t$; this is a line through the origin with constant speed $\gamma'(t)= 2$. – zapkm Feb 22 '12 at 15:44
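As a small computational companion to point 3 and the conformality remark in the answer, here is a sketch in sympy (added for illustration; it assumes the common convention that projects the unit sphere from the north pole N = (0, 0, 1) onto the plane, so the explicit formula for σ below is that convention's inverse map, not something taken from the thread). It computes the pullback metric $(d\sigma)^T(d\sigma)$ and shows it is a positive function times the Euclidean metric, which is exactly the conformality statement.

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)
r2 = u**2 + v**2

# Inverse stereographic projection sigma: R^2 -> S^2 - {N}, with N = (0, 0, 1).
sigma = sp.Matrix([2*u/(1 + r2), 2*v/(1 + r2), (r2 - 1)/(1 + r2)])

J = sigma.jacobian([u, v])                 # the differential (d sigma) at the point (u, v)
G = (J.T * J).applyfunc(sp.simplify)       # pullback of the ambient Euclidean metric

print(G)                                   # 4/(1 + u**2 + v**2)**2 times the 2x2 identity
print(sp.simplify(sigma.dot(sigma)))       # 1, confirming sigma really lands on the unit sphere
```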
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 41, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9526455998420715, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/38210/interpretation-of-field-operator
Interpretation of field operator

Consider a real scalar field operator $\varphi$. It can be written in terms of creation and annihilation operators as $$\varphi(\textbf{x})=\int \tilde{dk}[ a(k)e^{i\textbf{kx}}+a(k)^{\dagger}e^{-i\textbf{kx}}]$$ where $\tilde{dk}$ is a Lorentz-invariant measure. If $\varphi$ is interpreted as creating a particle at $\textbf{x}$ when acting on the vacuum, what is its action on a generic state? It seems to be creating a superposition of a state with one added quantum of energy through the creation operator, and a state with one less quantum of energy through the annihilation operator. -

1 Answer

As the formula clearly shows, $\phi(x)$ cannot be interpreted as a pure creation operator of any type. It is a combination of creation and annihilation operators. Creation operators are those called $a(k)^\dagger$ and annihilation operators are called $a(k)$. So yes, if $\phi(x)$ acts on a generic state with a well-defined number of particles $N$, it produces a linear superposition of states that have $N+1$ and $N-1$ particles, respectively. When it acts on the vacuum, for example, however, the annihilation operator piece drops out and it creates a 1-particle state. It's somewhat hard to understand what you mean by "interpretation". The only right interpretation is the right calculation. It is an operator that gives something if it acts on a state, and all these answers may be calculated. They shouldn't be interpreted, they should be calculated. -

Thanks, this is very helpful. The origin of my question is that I've often seen in textbooks that correlation functions of the type $<0|T\varphi (x_1)\varphi (x_2)|0>$ are described as a process where a particle is created at $x_2$, travels to $x_1$, and is then annihilated there. – Whelp Sep 24 '12 at 20:14 Whelp, that interpretation makes sense if we remember that we can think of each field operator acting on the vacuum bra/ket that they are next to, and that we are then taking the overlap of those two single particle states. – Doug Packard Sep 24 '12 at 20:52 @Lubos, well we can interpret things "differently" depending on whether we are thinking e.g. in the language of particle physics versus field integrals. Of course it really means the same thing, so convincing ourselves that an "interpretation" is correct is just a matter of using the dictionary to translate the one we are unsure about into the language we happen to more intuitively understand. – Doug Packard Sep 24 '12 at 20:56 Dear @Whelp, that statement is also correct because all the annihilation operators in $\phi(x_2)$ simply annihilate the vacuum ket on the right, and all the creation operators in $\phi(x_1)$ annihilate the vacuum bra on the left, so what is left is only the annihilation part of $\phi(x_1)$ and the creation part of $\phi(x_2)$. However, if you had more general states in which the operators are sandwiched, you couldn't drop 1/2 of the terms this easily. – Luboš Motl Oct 3 '12 at 6:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9531245231628418, "perplexity_flag": "head"}
http://mathhelpforum.com/differential-geometry/173941-order-topology-vs-standard-one-print.html
# Order Topology vs. the standard one?

• March 9th 2011, 01:32 AM aharonidan

On $R^2$ we define an order topology (lexicographic order) as follows: $(x_1,y_1)>(x_2,y_2)$ if $x_1>x_2$ or ($x_1=x_2$ and $y_1>y_2$). Is this topology equivalent to the standard topology? If not, which topology is finer? (Give an example of an open set in one but not in the other.) Thanks :)

• March 9th 2011, 02:16 AM tonio

Quote (originally posted by aharonidan): On $R^2$ we define an order topology (lexicographic order) as follows: $(x_1,y_1)>(x_2,y_2)$ if $x_1>x_2$ or ($x_1=x_2$ and $y_1>y_2$). Is this topology equivalent to the standard topology? If not, which topology is finer? (Give an example of an open set in one but not in the other.) Thanks :)

You have defined no topology at all: you've defined a (partial) order on $\mathbb{R}^2$. Now, as far as I know, an order topology is defined on a totally ordered set, so if you meant this then you first prove the above gives you a total order on the real plane and then look at the derived (order) topology, and then you can ask yourself whether this topology is the same as the usual Euclidean one.

Tonio

• March 9th 2011, 02:43 AM aharonidan

I wasn't clear enough. If $(X,<)$ is a partially ordered set, then all intervals $(a,b)$ with $a<b$, together with $(-\infty,\infty)$, form a base of a topology on X. Now I need to compare this topology with the Euclidean one. Any help or hint is appreciated.
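As a quick sanity check on the kind of open set that distinguishes the two topologies (an added sketch, not from the thread; the endpoints (0, -1) and (0, 1) are just a convenient choice), the lexicographic order-interval between (0, -1) and (0, 1) contains only points with x = 0, i.e. it is a vertical open segment: open in the order topology, but containing no Euclidean ball.

```python
from itertools import product

def lex_less(p, q):
    """Lexicographic order on R^2: p < q iff p.x < q.x, or p.x == q.x and p.y < q.y."""
    return p[0] < q[0] or (p[0] == q[0] and p[1] < q[1])

lo, hi = (0.0, -1.0), (0.0, 1.0)
grid = [(i / 10.0, j / 10.0) for i, j in product(range(-20, 21), repeat=2)]

inside = [p for p in grid if lex_less(lo, p) and lex_less(p, hi)]
print(all(p[0] == 0.0 for p in inside))   # True: every grid point in the interval has x = 0
print(len(inside))                        # 19 points, all of the form (0, y) with -1 < y < 1
```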
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9026402831077576, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/47547/why-do-many-people-say-vector-fields-describe-spin-1-particle-but-omit-the-spin/47550
# Why do many people say vector fields describe spin-1 particles but omit the spin-0 part?

We know a vector field is a $(\frac{1}{2},\frac{1}{2})$ representation of the Lorentz group, which should describe both spin-1 and spin-0 particles. However, many of the articles (mostly lecture notes) I've read, when they talk about the relation between types of fields and spins of particles, they'll always say something like

[...] scalar fields describe spin-0 particles, vector fields describe spin-1 particles. [...]

Is there a good reason to omit the spin-0 part of vector fields? If one really wants to talk about spin-1 only, why not talk about tensor fields in the $(1,0)$ or $(0,1)$ representation? -

## 2 Answers

It's because 4-vector fields in the $(1/2,1/2)$ representation don't produce any spin-0 excitations, at least not in consistent theories. Electromagnetism is the canonical example. The vector field $A_\mu$ would create both positive-norm ($A_i$) and negative-norm ($A_0$) polarizations. The latter is time-like. However, probabilities can't be negative, so for this electromagnetic potential as well as any other 4-vector field, there must exist a gauge symmetry that decouples the spin-0, timelike component. In fact, it decouples two components – the time-like one and the longitudinal one. The longitudinal one may get restored by the Higgs mechanism in the case of massive vectors such as the W-bosons. At any rate, whether one talks about the massless particles created by such fields – e.g. photons or gluons – or about the massive ones such as W- and Z-bosons, there are no physical spin-0 polarizations, which is why the massive ones in particular are known as vector bosons. Incidentally, for electromagnetism and other gauge theories, the $(1,0)$ or $(0,1)$ representations appear, too. $F_{\mu\nu}$, the gauge-invariant field strength of the $(1/2,1/2)$ potential, transforms as $(1,0)\oplus (0,1)$. In the Minkowski signature, one may impose a real projection on this representation to get 6 real components (electric and magnetic field strength); in other signatures, one may separate $(0,1)$ and $(1,0)$. -

I need a clarification for one more point. Is it that a vector field can describe either spin-0 or spin-1 but not both, or is it that it can only describe spin-1? – Jia Yiyang Dec 25 '12 at 15:58

I believe that you are confusing SO(4) irreps with SO(3) irreps. The (1/2,1/2) representation of SO(4) is irreducible, so it corresponds to a single spin, spin 1. You get a vector and a scalar in the Clebsch-Gordan decomposition of two spinors of SO(3). Of course, these two notions are related because the (1/2,1/2) irrep of SO(4) is formed from two spinors of SO(3), which I imagine is the source of your confusion. This means that under an SO(3) subgroup of SO(4) the vector will decompose as a direct sum of an SO(3) vector and a scalar. For instance, under the subgroup of spatial rotations the space-like components of a 4-vector become a 3-vector and the time-like component becomes a scalar. However, Lubos has explained that in a good theory even the SO(3) scalar corresponding to the timelike component does not contribute any physical excitations. I'm not sure what answer you were looking for but I thought I should add this answer in case it was a simple case of mixing up SO(3) and SO(4) irreps. The moral of the story is that you must always remember which group you are dealing with. When people say that the (1/2,1/2) irrep corresponds to a spin 1 field they are talking about spin 1 of SO(4).
When people talk about addition of angular momentum in quantum mechanics the group is SO(3). - I agree that when restricting to the subgroup $SO(3)$ we get a C-G decomposition into spin-1 and spin-0, but I don't see what's wrong with calling these two components spin-1 and spin-0 particles. After all spin is defined via the representation of the little group and in our case (say massive) it is $SO(3)$. It seems to me that irreducibility in the full Lorentz group and reducibility in $SO(3)$ mean a boost may mix the states of spin-1 and spin-0 and a rotation cannot, which is not quite physical, but still it has nothing to do with the fact we have two types of spins. – Jia Yiyang Dec 26 '12 at 2:03 Ok, I see what is going on here. You have to differentiate between representations of the Lorentz group and representations of the Poincare group. Of course you are right that the spin is defined by the representation of the little group. However, to determine the spin of a field you do not decompose the representation of the Lorentz group into representations of the little group. The little group is used to construct representations of the Poincare group in which single particle states transform. – ald5657 Dec 26 '12 at 15:21 So, you can see that a massive vector field (Lorentz group) is spin 1 because it creates single particle states that transform in the spin 1 representation of the little group SO(3), not because it decomposes into the spin 1 representation under the SO(3) subgroup. – ald5657 Dec 26 '12 at 15:22 When restricting to the rotation subgroup, there's a close relation between the representation of the Lorentz group on fields and the one on single particle states, i.e. the field representation must contain the single particle state representation, c.f. Weinberg chap 5.1 page 196. Anyway this is not quite the point; the point is in the first place how do you know $(\frac{1}{2},\frac{1}{2})$ can describe spin-1? It is because $\frac{1}{2}+\frac{1}{2}=1$, then what makes you throw away $\frac{1}{2}-\frac{1}{2}=0$? – Jia Yiyang Dec 27 '12 at 6:49 For our purposes you should completely disregard what the (1/2,1/2) irrep decomposes into under SO(3). Yes, it is true that this decomposition must always contain the correct spin representation, but that is not necessary to understand the spin. The point is that if you take a (1/2,1/2) field like the photon and calculate the creation and annihilation operators you will find that these operators always create spin 1 single particle states and never spin 0 states. That's why we say that the (1/2,1/2) irrep is spin 1. – ald5657 Dec 27 '12 at 15:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 20, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9364015460014343, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2008/10/23/group-representations/?like=1&_wpnonce=401286028f
# The Unapologetic Mathematician

## Group Representations

We've now got the general linear group $\mathrm{GL}(V)$ of all invertible linear maps from a vector space $V$ to itself. Incidentally this lives inside the endomorphism algebra $\hom_\mathbf{Vect}(V,V)$ of all linear transformations from $V$ to itself. In fact, in ring-theory terms it's the group of units of that algebra. So what can we do with it? One of the biggest uses is to provide representations for other algebraic structures. Let's say we've got some abstract group. It's a set with some binary operation defined on it, sure, but what does it do? We've seen groups acting on sets before, where we interpret a group element as a permutation of an actual collection of elements. Alternatively, an action of a group $G$ is a homomorphism from $G$ to the group of permutations of some set $S$ — $\hom_\mathbf{Set}(S,S)$. Another concrete representation of a group is as symmetries of some vector space. That is, we're interested in homomorphisms $\rho:G\rightarrow\mathrm{GL}(V)$. A "representation" of a group $G$ is a vector space $V$ with such a homomorphism. In fact, this extends the notion of a group acting on a set. Indeed, for any set $S$ we can build the free vector space $\mathbb{F}[S]$ with a basis vector $e_s$ for each $s\in S$. Given a permutation $\pi$ on $S$ we get a linear map $\mathbb{F}[\pi]:\mathbb{F}[S]\rightarrow\mathbb{F}[S]$ defined by setting $\mathbb{F}[\pi](e_s)=e_{\pi(s)}$ and extending by linearity. We thus get a homomorphism from the group of permutations of $S$ to $\mathrm{GL}(\mathbb{F}[S])$. And then if we have a group action on $S$ we can promote it to a representation on the vector space $\mathbb{F}[S]$. We call such a representation a "permutation representation".
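To see the permutation representation in coordinates, here is a short numpy sketch (an added illustration, not from the post): it builds the matrix of $\mathbb{F}[\pi]$ in the basis $\{e_s\}$, checks the homomorphism property $\mathbb{F}[\pi\circ\sigma]=\mathbb{F}[\pi]\,\mathbb{F}[\sigma]$ on a pair of permutations of a three-element set, and confirms each matrix is invertible, so the representation really lands in $\mathrm{GL}(\mathbb{F}[S])$.

```python
import numpy as np

def perm_matrix(pi):
    """Matrix of F[pi]: column s is e_{pi(s)}, so the matrix sends e_s to e_{pi(s)}."""
    n = len(pi)
    P = np.zeros((n, n), dtype=int)
    for s in range(n):
        P[pi[s], s] = 1
    return P

def compose(pi, sigma):
    """(pi o sigma)(s) = pi(sigma(s)), with permutations stored as lists of images."""
    return [pi[sigma[s]] for s in range(len(pi))]

pi = [1, 2, 0]      # a 3-cycle on {0, 1, 2}
sigma = [1, 0, 2]   # a transposition

print(np.array_equal(perm_matrix(compose(pi, sigma)),
                     perm_matrix(pi) @ perm_matrix(sigma)))          # True: a homomorphism
print(round(abs(np.linalg.det(perm_matrix(pi).astype(float)))))      # 1: invertible, hence in GL
```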
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 23, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9344338178634644, "perplexity_flag": "head"}
http://mathhelpforum.com/algebra/93166-solved-basic-math-taken-next-level.html
# Thread: [SOLVED] Basic math taken to a next level...

1. Hello everyone! During my revision for my June exams (first year engineering) I came across this problem:

   $$\frac{2\sqrt{2} + 2\sqrt{6}}{\sqrt{2+\sqrt{3}}}$$

   Simple as the math seems, it kept me busy for 5 hours and still no results. I know the answer is 4, but the catch is we are not allowed to use a calculator. So I'm not interested in an answer, I'm interested in a method. I've already found multiple ways of simplifying it so that there is no division by a square root, but the method is still eluding me...

2. Is that $\frac{2\sqrt 2 +2\sqrt 6}{\sqrt2 +\sqrt3}$ ? Cause that doesn't equal $\frac{4}{3}$

3. Hi,

   Originally Posted by FifthRider: During my revision for my June exams (first year engineering) I came across this problem: $\frac{2\sqrt{2} + 2\sqrt{6}}{\sqrt{2 + \sqrt{3}}}$. Simple as the math seems, it kept me busy for 5 hours and still no results. I know the answer is $4/3$, but the catch is we are not allowed to use a calculator. So I'm not interested in an answer, I'm interested in a method.

   Let $A=\frac{2\sqrt{2}+2\sqrt{6}}{\sqrt{2+\sqrt{3}}}$. We have $A=2\sqrt{2}\cdot \frac{1+\sqrt{3}}{\sqrt{2+\sqrt{3}}}$ hence $A^2=8\cdot \frac{1+2\sqrt{3}+3}{2+\sqrt{3}}=8\times 2 =16$, which gives us $A=+\sqrt{A^2}=4$.

4. Thanks, I feel like kicking myself in the butt. Did I mention that the 5 hours I spent on it were this morning from 12-5? Might explain the elusive answer... Thanks again!
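A side note (added; not part of the original thread): the squaring step can be avoided by noticing that $2+\sqrt{3}=\frac{4+2\sqrt{3}}{2}=\frac{(1+\sqrt{3})^2}{2}$, so $\sqrt{2+\sqrt{3}}=\frac{1+\sqrt{3}}{\sqrt{2}}$. Then

$$\frac{2\sqrt{2}+2\sqrt{6}}{\sqrt{2+\sqrt{3}}}=\frac{2\sqrt{2}\,(1+\sqrt{3})}{(1+\sqrt{3})/\sqrt{2}}=2\sqrt{2}\cdot\sqrt{2}=4,$$

with no sign ambiguity to worry about.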
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9741208553314209, "perplexity_flag": "middle"}
http://mathematica.stackexchange.com/questions/20594/can-mathematica-resolve-this-equation/20599
# Can Mathematica resolve this equation?

Obviously, the equation $x^2-3 y^2=2 z^2$ doesn't have positive integer solutions. I tried

````
Exists[{x, y, z}, x > 0 && y > 0 && z > 0 && x^2 - 3 y^2 == 2 z^2]
Resolve[%, Integers]
````

and

````
Exists[{x, y, z}, Element[{x, y, z}, Integers], x > 0 && y > 0 && z > 0 && x^2 - 3 y^2 == 2 z^2]
Resolve[%]
````

but I don't get any useful result. Am I using these functions incorrectly?

## 1 Answer

Why obviously? In principle, if you want to solve Diophantine equations, `FindInstance` with the domain `Integers` is what you need, but (as its warning message says) it can be insufficient for proving that solutions don't exist, which seems to be what you want.

````
FindInstance[x^2 - 3 y^2 == 2 z^2 && z > 0, {x, y, z}, Integers]
````

The lack of any solution might not be "obvious," but Fermat's method of descent works well here. If we start with a primitive solution $(x,y,z)$ and reduce it mod $3$, we see easily that $x$ and $z$ are multiples of $3$: `Solve[x^2 - 3 y^2 == 2 z^2, {x, y, z}, Modulus -> 3]`. Dividing out by $3$ and reducing the new equation mod $3$ shows that $y$ also is a multiple of $3$: `Solve[3 xp^2 - y^2 == 6 zp^2, {xp, y, zp}, Modulus -> 3]`. This contradicts the assumption that we had a primitive solution, whence there can be no solutions. – whuber Mar 4 at 17:20

Thanks! I think your comment makes for a perfectly credible answer then. – gpap Mar 4 at 17:54

I don't believe my comment answers the question stated, which is whether `Exists` and `Resolve` have been "incorrectly" used. The question does explicitly presume the lack of all solutions, and your answer does challenge the basis of that presumption--which I think is a valid thing to do--so I commented only in order to address that tangential issue. – whuber Mar 4 at 17:58

@whuber The problem is how to state a "primitive" solution in such a way Mma can use it – belisarius Mar 4 at 22:20

– whuber Mar 4 at 22:43
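For completeness, here is the descent argument sketched in the comments written out in full (an added note, not part of the original exchange). Suppose $(x,y,z)$ were a positive integer solution of $x^2-3y^2=2z^2$ with $x$ minimal. Reducing mod $3$ gives $x^2\equiv 2z^2 \pmod 3$; since squares are $0$ or $1$ mod $3$, this forces $x\equiv z\equiv 0\pmod 3$. Writing $x=3x'$, $z=3z'$ and dividing by $3$ gives $3x'^2-y^2=6z'^2$, so $3\mid y$ as well, say $y=3y'$. Dividing by $3$ once more yields $x'^2-3y'^2=2z'^2$, a strictly smaller positive solution, contradicting minimality. Hence no positive integer solutions exist.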
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.914339542388916, "perplexity_flag": "middle"}
http://terrytao.wordpress.com/tag/fourier-series/
What’s new

Updates on my research and expository papers, discussion of open problems, and other maths-related topics. By Terence Tao

# Tag Archive

You are currently browsing the tag archive for the ‘Fourier series’ tag.

## 245C, Notes 2: The Fourier transform

6 April, 2009 in 245C - Real analysis, math.CA, math.GR, math.OA, math.RT | Tags: characters, Fourier series, Fourier transform, Pontryagin duality | by Terence Tao | 78 comments

In these notes we lay out the basic theory of the Fourier transform, which is of course the most fundamental tool in harmonic analysis and also of major importance in related fields (functional analysis, complex analysis, PDE, number theory, additive combinatorics, representation theory, signal processing, etc.). The Fourier transform, in conjunction with the Fourier inversion formula, allows one to take essentially arbitrary (complex-valued) functions on a group ${G}$ (or more generally, a space ${X}$ that ${G}$ acts on, e.g. a homogeneous space ${G/H}$), and decompose them as a (discrete or continuous) superposition of much more symmetric functions on the domain, such as characters ${\chi: G \rightarrow S^1}$; the precise superposition is given by Fourier coefficients ${\hat f(\xi)}$, which take values in some dual object such as the Pontryagin dual ${\hat G}$ of ${G}$. Characters behave in a very simple manner with respect to translation (indeed, they are eigenfunctions of the translation action), and so the Fourier transform tends to simplify any mathematical problem which enjoys a translation invariance symmetry (or an approximation to such a symmetry), and is somehow “linear” (i.e. it interacts nicely with superpositions). In particular, Fourier analytic methods are particularly useful for studying operations such as convolution ${f, g \mapsto f*g}$ and set-theoretic addition ${A, B \mapsto A+B}$, or the closely related problem of counting solutions to additive problems such as ${x = a_1 + a_2 + a_3}$ or ${x = a_1 - a_2}$, where ${a_1, a_2, a_3}$ are constrained to lie in specific sets ${A_1, A_2, A_3}$. The Fourier transform is also a particularly powerful tool for solving constant-coefficient linear ODE and PDE (because of the translation invariance), and can also approximately solve some variable-coefficient (or slightly non-linear) equations if the coefficients vary smoothly enough and the nonlinear terms are sufficiently tame.

The Fourier transform ${\hat f(\xi)}$ also provides an important new way of looking at a function ${f(x)}$, as it highlights the distribution of ${f}$ in frequency space (the domain of the frequency variable ${\xi}$) rather than physical space (the domain of the physical variable ${x}$). A given property of ${f}$ in the physical domain may be transformed to a rather different-looking property of ${\hat f}$ in the frequency domain. For instance:

• Smoothness of ${f}$ in the physical domain corresponds to decay of ${\hat f}$ in the Fourier domain, and conversely. (More generally, fine scale properties of ${f}$ tend to manifest themselves as coarse scale properties of ${\hat f}$, and conversely.)
• Convolution in the physical domain corresponds to pointwise multiplication in the Fourier domain, and conversely.
• Constant coefficient differential operators such as ${d/dx}$ in the physical domain correspond to multiplication by polynomials such as ${2\pi i \xi}$ in the Fourier domain, and conversely.
• More generally, translation invariant operators in the physical domain correspond to multiplication by symbols in the Fourier domain, and conversely.
• Rescaling in the physical domain by an invertible linear transformation corresponds to an inverse (adjoint) rescaling in the Fourier domain.
• Restriction to a subspace (or subgroup) in the physical domain corresponds to projection to the dual quotient space (or quotient group) in the Fourier domain, and conversely.
• Frequency modulation in the physical domain corresponds to translation in the frequency domain, and conversely.

(We will make these statements more precise below.)

On the other hand, some operations in the physical domain remain essentially unchanged in the Fourier domain. Most importantly, the ${L^2}$ norm (or energy) of a function ${f}$ is the same as that of its Fourier transform, and more generally the inner product ${\langle f, g \rangle}$ of two functions ${f}$, ${g}$ is the same as that of their Fourier transforms. Indeed, the Fourier transform is a unitary operator on ${L^2}$ (a fact which is variously known as the Plancherel theorem or the Parseval identity). This makes it easier to pass back and forth between the physical domain and frequency domain, so that one can combine techniques that are easy to execute in the physical domain with other techniques that are easy to execute in the frequency domain. (In fact, one can combine the physical and frequency domains together into a product domain known as phase space, and there are entire fields of mathematics (e.g. microlocal analysis, geometric quantisation, time-frequency analysis) devoted to performing analysis on these sorts of spaces directly, but this is beyond the scope of this course.)

In these notes, we briefly discuss the general theory of the Fourier transform, but will mainly focus on the two classical domains for Fourier analysis: the torus ${{\Bbb T}^d := ({\bf R}/{\bf Z})^d}$, and the Euclidean space ${{\bf R}^d}$. For these domains one has the advantage of being able to perform very explicit algebraic calculations, involving concrete functions such as plane waves ${x \mapsto e^{2\pi i x \cdot \xi}}$ or Gaussians ${x \mapsto A^{d/2} e^{-\pi A |x|^2}}$.

Read the rest of this entry »
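To make the derivative-to-multiplication correspondence in the bullet list above concrete, here is the standard one-line computation (an added note; take, say, a Schwartz function $f$ on ${\bf R}$ with the convention $\hat f(\xi)=\int_{\bf R} f(x) e^{-2\pi i x \xi}\,dx$): integrating by parts, with the boundary terms vanishing by decay,

$$\widehat{f'}(\xi)=\int_{\bf R} f'(x)\,e^{-2\pi i x \xi}\,dx = 2\pi i \xi \int_{\bf R} f(x)\,e^{-2\pi i x \xi}\,dx = 2\pi i \xi\, \hat f(\xi),$$

so applying $d/dx$ on the physical side really is multiplication by $2\pi i \xi$ on the frequency side.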
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 36, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8905301094055176, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/80175/proofs-from-the-book-bertrands-postulate-part-3-frac23np-leq-n-right
# Proofs from the BOOK: Bertrand's postulate, Part 3: if $\frac{2}{3}n<p \leq n$ then $p$ does not divide $\binom{2n}{n}$

I am working through a rather hard proof from "Proofs from the BOOK", in the section about Bertrand's postulate, page 9. I have to show that no prime $p$ with $\frac{2}{3}n<p \leq n$ divides $\binom{2n}{n}$.

I know $$\binom{2n}{n}=\frac{(2n)!}{n!n!}$$ and from $\frac{2}{3}n<p \leq n$ I conclude that $3p>2n$. Then the only multiples of $p$ appearing in $(2n)!$ are $p$ and $2p$ (because $3p>2n$), while $n!$ contains only the multiple $p$. At this point the author goes on to the next part of the proof. Can someone explain to me how the argument about the $p$'s proves the statement that no such $p$ divides $\binom{2n}{n}$? I hope my question is clear, and sorry for my bad English. Thanks in advance :-)

– Eric♦ Nov 8 '11 at 13:14

– Eric♦ Nov 8 '11 at 13:15

Thanks for the links, but I have to stick to the proof in the book... – ulead86 Nov 8 '11 at 13:16

## 2 Answers

Lemma: $\lfloor 2x \rfloor - 2\lfloor x \rfloor = \begin{cases} 1 & \frac12<\{x\}, \\ 0 & 0\le \{x\} \le \frac12. \end{cases}$ Here $\{x\}$ denotes the fractional part of $x$.

Now if you already know (from the preceding part of the proof) that the exponent of $p$ is $$\left\lfloor \frac{2n}p \right\rfloor - 2\left\lfloor \frac np \right\rfloor$$ then you can use the above lemma for $x=\frac np$. Namely, it is zero whenever $1\le \frac np < \frac32$, which is equivalent to $\frac23n<p\le n$. Note that in that part of the proof you already assume that $p>\sqrt{2n}$, so there is at most one non-zero summand in the sum $$\sum_{k\ge 1}\left\lfloor \frac{2n}{p^k} \right\rfloor - 2\left\lfloor \frac n{p^k} \right\rfloor$$

– Martin Sleziak Nov 21 '11 at 18:24

Ok, got it. One more question: further down on the page we have $$\frac{4^n}{2n} \leq \binom{2n}{n} \leq \prod_{p \leq \sqrt{2n}} 2n \cdot \prod_{\sqrt{2n} < p \leq \frac{2}{3}n} p \cdot \prod_{n<p \leq 2n} p$$ The author leaves out $\frac{2}{3}n<p\leq n$ because there are no $p$'s there. But what is the reason he writes $$\prod_{p \leq \sqrt{2n}} 2n$$ All the other products are over $p$, which is understandable, so why not in the first one? Greetings, Daniel

Let us denote by $a(p)$ the exponent of $p$ in $\binom{2n}n$. The inequality $$p^{a(p)} \le 2n$$ follows from $a(p)\le \max \{r; p^r\le 2n\}$. (Note that this inequality is mentioned in the proof.) The proof of the last inequality: We have $$a(p)=\sum_{k\ge 1}\left\lfloor \frac{2n}{p^k} \right\rfloor - 2\left\lfloor \frac n{p^k} \right\rfloor$$ and the summand is zero for $k> \max\{r; p^r\le 2n\}$ and all summands are at most one. So we have $$a(p) \le \sum_{k; p^k\le 2n} 1 = \max \{r; p^r\le 2n\}.$$ Therefore we have $$\prod_{p\le\sqrt{2n}} p^{a(p)} \le \prod_{p\le\sqrt{2n}} 2n.$$ Greetings, Martin

I believe that it would be better to edit your original question and add this there than to post a complementary question as a new answer. (It does not matter so much, but making a habit of this would not be a good thing.) – Martin Sleziak Nov 8 '11 at 13:48

Hi Martin, thanks for your time and effort. I thought the same, so next time I'll open a new question. Sorry for that. – ulead86 Nov 8 '11 at 13:56

Just to get it right: Because $a(p) \leq \max\{r;p^r \leq 2n\}$ I can write $$\prod_{p \leq \sqrt{2n}} 2n$$ and not $$\prod_{p \leq \sqrt{2n}} p$$? – ulead86 Nov 8 '11 at 14:16

I would say that $\prod_{p\le\sqrt{2n}} p^{a(p)}$ can be estimated by $\prod_{p\le\sqrt{2n}} 2n$ - see my last edit.
With the exception of the missing exponent I agree with what you wrote. – Martin Sleziak Nov 8 '11 at 14:29 Ah, thank you for the explanation :) – ulead86 Nov 8 '11 at 14:33
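An added sanity check, not part of the original thread: take $n=5$. The only prime with $\frac{2}{3}\cdot 5<p\le 5$ is $p=5$, and indeed $\binom{10}{5}=252=2^2\cdot 3^2\cdot 7$ contains no factor of $5$; in the notation above, $\left\lfloor \frac{10}{5}\right\rfloor-2\left\lfloor\frac{5}{5}\right\rfloor=2-2=0$, and the higher terms vanish since $5^2>10$.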
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 34, "mathjax_display_tex": 11, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9400228261947632, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/38649/what-is-the-curvature-scalar-psi-4?answertab=active
# What is the curvature scalar $\Psi_{4}$?

What is the curvature scalar $\Psi_{4}$? Is it related to the scalar curvature $R$? What do its real and imaginary parts represent?

## 1 Answer

It's one of the Weyl curvature scalars (or coefficients); see the first page of http://arxiv.org/abs/1105.0781 - They're certain "doubly light-like" components of the Weyl tensor (see the formulae there), and because the Ricci scalar is specifically removed from the Weyl tensor, you may be sure that $\Psi_4$ isn't related to $R$. But both $R$ and the $\Psi_n$ are linear combinations of components of the Riemann tensor.

Typically, for the "natural" tetrad coordinate choices, $\psi_{4}$ represents terms related to radiation. But it is obviously something that is highly coordinate-dependent. – Jerry Schirmer Sep 29 '12 at 16:44

– user12345 Sep 30 '12 at 14:34
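For readers who want a formula (an added note; sign and normalization conventions vary between references): in the Newman-Penrose formalism one picks a null tetrad $(\ell^\mu, n^\mu, m^\mu, \bar m^\mu)$ and sets

$$\Psi_4 = C_{\alpha\beta\gamma\delta}\, n^\alpha \bar m^\beta n^\gamma \bar m^\delta$$

(many authors include an overall minus sign). Far from an isolated source, with a suitably chosen tetrad, $\Psi_4$ encodes the outgoing transverse radiation, and its real and imaginary parts are conventionally identified, up to signs and factors fixed by the chosen conventions, with the second time derivatives of the two gravitational-wave polarizations $h_+$ and $h_\times$.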
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9183560013771057, "perplexity_flag": "middle"}
http://physics.aps.org/articles/v2/89
# Viewpoint: Slipping through blood flow

Howard A. Stone, Alison M. Forsyth, and Jiandi Wan, Department of Mechanical and Aerospace Engineering, Princeton University, Princeton, NJ 08544, USA

Published October 26, 2009  |  Physics 2, 89 (2009)  |  DOI: 10.1103/Physics.2.89

Simulations provide insight into how viscous flow transforms the shapes of red blood cells, which may influence their physiological properties.

#### Why Do Red Blood Cells Have Asymmetric Shapes Even in a Symmetric Flow?

Badr Kaoui, George Biros, and Chaouqi Misbah. Published October 26, 2009 | PDF (free)

Coronary artery disease—the leading cause of death in the United States—results from the formation of plaque in our arteries, which blocks the transport of blood. The movement and deformation of red blood cells can also affect the flow of blood, and vice versa, but the mechanics of this relationship is still being explored. Writing in Physical Review Letters, Badr Kaoui and Chaouqi Misbah at Université Joseph Fourier in Grenoble, France, and George Biros at the Georgia Institute of Technology in the US explore, with simulations, how flow deforms red blood cells [1]. Their numerical simulations show that an experimentally observed transition in the shape of red blood cells, from symmetric to asymmetric, occurs even when the cells (which the authors model as vesicles) move in a fluid with a symmetric flow velocity distribution. They suggest that this shape transition, which arises because the symmetric shape is unstable, may be able to influence the flow efficiency for red blood cells. As recent research links the chemical responses of red blood cells to their mechanics, such models of individual shape transitions of cells could offer further understanding of physiological flows.

In the most elementary model for flow in the circulatory system, the heart acts as a pump, which drives the fluid containing red blood cells (blood) through circular tubes. In this case we expect the profile of the velocity to have a parabolic shape (Hagen-Poiseuille flow), as in the lower left part of Fig. 1. For a constant pressure drop, the volumetric flow rate of the fluid is proportional to the radius of the tube to the fourth power, which suggests that the diameter of blood vessels will play an important role in controlling blood flow. The natural expectation for a single red blood cell in a microvessel flow, which is assumed to have a parabolic velocity profile, is that it will be confined and should form a symmetrical shape in the center of the symmetrical flow. This result, however, is not necessarily the case, as pointed out by Kaoui et al.

A common model for red blood cells is a vesicle, which is a drop of liquid that is completely enclosed by a bilayer made from the same kind of phospholipid molecules found in cell membranes. Red blood cells are different from vesicles in that they have a cytoskeleton protein network underneath the lipid bilayer membrane, which gives the system a shear elasticity and supports a biconcave shape (about $8μm$ in diameter) under static conditions (top of Fig. 1). Under flow conditions, however, the fact that the cell can deform contributes to the viscous energy dissipation of the flow. For example, a red blood cell can tumble, as a rigid body, and tank-tread, where the cell maintains a constant orientation in a flow while the membrane rotates around the cell’s cytoplasm. There is also a symmetric “parachute” cell morphology where the cell deforms as a result of viscous forces, but keeps a symmetric shape and therefore cannot tank-tread.
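For reference (an added note, not part of the original Viewpoint), the Hagen-Poiseuille relation behind the fourth-power statement above is

$$Q=\frac{\pi R^4\,\Delta p}{8\mu L},$$

where $Q$ is the volumetric flow rate, $R$ the tube radius, $\Delta p$ the pressure drop over a length $L$ of tube, and $\mu$ the dynamic viscosity of the fluid; at fixed pressure drop, halving a vessel's radius therefore cuts the flow rate by a factor of $16$.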
In vivo studies have already demonstrated that red blood cells do in fact form asymmetric shapes in vessels that are less than $20μm$, which is only a little larger than the cell itself [2]. In the presence of large viscous forces, the cell deforms asymmetrically into a “slipper” shape [3]. These asymmetric cells tank-tread because asymmetric viscous forces—produced by the cell’s asymmetric shape or the nonsymmetric position of the cell relative to the long axis of the vessel—act on the membrane. This “slipper” shape, which is a consequence of the confinement of the cell in a close-fitting channel, is believed to substantially reduce viscous dissipation [4].

With their simulations, Kaoui et al. have tried to address the question of how the cell shape changes as a consequence of the detailed flow structure, largely in the absence of direct interactions with a wall. For simplicity, they model the red blood cells as vesicles and assume they move in a plane in a symmetric parabolic flow. They find that the shape transition results from a loss in stability of the shape, which occurs when a dimensionless vesicle deflation number $v$, defined as the ratio of the actual area to the area of a circle with a circumference equal to the perimeter of the cell, is below a certain value. ($v$ is always less than $1$, unless the cell is a circle.) Below the critical value of $0.7$, the symmetric parachutelike shape develops an instability and the cell transforms into an asymmetric slipperlike shape (Fig. 1, right). Most importantly, this transition is not dependent on either the confinement of the surrounding blood vessel walls or membrane shear elasticity, which provides a new perspective for understanding the dynamics of red blood cells under flow conditions. Kaoui et al. also point out that this shape transition causes a decrease in the velocity difference between the cell and the flow, which could potentially enhance the efficiency of blood flow.

Kaoui et al.’s model is limited to two-dimensional vesicles and it is unclear if their results translate to more realistic three-dimensional models of red blood cells. Moreover, confinement and shear elasticity were shown to be unnecessary in this particular case, e.g., there was no direct wall effect, but such confinement influences may still play roles in shape transitions under different circumstances. Nevertheless, experiments on red blood cells have shown that viscous shear stresses in the flow control the transition from a symmetric parachutelike shape to an asymmetric slipper shape and that confinement is not necessary for the slipper shape [5], both conclusions that are consistent with the results of Kaoui et al.

The study of Kaoui et al. is an important contribution to the field of cell dynamics, but the role this shape transition plays in other physiological factors has yet to be examined theoretically or experimentally. For example, the effect of the density of red blood cells on this shape transition has not been explored and could result in cell clustering [6]. Adenosine triphosphate (ATP), which is released by red blood cells and can cause blood vessel dilation in vivo, has been correlated with cell deformation [7, 8]. The role that the shape transitions discussed above play in the release of ATP or other chemicals thus will have interesting ramifications for both circulatory physiology and pathophysiology.
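In symbols (an added note, spelling out the definition quoted above for the two-dimensional model), the deflation number compares the enclosed area $A$ of the vesicle contour to that of a circle with the same perimeter $P$:

$$v=\frac{A}{\pi\,(P/2\pi)^{2}}=\frac{4\pi A}{P^{2}},$$

so $v=1$ only for a circle, and more "deflated" shapes with excess perimeter have smaller $v$; the instability reported by Kaoui et al. sets in below $v\approx 0.7$.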
In addition, as suggested by Kaoui et al., it has not been elucidated what role tank-treading, or the lack thereof, plays in oxygen transport, which is the main function of red blood cells.

### References

1. B. Kaoui, G. Biros, and C. Misbah, Phys. Rev. Lett. 103, 188101 (2009).
2. R. Skalak and P. I. Branemark, Science 164, 717 (1969).
3. P. Gaehtgens, C. Dührssen, and K. H. Albrecht, Blood Cells 6, 799 (1980).
4. P. Gaehtgens and H. Schmid-Schönbein, Naturwissenschaften 69, 294 (1982).
5. M. Abkarian, M. Faivre, R. Horton, K. Smistrup, C. A. Best-Popescu, and H. A. Stone, Biomedical Materials 3, 13 (2008).
6. J. L. McWhirter, H. Noguchi, and G. Gompper, Proc. Natl. Acad. Sci. U.S.A. 106, 6039 (2009).
7. A. K. Price, D. J. Fischer, R. S. Martin, and D. M. Spence, Anal. Chem. 76, 4849 (2004).
8. J. Wan, W. D. Ristenpart, and H. A. Stone, Proc. Natl. Acad. Sci. U.S.A. 105, 16432 (2008).

### About the Author: Howard A. Stone

Howard A. Stone is the Donald R. Dixon ’69 and Elizabeth W. Dixon Professor in Mechanical and Aerospace Engineering at Princeton University. He received his S.B. degree in chemical engineering from the University of California, Davis, and a Ph.D. in chemical engineering from Caltech. From 1989 to 2009 he was on the faculty in the School of Engineering and Applied Sciences at Harvard University. His research interests are in fluid dynamics, especially as they arise in research and applications at the interface of engineering, chemistry, and physics. He was the first recipient of the G. K. Batchelor Prize in Fluid Dynamics, which was awarded in August 2008. In 2009 he was elected to the National Academy of Engineering.

### About the Author: Alison M. Forsyth

Alison M. Forsyth is a Ph.D. candidate at Harvard University in the School of Engineering and Applied Sciences and is currently a visiting scholar at Princeton University in the Department of Mechanical and Aerospace Engineering. She completed her B.S. in bioengineering at Syracuse University in 2006. Her research involves red blood cell deformation and dynamics with implications for physiological responses in the cardiovascular system.

### About the Author: Jiandi Wan

Jiandi Wan is currently a Research Associate in the Department of Mechanical and Aerospace Engineering at Princeton University. His degrees are in chemistry from Wuhan University (B.S., 1998, M.S., 2001) and Boston University (Ph.D., 2006). Dr. Wan worked as a postdoctoral researcher in the School of Engineering and Applied Sciences at Harvard University from 2006 to 2009 and moved to Princeton University in 2009. Dr. Wan’s research includes microfluidic approaches for studying red blood cell dynamics and multiphase emulsions, photoinduced electron transfer dynamics, and surface chemistry. His recent work focuses on the material and biophysical applications of microfluidics.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 8, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.911927342414856, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/23782/boundary-conditions-of-navier-cauchy-equation
# Boundary conditions of the Navier-Cauchy equation

I'm having difficulties with Neumann boundary conditions in the Navier-Cauchy equations (a.k.a. the elastostatic equations). The trouble is that if I rotate a body, then the Neumann boundary condition should be satisfied with zero force. In math language: if the deformation is given by $$u_i ~=~ a_{ij}x_j - x_i,$$ where $a_{ij}$ is a rotation matrix, then $$\mu n_j ( u_{i,j} + u_{j,i}) + \lambda n_i u_{k,k} ~=~ 0$$ (the Neumann boundary condition) should hold everywhere and for any vector $n_i$ (basically it doesn't matter what the body looks like). But if I substitute for $u_i$ I get $$2 \mu n_j(a_{ij} - \delta_{ij}) + \lambda n_i ( a_{jj} -3 ),$$ which is not zero, because the first term rotates with $n$ and the other two just scale $n$. So I cannot get zero for every $n$. Can someone see what I am doing wrong? I would be most grateful for any help. Tom

## 1 Answer

As far as I understand the question: the expression $(u_{i,j}+u_{j,i})$ is basically a strain tensor, and it works only for small deformations, so your rotation cannot be too large. Let's select the $z$ axis along the axis of rotation. Then we can write the matrix explicitly and try to see what happens when we consider only linear terms in the rotation angle: $$a_{ij} = \left(\begin{array}{ccc} \cos\theta&-\sin\theta&0\\ \sin\theta& \cos\theta&0\\ 0& 0&1 \end{array}\right)\simeq \left(\begin{array}{ccc} 1&-\theta&0\\ \theta& 1&0\\ 0& 0&1 \end{array}\right)+O(\theta^2)$$ So we've got that $a_{jj}-3=0$, and that the diagonal terms in $(a_{ij}-\delta_{ij})$ are equal to zero. Therefore the part that scales $n_i$ vanishes.

doesn't the small deformation condition mean that it works as long as Hooke's law holds? – Tom Apr 15 '12 at 10:41

@Tom I always thought that Hooke's law holds as long as deformations are small enough... – Kostya Apr 15 '12 at 10:46
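An added remark that may make the resolution more explicit: keeping the symmetrization spelled out, the strain for $u_i=a_{ij}x_j-x_i$ is

$$\varepsilon_{ij}=\tfrac12\,(u_{i,j}+u_{j,i})=\tfrac12\,(a_{ij}+a_{ji})-\delta_{ij},$$

and for the rotation matrix above $a+a^{T}-2I=\mathrm{diag}\,(2\cos\theta-2,\;2\cos\theta-2,\;0)$, which is $O(\theta^2)$; the antisymmetric part of $a_{ij}-\delta_{ij}$ (the part that "rotates with $n$") simply drops out of the symmetrized combination. So within linear elasticity, where only first-order terms in the displacement gradient are kept, a rigid rotation produces no traction, while for a finite rotation angle the linear theory indeed fails to give exactly zero, consistent with the answer's point that the strain tensor $\tfrac12(u_{i,j}+u_{j,i})$ is only valid for small deformations.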
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9285553097724915, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/55216/what-formulas-should-i-use-to-realistically-model-the-diffusion-of-a-drop-of-ink
# What formulas should I use to realistically model the diffusion of a drop of ink in water?

I am a mathematician and am originally from the math side of Stack Exchange. I want to model the behaviour of a drop of ink diffusing in water. I don't want to simply use the diffusion equation $u_t(\mathbf{x},t)=D \nabla^2 u(\mathbf{x},t)$ because, firstly, it will produce a diffusion of the ink completely symmetrical in the $x$, $y$ and $z$ directions; secondly, it does not take into account gravity producing a force (say in the $z$-direction); and lastly, it does not take into account the velocity of the moving ink particles and the different pressure at each point.

Now I would like (if possible) to create a program that will give a result that looks similar to the sort of chaotic diffusion we see in real life, possibly by creating a non-symmetrical initial disturbance in the form of an initial velocity in the ink. What formulas should I be looking at in this case? Can I ignore some of the things I've mentioned above and still get a realistic result? Is it possibly true that the problems I've mentioned can be fixed by not taking $D$ to be constant but rather a function of the velocity, density and pressure, and then using the formulas from fluid dynamics to find these at each position and time?

As I said, I am a mathematician and I apologize in advance for the possibility that there are silly errors in my question or that my limited understanding of physics makes this a nonsensical question altogether. Any help would be greatly appreciated though!

## 1 Answer

If the drop is very much static (in still water) and of similar fluid properties to the water around it (so that the ink just labels some initial region), then this is the correct equation to use. If, however, you want to treat the ink as having distinct properties from the water, then you want the Navier-Stokes equations. Since you are interested in gravity, I assume you have a different density in mind for the ink, and probably a different viscosity as well.

Certainly turbulent fluids mix much faster than diffusion predicts. Generally, the mechanism by which this enhanced diffusion takes place is this: First, turbulent fluid flow, via the nonlinear coupling term $(\mathbf{v}\cdot\nabla)\mathbf{v}$, creates smaller and smaller scale structures, i.e., fine layers of ink and water. Second, once these scales are small enough, diffusion is effectively fast, having only very small length scales to mix together. This depends on your system being unstable to perturbations, which depends a great deal on the geometry of your ink drop and the ink properties. As a starting point, you could treat the two fluids as immiscible, asking how pure parcels of fluid disperse due to shear or the Rayleigh-Taylor instability. Many Navier-Stokes solvers have been written; this page provides a non-exhaustive list.

A more complex picture allows for true mixing, i.e., by diffusion. This can also be done, although it involves keeping track of the "ink density" and having a means of computing properties like density and viscosity of dilute ink. A simplistic approach might be to, at each time step, advance the fluid code, then apply a separate diffusion step, and then repeat, while keeping track of the ink density and the viscosity at each grid point.

Thanks a lot for your answer! So if I understand correctly I would at each step first apply the N-S equations to determine the velocity/density at each point.
Then I use this distribution to determine the diffusion of the ink (diffusion eq) and so I can then find the ink density everywhere. Then I proceed to the next time step? – Teun Verstraaten Feb 26 at 22:08

That would be a simplistic approach. You'd have to do some work to characterize how inaccurate that approach might be. As a first attempt, this might be a good start. If you're trying to get something published, then you should be doing considerably more background reading and talking to experts in hydrodynamic simulation. – KDN Feb 26 at 23:13

Alright, thanks a bunch! I am not at all looking to get anything published, by the way. I did a project where we modeled the distribution of heat throughout a cake as it is being heated from the top using the heat/diffusion equation, but the result was a little bit boring. I just want some more practice with modelling and to go for something a little bit more complex. Thanks! – Teun Verstraaten Feb 27 at 11:35
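To make the split-step idea from the answer concrete, here is a minimal Python sketch (added; not from the original thread) that alternates an advection step with a diffusion step for an ink concentration field on a periodic 2D grid. It assumes a prescribed, divergence-free "stirring" velocity field in place of an actual Navier-Stokes solve, ignores gravity/buoyancy and the back-reaction of the ink on the flow, and uses purely illustrative parameter values.

```python
import numpy as np

# --- grid and parameters (illustrative values, not tuned to real ink/water) ---
N, L = 128, 1.0
dx = L / N
D = 1e-4                                     # ink diffusion coefficient
U0 = 0.1                                     # velocity scale of the stirring flow
dt = 0.5 * min(dx**2 / (4 * D), dx / U0)     # stay under both explicit stability limits

x = (np.arange(N) + 0.5) * dx
X, Y = np.meshgrid(x, x, indexing="ij")

# Prescribed divergence-free velocity field (stands in for a real N-S solution)
u = U0 * np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y)
v = -U0 * np.cos(2 * np.pi * X) * np.sin(2 * np.pi * Y)

# Initial ink blob (Gaussian)
c = np.exp(-((X - 0.5) ** 2 + (Y - 0.3) ** 2) / (2 * 0.05 ** 2))

def advect(c):
    """First-order upwind advection step on the periodic grid."""
    dcdx = np.where(u > 0, c - np.roll(c, 1, axis=0), np.roll(c, -1, axis=0) - c) / dx
    dcdy = np.where(v > 0, c - np.roll(c, 1, axis=1), np.roll(c, -1, axis=1) - c) / dx
    return c - dt * (u * dcdx + v * dcdy)

def diffuse(c):
    """Explicit diffusion step using the 5-point Laplacian, periodic boundaries."""
    lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
           np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4 * c) / dx**2
    return c + dt * D * lap

for step in range(2000):
    c = diffuse(advect(c))       # operator splitting: advect, then diffuse

print("total ink after run:", c.sum() * dx * dx)   # should be roughly conserved
```

The splitting here is first-order in time; a real simulation would also update the velocity field (from the Navier-Stokes equations, with buoyancy if the ink is denser) between steps, exactly as described in the answer and the comments above.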
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9399223923683167, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/193605/what-is-the-probability-of-rolling-a-1-on-the-catenative-doomsday-dice-cascader
# What is the probability of rolling a 1 on the Catenative Doomsday Dice Cascader?

As seen here. Assume that each cascader bubble begins with a d6, as opposed to being determined by the prime bubble along with the number of cascader bubbles; the scene makes it ambiguous.

$\frac{1}{36} = 0.02777...$ provides a simple lower bound for the probability (a 1 on the prime bubble and on the sole cascader bubble). If the prime bubble rolls a 2, the odds are $\frac{1}{6}\left(\frac{1}{6}+\frac{1}{12}+\frac{1}{18}+\frac{1}{24}+\frac{1}{30}+\frac{1}{36}\right) = \frac{49}{720} = 0.0680555...$; multiply that by the $\frac{1}{6}$ chance of rolling that 2 and add it to the original $\frac{1}{36}$, and you get $0.02\bar{7}+0.01134\overline{259} = 0.03912\overline{037}$. After that, it gets way beyond me.

Formally: let $X_0,X_1, X_2, X_3, ...$ be a sequence of random variables. $X_{0}$ is equal to $n$ with probability one (where $n$ is the number of sides on the initial cascading dice). Each successive variable $X_{k+1}$ is uniformly distributed on $\{1,2,3,...,\Pi_{i=0}^{k} X_{i}\}$. Define $p_{k}=P[X_{k}=1]$; what is $(1/6)\sum_{i=1}^{6}p_i$? – mjqxxxx Sep 10 '12 at 14:24

I found it ambiguous whether there were $6$ more dice all the time or the number matched the original roll, not the number of sides on the dice. The chance, if the prime bubble rolls higher than $2$, that you get a $1$ at the end will be very small, so in practice you can ignore it. – Ross Millikan Oct 6 '12 at 2:52

## 1 Answer

Suppose the prime bubble rolls an $n$. Then the cascader bubble rolls will be $\{r_1, r_2, ... r_{n-1},1\}$ with probability $$\frac{1}{6}\cdot\frac{1}{6r_1}\cdot\frac{1}{6r_1r_2}\cdots\frac{1}{6r_1r_2\cdots r_{n-1}}=\frac{1}{6^n r_1^{n-1} r_2^{n-2} \cdots r_{n-1}}$$ for any $r_1\in[6]=\{1,2,3,4,5,6\}$, $r_2\in[6r_1]$, $r_3\in[6r_1r_2]$, etc., up to $r_{n-1}\in[6r_1r_2\cdots r_{n-2}]$. The total probability is then $$p_{n}=\frac{1}{6^n}\sum_{r_1=1}^{6}\frac{1}{r_1^{n-1}}\sum_{r_2=1}^{6r_1}\frac{1}{r_2^{n-2}}\cdots\sum_{r_{n-1}=1}^{6r_1r_2\cdots r_{n-2}}\frac{1}{r_{n-1}}.$$ In particular, $$\begin{eqnarray} p_1&=&\frac{1}{6}=0.166666... \\ p_2&=&\frac{1}{36}\sum_{r_1=1}^{6}\frac{1}{r_1}=\frac{1}{36}H_{6}=\frac{49}{720}=0.06805555... \\ p_3&=&\frac{1}{216}\sum_{r_1=1}^{6}\frac{1}{r_1^2}\sum_{r_2=1}^{6r_1}\frac{1}{r_2} \\ &=&\frac{1}{216}\left(H_{6}+\frac{1}{4}H_{12}+\frac{1}{9}H_{18}+\frac{1}{16}H_{24}+\frac{1}{25}H_{30}+\frac{1}{36}H_{36}\right) \\ &=&\frac{97493779762855253}{5104009215002880000}=0.01910141... \end{eqnarray}$$ Using these first three terms gives the lower bound $(p_1+p_2+p_3)/6 = 0.04230...$.
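A quick numerical cross-check (an added sketch, not part of the original thread), following the formalization in the comments: the prime bubble is a d6, the first cascader die is assumed to be a d6, and each later cascader die has as many sides as the product of all results so far. The trial count and names are illustrative.

```python
import random

def doomsday_roll(rng):
    """One run of the cascader, per the comment's formalization:
    the prime bubble picks how many cascader rolls happen; the k-th
    cascader die has as many sides as 6 times the product of all
    earlier cascader results. Returns True if the final roll is 1."""
    m = rng.randint(1, 6)          # prime bubble: number of cascader rolls
    sides = 6                      # first cascader die is assumed to be a d6
    last = None
    for _ in range(m):
        last = rng.randint(1, sides)
        sides *= last              # next die's side count multiplies in this result
    return last == 1

rng = random.Random(0)
trials = 1_000_000
hits = sum(doomsday_roll(rng) for _ in range(trials))
print(hits / trials)   # should land a little above the 0.0423... lower bound,
                       # since p_4, p_5, p_6 contribute small positive amounts
```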
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9055466651916504, "perplexity_flag": "head"}
http://mathoverflow.net/questions/119012/mod-3-moore-spectrum
## Mod 3 Moore spectrum

I only know through stories that the mod 3 Moore spectrum is not associative, but I do not know of any proof. I have been informed that Toda proved it in the paper "Extended $p^{th}$ power", but I was not able to follow it. Can anybody give me a proof that the mod 3 Moore spectrum is not associative?

This is also Lemma 6.2 in Toda's On spectra realizing exterior parts of the Steenrod algebra (sciencedirect.com/science/article/pii/…), which you might find helpful. (I don't know if this proof is any different from the one in the Toda reference you mention, which I don't have available.) – Eric Peterson Jan 15 at 20:47

That paper of Toda's is a little concentrated. The nature of the answer to this question will probably depend pretty heavily on whether you know what a Massey product is. You can show that there must be a map from $M$ to $H\mathbb{Z}/3$ which preserves the unit and multiplication. The resulting map $H_* M \to H_* H\mathbb{Z}/3$ has as image a square-zero class in the dual Steenrod algebra ($\tau_0$) whose triple Massey product $\langle \tau_0, \tau_0, \tau_0 \rangle$ is not in the image. – Tyler Lawson Jan 16 at 4:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9426595568656921, "perplexity_flag": "head"}
http://antimeta.wordpress.com/2007/12/01/foundations-of-category-theory/
# Antimeta

A general distrust of strong metaphysical claims in mathematics and philosophy.

## Foundations of Category Theory

1 12 2007

Yesterday, Solomon Feferman from Stanford was up at Berkeley to give the logic colloquium, and he had a very interesting discussion of foundations for category theory. I figured I’d write up the outlines of the basic theories he discussed, because mathematicians might well be interested in how this sort of thing could work. I’d also be interested in hearing what people who actually use category theory think of all this. The issues that came up are the reason why set theory suffices as a foundation for all mathematics before category theory, but has troubles here. The basic idea is that he wants some formal framework in which one can work with categories, in a way that the following four criteria are satisfied to a sufficient degree:

## The Criteria

1. There should be a category of all groups (with homomorphisms), a category of all sets (with functions), a category of all topological spaces (with continuous maps), and a category of all categories (with functors).
2. If A and B are two categories, then there should be a category consisting of all functors from A to B, with morphisms given by natural transformations.
3. Standard set-theoretic operations used throughout mathematics should be possible – things like taking unions and intersections of a set of sets, constructing powersets and ordered pairs and n-tuples and the like.
4. The framework should be provably consistent from some standard set-theoretic background.

(This criterion is obviously going to seem unnecessary for someone like Lawvere, who wants to use category theory as an alternate foundation for mathematics. For someone like that, the really interesting criterion will probably be #3, because they’ll have to translate away all talk that mathematicians normally have of sets in other terms. Maybe this has been done, but I’ve had a lot of conceptual difficulty trying to read Lawvere’s textbook treatment of this stuff. At any rate, I don’t think most mathematicians will find this any more congenial than working in ZFC directly. As a result, this sort of thing wasn’t what Feferman was talking about, but he did a good job clarifying the interplay of category theory and foundational work in the more usual frameworks.)

Most of the talk was spent describing several systems that allow for some degree of satisfaction of all these criteria, but it looks like no one yet has come up with a framework that allows for full satisfaction of all of them.

## The Frameworks

### Mac Lane’s small and large categories

This is probably the most familiar way of talking about these issues for mathematicians. In the background, we use a standard theory to talk about sets and proper classes (Bernays-Gödel in particular). This theory gives us the full power of ZFC when talking about sets, but also allows us to talk about proper classes whose elements are sets. On this picture a category is a class of objects, together with a class of morphisms between them, satisfying the standard axioms. If both of these classes are sets, then the category is said to be small, while if either one is a proper class, then the category is said to be large. We can see how well it fits all the criteria:

1. There are categories of all groups, of all sets, and all topological spaces, but each of these categories is itself large.
   There is a category of all small categories, but this category obviously doesn’t have any of the previous categories (or itself) as an object. Thus, it makes sense to talk about the fundamental group operation as a functor from the category of topological spaces to the category of groups, but this functor can’t itself be seen as a morphism in the category of categories. (Note that these categories might be seen to leave out some members. Once we can talk about proper classes, there are some “large” groups, sets, and topological spaces, just as there are large categories. An example of a large group is Conway’s surreal numbers, which form a field, but are a proper class, so they are a large group, and large ring, as well as a large field. So we might also think there’s a problem with the category of groups since it doesn’t tell us about morphisms from arbitrary groups into the surreal numbers.)
2. If A is a small category, then functors from A to B can be represented by sets of ordered pairs (in the usual way that functions are) so that there is in fact a category containing all these functors as objects. If B is small, then this category is small, and if B is large, then this category is large as well. However, if A is a large category, then functors would themselves be proper classes, and therefore can’t be members of classes, and therefore there is no category containing these functors. This restriction may or may not be problematic. (I don’t know enough about what mathematicians actually do with categories, so I don’t know if it’s important to have this functor category between large categories.)
3. Fortunately, since the background theory is built from a standard set theory, all standard set-theoretic constructions work just fine.
4. Bernays-Gödel set theory was constructed specifically to be a conservative extension of ZF. Therefore, it is consistent iff ZF is. And just as ZFC is consistent iff ZF is, Bernays-Gödel is consistent with the Axiom of Choice iff it is consistent by itself.

### Grothendieck’s “axiom of universes”

This is another background I’ve heard of before. This framework basically extends Mac Lane’s hierarchy so that instead of just small and large categories, there are infinitely many levels of categories, each of which is “small enough” to be an object in a larger category. The official technical details are as follows. We assume standard ZF set theory, and then add the axiom that every set is contained in a universe. A universe is just a transitive set (meaning that it contains everything contained in any of its members), which also contains the powerset of any of its members, and also contains the range of any definable function applied to any set inside the universe. Basically, this just means that a universe is a standard model of ZF. In this framework, a category is just a set of objects together with a set of morphisms, following the standard axioms (none of the proper class stuff that we get with Mac Lane).

1. Now, since the collection of all groups forms a proper class, there is no category of all groups. However, for any universe U, since this universe is a set, the collection of all groups contained in U is itself a set, so we get a category of all U-groups. Similarly, for any universe U there is a category of all U-sets, all U-topological spaces, and (importantly) all U-categories. By the Axiom of Universes, we see that every set is contained in some universe.
   Since every category has some underlying set, this category is itself contained in some universe U, and thus is itself an object in the category of all U-categories. None of these categories contains itself, but no matter how big the categories are you’re talking about, there is some category of categories that contains everything that big. This lets you do lots of work that couldn’t be done in the Mac Lane framework, because on his framework, large categories aren’t objects in any category, while here we can always just go up one level. (However, Mac Lane does have the advantage that there is a single category containing all groups, while here no single category does.)
2. Again, if A and B are categories in some universe U (there must be some universe containing the set {A,B}, and this universe must contain both A and B) then every function from one to the other is itself in that universe U, and thus the category of functors from A to B is itself in U. This again is an improvement over the Mac Lane situation.
3. Since we’re using standard ZF set theory, this is straightforwardly satisfied.
4. There is a slight loss over Mac Lane’s framework here. The existence of a universe implies the consistency of ZF, because the universe is itself a model of ZF. Therefore, if the consistency of ZF implied the consistency of the Axiom of Universes, then the Axiom of Universes would prove its own consistency, which is impossible by Gödel’s theorem. Since this Axiom of Universes framework was implicitly used by Wiles in his proof of Fermat’s Last Theorem, we therefore don’t yet know that ZFC is strong enough to prove the result. However, the situation is not so bad. The existence of a universe containing a set S is just the same thing as the existence of an inaccessible cardinal with higher cardinality than S. Therefore, the Axiom of Universes is equivalent to the existence of unboundedly large inaccessible cardinals. This is provably consistent if we assume the existence of any single large cardinal that is even larger (such as a Mahlo cardinal, or a measurable cardinal), so set theorists are basically just as certain that this theory is consistent as they are of ZFC. You might have further worries about whether these cardinals actually exist (so that the system has no false premises, and therefore leads only to true conclusions), but those are the sorts of worries ordinary mathematicians like to ignore.

### Feferman’s 1969 system

It turns out that Grothendieck’s system can be brought down in strength. Instead of using the Axiom of Universes, we just add a bunch of terms $U_1$, $U_2$, and so on (through all the ordinals, if necessary), and add some axioms. For each proposition P expressible in the language of set theory, add the axiom

$P^{U_i} \leftrightarrow P$

where $P^{U_i}$ is the same as P, with all quantifiers explicitly restricted to range only over $U_i$. This has the effect of making each set $U_i$ “just like” the entire universe. Thus, we can just formulate Grothendieck’s system here. There are some slight differences: Grothendieck’s smallest universe satisfies the claim “there are no universes”, while other universes don’t – in this system, any two of these universes satisfy exactly the same sentences. But in general, those differences make this system better, because each universe represents the whole universe better than Grothendieck’s did. However, there’s a nice advantage – this system is apparently equiconsistent with ZFC, rather than requiring the existence of additional inaccessibles.
Thus, proving a theorem in this framework uses less strength than Grothendieck’s system does. I don’t know if the two systems are quite similar enough to translate the proof of FLT into this one, but that would bring down the strength needed quite a bit. So in some sense, mathematicians probably should use this system, rather than Grothendieck’s. There are still the limitations that Grothendieck has on the first two conditions, but the third and fourth are much better satisfied. ### Feferman’s 1974 system using Quine’s “New Foundations” This was the system Feferman spent the most time discussing. This one is also the most confusing, because it uses Quine’s system NF instead of a more standard set theory. The basic way that this system works is that instead of the complicated set of axioms of ZFC, we just have two very intuitive axioms. One is the axiom of extensionality, which says that there are no two distinct sets with exactly the same members. The other is a restricted axiom of comprehension. The basic axiom of comprehension just says that for every property, there is a set consisting of all things that satisfy that property. But as Russell pointed out to Frege in 1902, this full axiom is inconsistent, because it leads to Russell’s paradox of the set of all sets that don’t contain themselves. In ZFC this paradox is avoided by using several axioms to prove the existence of sets defined by various types of properties. Quine decided instead to avoid this (and related) paradoxes by restricting the types of properties that can be used to define sets. The only properties he allowed were ones that could be written correctly in a theory of types. In particular, each variable in the formula could be assigned a number, such that in $x=y$, x and y get the same number, while in $x\in y$ the number assigned to y is exactly 1 greater than the number assigned to x. This prevents Russell’s paradox, because the formula $\lnot(x\in x)$ can’t be assigned numbers in this way. The nice thing about NF is that it allows sets to contain themselves, and it allows there to be a set of all sets (this set is just defined as the set of all things satisfying $x=x$). Of course, there are some very messy things that go on with the theory of cardinalities, because the powerset of the universe has to be smaller than the universe itself. There are also messy things because the standard definitions of ordered pairs and powersets and sets of functions from one set to another need to use formulas that aren’t “stratified” in the appropriate way. Feferman discussed some ways of fixing this so it could all work. A further difficulty arises in that so far no one yet knows what sort of system would be strong enough to prove that NF is consistent. However, if we allow for many objects that aren’t sets (this requires weakening the axiom of extensionality so that it only applies to things with at least one element) it turns out that this system is actually provably weaker than ZFC. (In particular, it can be proven to be equiconsistent with Zermelo set theory.) We can then define categories to be given by a set of objects and a set of morphisms. So now we can see how this theory stands up to the criteria. 1. Since there is a set of all sets, and sets can contain themselves, it turns out that in this theory we really do get a category of all groups, a category of all sets, a category of all topological spaces, and a category of all categories that really contains itself as an object!
This is the only theory we’ve listed on which this really works, and we really get all categories into a single category. 2. Again, as long as A and B are categories, we really do get a category of all functors from A to B, with all the natural transformations as morphisms (I think). 3. Here is where we really run into trouble. We’ve patched things up so that we can take unions and intersections and products and powersets and the like. However, the Axiom of Choice is provably false in this theory. Additionally, sets of functions sometimes have bad problems. Feferman said in particular that the Yoneda Lemma can’t be proven in this system, as well as some other standard things that mathematicians want to use. I don’t really know what the problems are, but these seem bad. The problem is that no one yet really understands NF well enough to know how to fix these things. Perhaps for the sake of category theory, this would be worth doing. But, Dana Scott pointed out after the talk that NF has wasted a lot of people’s time, and it’s the only foundational-type system whose full consistency strength still hasn’t been decided, and it’s been around for 70 years. [Correction: I had mischaracterized what Scott said and updated the post to reflect this. He also points out that Jensen showed that NFU, allowing urelemente, is consistent, though I'm not sure what consistency strength it has.] 4. As mentioned above, allowing basic elements that aren’t sets makes this theory equiconsistent with Zermelo set theory, but this consistency proof is also part of the weakness of the theory, since it can’t do as much with functions as standard set theories. If we managed to fix that part, this would presumably blow up the consistency strength much higher than Grothendieck’s system. ## Conclusion Thus, there’s a bunch of interesting ways out there to get something like what mathematicians want from category theory. However, none of these systems really gets everything that we want. The ones that we know are consistent, and we know have enough set-theoretic power for everyday mathematics, we also know can’t really talk about the category of all categories or the category of all groups. There are various replacement categories around (like the category of all U-categories, or the category of all small groups), and for many purposes these are enough. But they don’t quite get the conceptual ideas right. (One could level the same criticism directly at the set theories though – none of them really gets a set of all sets, or a class of all classes, except for the poorly understood New Foundations.) This might motivate some further work by category theorists on foundations, and in particular something like Lawvere’s program, though his program requires a radical shift in understanding of the notions of set, function, collection, individual, and the like, so it doesn’t seem like it would be much more congenial to most mathematicians for fully formal work. Of course, most mathematicians don’t need to do this stuff fully formally, so the situation is generally comfortable. But it’s nice to see where foundational work in set theory really could impact the lives of ordinary mathematicians. ### Information • Date : December 1, 2007 • Categories : Set Theory ### 3 responses 1 12 2007 (19:21:38) : When pressed, most category theorists seem to mumble something about “large” and “small”, and perhaps decide that in fact they need “larger”.
I usually mumble something about arbitrarily many sufficiently large cardinals, knowing that this is problematic. So instead, let me ask when we (pretending that I’m a category theorist) actually care about this. For one, we do want the category of all categories. Almost. In fact, the collection of all categories has more structure than just being a category: it has objects (categories), morphisms (functors), and 2-morphisms (natural transformations). And, in general, we really do want to be able to talk about n-categories, requiring that our classes go up and up and up. And really we want omega-categories. The natural definition of “omega-category” is (i) a collection of objects (ii) between any two objects there is an omega category of morphisms. I.e. the natural definition of “omega category” is “(weak) category enriched over OmegaCat”. I say “weak category” because the point is that composition, for instance, is not associative in an omega-category, but only associative up to something. So really, in the current language, the definition of an “omega-category” is that it is an omega-category enriched over OmegaCat. Anyway, so the natural structure of OmegaCat is as an omega-category, not as an (omega+1)-category, at least as far as I can tell. So that’s one thing that these proposals don’t deal with: we really want infinite chains of type-increase. But is there ever a time when a mathematician would want to really think much about functors that themselves break levels? To MacLane, a group, for instance, is always a small category (as it has only one object) and always a small group: a “category” is defined as “a category enriched over Set”, so there must only be a (small) set of morphisms between any pair of objects. In any case, we do want the talk about the category of representations of a group; i.e. the functor category from (small) Grp to (large) Vect. And in general people ask about representations of a category, meaning functors into Vect; e.g. (T)QFTs. But I don’t know if anyone has really wanted to know about representations of Cat, say. We do talk about functors, say, between Cat to BiCat (the former being a 2-category, latter being the collection (weak) 2-categories, equipped with its natural structure as a 3-category). For instance, there is a functor Cat \to BiCat that introduces only identity 2-morphisms, and its adjoint functor from BiCat to Cat that decategorifies by modding out by all 2-morphisms. (I don’t remember how adjoint these are; in one direction the composition is the identity, and in the other? And, of course, Cat and BiCat are not really in the same n-category, so we have to be precise what kind of functor we mean…) Certainly we do want to do things like quantizing and q-deforming Cat; doing so may require, e.g., allowing all (N- or C-)linear combinations of categories. So y’all logic people better give us a language that lets us make the vector space with basis _all categories_. Or at least _all topological spaces_ or something. 1 12 2007 (23:40:35) : If you’ve got proper classes, then there’s a natural way to talk about a vector space with a proper class as a basis, just as the class of all formal linear combinations of elements of this class. (I suppose you need to be slightly careful that none of the elements of this class are themselves formal linear combinations of each other in whatever set-theoretic structure we’re using to talk about formal linear combinations.) So that gives you a way to have a vector space whose basis is all small categories. 
If you’re working with universes, then the same trick lets you make a vector space in one universe whose basis is all topological spaces in all smaller universes. Unfortunately, I don’t know enough about the other stuff to really understand why you want these things. As for the n-category stuff, is it really foundationally any more difficult than doing it for 1-categories? You just need a class of objects, a class of 1-morphisms, …, and a class of n-morphisms, and you just need to write down whatever associativity axioms they need to satisfy. (That’s a problem for category theorists to solve, but I don’t see how it would mess up the set/class stuff underlying it.) I guess at the omega-category level things might get tougher because of the apparent non-well-foundedness of what’s going on. Presumably Jacob Lurie has written about these foundational issues in his tome on the subject? 2 12 2007 (09:12:35) : Maybe this has been done, but I’ve had a lot of conceptual difficulty trying to read Lawvere’s textbook treatment of this stuff. A more compact exposition is in Chapter I of Lawvere’s Functorial Semantics of Algebraic Theories
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 4, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9481493830680847, "perplexity_flag": "head"}
http://mathoverflow.net/questions/96191/operator-theoretical-models-for-k-mathbbz-3
## Operator Theoretical Models for $K(\mathbb{Z}, 3)$ I am looking for a reference concerning operator theoretical models of $K(\mathbb{Z},3)$. Stolz-Teichner briefly say in "what is an elliptic object" that a certain hyperfinite Type III-factor, called "local fermions on the circle", should be the right thing. Thanks - John Baez was wondering about a geometric model for it in Week 149 - math.ucr.edu/home/baez/week149.html - but I don't think he ever found one. He discusses its role in classifying principal U(1) 2-bundles here - math.ucr.edu/home/baez/calgary/calgary.pdf. – David Corfield May 7 2012 at 9:49 See also mathoverflow.net/questions/44045 – Neil Strickland May 7 2012 at 18:16 ## 2 Answers Here is a $C^*$-algebraic version of the model described in Andre Henriques' answer (the latter was linked by David Corfield in the accepted answer above): Let $\mathcal{O}_2$ be the Cuntz algebra generated by two isometries $s_1$ and $s_2$ subject to the relations $s_i^*s_j = \delta_{i,j}$ and $s_1s_1^* + s_2s_2^* = 1$. This algebra has vanishing $K$-theory, as was calculated by Cuntz. By the universal coefficient theorem and Bott periodicity, $KK(\mathcal{O}_2, S^n\mathcal{O}_2)$ should vanish as well, where $S^n\mathcal{O}_2$ denotes the $n$-fold suspension of $\mathcal{O}_2$. The automorphism group of the stabilized algebra $\mathcal{O}_2 \otimes \mathbb{K}$ (where $\mathbb{K}$ denotes the compact operators on a separable Hilbert space) fits into a short-ish exact sequence $$1 \to U(1) \to U(M(\mathcal{O}_2 \otimes \mathbb{K})) \to Aut(\mathcal{O}_2 \otimes \mathbb{K}) \to Out(\mathcal{O}_2 \otimes \mathbb{K}) \to 1$$ where $M(\mathcal{O}_2 \otimes \mathbb{K})$ is the multiplier algebra. The homotopy groups of $Aut(A \otimes \mathbb{K})$ for so-called Kirchberg algebras have been calculated (yeah, I was surprised too :-). You can find them in a paper by Dadarlat called "The homotopy groups of the automorphism groups of Kirchberg algebras". The result is $$\pi_n(Aut(A \otimes \mathbb{K})) \cong KK(A,S^nA).$$ Now, $\mathcal{O}_2$ fits into that class and therefore has weakly contractible automorphism groups, but - by a theorem of Mingo - $U(M(\mathcal{O}_2 \otimes \mathbb{K}))$ is contractible as well. Analyzing the above sequence, we see that $Out(\mathcal{O}_2 \otimes \mathbb{K})$ has the weak homotopy type of a $K(\mathbb{Z},3)$... at least if $$1 \to PU(M(\mathcal{O}_2 \otimes \mathbb{K})) \to Aut(\mathcal{O}_2 \otimes \mathbb{K}) \to Out(\mathcal{O}_2 \otimes \mathbb{K}) \to 1$$ is a fibration. In fact, it could very well be that the topology on the quotient $Out(\mathcal{O}_2 \otimes \mathbb{K})$ is quite horrible. - Oh, nice side observation: $Out(\mathcal{O}_2 \otimes \mathbb{K})$ is what is called the Picard group of $\mathcal{O}_2$, that is the group of isomorphism classes of self-Morita equivalences. – Ulrich Pennig May 8 2012 at 16:31 - The unitary group of any purely infinite von Neumann algebra is contractible (this is a generalization of Kuiper's theorem due to Brüning and Willgerodt, “Eine Verallgemeinerung eines Satzes von N. Kuiper”).
Thus the projective unitary group of any purely infinite von Neumann algebra has the homotopy type of $K(\mathbb{Z},2)$ and its classifying space has the homotopy type of $K(\mathbb{Z},3)$. This result has nothing to do specifically with hyperfinite type III$_1$ factors; they appear for a different reason in the cited paper by Stolz and Teichner. - Thanks for the answer. What is the reference for the contractibility? Is there an operator theoretical model for the delooping too? – Nicolas Boerger May 8 2012 at 7:23 Andre Henriques' answer mathoverflow.net/questions/44045/… to the question Neil mentioned points to what may be specific about hyperfinite type III factors here. – David Corfield May 8 2012 at 9:13 @Nicolas: I added a reference for the contractibility result. – Dmitri Pavlov May 8 2012 at 16:19 @David: True, but unfortunately this relation is only conjectural. – Dmitri Pavlov May 8 2012 at 16:19 @Nicolas: The model for delooping is explained by Andrew Stacey in this answer: mathoverflow.net/questions/44045/… – Dmitri Pavlov May 8 2012 at 16:21
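For what it is worth, here is the homotopy-group bookkeeping behind both answers, written out under the assumption flagged in the first answer, namely that the quotient map really is a fibration (that caveat is not resolved here):

```latex
% Assume PU(M(O_2 (x) K)) -> Aut(O_2 (x) K) -> Out(O_2 (x) K) is a fibration.
% Mingo:    U(M(O_2 (x) K)) is contractible, so PU = U/U(1) is weakly a BU(1) = K(Z,2).
% Dadarlat: pi_n(Aut(O_2 (x) K)) = KK(O_2, S^n O_2) = 0, i.e. Aut is weakly contractible.
% The long exact sequence of the fibration then gives
\pi_n\bigl(\mathrm{Out}(\mathcal{O}_2\otimes\mathbb{K})\bigr)
  \;\cong\; \pi_{n-1}\bigl(PU(M(\mathcal{O}_2\otimes\mathbb{K}))\bigr)
  \;\cong\; \pi_{n-1}\bigl(K(\mathbb{Z},2)\bigr)
  \;=\; \begin{cases} \mathbb{Z}, & n = 3,\\ 0, & \text{otherwise.}\end{cases}
```

So $Out(\mathcal{O}_2 \otimes \mathbb{K})$ would be weakly a $K(\mathbb{Z},3)$; the same "contractible unitary group $\Rightarrow$ $PU \simeq K(\mathbb{Z},2)$, $BPU \simeq K(\mathbb{Z},3)$" reasoning is what underlies the second answer for purely infinite von Neumann algebras.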
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9228864312171936, "perplexity_flag": "middle"}
http://mathhelpforum.com/differential-equations/198274-tough-differential-equation-problem.html
# Thread: 1. ## Tough Differential Equation Problem I need to find a function F such that it is continuous everywhere and $y'(t)=F(y(t))$ and $y(0)=0$. The only thing I could think of is $y(t)=e^t$ but that obviously doesn't satisfy the initial value. Any help or hints are greatly appreciated. 2. ## Re: Tough Differential Equation Problem So you're supposed to find both F and y(t) for which the given conditions are true, or ...? How about $y(t)=\sin t$. Then $y(0)=\sin 0=0$. And also $y'(t)=(\sin t)'=\cos t =\sqrt{1-\sin^2t}=\sqrt{1-(y(t))^2}=F(y(t)).$ 3. ## Re: Tough Differential Equation Problem I forgot to mention that I need to find a function F such that the initial value problem has infinitely many solutions. I don't know if that changes anything.... 4. ## Re: Tough Differential Equation Problem So to clear things up: I don't need to find a function y, since this function F should take any function y and spit out its derivative, I think. 5. ## Re: Tough Differential Equation Problem The problem with this question is that the complexity of any derivative is arbitrary. For example, if we know that the function y is of the general form: $y(t)=at+b$ ... where a and b are constants, it is easy to create a function F that will reliably produce y's derivative, as in: $F(t)=\frac{at}{at+b}$ However, even a slight change in the general form of y will upset everything and F will no longer function; while, as stated earlier, the above function F produces the derivative of y, it will not produce the derivative of x (with constants a, b and c): $y(t)=at+b$ $x(t)=(at+b)^c$ $G(t)=ac\times t^{1-\frac{1}{c}}$ However, a bit of manipulation will create a function that will work for both x and y, (above) function G, since they are quite similar. This won't work in all instances; consider the functions p and q, defined below: $p(t)=\frac{(\ln(t)+abt)^{at}}{t^a-t^{b}\sin(ab)}$ $q(t)=\sqrt[t^{2}+ab]{ab\times \cot(t\sin(\frac{ab}{t}))}$ Creating a function that will produce the derivatives of both these functions is an enormous, if not impossible, task. So, my conclusion is, it is possible to create a function F that will produce the derivative of a function y, but the general form of y must first be known. Otherwise we'd have to consider an infinite number of combinations of roots, fractions, logarithms, trigonometric functions and lord knows what else - which from my humble high-school perspective - seems impossible. Sorry. =( I hope someone else has a more positive answer.
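For completeness (this is not from the thread, just the standard textbook example answering post 3): any continuous $F$ that fails to be Lipschitz at $0$ can produce the required non-uniqueness, for instance $F(y)=3|y|^{2/3}$.

```latex
% Take F(y) = 3|y|^{2/3}, continuous everywhere but not Lipschitz at y = 0.
% For every c >= 0 the function
y_c(t) \;=\; \begin{cases} 0, & t \le c,\\ (t-c)^3, & t > c, \end{cases}
% is C^1, satisfies y_c(0) = 0, and solves y' = F(y):
%   for t > c:  y_c'(t) = 3(t-c)^2 = 3\bigl((t-c)^3\bigr)^{2/3} = 3|y_c(t)|^{2/3}.
% Together with y = 0 this gives infinitely many solutions of the IVP;
% Picard-Lindelof does not apply precisely because the Lipschitz condition fails at 0.
```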
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 13, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9340835213661194, "perplexity_flag": "head"}
http://en.wikipedia.org/wiki/Wiles'_proof_of_Fermat's_Last_Theorem
# Wiles's proof of Fermat's Last Theorem Wiles' proof of Fermat's Last Theorem is a proof of the modularity theorem for semistable elliptic curves released by Andrew Wiles, which, together with Ribet's theorem, provides a proof for Fermat's Last Theorem. Wiles first announced his proof in June 1993[1] in a version that was soon recognized as having a serious gap. The widely accepted version of the proof was released by Andrew Wiles in September 1994, and published in 1995. The proof uses many techniques from algebraic geometry and number theory, and has many ramifications in these branches of mathematics. It also uses standard constructions of modern algebraic geometry, such as the category of schemes and Iwasawa theory, and other 20th-century techniques not available to Fermat. The proof itself is over 100 pages long and consumed seven years[1] of Wiles's research time. For solving Fermat's Last Theorem, he was knighted, and received other honors. ## Progress of the previous decades Fermat's Last Theorem states that no three positive integers a, b and c can satisfy the equation $a^n + b^n=c^n$ if n is an integer greater than two. In the 1950s and 1960s a connection between elliptic curves and modular forms was conjectured by the Japanese mathematician Goro Shimura based on ideas posed by Yutaka Taniyama. In the West it became well known through a 1967 paper by André Weil. With Weil giving conceptual evidence for it, it is sometimes called the Taniyama–Shimura–Weil conjecture. It states that every rational elliptic curve is modular. On a separate branch of development, in the late 1960s, Yves Hellegouarch came up with the idea of associating solutions (a,b,c) of Fermat's equation with a completely different mathematical object: an elliptic curve.[2] The curve consists of all points in the plane whose coordinates (x, y) satisfy the relation $y^2 = x(x-a^n)(x+b^n).$ Such an elliptic curve would enjoy very special properties, which are due to the appearance of high powers of integers in its equation and the fact that $a^n + b^n = c^n$ is an $n$th power as well. In 1982–1985, Gerhard Frey called attention to the unusual properties of the same curve as Hellegouarch, now called a Frey curve. This provided a bridge between Fermat and Taniyama by showing that a counterexample to Fermat's Last Theorem would create such a curve that would not be modular. Again, the conjecture says that each elliptic curve with rational coefficients can be constructed in an entirely different way, not by giving its equation but by using modular functions to parametrize coordinates x and y of the points on it. Thus, according to the conjecture, any elliptic curve over Q would have to be a modular elliptic curve, yet if a solution to Fermat's equation with non-zero a, b, c and n greater than 2 existed, the corresponding curve would not be modular, resulting in a contradiction. As such, a proof or disproof of either of Fermat's Last Theorem or the Taniyama–Shimura-Weil conjecture would simultaneously prove or disprove the other.[3] In 1985, Jean-Pierre Serre proposed that a Frey curve could not be modular and provided a partial proof of this. This showed that a proof of the semistable case of the Taniyama-Shimura conjecture would imply Fermat's Last Theorem. Serre did not provide a complete proof of his proposal; the missing part became known as the epsilon conjecture or ε-conjecture (now known as Ribet's theorem).
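To make the phrase "very special properties" a bit more concrete, here is the standard discriminant computation for the Frey curve (a well-known calculation, not taken from this article; the exact power of 2 in the minimal discriminant depends on the usual normalization $a \equiv -1 \pmod 4$, $2 \mid b$, with $n = p \geq 5$ prime):

```latex
% Frey curve attached to a putative solution a^p + b^p = c^p:
%   E : y^2 = x(x - a^p)(x + b^p)
% Its discriminant, and (under the normalization above) its minimal discriminant, are
\Delta(E) = 16\,\bigl(a^p b^p (a^p + b^p)\bigr)^2 = 16\,(abc)^{2p},
\qquad
\Delta_{\min}(E) = 2^{-8}(abc)^{2p},
% while the conductor is just the product of the primes dividing abc:
N(E) = \operatorname{rad}(abc).
```

A semistable curve whose minimal discriminant is essentially a perfect $2p$-th power is exactly the configuration that Ribet's theorem excludes: level-lowering would force a weight-2 newform of level 2, and there are none, since $S_2(\Gamma_0(2)) = 0$.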
Serre's main interest was in an even more ambitious conjecture, Serre's conjecture on modular Galois representations, which would imply the Taniyama–Shimura conjecture. Although in the preceding twenty or thirty years much evidence had been accumulated to form conjectures about elliptic curves, the main reason to believe that these various conjectures were true lay not in the numerical confirmations, but in a remarkably coherent and attractive mathematical picture that they presented. Equally it could happen that one or more of these conjectures were actually untrue. Following this strategy, a proof of Fermat's Last Theorem required two steps. First, it was necessary to show that Frey's intuition was correct: that the above elliptic curve (now known as a Frey curve), if it exists, is always non-modular. Frey did not quite succeed in proving this rigorously; the missing piece (the so-called "epsilon conjecture", now known as Ribet's theorem) was noticed by Jean-Pierre Serre[citation needed] and proven in 1986 by Ken Ribet. Second, it was necessary to prove the modularity theorem - or at least to prove it for the sub-class of cases (known as semistable elliptic curves) which included Frey's equation. • Ribet's theorem - if proven - would show that any solution to Fermat's equation could be used to generate a semistable elliptic curve that was not modular; • The modularity theorem - if proven for Frey's equation - would show that all such elliptic curves must be modular. • The contradiction implies that no solutions can exist to Fermat's equation, thus proving Fermat's Last Theorem. In the summer of 1986, Ken Ribet succeeded in proving the epsilon conjecture. (His article was published in 1990.) He demonstrated that, just as Frey had anticipated, a special case of the Taniyama–Shimura conjecture (still unproven at the time), together with the now proven epsilon conjecture, implies Fermat's Last Theorem. Thus, if the Taniyama–Shimura conjecture is true for semistable elliptic curves, then Fermat's Last Theorem would be true. However this theoretical approach was widely considered unattainable, since the Taniyama-Shimura conjecture was itself widely seen as completely inaccessible to proof with current knowledge.[4]:203-205, 223, 226 For example, Wiles' ex-supervisor John Coates states that it seemed "impossible to actually prove",[4]:226 and Ken Ribet considered himself "one of the vast majority of people who believed [it] was completely inaccessible".[4]:223 Hearing of the 1986 proof of the epsilon conjecture, Wiles decided to begin researching exclusively towards a proof of the Taniyama-Shimura conjecture. Ribet later commented that "Andrew Wiles was probably one of the few people on earth who had the audacity to dream that you can actually go and prove [it]." [4]:223 ## Wiles' proof ### Overview Wiles opted to attempt to "count" and match elliptic curves to counted modular forms. He found that this direct approach was not working, so he transformed the problem by instead matching the Galois representations of the elliptic curves to modular forms. Wiles denotes this matching (or mapping) that, more specifically, is a ring homomorphism: $R_n \rightarrow T_n.$ R is a deformation ring and T is a Hecke ring. Wiles had the insight that in many cases this ring homomorphism could be a ring isomorphism. (Conjecture 2.16 in Chapter 2, §3) He realized that the map between R and T is an isomorphism if and only if two abelian groups occurring in the theory are finite and have the same cardinality. 
This is sometimes referred to as the "numerical criterion". Given this result, one can see that Fermat's Last Theorem is reduced to a statement saying that two groups have the same order. Much of the text of the proof leads into topics and theorems related to ring theory and commutative algebra. The goal is to verify that the map R → T is an isomorphism and ultimately that R = T. This is the long and difficult step. In treating deformations, Wiles defines four cases, with the flat deformation case requiring more effort to prove; it is treated in a separate article in the same volume entitled "Ring-theoretic properties of certain Hecke algebras". Gerd Faltings, in his bulletin (p. 745), gives a commutative diagram for this step (not reproduced here); the upshot is that R = T, indicating a complete intersection. Since Wiles cannot show that R = T directly, he does so through $\mathbf{Z}_3$, $\mathbf{F}_3$ and T/m via lifts. In order to perform this matching, Wiles had to create a class number formula (CNF). He first attempted to use horizontal Iwasawa theory but that part of his work had an unresolved issue such that he could not create a CNF. At the end of the summer of 1991, he learned about a paper by Matthias Flach, using ideas of Victor Kolyvagin to create a CNF, and so Wiles set his Iwasawa work aside. Wiles extended Flach's work in order to create a CNF. By the spring of 1993, his work covered all but a few families of elliptic curves. In early 1993, Wiles reviewed his argument beforehand with a Princeton colleague, Nick Katz. His proof involved extending approaches which had recently been developed by Kolyvagin and Flach,[5] which he adopted after the Iwasawa method failed.[6] In May 1993 while reading a paper by Mazur, Wiles had the insight that the 3/5 switch would resolve the final issues and would then cover all elliptic curves (again, see Chapter 5 of the paper for this 3/5 switch). ### General approach and strategy Given an elliptic curve E over the field Q of rational numbers, for every prime power $l^n$, there exists a homomorphism from the absolute Galois group $\mathrm{Gal}(\bar{\mathbf{Q}}/\mathbf{Q})$ to $\mathrm{GL}_2(\mathbf{Z}/l^n \mathbf{Z})$, the group of invertible 2 by 2 matrices whose entries are integers ($\mod l^n$). This is because $E(\bar{\mathbf{Q}})$, the points of E over $\bar{\mathbf{Q}}$, form an abelian group, on which $\mathrm{Gal}(\bar{\mathbf{Q}}/\mathbf{Q})$ acts; the subgroup of elements x such that $l^n x = 0$ is just $(\mathbf{Z}/l^n \mathbf{Z})^2$, and an automorphism of this group is a matrix of the type described. Less obvious is that given a modular form of a certain special type, a Hecke eigenform with eigenvalues in Q, one also gets a homomorphism from the absolute Galois group $\mathrm{Gal}(\bar{\mathbf{Q}}/\mathbf{Q}) \rightarrow \mathrm{GL}_2(\mathbf{Z}/l^n \mathbf{Z})$. This goes back to Eichler and Shimura. The idea is that the Galois group acts first on the modular curve on which the modular form is defined, thence on the Jacobian variety of the curve, and finally on the points of $l^n$ power order on that Jacobian. The resulting representation is not usually 2-dimensional, but the Hecke operators cut out a 2-dimensional piece. It is easy to demonstrate that these representations come from some elliptic curve but the converse is the difficult part to prove. Instead of trying to go directly from the elliptic curve to the modular form, one can first pass to the ($\mod l^n$) representation for some l and n, and from that to the modular form.
In the case l=3 and n=1, results of the Langlands-Tunnell theorem show that the (mod 3) representation of any elliptic curve over Q comes from a modular form. The basic strategy is to use induction on n to show that this is true for l=3 and any n, that ultimately there is a single modular form that works for all n. To do this, one uses a counting argument, comparing the number of ways in which one can lift a ($\mod l^n$) Galois representation to ($\mod l^{n+1}$) and the number of ways in which one can lift a ($\mod l^n$) modular form. An essential point is to impose a sufficient set of conditions on the Galois representation; otherwise, there will be too many lifts and most will not be modular. These conditions should be satisfied for the representations coming from modular forms and those coming from elliptic curves. If the original (mod 3) representation has an image which is too small, one runs into trouble with the lifting argument, and in this case, there is a final trick, which has since taken on a life of its own with the subsequent work on the Serre Modularity Conjecture. The idea involves the interplay between the (mod 3) and (mod 5) representations. See Chapter 5 of the Wiles paper for this 3/5 switch. ### Structure of Wiles' proof In his 1995 108-page article, Wiles divides the subject matter up into the following chapters (preceded here by page numbers):

Introduction
• 443
Chapter 1
• 455 1. Deformations of Galois representations
• 472 2. Some computations of cohomology groups
• 475 3. Some results on subgroups of GL2(k)
Chapter 2
• 479 1. The Gorenstein property
• 489 2. Congruences between Hecke rings
• 503 3. The main conjectures
Chapter 3
• 517 Estimates for the Selmer group
Chapter 4
• 525 1. The ordinary CM case
• 533 2. Calculation of η
Chapter 5
• 541 Application to elliptic curves
Appendix
• 545 Gorenstein rings and local complete intersections

Gerd Faltings subsequently provided some simplifications to the 1995 proof, primarily in switching from geometric constructions to rather simpler algebraic ones.[7][8] The book of the Cornell conference also contained simplifications to the original proof.[9] ### Reading and notation guide Wiles' paper is over 100 pages long and often uses the specialized symbols and notations of group theory, algebraic geometry, commutative algebra, and Galois theory. One might want to first read the 1993 email of Ken Ribet,[10][11] Hesselink's quick review of top-level issues (which gives just the elementary algebra and avoids abstract algebra),[12] or Daney's web page, which provides a set of his own notes and lists the current books available on the subject. Weston attempts to provide a handy map of some of the relationships between the subjects.[13] F. Q. Gouvêa provides an award-winning review of some of the required topics.[14][15][16][17] Faltings' 5-page technical bulletin on the matter is a quick and technical review of the proof for the non-specialist.[18] For those in search of a commercially available book to guide them, he recommended that those familiar with abstract algebra read Hellegouarch, then read the Cornell book,[9] which is claimed to be accessible to "a graduate student in number theory". Note that not even the Cornell book can cover the entirety of the Wiles proof.[19] The work of almost every mathematician who helped to lay the groundwork for Wiles did so in specialized ways, often creating new specialized concepts and yet more new jargon.
In the equations, subscripts and superscripts are used extensively because of the number of concepts that Wiles is sometimes dealing with in an equation. • See the glossaries listed in Lists of mathematics topics#Pure mathematics, such as Glossary of arithmetic and Diophantine geometry. Daney provides a proof-specific glossary. • See Table of mathematical symbols and Table of logic symbols • For the deformation theory, Wiles defines restrictions (or cases) on the deformations as Selmer (sel), ordinary (ord), strict (str) or flat (fl) and he uses the abbreviations list here. He usually uses these as a subscript but he occasionally uses them as a superscript. There is also a fifth case: the implied "unrestricted" case, but note that the superscript "unr" is not an abbreviation for unrestricted. • $\mathbf{Q}^{\mathrm{unr}}$ is the unramified extension of $\mathbf{Q}$. A related but more specialized topic used is crystalline cohomology. See also Galois cohomology. • Some relevant named concepts: Hasse-Weil zeta function, Mordell–Weil theorem, Deligne-Serre theorem • Grab bag of jargon mentioned in paper: cover and lift, finite field, isomorphism, surjective function, decomposition group, j-invariant of elliptic curves, Abelian group, Grossencharacter, L-function, abelian variety, Jacobian, Néron model, Gorenstein ring, Torsion subgroup (including torsion points on elliptic curves here[20] and here[21]), Congruence subgroup, eigenform, Character (mathematics), Irreducibility (mathematics), Image (mathematics), dihedral, Conductor, Lattice (group), Cyclotomic field, Cyclotomic character, Splitting of prime ideals in Galois extensions (and decomposition group and inertia group), Quotient space, Quotient group ## Announcement and subsequent developments Wiles' proof was initially presented in 1993. Because of an error in one piece of his initial paper, it was not finally accepted as correct and published until 1995. His work was extended to a full proof of the modularity theorem over the following 6 years by others who built on it. ### Announcement and final proof (1993 - 1995) During June 21 - 23, 1993, Wiles announced and presented his proof of the Taniyama–Shimura conjecture for semi-stable elliptic curves, and hence of Fermat's Last Theorem, over the course of three lectures delivered at the Isaac Newton Institute for Mathematical Sciences in Cambridge, England.[1] There was a relatively large amount of press coverage afterwards.[19] After the announcement, Katz was appointed as one of the referees to peer review Wiles' manuscript. In the course of his review, he asked Wiles a series of clarifying questions that led Wiles to recognize that the proof contained a gap. There was an error in one critical portion of the proof which gave a bound for the order of a particular group: the Euler system used to extend Flach's method was incomplete. The error would not have rendered his work worthless - each part of Wiles' work was highly significant and innovative by itself, as were the many developments and techniques he had created in the course of his work, and only one part was affected.[4]:289, 296-297 However, without this part proven, there was no actual proof of Fermat's Last Theorem.
Wiles and his former student Richard Taylor spent almost a year resolving this issue.[22][23] Wiles indicates that on the morning of September 19, 1994 he realized that the specific reason why the Flach approach would not work directly suggested a new approach based on his previous attempts using Iwasawa theory, which resolved the issue and resulted in a CNF that was valid for all of the required cases. On October 6 Wiles sent the new proof to three colleagues including Faltings,[citation needed] and on 24 October 1994, Wiles submitted two manuscripts, "Modular elliptic curves and Fermat's Last Theorem"[24] and "Ring theoretic properties of certain Hecke algebras",[25] the second of which was co-authored with Taylor and proved that certain conditions were met which were needed to justify the corrected step in the main paper. The two papers were vetted and finally published as the entirety of the May 1995 issue of the Annals of Mathematics. The new proof was widely analyzed, and became accepted as likely correct in its major components.[26][27] These papers established the modularity theorem for semistable elliptic curves, the last step in proving Fermat's Last Theorem, 358 years after it was conjectured. ### Popular accessibility Fermat himself famously claimed to "...have discovered a truly marvelous proof of this, which this margin is too narrow to contain",[28] but Wiles's proof is very complex, and incorporates the work of so many other specialists that it was suggested in 1994 that only a small number of people were capable of fully understanding at that time all the details of what he had done.[1][29] The number is likely much larger now with the 10-day conference and book organized by Cornell et al.,[9] which has done much to make the full range of required topics accessible to graduate students in number theory. In 1998, the full modularity theorem was proven by Christophe Breuil, Brian Conrad, Fred Diamond, and Richard Taylor using many of the methods that Andrew Wiles used in his 1995 published papers. A computer science challenge given in 2005 is "Formalize and verify by computer a proof of Fermat's Last Theorem, as proved by A. Wiles in 1995."[30] ## Notes 1. ^ a b c d Kolata, Gina (24 June 1993). "At Last, Shout of 'Eureka!' In Age-Old Math Mystery". The New York Times. Retrieved 21 January 2013. 2. Hellegouarch, Yves (2001). Invitation to the Mathematics of Fermat-Wiles. Academic Press. ISBN 978-0-12-339251-0. 3. Singh, pp. 194–198; Aczel, pp. 109–114. 4. Singh, Simon. Fermat's Last Theorem, 2002, p. 259. 5. Singh, Simon. Fermat's Last Theorem, 2002, p. 260. 6. ^ a b c G. Cornell, J. H. Silverman and G. Stevens, Modular forms and Fermat's Last Theorem, ISBN 0-387-94609-8 7. ^ a b 8. Wiles, Andrew (1995). "Modular elliptic curves and Fermat's Last Theorem" (PDF). Annals of Mathematics (Annals of Mathematics) 141 (3): 443–551. doi:10.2307/2118559. JSTOR 2118559. OCLC 37032255. 9. Taylor R, Wiles A (1995). "Ring theoretic properties of certain Hecke algebras". Annals of Mathematics (Annals of Mathematics) 141 (3): 553–572. doi:10.2307/2118560. JSTOR 2118560. OCLC 37032255. 10. NOVA Video, The Proof October 28, 1997, See also Solving Fermat: Andrew Wiles ## References • Aczel, Amir (January 1, 1997). Fermat's Last Theorem: Unlocking the Secret of an Ancient Mathematical Problem. ISBN 978-1-56858-077-7. Zbl 0878.11003. • John Coates (July 1996). "Wiles Receives NAS Award in Mathematics" (PDF). Notices of the AMS 43 (7): 760–763. Zbl 1029.01513. • Cornell, Gary (January 1, 1998). 
Modular Forms and Fermat's Last Theorem. ISBN 0-387-94609-8. Zbl 0878.11004. (Cornell, et al.) • Daney, Charles (2003). "The Mathematics of Fermat's Last Theorem". Retrieved August 5, 2004. • Darmon, H. (September 9, 2007). "Wiles’ theorem and the arithmetic of elliptic curves". • Faltings, Gerd (July 1995). "The Proof of Fermat's Last Theorem by R. Taylor and A. Wiles" (PDF). Notices of the AMS 42 (7): 743–746. ISSN 0002-9920. Zbl 1047.11510. • Frey, Gerhard (1986). "Links between stable elliptic curves and certain diophantine equations". Ann. Univ. Sarav. Ser. Math. 1: 1–40. Zbl 0586.10010. • Hellegouarch, Yves (January 1, 2001). Invitation to the Mathematics of Fermat-Wiles. ISBN 0-12-339251-9. Zbl 0887.11003. See review. • (collected by Lim Lek-Heng) • Mozzochi, Charles (December 7, 2000). The Fermat Diary. American Mathematical Society. ISBN 978-0-8218-2670-6. Zbl 0955.11002. (see book review) • Mozzochi, Charles (July 6, 2006). The Fermat Proof. Trafford Publishing. ISBN 1-4120-2203-7. Zbl 1104.11001. • O'Connor, J. J.; Robertson, E. F. (1996). "Fermat's last theorem". Retrieved August 5, 2004. • van der Poorten, Alfred (January 1, 1996). Notes on Fermat's Last Theorem. ISBN 0-471-06261-8. Zbl 0882.11001. • Ribenboim, Paulo (January 1, 2000). Fermat's Last Theorem for Amateurs. ISBN 0-387-98508-5. Zbl 0920.11016. • Discusses various material which is related to the proof of Fermat's Last Theorem: elliptic curves, modular forms, Galois representations and their deformations, Frey's construction, and the conjectures of Serre and of Taniyama–Shimura. • Singh, Simon (October 1998). Fermat's Enigma. New York: Anchor Books. ISBN 978-0-385-49362-8. ISBN 0-8027-1331-9. Zbl 0930.00002. • Simon Singh. Edited version of ~2,000-word essay published in Prometheus magazine, describing Andrew Wiles's successful journey. • Richard Taylor and Andrew Wiles (May 1995). "Ring-theoretic properties of certain Hecke algebras" (PDF). Annals of Mathematics (Annals of Mathematics) 141 (3): 553–572. doi:10.2307/2118560. ISSN 0003-486X. JSTOR 2118560. OCLC 37032255. Zbl 0823.11030. • Wiles, Andrew (1995). "Modular elliptic curves and Fermat's Last Theorem" (PDF). Annals of Mathematics (Annals of Mathematics) 141 (3): 443–551. doi:10.2307/2118559. ISSN 0003-486X. JSTOR 2118559. OCLC 37032255. Zbl 0823.11029. See also this smaller and searchable PDF text version. (The larger PDF misquotes the volume number as 142.)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 19, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9338840246200562, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/6234/whats-the-value-of-this-vieta-style-product-involving-the-golden-ratio?answertab=oldest
# What's the value of this Vieta-style product involving the golden ratio? One way of looking at the Vieta product $${2\over\pi} = {\sqrt{2}\over 2}{\sqrt{2+\sqrt{2}}\over 2}{\sqrt{2+\sqrt{2+\sqrt{2}}}\over 2}\dots$$ is as the infinite product of a series of successive 'approximations' to 2, defined by $a_0 = \sqrt{2}$, $a_{i+1} = \sqrt{2+a_i}$ (or more accurately, their ratio to their limit 2). This allows one to see that the product converges; if $|a_i-2|=\epsilon$, then $|a_{i+1}-2|\approx\epsilon/2$ and so the terms of the product go as roughly $(1+2^{-i})$. Now, the sequence of infinite radicals $a_0=1$, $a_{i+1} = \sqrt{1+a_i}$ converges exponentially to the golden ratio $\phi$, and so the same sort of infinite product can be formed: $$\Phi = {\sqrt{1}\over\phi}{\sqrt{1+\sqrt{1}}\over\phi}{\sqrt{1+\sqrt{1+\sqrt{1}}}\over\phi}\dots$$ and an equivalent proof of convergence goes through. The question is, what's the value of $\Phi$? The usual proof of Vieta's product by way of the double-angle formula for sin doesn't translate over, and from what I know of the logistic map it seems immensely unlikely that there's any function conjugate to the iteration map here in the same way that the trig functions are suitably conjugate to the version in the Vieta product. Is there any other approach that's likely to work, or is $\Phi$ just unlikely to have any formula more explicit than its infinite product? - FWIW, attempts to find `0.509490972847535755...` in the Plouffe inverter yielded nothing. – J. M. Oct 7 '10 at 1:27 As to how I got that number in Mathematica: `SequenceLimit[FoldList[Times, 1, NestList[Sqrt[1 + #] &, N[1, 50], 50]/GoldenRatio]]`. – J. M. Oct 7 '10 at 1:28 1 Yeah; I should've mentioned that I did a quick lookup on the value myself to little avail. My wild guess for the value would be along the lines of a hypergeometric function (or conceivably a theta function) evaluated at some Q[sqrt(5)] argument, possibly with some exponential factor, but that's purely speculation. I'm not even sure how to attack it. – Steven Stadnicki Oct 7 '10 at 3:28 4 – J. M. Oct 7 '10 at 4:51 2 I feel like the key to solving this problem will be similar to the ideas discussed in the article posted by J.M. If that is the case, then the function $f(z)=\frac{\sqrt{1+z}}{\phi} \cdot \frac{\sqrt{1+\sqrt{1+z}}}{\phi}\cdots$ should be of some interest. – Eric♦ Feb 24 '11 at 0:20 show 2 more comments ## 1 Answer What you're basically looking for is a function $f(x)$ such that $f(2x)=f^2(x)-1$ and $f(0)=\phi$, from there: \begin{align} 2f'(2x)&=2f(x)f'(x)\\ \frac{f'(2x)}{f'(x)}&=f(x)\\ \frac{f'(x)}{f'(x/2)}&=f(x/2)\\ \frac{f'(x)}{f'(x/2^n)}&=\prod_{k=1}^n f(x/2^k) \end{align} and, given a value $x_0$ such that $f(x_0)=1$, \begin{align} \Phi&=\prod_{k=1}^{\infty} \frac{f(x_0/2^k)}{\phi}\\ &=\lim_{n\rightarrow\infty}\phi^{-n} \prod_{k=1}^n f(x_0/2^k)\\ &=\lim_{n\rightarrow\infty}\phi^{-n} \frac{f'(x_0)}{f'(x_0/2^n)}\\ &=\lim_{h\rightarrow0}h^\alpha \frac{f'(x_0)}{f'(hx_0)} \end{align} where $\alpha=\frac{ln(\phi)}{ln(2)}$. Unfortunately, I have no idea how to get $f(x)$, and the fact that $f(x)=1+O(x^{1+\alpha})$ does not make finding this function look easy. - The paper I linked to in the comments constructs appropriate $f$'s for certain Vieta products, but I have not had the time to study this problem thoroughly. – J. M. 
Oct 31 '10 at 23:52 Unfortunately, I'm pretty sure that the functions in the paper all have Taylor series at all points on which they are defined, which means a function like the $f$ defined above in which $\lim_{h\rightarrow0}f'(h)/h^\alpha=C>0$ would not be in that paper. – WAS Nov 1 '10 at 0:25 1 This boils down to the comment I made towards the end of the post; finding such an $f$ would be tantamount to being able to find a closed-form for the Logistic map for a particular value of r (or equivalently the monic centered quadratic map), and AFAIK there are only a couple of very special values for which closed forms are known... – Steven Stadnicki Nov 1 '10 at 17:48
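For contrast, here is the conjugation that makes the classical Vieta case work, and whose analogue for $\phi$ is exactly what is missing (a standard computation, not from this thread): the role of $f$ above is played by $f(x)=2\cos x$, which satisfies $f(x)^2-2=f(2x)$.

```latex
% Classical case: a_0 = \sqrt{2}, a_{i+1} = \sqrt{2 + a_i} is solved by a_i = 2\cos(\pi/2^{i+2}),
% since \sqrt{2 + 2\cos\theta} = 2\cos(\theta/2).  With x = \pi/2,
\prod_{i=0}^{n-1} \frac{a_i}{2}
  = \prod_{k=1}^{n} \cos\!\Bigl(\frac{x}{2^{k}}\Bigr)
  = \frac{\sin x}{2^{n}\sin(x/2^{n})}
  \;\longrightarrow\; \frac{\sin x}{x} = \frac{2}{\pi}\quad (n\to\infty).
% For \Phi one would instead need a closed-form g with g(0) = \phi and g(x)^2 - 1 = g(2x),
% i.e. an explicit conjugacy for the quadratic map z -> z^2 - 1, which is not known
% (this is the point of the last comment above).
```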
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9255513548851013, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/68178/finding-roots-of-polynomials-with-rational-coefficients
# Finding roots of polynomials with rational coefficients I'm looking for a general approach (or approaches) for finding the roots of polynomials with rational coefficients of degree higher than $4$. The problem is that I need to find the exact roots and not their approximations. And also I need to find both real and complex roots. I know that there are no methods which will work for every polynomial, but I need to find at least several methods which will work for degree $5$ or $6$. Could anyone please suggest a link or a book where this topic is discussed? - 2 There is no general method for polynomials of degree greater than 4. You could try and solve what you need with Maple. It's a pretty good solver, but I don't know what algorithm it uses. It definitely won't give you the exact roots for degree greater than 5. If this were possible, then it would contradict the impossibility of solving the polynomial equations of higher order. – Beni Bogosel Sep 28 '11 at 10:55 2 @Beni: Maple and Mathematica both use the Jenkins-Traub method (a fancy version of iterating with a companion matrix) for approximately solving polynomials. OP: As deoxy says, one can certainly solve polynomials in closed form if you allow the use of things like Riemann theta functions; see Umemura for instance. They're rather ungainly though; why do you need exact solutions anyway? – J. M. Sep 28 '11 at 16:41 @J.M., finding the exact roots is a part of a more complex problem I am trying to solve. I am to implement Risch's algorithm for indefinite integration and I need to factorize a given polynomial in order to be able to do this. And thank you for your comment. I think it will help me a lot. – superM Sep 28 '11 at 16:51 FWIW: most of the computing systems just represent roots as numbered roots of certain "minimal" polynomials; e.g. `Root[-1 - #1 + #1^3 & , 1, 0]` for Mathematica and `RootOf(x^3-x-1,x,1)` for Maple. For Risch's purposes, the integration of rational functions can be "implicitly" done; witness Mathematica's `RootSum[]`. (Maple sums implicitly over `RootOf()` objects.) – J. M. Sep 28 '11 at 17:05 I have tried Kronecker's algorithm previously and I can boast that this way I computed an integral which Mathematica was unable to compute =) – superM Sep 28 '11 at 17:30 show 3 more comments ## 2 Answers The general quintic was solved in 1858 by Hermite using methods beyond the scope of the Abel-Ruffini theorem. Exact expressions are somewhat ungainly: see Bring radical and this: http://mathworld.wolfram.com/QuinticEquation.html, which includes a general solution of the quintic in terms of the Jacobi theta functions. I am unaware of any algorithmic implementation of the methods described on the latter page. This question, Hermite's solution of the general quintic in terms of theta functions, has links to an exposition of Hermite's method. The following two questions over at mathoverflow discuss solutions for polynomials of degree $n\geq 5$:
* method of finding roots of polynomial equations with arithmetic operations and roots and other functions
* Can Fuchsian functions solve the general equation of degree n?
EDIT: I was poking around and found this preprint: Resolution of degree $≤ 6$ algebraic equations by genus two theta constants which provides a step by step algorithm. - Thank you very much! – superM Sep 29 '11 at 6:49 Have you read the pages below? It'd help if you told us why you need to do that.
The only practical solutions for higher degrees are numerical methods (which can probably be better for degree 3 and 4 as well). Root isolation helps, and here Descartes' rule of signs and Sturm's theorem are important. - The solution is known for degrees 1-4. The article gives no practical solution for higher degrees, but there were several useful links. Thanks a lot. – superM Sep 28 '11 at 10:33
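To illustrate the "implicit root object" idea mentioned in the comments (Mathematica's `Root`/`RootSum`, Maple's `RootOf`) in code: the sketch below uses Python with sympy, which exposes the same approach through `CRootOf` and `RootSum`; for a Risch-style integrator this exact-but-implicit representation is usually what is actually needed, rather than radicals. (Illustrative only; not part of the original thread.)

```python
import sympy as sp

x = sp.symbols('x')
p = x**5 - x - 1  # an irreducible quintic with Galois group S5, hence not solvable in radicals

# Exact roots, represented as indexed algebraic numbers rather than radical expressions:
roots = [sp.CRootOf(p, i) for i in range(5)]
print(roots[0])            # CRootOf(x**5 - x - 1, 0), an exact symbolic root
print(roots[0].evalf(30))  # arbitrary-precision numerical value, computed on demand

# Rational-function integration can be done "implicitly" over the roots,
# analogous to Mathematica's RootSum[] / Maple's sums over RootOf():
print(sp.integrate(1/p, x))
```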
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9451802968978882, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/82349/uniformization-theorem-via-ricci-flows/82392
## uniformization theorem via ricci flows Hi, I have a question about the positive Euler characteristic case. My question is: why is it so difficult as compared to the zero and negative cases? I am more interested in a pictorial/intuitive answer as compared to a very rigorous analytic answer. In other words, I want to get an intuitive "feel" of what goes wrong..... Thanks in advance. - 1 You're talking about 2-dimensional manifolds... so perhaps you could quantify what you mean by difficult? – Ryan Budney Dec 1 2011 at 6:54 3 "Difficult" in this case means the opposite of "easy" -- it took Ben Chow many years to fill in this case (Hamilton did the negative curvature case). – Igor Rivin Dec 1 2011 at 12:19 2 I think it's possible now using Perelman's work to prove Ricci flow on the 2-sphere converges to the round metric replacing the entropy of Hamilton/Chow with Perelman's entropy. The idea is that if there's a finite-time singularity, then it should be kappa-non-collapsed from Perelman's entropy monotonicity, and a blow-up limit should be a Ricci soliton with positive curvature. But the only solitons in dimension 2 with positive curvature are the cigar (which is collapsed) and the round 2-sphere. What I don't remember is whether the proof of uniqueness of solitons depends on uniformization. – Agol Dec 1 2011 at 17:25 ## 2 Answers I'm not an expert on Ricci flow but I believe the rough general reason for this is as follows. In dimension 2 the normalized Ricci flow gives the following evolution equation for scalar curvature $$\frac{\partial R}{\partial t}=\Delta_t R+R(R-r)$$ where $r=\frac{\int_MR}{vol M}=2\pi \chi(M)$. The analysis of this equation is closely linked (via maximum principle) to that of the ODE $\frac{\partial R}{\partial t}=R(R-r)$. The nonzero stationary point of the ODE is $R=r=const$ (which is also the limit of $R$ under the Ricci flow as $t\to\infty$); it is stable when $r<0$ and unstable when $r>0$. Thus for $r<0$ the behaviors of the ODE and the PDE agree as both the diffusion laplacian term and the ODE term work in the "same direction" toward the stationary solution. This makes the convergence estimates in this case quite easy. In contrast for $r>0$ the laplacian term and the ODE term work in "opposite directions" (with the laplacian term ultimately winning) which makes the analysis in this case more delicate. - There is a very nice exposition of the whole argument here. (or here; the spaces freak out some browsers). As that paper points out, Ricci flow did not produce a proof of Uniformization until 2009, although this is not entirely true: Ricci flow is the gradient flow of $\log \det \Delta$ in two dimensions, and Osgood-Phillips-Sarnak proved uniformization by optimizing $\log \det \Delta$ in a conformal class in the late '80s. - Link doesn't work for me – Deane Yang Dec 1 2011 at 14:14 OK, try the second link (I don't know why spaces confuse things...) – Igor Rivin Dec 1 2011 at 14:18 Another (possibly) useful link: dx.doi.org/10.1090/S0002-9939-06-08360-2 – YangMills Dec 2 2011 at 4:11 Did the Osgood-Phillips-Sarnak paper tackle the positive Euler characteristic case? – anonymous Dec 2 2011 at 22:57 Yes, it did, as I recall (and there, too it was the hardest case...)
– Igor Rivin Dec 3 2011 at 10:19
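To see the stability claim at the end of the first answer concretely, here is a small numerical sketch of the model ODE $\frac{\partial R}{\partial t}=R(R-r)$. It is my own illustration, not part of the thread, and the values of $r$, the initial data and the Euler step size are arbitrary.

```python
# Minimal sketch: stability of the stationary point R = r for dR/dt = R(R - r),
# the model ODE quoted in the answer above. All values are illustrative.

def euler_orbit(r, R0, dt=1e-3, t_max=5.0, cap=1e6):
    """Forward-Euler integration of dR/dt = R(R - r); stops if |R| exceeds cap."""
    R, t = R0, 0.0
    while t < t_max:
        R += dt * R * (R - r)
        t += dt
        if abs(R) > cap:
            return R, t, True   # blew up in finite time
    return R, t, False

# r < 0: the stationary point R = r is stable; nearby orbits approach it.
for R0 in (-5.0, -0.5):
    R, t, _ = euler_orbit(r=-2.0, R0=R0)
    print(f"r=-2, R0={R0:5.1f} -> R(t={t:.1f}) = {R:8.4f}  (target r = -2)")

# r > 0: the stationary point R = r is unstable; orbits just above r run away,
# and orbits just below it drift off toward 0 instead.
for R0 in (2.1, 1.9):
    R, t, blew_up = euler_orbit(r=2.0, R0=R0)
    status = "blow-up" if blew_up else f"R = {R:.4f}"
    print(f"r=+2, R0={R0:4.1f} -> {status} by t = {t:.2f}")
```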
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9316941499710083, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/26519-series-solution.html
# Thread: 1. ## series solution... Find a series solution y(x) of the IVP y'' -xy' + 5y = 0, y(0) = 0, y'(0) = 1 Show that for the given initial conditions the series terminates and hence the solution reduces to a fifth order polynomial. I seriously do not have a clue where to start on this problem! Can anyone please give me any help at all?! Thanks in advance! 2. Originally Posted by hunkydory19 Find a series solution y(x) of the IVP y'' -xy' + 5y = 0, y(0) = 0, y'(0) = 1 Show that for the given initial conditions the series terminates and hence the solution reduces to a fifth order polynomial. I seriously do not have a clue where to start on this problem! Can anyone please give me any help at all?! Thanks in advance! Let $y=\sum_{n=0}^{\infty}a_n x^n$ be a solution. Then, $\sum_{n=2}^{\infty}n(n-1)a_n x^{n-2} - x\sum_{n=1}^{\infty} na_nx^{n-1} + 5\sum_{n=0}^{\infty} a_n x^n = 0$ Change index on the first, $\sum_{n=0}^{\infty}(n+2)(n+1)a_{n+2}x^n - \sum_{n=1}^{\infty} na_n x^n + \sum_{n=0}^{\infty} 5a_n x^n=0$ Evaluate 1st and 3rd sums at $n=0$ and combine, $2a_2 + 5a_0 + \sum_{n=1}^{\infty} [(n+2)(n+1)a_{n+2} - (n-5)a_n]x^n = 0$. This means, (i) $2a_2 + 5a_0 = 0$ (ii) $(n+2)(n+1)a_{n+2} - (n-5)a_n = 0$ for $n\geq 1$. Note that (i) is implied from (ii) if we let $n=0$. Thus, we get the relation, $(n+2)(n+1)a_{n+2} - (n-5)a_n = 0 \implies a_{n+2} = \frac{(n-5)a_n}{(n+2)(n+1)}$ for $n\geq 0$. 3. Thank you SO much, genius!!
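A quick way to see the termination claimed in the problem is to run the recurrence derived above. The sketch below is my own illustration (not part of the thread); it uses exact fractions with $a_0=y(0)=0$, $a_1=y'(0)=1$, and then checks the resulting fifth-degree polynomial against the ODE with sympy.

```python
from fractions import Fraction
import sympy as sp

# Recurrence from the thread: a_{n+2} = (n - 5) a_n / ((n + 2)(n + 1)),
# with a_0 = y(0) = 0 and a_1 = y'(0) = 1.
N = 12
a = [Fraction(0)] * (N + 1)
a[0], a[1] = Fraction(0), Fraction(1)
for n in range(N - 1):
    a[n + 2] = Fraction(n - 5, (n + 2) * (n + 1)) * a[n]

print([str(c) for c in a])
# Every even coefficient is 0 because a_0 = 0, and the factor (n - 5) kills the
# odd chain at n = 5, so the series terminates:
#   y(x) = x - (2/3) x^3 + (1/15) x^5.

x = sp.symbols('x')
y = sum(sp.Rational(c.numerator, c.denominator) * x**k for k, c in enumerate(a))
print(sp.expand(sp.diff(y, x, 2) - x * sp.diff(y, x) + 5 * y))  # prints 0
```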
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8933736681938171, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/96687-jacobian-matrix-exercise.html
# Thread: 1. ## Jacobian matrix exercise Hi everyone, I'm trying to do an exercise about the Jacobian matrix but I don't really know how to do it; can you help me? The exercise says: $H:R \rightarrow R^3$ is a $C^1$ vector-valued map on $R$ and $G:R^2 \rightarrow R^3$ is defined as follows: $G(x,y) = H(x^2+3y)$ Calculate $JH(6)$ knowing that $JG(0,2) = \begin{pmatrix} 0 & 3 \\ 0 & 0 \\ 0 & -6 \end{pmatrix}$ 2. Originally Posted by jollysa87 Hi everyone, I'm trying to do an exercise about the Jacobian matrix but I don't really know how to do it; can you help me? The exercise says: $H:R \rightarrow R^3$ is a $C^1$ vector-valued map on $R$ and $G:R^2 \rightarrow R^3$ is defined as follows: $G(x,y) = H(x^2+3y)$ Calculate $JH(6)$ knowing that $JG(0,2) = \begin{pmatrix} 0 & 3 \\ 0 & 0 \\ 0 & -6 \end{pmatrix}$ Define $K:\mathbb{R}^2\to\mathbb{R}$ by $K(x,y) = x^2+3y$. Jacobians respect matrix multiplication (that's just a way of expressing the chain rule), so $JG(0,2) = J(H\circ K)(0,2) = JH(6).JK(0,2)$ (since $K(0,2) = 6$). 3. Thank you for the explanation... So I have that: $JG(0,2) = JH(6) \cdot JK(0,2) \Rightarrow \begin{pmatrix} 0 & 3 \\ 0 & 0 \\ 0 & -6 \end{pmatrix} = \begin{pmatrix} D_xH_1(6) \\ D_xH_2(6) \\ D_xH_3(6) \end{pmatrix} \cdot \begin{pmatrix} 0 & 3 \end{pmatrix}$ I can see that: $D_xH_1(6)\cdot 3 = 3 \Rightarrow D_xH_1(6) = 1$ $D_xH_2(6)\cdot 3 = 0 \Rightarrow D_xH_2(6) = 0$ $D_xH_3(6)\cdot 3 = -6 \Rightarrow D_xH_3(6) = -2$ So the solution is: $JH(6)=\begin{pmatrix} 1 \\ 0 \\ -2 \end{pmatrix}$ Am I right? 4. Originally Posted by jollysa87 So the solution is: $JH(6)=\begin{pmatrix} 1 \\ 0 \\ -2 \end{pmatrix}$ Am I right? Yes.
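For a numerical restatement of the computation in the thread (my own sketch, using only the numbers given above): with $JK(0,2)=\begin{pmatrix}0 & 3\end{pmatrix}$, the factorization $JG(0,2)=JH(6)\cdot JK(0,2)$ can be solved for $JH(6)$ by least squares.

```python
import numpy as np

# Data from the exercise: JG(0,2) is 3x2, and K(x,y) = x^2 + 3y gives JK(x,y) = [2x, 3].
JG = np.array([[0.0, 3.0],
               [0.0, 0.0],
               [0.0, -6.0]])
JK = np.array([[2 * 0.0, 3.0]])            # JK(0,2) = [0, 3], a 1x2 row

# Chain rule: JG(0,2) = JH(6) @ JK(0,2), with JH(6) a 3x1 column.
# Solve the consistent overdetermined system by least squares (transpose it first).
sol, *_ = np.linalg.lstsq(JK.T, JG.T, rcond=None)   # solves JK.T @ sol = JG.T
JH = sol.T
print(JH)                                  # the column (1, 0, -2)

# Sanity check: reassemble JG from the factorization.
print(np.allclose(JH @ JK, JG))            # True
```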
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 24, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9599717855453491, "perplexity_flag": "middle"}
http://mathhelpforum.com/discrete-math/179879-function-problem.html
# Thread: 1. ## Function problem hi ! can someone help me with this problem?? 2. I see you've written that a proof by contradiction is the way to go. What have you done so far in that direction? 3. erm i know is using prof by contradiction, however i do not know how to apply that in this question! 4. Originally Posted by yrlim1 erm i know is using prof by contradiction, however i do not know how to apply that in this question! Suppose that $\left( {\exists t \in B} \right)\left[ {g(t) \ne h(t)} \right]$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9697282314300537, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?t=661331
Physics Forums ## Condition of continuity of E field at a boundary I am trying to understand the derivation of Snell's law using Maxwell's equations and got stuck. My text book says that "the E field that is tangent to the interface must be continuous" in order to consider refraction of light. If it were a static E field I understand this is true because in electrostatics rotE = 0 holds. However Snell's law describes how electromagnetic waves change their direction of propagation when going through an interface of two mediums. Since our E field is changing dynamically, we should use the equation rotE = -∂B/∂t instead. To me it is not obvious why this equation leads to the continuity condition. How does the continuity condition in Snell's law appear from Maxwell's equations? Recognitions: Homework Help B also has boundary conditions. www.cem.msu.edu/~cem835/Lecture03.pdf Recognitions: Homework Help Science Advisor The continuity of E tangential comes from applying Stokes' theorem to rotE = -∂B/∂t. The area for $$\int{\bf dS}\partial_t{\bf B}$$ shrinks to zero. ## Condition of continuity of E field at a boundary Quote by Meir Achuz The continuity of E tangential comes from applying Stokes' theorem to rotE = -∂B/∂t. The area for $$\int{\bf dS}\partial_t{\bf B}$$ shrinks to zero. Stokes' theorem for rotE is $$\int_S (\nabla\times{\bf E})\cdot{\bf dS}= \oint_{\partial S} {\bf E}\cdot d{\bf l} = -\int_S \partial_t{\bf B}\cdot{\bf dS}.$$ How does this lead to the continuity condition? Mentor http://farside.ph.utexas.edu/teachin...es/node59.html Note the left-hand portion of the diagram at the top of the page, and start reading around equation (635). OK I see. It seems like the continuity condition has something to do with the fact that the interface has zero volume and the planar surface is sufficiently large. The path of line integration must be an infinitely thin rectangle when the area of the box approaches 0. The shape of the box is the key because it allows the line integral to become 0 even if rotE is non-zero. Thanks for all the replies. Will the same rule apply if there is a gradient layer between the two phases? Let's say the geometry is no longer flat but curved, and the radius of curvature is comparable to the thickness of the gradient layer. I'm pretty sure that the parallel component of the E field strength will still be continuous at any point. But will the phase still be the same? Recognitions: Homework Help Quote by Gen1111 Will the same rule apply if there is a gradient layer between the two phases? Let's say the geometry is no longer flat but curved, and the radius of curvature is comparable to the thickness of the gradient layer. I'm pretty sure that the parallel component of the E field strength will still be continuous at any point. But will the phase still be the same? You know what the answer has to be already - what usually happens to Snell's Law when the surface is curved or the interface is not sharp? You could try working it out for a simple setup - like a spherical interface (par-axial) - and see if the general boundary conditions give you the appropriate equations.
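The shrinking-rectangle argument in the replies can be illustrated numerically. The sketch below is my own (the field with a tangential jump, the bound on $\partial B/\partial t$ and all constants are made-up illustration values): the circulation of $\mathbf{E}$ around a rectangle of fixed length $L$ and height $2h$ straddling the interface tends to $(E_{2t}-E_{1t})L$ times an order-one factor, while the flux term $-\int \partial_t\mathbf{B}\cdot d\mathbf{S}$ is bounded by $|\partial_t B|_{\max}\,L\,2h \to 0$, so a finite $\partial B/\partial t$ forces the tangential components to match.

```python
import numpy as np

L = 1.0                      # length of the loop along the interface y = 0
E1t, E2t = 2.0, 0.5          # assumed tangential E just above / below the interface
dBdt_max = 10.0              # assumed bound on |dB/dt| near the interface

def Ex(x, y):
    """Tangential (x) component of E: jumps across y = 0, smooth in x."""
    return (E1t if y > 0 else E2t) * (1.0 + 0.1 * np.cos(x))

def circulation(h, n=2000):
    """Line integral of E around a thin rectangle of height 2h straddling y = 0.
    The two vertical sides involve only Ey, taken to be 0 here for simplicity."""
    xs = np.linspace(0.0, L, n)
    top = np.mean([Ex(x, +h) for x in xs]) * L      # traversed in the -x direction
    bottom = np.mean([Ex(x, -h) for x in xs]) * L   # traversed in the +x direction
    return bottom - top

for h in (0.1, 0.01, 0.001):
    print(f"h={h:6.3f}  loop integral = {circulation(h):+.4f}  "
          f"|flux of dB/dt| <= {dBdt_max * L * 2 * h:.4f}")

# The loop integral does not shrink with h, while the flux bound goes to 0:
# Faraday's law can only hold in the limit if the tangential jump vanishes.
```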
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9237404465675354, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/61704/complicated-with-the-quotient-topology-mobius-strip-two-distinct-quotients
# Complicated with the quotient topology (Möbius strip) two distinct quotients Let follow this definition of manifold. An n-manifold is a Hausdorff Topological space, Such That Each point you have an open neighborhood homeomorphic to the open disc $$U^n = \left\{ {x \in R^n :\left| x \right| < 1} \right\}$$ Let this set: $$X = \left\{ {\left( {x,y} \right) \in R^2 /\,\,x \in \left[ { - 10,10} \right],\,y \in \left[ { - 1,1} \right]} \right\}$$ Define the quotient (10, y) related to (-10,-y) for -1 In the book ( Massey) comes out that if we consider the edges $y=1 ,y=-1$ Would not be a manifold under our definition. I do not understand why anyway would say that a manifold with boundary, if it refers not to ask more things or completely different. I'm just learning this quotient topology and I can not imagine the folds and stuff) =. If someone can give me advice on what to do with that, I really appreciate it Edit: (TB) In order to give some context, let me quote the relevant passage from Massey, (assuming A basic course in algebraic topology, Springer GTM 127 was meant). From the bottom of page 3: The simplest example of a $2$-dimensional manifold exhibiting this phenomenon [non-orientability] is the well-known Möbius strip. As the reader probably knows, we construct a model of a Möbius strip by taking a long, narrow rectangular strip of paper and gluing the ends together with a half twist (see Figure 1.1). Mathematically, a Möbius strip is a topological space that is described as follows. Let $X$ denote the following rectangle in the plane: $$X = \{(x,y) \in \mathbf{R}^2 : -10 \leqq x \leqq +10, \;-1 \lt y \lt +1\}.$$ We then form a quotient space of $X$ by identifying the points $(10,y)$ and $(-10,-y)$ for $-1 \lt y \lt +1$. Note that the two boundaries of the rectangle corresponding to $y=+1$ and $y=-1$ were omitted. This omission is crucial; otherwise the result would not be a manifold (it would be a “manifold with boundary,” a concept we will take up in Chapter XIV [more precisely, XIV.§7, p.375ff]). Alternatively, we could specify a certain subset of $\mathbf{R}^3$ which is homeomorphic to the quotient space just described. Unfortunately, Google managed to garble Figure 1.1, so here it is in full: - Are you asking for the definition of a manifold with boundary? Or are you asking why this quotient is not a manifold (without boundary)? – Dylan Moreland Sep 4 '11 at 0:57 The difference is that the manifold with boundary it´s locally homeomorphic to $$R^n$$ or to $$R^n _ +$$ but how can i see geometrically this two folds )=? sorry for this stupids questions T_T – Daniel Sep 4 '11 at 1:07 My new interpretation is that you want to know how to show that $X$ is a $2$-manifold with boundary. – Dylan Moreland Sep 4 '11 at 1:49 Ok , That will help me )= – Daniel Sep 4 '11 at 1:54 3 What's the question? – Ryan Budney Sep 4 '11 at 1:57 ## 1 Answer So to show this is a 2 manifold with boundary you have to show that around each point there is a neighborhood that is either homeomorphic to $D^2$ or $D^2_+= \{(x,y)\in \mathbb{R} | \,\,\,\, y\geq 0, \,\,\,\, |(x,y)|<1 \}$. Let $X$ be the described set $X / \sim$ the quotient and $\pi$ the quotient homomorphisim. For $x \in \pi( \text{int} \, ( X )) = \text{int} \, (X)$ we are done, this set is homeomorphic to the disk. On $\text{int} \,(X)$, $\pi$ is a homeomorphism. For $x \in \pi( (-10, 10) \times \{1\})$ consider $\pi((-10, 10) \times [1,-1))$. Similarly for the other side. For $x \in \pi( \{10\} \times (-1,1) )$ it is more difficult. 
Here we have to somehow work with the twist. Let $f: \big([-10,-9) \cup (9,10]\big) \times (-1,1) / \sim \,\, \to (-1,1)^2$ be: $$f(x,y) = \left\{ \begin{array}{lr} (x-10,y) & : x \in (9,10] \\ (x+10,-y) & : x \in [-10,-9) \end{array} \right.$$ I claim that this is continuous and bijective. Pulling $f$ back to $X$, i.e. considering $f \circ \pi : X \to (-1,1)^2$, we see that it is continuous on each of the two pieces of its domain, hence continuous, and so $f$ is continuous by the universal property of quotients. And $f$ is bijective, as $f \circ \pi$ is 1 to 1 except on the points that are identified, where it is 2 to 1; but those points are identified, so $f$ is 1 to 1 and onto. $f$ is also an open map: any open set in $X / \sim$ is the union of images of open sets from $X$, and $f \circ \pi$ is clearly an open map. -
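To make the gluing concrete, here is a small numerical check of the chart used in the answer (my own sketch): the two branches of $f$ agree on the identified points $(10,y)\sim(-10,-y)$, and points approaching the glued edge from the two sides land next to each other in the chart, which is what lets the chart work across the seam.

```python
def f(x, y):
    """The chart from the answer on ([-10,-9) u (9,10]) x (-1,1), after gluing."""
    if 9 < x <= 10:
        return (x - 10.0, y)
    if -10 <= x < -9:
        return (x + 10.0, -y)
    raise ValueError("x outside the strip where the chart is defined")

# Identified boundary points (10, y) ~ (-10, -y) get the same image:
for y in (-0.9, -0.25, 0.0, 0.4, 0.8):
    assert f(10, y) == f(-10, -y), (y, f(10, y), f(-10, -y))
print("f is constant on the identified pairs (10, y) ~ (-10, -y)")

# Points approaching the seam from the two sides land on the two sides of x' = 0:
print(f(9.999, 0.3))    # (-0.001, 0.3)  just left of the seam in the chart
print(f(-9.999, -0.3))  # ( 0.001, 0.3)  just right of it, matching up continuously
```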
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 33, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.936914324760437, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/32189/does-gravity-slow-the-expansion-of-the-universe/32198
# Does gravity slow the expansion of the universe? Does gravity slow the expansion of the universe? I read through the thread http://www.physicsforums.com/showthread.php?t=322633 and I have the same question. I know that the universe is not being stopped by gravity, but is the force of gravity slowing it down in any way? Without the force of gravity, would space expand faster? Help me formulate this question better if you know what I am asking. - In GR, gravity is not a force, it is a curvature of spacetime; it is geodesic deviation. So, to formulate your question better, you should start with sharpening your notion of what "without gravity" means. – Alfred Centauri Jul 17 '12 at 1:10 1 @AlfredCentauri: Does that really matter? "If spacetime isn't curved , would the universe expand faster?" is essentially the same question. – MSalters Jul 17 '12 at 11:54 I'd like to add my personal modification: Is the term "gravity" clearly defined here, i.e. is there a measure of the amount of gravity in spacetime (maybe the action is a valid one)? And how do the expansion equations (Friedmann?) depend on this real parameter. I formulate it that way because it seems invalid to me to ask about the influence on gravity on the expansions of the universe like that (the terminology "slowing down" seems dubious to me), if gravity is what brings the expansion about. If merely energy has negative influence on metric expansion, not gravity is slowing things down. – Nick Kidman Jul 17 '12 at 13:04 @MSalters, what follows from if spacetime isn't curved? – Alfred Centauri Jul 17 '12 at 14:46 @AlfredCentauri: That's the question here! I don't have a background in astrophysics, so I can't give a good answer. – MSalters Jul 19 '12 at 8:46 show 5 more comments ## 6 Answers The Friedmann equations for the expansion of space are (assuming flat space for simplicity): $(1)\ (\frac{\dot a}{a})^2 = \frac{8 \pi G \rho + \Lambda}{3}$ $(2)\ \frac{\ddot a}{a}= -\frac{4 \pi G}{3}(\rho + 3P) + \frac{\Lambda}{3}$ where $a$ is the scale factor (roughly, how "expanded" space is), $\dot a$ is the rate of expansion and $\ddot a$ is the acceleration of the expansion. If, "without the force of gravity", you mean "with $G = 0$", then we have: $(3)\ (\frac{\dot a}{a})^2 = \frac{\Lambda}{3} \rightarrow a(t) = a(0)e^{\pm t \sqrt{\frac{\Lambda}{3}}}$ $(4)\ \frac{\ddot a}{a}= \frac{\Lambda}{3}$ So, "without gravity" in this particular sense, with $G = 0$, space is either expanding or contracting exponentially with time (for the special case of $\Lambda = 0$ , $\dot a = \ddot a = 0$) Now, in the context of your question about an expanding universe, by inspection of equation (2), see that introducing "gravity" via giving $G$ a positive value (and, of course, assuming there is a non-zero mass density), this term "opposes" the cosmological constant term and can even reverse the acceleration of the expansion of space by making $\ddot a$ negative thus slowing the expansion. - Could you elaborate more on the conclusion regarding "$G=0$" from the solution of the second order differential equation? – Nick Kidman Jul 17 '12 at 18:46 Sigh, is mentioning the decaying solution likely to help the OP's understanding? – Alfred Centauri Jul 17 '12 at 19:08 It's an honest question. You clearly start out with some finite $a$ and then, if you say "space expands exponentially" you will have to justify that you don't drop the constant of integration, which gives growth. 
Otherwise it's no explaination as the solution $a(t)=a(0)e^{-\sqrt{\Lambda/3}t}$ is a solution too and not a expanding one. – Nick Kidman Jul 17 '12 at 19:11 @NickKidman, edited to address your concerns. – Alfred Centauri Jul 17 '12 at 21:04 1 I specifically wrote "in the context of your question about an expanding universe" to indicate we're considering the positive root solution. $\rho$ depends on $a$. What are you intentions? – Alfred Centauri Jul 17 '12 at 21:27 show 1 more comment This answer is intended to address Nick Kidman's reformulation of the question: is there a measure of the amount of gravity in spacetime (maybe the action is a valid one)? And how do the expansion equations (Friedmann?) depend on this real parameter. The way that cosmologists answer this is in terms of the energy density of the universe. This energy can come from radiation, matter, a cosmological constant, or any other form of dark energy if it exists. The rest of the answer is very similar to the discussion found in textbooks such as Ryden. For simplicity, we'll consider the imaginary case where the energy density of the universe is dominated entirely by matter - that is, we'll ignore radiation energy and dark energy. This will allow us to discuss how expansion of the universe depends on a single parameter, the energy density of the matter (I'll just call it 'matter density' from now on). Including the other energies will complicate the picture but not change the fundamental nature of the answer. The Friedmann equations are second order in time. We'll choose our two integration constants based on the size and rate of expansion we observe now in today's universe (even though today's expansion is dominated by dark energy, this is just a choice of numbers to set a convenient point of reference). Then, we can vary the matter density and solve the Friedmann equations to see how the early and late phases of the universe's expansion would change. Here is a graph showing three possible scenarios: Let's focus on the middle one first. Here, the expansion rate $\dot{a}$ approaches zero asymptotically for $t \rightarrow \infty$. The magnitude of the density in today's universe corresponding to this type of expansion is called the critical density, and we can use it to define a dimensionless measure of density called the density parameter $\Omega$. The middle curve corresponds to $\Omega = 1$. The lower curve in the plot corresponds to $\Omega > 1$. Here the expansion eventually reverses iteself into a big crunch. The upper curve corresponds to $\Omega < 1$. In this case the expansion continues to accelerate at late times, leading to a 'big freeze' or 'big rip'. Closed form analytical solutions to the Friedmann equations in a matter-only universe with arbitrary $\Omega$, such as those used to generate the graph, can be found in many cosmology textbooks including the one I linked to above. There are other important things that change with $\Omega$, such as the topology and curvature of the universe. Now for some fine print: In our universe, we actually measure $\Omega$ to be close to 1, meaning that the topology and curvature of the universe appear to match what we expect for $\Omega = 1$. But we also think that the universe will continue to expand in an accelerated matter. This is because of the presence of dark energy, which modifies the the solutions to the Friedmann equations. 
- Also, since matter/energy isn't created or destroyed locally, the matter density will change with time, and so will the critical density that is used to normalize $\Omega$. However, you can show that if $\Omega$ starts out equal to 1, it will stay that way. It will also stay greater than 1 or less than 1 if it starts that way. – kleingordon Jul 20 '12 at 23:17 The answer is that yes gravity does slow the expansion of space (leaving aside dark energy for the moment), but to get a better grasp on what's going on you need to look into this a bit more deeply. If we make a few simplifying assumptions about the universe, e.g. it's roughly uniform everywhere, we can solve the Einstein equation to give the FLRW metric. This is an equation that tells us how spacetime is expanding, and actually it seems to be a pretty good fit to what we see so we can be reasonably confident it's at least a good approximation to the way the universe behaves. To reduce gravity you simply reduce the density of matter in the universe because after all it's the matter generating the gravity. At low densities of matter the FLRW metric tells us that the universe expands forever. As you increase the matter density the expansion slows, and for densities above a critical density (known as $\Omega$) the expansion comes to a halt and the universe collapses back again. So yes, gravity does slow the expansion and the FLRW metric tells us by how much. If you want to pursue this further try Googling for the FLRW metric. The Wikipedia article is very thorough but a bit technical for non GR geeks, but Googling should find you more accessible descriptions. - Short answer: yes, gravity slows the expansion of the universe, in the sense that we'd see even greater expansion if gravity* were (slightly) weaker, and everything else was kept the same. *more precisely the gravitational constant - What is this gravity thing? In a metaphysical context, gravity just is, it is fundamental, a primary. Gravity can't be explained in terms of other "more fundamental" things because that would be a contradiction. In a physics context, we have a mathematical model for gravity, the General Theory of Relativity which, in a nutshell, is: $\textbf{R} + (\Lambda - \frac{1}{2}R)\textbf{g} = \dfrac{8 \pi G}{c^4}\textbf{T}$ Now, the terms in these equations are geometric objects, tensors, (the Ricci tensor, the metric tensor, and the stress-energy tensor) and this equation relates these tensors. Moreover, the LHS of the equation involves the geometry of spacetime. The RHS involves the mass-energy content of spacetime. So, I think it is the case that, in this context, gravity is the relation between these tensors, between the geometry of spacetime and the contents of spacetime. Note that I'm not claiming that gravity is this equation; I'm claiming that gravity is a relation expressed by this equation, a fundamental relationship between spacetime geometry and mass-energy. So, to tie this in with the OP's interesting question starting with "Without the force of gravity...", let's consider what that actually means. Stipulating that gravity is a relationship between the geometric objects above, then "without gravity" must mean "without a relationship between these geometric objects". What would such a world be like? (more to come) - Is that an answer of a question? What are the (at least two) geometric objects? Or what is not a geometric object then? – Nick Kidman Jul 19 '12 at 18:57 A tensor is a geometric object. 
Now, it also the case that the metric tensor, a geometric object, is also a description of the geometry of spacetime. So the relation that I'm calling gravity relates the a description of the stress-energy in spacetime to the geometry of spacetime. I'll edit my answer for clarity. – Alfred Centauri Jul 19 '12 at 20:52 Mhm, okay. Notice that I'm foremostly interested in the main question. A definition of the expression gravity might be helpful, but only to communicate the answer about the expanding. – Nick Kidman Jul 19 '12 at 21:03 I'm primarily interested in the quoted question. I find it interesting and think that others might too. I hope to get some constructive comments on my answer and I may pose it as a question myself. ---- I don't mean this in other way than as a simple statement of fact: the question of what you're foremostly interested in never crossed my mind and it isn't likely too. – Alfred Centauri Jul 19 '12 at 21:12 I think that Gravity accelerates the expansion of space: Consider two objects falling one after the other, directly into a gravitational well. The closer one feels a stronger pull, so it accelerates away from the object behind. Then consider two objects falling side by side, directly into a gravitational well. Deeper in the well, the path that light takes between them is more greatly bent, and therefore longer. So, those distances are also greater. Any two galaxies in our cosmos are falling into the gravitational well of the cosmos as a whole. So, the distance between them must be increasing due to gravity. A lot of people get confused between the deflection of an object toward a gravitational well, and the effect of the gravitational well on the distances between objects falling into it. -
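To make kleingordon's three scenarios concrete, here is a small numerical sketch (mine, not from the thread) for the matter-only toy model: with $H_0=1$ and $a(0)=1$, the first Friedmann equation gives $\dot a = H_0\sqrt{\Omega_0/a + 1-\Omega_0}$, and a crude Euler integration shows that $\Omega_0>1$ reaches a turnaround (and hence recollapses), while $\Omega_0\le 1$ keeps expanding.

```python
import math

def expand(omega0, h0=1.0, dt=1e-3, t_max=50.0):
    """Euler-integrate adot = h0*sqrt(omega0/a + 1 - omega0) from a(0) = 1.

    Returns (a, t, halted); halted=True means the expansion rate reached zero,
    i.e. a turnaround followed by recollapse in a matter-only universe."""
    a, t = 1.0, 0.0
    while t < t_max:
        radicand = omega0 / a + 1.0 - omega0
        if radicand <= 0.0:
            return a, t, True
        a += dt * h0 * math.sqrt(radicand)
        t += dt
    return a, t, False

for omega0 in (2.0, 1.0, 0.5):
    a, t, halted = expand(omega0)
    if halted:
        print(f"Omega0={omega0}: expansion halts at a ~ {a:.2f} "
              f"(analytic turnaround a_max = Omega0/(Omega0-1) = {omega0/(omega0-1):.2f})")
    else:
        print(f"Omega0={omega0}: still expanding at t={t:.0f}, a ~ {a:.1f}")
```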
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 32, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9343621730804443, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/175380/solution-af-a-system-of-2-quadratic-equations?answertab=active
# Solution af a system of 2 quadratic equations I have a system of two quadratic equations with unknowns $x$ and $y$: $$a_{1 1} x y + a_{1 2} x^2 + a_{1 3} y^2 + a_{1 4} x + a_{1 5} y + a_{1 6} = 0,\\ a_{2 1} x y + a_{2 2} x^2 + a_{2 3} y^2 + a_{2 4} x + a_{2 5} y + a_{2 6} = 0,$$ where $a_{i j}$ are arbitrary scalars. Is there an algebraic solution of the above system? - 1 easy case would be ,if we could complete square in each equation,get similar to circle equation – dato Jul 26 '12 at 9:06 – draks ... Jul 26 '12 at 9:11 Yes, I have tho conics and I am looking for the intersercion points. – tomto Jul 26 '12 at 9:32 ## 2 Answers From Intersecting two conics: The solutions to a two second degree equations system in two variables may be seen as the coordinates of the intersections of two generic conic sections. In particular two conics may possess none, two or four possibly coincident intersection points. The best method of locating these solutions exploits the homogeneous matrix representation of conic sections, i.e. a 3x3 symmetric matrix which depends on six parameters. The procedure to locate the intersection points follows these steps: • given the two conics $C_1$ and $C_2$ consider the pencil of conics given by their linear combination $\lambda C_1 + \mu C_2$ • identify the homogeneous parameters $(\lambda,\mu)$ which corresponds to the degenerate conic of the pencil. This can be done by imposing that $\det(\lambda C_1 + \mu C_2) = 0$, which turns out to be the solution to a third degree equation. • given the degenerate conic $C_0$, identify the two, possibly coincident, lines constituting it • intersects each identified line with one of the two original conic; this step can be done efficiently using the dual conic representation of $C_0$ • the points of intersection will represent the solution to the initial equation system - Ah, yes, that's another nice way to solve it! – Andrea Mori Jul 26 '12 at 10:11 @darks: Thank you very much! I think it is a good solution for me. I have only one question: If I compute coeficients of the degenerate conic $C_0$ the question is how to compute parameters of the lines? I have a solution but it is not elegant: For example I fix $x$ for some value and calculate $y$ from $C_0$. I repeat the procedure for some other value of $x$. I calculate lines parameters from these points. Is there any better (more elegant) solution? – tomto Jul 27 '12 at 7:16 @draks ... I am also curious about this. How would one programatically find the lines that constitute a degenerate conic (i.e. without completing the square by hand, etc.)? – David Doria Mar 5 at 17:27 Each equation describes a conic. Thus, you are trying to compute the intersection $\cal C_1\cap\cal C_2$ of two conics. In general, you have to expect four intersection points (e.g., think of two ellipses meeting transversally). There are special cases where the task is simplified. For instance, if one of the two conics splits as the union of two lines, or if one of the equations reduces easily to the form $y=f(x)$. In the general case you can either try Elimination Theory or you can exploit the fact that conics are rational curves, i.e. that there is a parametrization $$\Bbb R\ni t\mapsto (x(t),y(t))\in\cal C_1\qquad(*)$$ given by rational functions, i.e. quotient of polynomials in $t$. Then if you plug this "rational description" of $\cal C_1$ into the equation of $\cal C_2$ will eventually get a polynomial equation of degree 4 in the variable $t$ only, whose solutions correspond to the intersection points. 
In order to solve this degree 4 equation you need either some luck, or some patience to go browsing old Algebra books. In order to get (*), the usual method is to find just one point $(x_o,y_o)\in\cal C_1$, consider the lines $$\ell_t: y=t(x-x_o)+y_o$$ through it and describe the second intersection of $\cal C_1\cap\ell_t$ in terms of $t$. - Andrea, thank you for your answer. I will try to follow your advices. But the most I need is an exact formula, how to solve this problem. This formula can be complicated - this is not problem for me because I plan to put this formula to a computation algorithm. – tomto Jul 26 '12 at 9:51 @user36556: A final formula that includes all cases might be very very complicated. In any event the procedure to get the intersection point can certainly be translated into an effective algorithm, and I'm sure that it already has. – Andrea Mori Jul 26 '12 at 9:59 Maybe the formula that I am looking for can be generated by some algebra software (for example Matlab Symbolic Toolbox)? Actualy I don't have access to such software or I don't have knowledge about free software substitutes. – tomto Jul 26 '12 at 10:01 I'm not an expert in symbolic computation software. I do use occasionally Maple just because my institution purchased a licence. What software is best to use for the case in question may be subject of another question here. – Andrea Mori Jul 26 '12 at 10:09
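As a concrete companion to the elimination-theory remark, here is a short sympy sketch; it is my own illustration, not the pencil-of-conics procedure described in the other answer, and the two conics are arbitrary examples. Eliminating $y$ with a resultant produces a degree-4 polynomial in $x$, whose roots are back-substituted to get the intersection points.

```python
import sympy as sp

x, y = sp.symbols('x y')

# Two example conics (an ellipse and a circle); the coefficients are arbitrary.
C1 = x**2 + 2*y**2 - 3
C2 = x**2 + y**2 - 2

# Eliminate y: the resultant is a polynomial of degree (at most) 4 in x alone.
R = sp.resultant(C1, C2, y)
print(sp.factor(R))                      # (x - 1)**2 * (x + 1)**2

# Back-substitute each root of R and keep the pairs satisfying both equations.
points = set()
for xr in sp.roots(sp.Poly(R, x)):
    for yr in sp.solve(C2.subs(x, xr), y):
        if sp.simplify(C1.subs({x: xr, y: yr})) == 0:
            points.add((xr, yr))
print(sorted(points))                    # the four points (+-1, +-1)

# Cross-check with sympy's direct solver.
print(sp.solve([C1, C2], [x, y]))
```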
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.921739935874939, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/159713/solve-y-x-y/159714
# Solve $y' = x + y$ I am suppose to use the substitution of $u = x + y$ $y' = x + y$ $u(x) = x + y(x)$ I actually forget the trick to this and it doesn't really make much sense to me. I know that I need to get everything in a variable with x I think but I am not sure how to manipulate the problem according to mathematical rules that will make sense. Also I know that at some point I will get an integral or something and that I have no idea how to do that with multiple variables. - ## 2 Answers $$y'=x+y$$ Then we let $u=x+y$ This gives $u'=1+y'$, so that the equation becomes $$u'-1=u$$ $$u'-u=1$$ Can you solve that for $u$? Hint $(e^x-1)'=e^x$ Moving on with the solution: $$\frac{du}{dx}-u=1$$ $$\frac{du}{dx}=1+u$$ And the classic abuse in DE's $$\frac{du}{u+1}=dx$$ Now $$\int\frac{du}{u+1}=\int dx$$ $$\log(u+1)=x+C$$ We take logarithms $$u+1=e^{x+C}$$ We use the property of the exponential function $f(x+y)=f(x)f(y)$ $$u+1=e^C e^x$$ Here $K=e^C$ $$y+x+1=Ke^x$$ $$y=K e^x-x-1$$ - This does not appear to be a seperable equation and that is all I know how to do. – Jordan Jun 18 '12 at 0:36 1 Well, you're supposed to use the method of integrating factor, so multiply both sides by $\exp(x)$, and then you get $(u'-u)\exp(x)=\exp(x)$. The left hand side is $(u\cdot\exp(x))'$, and you can integrate everything nicely. – Alex Nelson Jun 18 '12 at 0:38 @AlexNelson No one is supposed to do anything. One can choose a variety of paths. In this case, noting that $e^x-(e^x-1)=1$ is one simple solution. I don't think he knows about IFs. – Peter Tamaroff Jun 18 '12 at 0:42 1 @PeterTamaroff I know I don't know acronyms :\$ what does IF stand for? – Alex Nelson Jun 18 '12 at 0:43 @AlexNelson "use the method of **i**ntegrating **f**actor" – Peter Tamaroff Jun 18 '12 at 0:46 show 14 more comments Well, if $u = x + y$, then $y = u - x$. Take the derivative to both sides and we get $$y' = u' - 1$$ set this equal to the right hand side of our differential equation $$u' - 1 = x + y$$ But our substitution is $u=x+y$, so the right hand side simplifies becoming $$u' - 1 = u$$ thus we get a differential equation $$u' = 1 + u.$$ This can be solved, then we plug it back into the substitution to solve for $y$. - I still get a funtion I do not know how to work with. I do not know how to find the integral of something without a dy or dx or whatever. – Jordan Jun 18 '12 at 0:41 2 Well, you're supposed to divide both sides by $1+u$, right? You get $u'/(1+u)=1$. Integrate both sides with respect to $x$, so you get $\int (1+u)^{-1}\,du = \int dx$. Performing this integral gives you $\ln(1+u) = x-x_{0}$. Exponentiate both sides, and you're almost done... – Alex Nelson Jun 18 '12 at 0:42
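As a quick machine check of the final answer (my own addition), sympy's dsolve applied to $y'=x+y$ returns the same family $y = C_1 e^x - x - 1$, and substituting that family back into the equation gives zero.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

ode = sp.Eq(y(x).diff(x), x + y(x))
sol = sp.dsolve(ode, y(x))
print(sol)                                   # Eq(y(x), C1*exp(x) - x - 1)

# Substitute the solution back into y' - (x + y) and simplify to 0.
C1 = sp.symbols('C1')
yK = C1 * sp.exp(x) - x - 1
print(sp.simplify(yK.diff(x) - (x + yK)))    # 0
```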
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 16, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9259440302848816, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?s=3c6e864be97c1387737927bfeafa691c&p=4232261
Physics Forums ## Mersenne Sieve Following is a list of 39 Mersenne numbers. Some are prime and some are not. These are generated with x =( 2^n) -1. Many of the largest primes known are Mersenne primes. I would like to point out that a sieve may be used to block out many non prime Mersenne numbers. For example for n = 2 x = 3 but on closer inspection every even value of n other than n = 2 is non prime furthermore they are all divisible by 3, Thus we can cross out n=4, n=6, … up to n=38. Also starting at n=3 and x = 7 we find that every third number larger than 3 is divisible by 7 so we could cross out n = 6, n=9, n=12 … up to n = 39. Now lets move to n=5 or x = 31. now we can cross out every fifth number larger than n=5 as they are all divisible by 31 if we move to n=7 and x = 127 we find that every 7th number larger than 7 is divisible by 127 and can be crossed out. 11 should be crossed off as 2047 is not prime My question is does this go on forever and can we use it to delete many Mersenne numbers as non primes. The next prime is 11 n 2^n - 1 n = 1 x = 1 n = 2 x = 3 n = 3 x = 7 n = 4 x = 15 n = 5 x = 31 n = 6 x = 63 n = 7 x = 127 n = 8 x = 255 n = 9 x = 511 n = 10 x = 1023 n = 11 x = 2047 n = 12 x = 4095 n = 13 x = 8191 n = 14 x = 16383 n = 15 x = 32767 n = 16 x = 65535 n = 17 x = 131071 n = 18 x = 262143 n = 19 x = 524287 n = 20 x = 1048575 n = 21 x = 2097151 n = 22 x = 4194303 n = 23 x = 8388607 n = 24 x = 16777215 n = 25 x = 33554431 n = 26 x = 67108863 n = 27 x = 134217727 n = 28 x = 268435455 n = 29 x = 536870911 n = 30 x = 1073741823 n = 31 x = 2147483647 n = 32 x = 4294967295 n = 33 x = 8589934591 n = 34 x = 17179869183 n = 35 x = 34359738367 n = 36 x = 68719476735 n = 37 x = 137438953471 n = 38 x = 274877906943 n = 39 x = 549755813887 PhysOrg.com science news on PhysOrg.com >> Front-row seats to climate change>> Attacking MRSA with metals from antibacterial clays>> New formula invented for microscope viewing, substitutes for federally controlled drug Recognitions: Gold Member You should look at: http://en.wikipedia.org/wiki/Mersenne_primes There you can see, that n in your notatation must be prime to let 2**n - 1 be prime Recognitions: Gold Member As a consequence of Theorem 18 from Hardy-Wright, we have the following Corollary: For two natural numbers 1 < a and b: a${^b}$ - 1 is composite if a > 2 (because (a - 1) divides a$^{b}$ - 1); or in the case a = 2: if b = s * t (because 2$^{s}$ - 1 divides 2$^{s*t}$ - 1 ## Mersenne Sieve yes but consider the number 11 which is a prime while 2^11 -1 is not a prime. yes but this does not help Recognitions: Gold Member I think, my Corollary helps to delete all 2${^b}$ - 1 with a composite b; but unfortunately, there is no help for sieving the Mersenne primes: Theorem 18 by HW says (in short): 'If 2${^b} - 1$ is a prime, then b is a prime'; and the 'other way round' is not valid To show that (X^(2*n) ) - 1 has a factor of x-1 for any n. Consider x^2 - 1 = (x-1) * (x+1 ) so if x^2 -1 is a factor of a function then x-1 is a factor. Consider x^4 - 1 = ( x^2 -1 ) * ( x^2 + 1 ) so x-1 is a factor of x^4 -1 consider x^6 - 1 = (x^2 -1 ) * ( x^4 + x^2 + 1 ) so x-1 is a factor of x^6 -1 consider x^8-1 = ( x^2 -1 ) * ( x^6 + x^ 4 + x^ 2 + 1 ) so x-1 is a factor of x^8 -1 etc So it seems to me… x^(2*n) - 1 = (x^2 -1 ) * ( x^(2*(n-1)) + x^(2 * (n-2)) + ……. 
+ 1 )
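The sieve idea in the thread can be written out directly; the sketch below is my own. Since $2^s-1$ divides $2^{st}-1$, every composite exponent is crossed out immediately, and the surviving prime exponents up to 39 are then tested with the Lucas-Lehmer test, which also catches cases such as $n=11$ that the sieve cannot eliminate.

```python
def lucas_lehmer(p):
    """Lucas-Lehmer primality test for M_p = 2^p - 1, with p prime."""
    if p == 2:
        return True
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n**0.5) + 1))

for n in range(2, 40):
    if not is_prime(n):
        # Sieve step from the thread: if n = s*t then 2^s - 1 divides 2^n - 1.
        continue
    m = 2**n - 1
    print(f"n = {n:2d}  2^n - 1 = {m}  {'prime' if lucas_lehmer(n) else 'composite'}")
# n = 11, 23, 29, 37 survive the sieve but still give composite numbers
# (e.g. 2^11 - 1 = 2047 = 23 * 89), so the sieve alone is not sufficient.
```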
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8794683814048767, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/55422/list
## Return to Answer

ADDED: here is a proof of the statement you need (namely that the square free monomial ideal $I$ is an intersection of primes generated by subsets of parameters) without using the modularity property. We will use induction on $N=$ the total number of times the parameters appear in the generators of $I$. For example if $I=(xy, xz)$ then $N=4$. The statement is obvious if $N=1$. Suppose $I$ has a generator (say $f_1$) which involves at least $2$ parameters. Pick one of these parameters, say $x$, and WLOG we can assume $I=(f_1,\cdots, f_n, g_1,\cdots,g_l)$ such that $x|f_i$ for each $i$ but $x$ does not divide any of the $g_j$s. Let $F_i=f_i/x$. We claim that: $$I = (I,x) \cap (I,F_1)$$ If the claim is true, we are done by applying the induction hypothesis to $(I,x)$ and $(I,F_1)$. One containment is obvious; for the other one we need to show that if $xu \in (I,F_1)$ then $xu\in I$. Write $$xu = f_2x_2 + \cdots + f_nx_n + \sum g_jy_j + F_1x_1$$ which implies $$x(u- F_2x_2 -\cdots - F_nx_n) \in (g_1,\cdots, g_l, F_1) = I'$$ $I'$ has minimal generators which do not contain $x$. By induction, $I'$ is an intersection of primes generated by other parameters, so $x$ is a NZD on $R/I'$. So $(u- F_2x_2 -\cdots - F_nx_n) \in I'$, and therefore $xu \in I$, as desired.

REMARK: note that for this proof to work, you only need that all subsets of the sequence (not necessarily parameters) generate prime ideals. I guess it fits with your other question.

So from the comments I will take your question as proving $J\cap (K+L) = J\cap K + J \cap L$ for parameter ideals (by which I mean ideals generated by subsets of a fixed regular s.o.p). It will suffice to understand $I\cap J$ for two such ideals. To be precise, let $g(I)$ be the set of s.o.p generators of $I$. Let $P$ be the ideal generated by the intersection of $g(I),g(J)$, and $I', J'$ generated by $g(I)-g(P), g(J)-g(P)$. Then we need to show: $$I \cap J = P + I'J'$$ Since $R/P$ is still regular we can kill $P$ and assume that $g(I), g(J)$ are disjoint, and we have to prove $I \cap J = IJ$. This should be an easy exercise, but a slick and very general way is invoking Tor (which shows that this is even true for $I,J$ generated by parts of a fixed regular sequence).
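The identity $I\cap J = P+I'J'$ above can be spot-checked on a small example with a computer algebra system. The sketch below is my own illustration (the example ideals and the use of sympy are not from the answer): it computes $I\cap J$ in $k[x,y,z]$ via the standard trick $I\cap J=(tI+(1-t)J)\cap k[x,y,z]$, eliminating $t$ with a lex Groebner basis, and compares the result with $P+I'J'$.

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')

# Toy example with "parameters" x, y, z in the polynomial ring k[x, y, z]:
# I = (x, y), J = (y, z)  =>  P = (y), I' = (x), J' = (z),
# so the claim above reads  I cap J = P + I'J' = (y, x*z).
I = [x, y]
J = [y, z]

# Standard elimination trick:  I cap J = (t*I + (1 - t)*J) cap k[x, y, z].
gens = [t * g for g in I] + [(1 - t) * g for g in J]
G = sp.groebner(gens, t, x, y, z, order='lex')
intersection = [g for g in G.exprs if not g.has(t)]
print(intersection)                                  # generators of I cap J

# Compare with P + I'J' = (y, x*z) via reduced Groebner bases.
lhs = sp.groebner(intersection, x, y, z, order='lex')
rhs = sp.groebner([y, x * z], x, y, z, order='lex')
print(set(lhs.exprs) == set(rhs.exprs))              # True: the two ideals coincide
```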
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 117, "mathjax_display_tex": 12, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9431530237197876, "perplexity_flag": "head"}
http://mathhelpforum.com/statistics/76970-poker-type-question.html
# Thread: 1. ## a poker-type question A standard 52 card deck is shuffled and you and your opponent are dealt two 'hole' cards each. now the next three cards from the deck are exposed (the flop) and it contains two of the deck's four aces. Does the fact that two aces were exposed make it less likely, more likely, or not important to the question 'what is the probability that my opponent was dealt an ace?' My gut leads me to think it shouldn't matter - that the cards that were dealt on the flop don't affect what your opponent was dealt in the hole, but I suspect that's not correct and that the presence of the two aces decreases the probability that my opponent has an ace in the hole. Not sure by how much. 2. Originally Posted by demere A standard 52 card deck is shuffled and you and your opponent are dealt two 'hole' cards each. now the next three cards from the deck are exposed (the flop) and it contains two of the deck's four aces. Does the fact that two aces were exposed make it less likely, more likely, or not important to the question 'what is the probability that my opponent was dealt an ace?' My gut leads me to think it shouldn't matter - that the cards that were dealt on the flop don't affect what your opponent was dealt in the hole, but I suspect that's not correct and that the presence of the two aces decreases the probability that my opponent has an ace in the hole. Not sure by how much. It does indeed decrease the probability that your opponent has an ace. Before the flop all you know is that there is a decreased chance of your opponent having any card you have, there are still 4 aces unaccounted for. If there's two aces on the flop then there are 47 cards left, and only two aces. $\frac{4}{50} > \frac{2}{47}$ "that the cards that were dealt on the flop don't affect what your opponent was dealt in the hole" Of course they don't literally affect what your opponent was dealt, but it strongly affects your knowledge of what your opponent might have. For example, if 3 aces came up on the flop, and the 4th ace came on the turn then you would know with 100% certainty that your opponent did not have an ace.
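The intuition in the reply can be made exact with a short counting script (my own; it assumes, as the reply implicitly does, that your own two hole cards are not aces). Before the flop the opponent's two cards are a uniform pair from the 50 cards you cannot see (4 aces among them); after a flop showing two aces and one non-ace they are a uniform pair from the remaining 47 unseen cards (2 aces among them).

```python
from math import comb

def p_at_least_one_ace(unseen, aces):
    """P(opponent's 2 hole cards contain an ace), given they are a uniform pair
    drawn from `unseen` cards of which `aces` are aces."""
    return 1 - comb(unseen - aces, 2) / comb(unseen, 2)

before_flop = p_at_least_one_ace(unseen=50, aces=4)   # you hold 2 non-ace cards
after_flop  = p_at_least_one_ace(unseen=47, aces=2)   # flop shows 2 of the 4 aces

print(f"P(opponent has an ace) before the flop: {before_flop:.4f}")   # ~0.1551
print(f"P(opponent has an ace) after this flop: {after_flop:.4f}")    # ~0.0842
```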
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9877373576164246, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/higgs-field
# Tagged Questions The higgs-field tag has no wiki summary. 0answers 55 views ### Higgs boson/field symmetries and local symmetries In the SM with gauge group U(1)xSU(2)xSU(3), those factors are associated to the gauge bosons associated with a local symmetry and the Higgs field provides masses to the elementary fermions AND the ... 1answer 39 views ### Higgs VEV in terms of measurements on an ensemble? Let $A$ be a Hermitian operator corresponding to some observable. If we prepare $N$ identical systems in the state $\psi$ and measure this observable in each system, the average of the measurements ... 1answer 140 views ### Why do some particles have a greater mass than others? The property of mass that almost every particle possesses comes from the Higgs Field. It is this field, which permeates all of space, that particles interact with and hence obtain mass. But why do ... 1answer 170 views ### The theory of strings stretching between intersecting D-branes I am trying to understand various aspects of intersecting D-branes in terms of the gauge theories on the worldvolume of the D-branes. One thing I'd like to understand is the worldvolume action for ... 1answer 117 views ### why drag cause mass in higgs field ? how could drag cause mass? why in higgs field drag cause mass? drag is force in general not mass Higgs field- Inquiring Minds - Questions About Physics how drag of higgs field cause mass? 0answers 44 views ### Topological Solitons and the Higgs Condensate entanglement While focusing on resolutions to the Firewall controversy, and the possible implications of the Higgs field as it relates to the issue, the possibility of using EPR correlations in the Higgs ... 1answer 82 views ### Is the heat required to alter the Higgs field an 'absolute heat'? I have read and heard that manipulating the Higgs field would require heating up a local geometry to ridiculous temperature. I am trying to understand if there are stars or places in the universe ... 0answers 54 views ### Showing the equivalence of lagrangians? I have a lagrangian written as: \mathcal{L}_H = \text{Tr}\left[\,(D_\mu \Phi)^\dagger D^\mu \Phi\right] - \mu^2 \text{Tr}\left[\,\Phi^\dagger \Phi\right] - \lambda (\text{Tr}\left[\,\Phi^\dagger ... 1answer 156 views ### why are two higgs doublets required in SUSY? I can't really understand why two higgs doublets are required in SUSY. From the literature, I have found opaque explanations that say something along the lines of: the superpotential W must be a ... 2answers 176 views ### What is the expectation value of the number operator when the vacuum has a VEV? The number operator N applied to a field whose vacuum has zero VEV gives $N|0>=0$. What if we apply it to the Higgs field? The background of this question is that in popular scientific accounts, ... 0answers 66 views ### Could one theoretically build the Higgs equivalent of a Faraday cage? My understanding is, within quantum mechanics, in a pure vacuum, all known fields have a lowest energy state of zero. The Higgs field is the only exception -- it's lowest energy state is not zero. ... 2answers 147 views ### do Higgs Bosons happen in nature all the time? Rarely? Or do they only happen when the Higgs field is excited in a particle accelerator? I'm trying to reconcile an apparent contradiction between explanations given by Dr. Cox in 2009 and 2012, and those given by a panel of Berkeley professors. I'm not a physicist, and so I realize this ... 
1answer 141 views ### Higgs field existence and zero energy If the Higgs field permeates all space, why some claim, that total universe energy equals (or is very close to) zero? 1answer 86 views ### Higgs Field - Is its discovery truly “around the corner”? Rather surprised I haven't seen many questions or discussion regarding the rumored confirmation of the Higgs field. As I understand it, the energies where they saw things were actually quite a bit ... 1answer 792 views ### How to interpret vacuum instability of Higgs potential If the Higgs mass is in a certain range, the quartic self-coupling of the Higgs field becomes negative after renormalization group flow to a high energy scale, signalling an instability of the vacuum ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.904504656791687, "perplexity_flag": "middle"}
http://mathhelpforum.com/geometry/177613-how-find-largest-square-inside-triangle-print.html
# How to find the largest square inside a triangle? Printable View • April 11th 2011, 10:12 PM Medusa How to find the largest square inside a triangle? I am wondering how you would go about finding the largest square that would fit inside a triangle? An example of what I am asking is: What is the side length of the largest square that would fit inside a right-angled triangle with the sides 5, 12, and 13? Any help would be appreciated Medusa • April 11th 2011, 10:26 PM johnny The largest rectangle that would fit inside a right triangle with the sides 5, 12, 13 is a square. Let the square have side x. By the Pythagorean theorem, $(12\,-\,x)^2\,+\,x^2\,=\,(13\,-\,\sqrt{2x^2\,-\,10x\,+\,25})^2.$ Solving the equation gives x = 60/17. • April 11th 2011, 10:33 PM Medusa Thank you for the help. I was trying to use this method but I kept stuffing up the second part. You have made it a lot clearer now. Thanks Medusa • April 24th 2011, 05:54 AM Medusa After doing further research into this question and many discussions with my math genius friends, I was wondering if anyone could explain how the formula bh/(b+h) works in this situation? b = base, h = height. I am unsure how this works and any help explaining it would be appreciated. • April 24th 2011, 08:19 AM Soroban Hello, Medusa! Quote: Can anyone explain how the formula: bh/(b+h) works in this situation? b = base, h = height. [Sketch: right triangle ABC with the right angle at B, vertical leg AB = h and horizontal leg BC = b; the square BDEF of side x sits in the corner at B, with D on AB, F on BC and E on the hypotenuse AC, leaving the smaller triangle ADE above the square.] We have right triangle ABC with AB = h, BC = b. We have square BDEF with sides x. Since ∆ABC ~ ∆ADE, we have: $\frac{h}{b} = \frac{h-x}{x} \;\Rightarrow\; hx = b(h-x) \;\Rightarrow\; hx = bh - bx \;\Rightarrow\; bx + hx = bh \;\Rightarrow\; (b+h)x = bh.$ Therefore: $x = \frac{bh}{b+h}.$ • April 24th 2011, 08:33 AM Medusa So the formula would work for a right-angled triangle. What about if the triangle was isosceles? Would the formula still work if the triangle isn't right-angled is pretty much what I am asking. Medusa • April 24th 2011, 10:16 AM earboth 1 Attachment(s) Quote: Originally Posted by Medusa So the formula would work for a right-angled triangle. What about if the triangle was isosceles? Would the formula still work if the triangle isn't right-angled is pretty much what I am asking. Medusa 1. Draw a sketch. see attachment 2. Use similar triangles. You'll get the proportions: http://latex.codecogs.com/png.latex?...dfrac{bh}{b+h} • April 26th 2011, 03:29 AM Medusa Thanks for the help :) I sat there and worked it out using your advice and it makes sense after completely working it out by hand :) Thanks for the help Medusa All times are GMT -8. The time now is 01:04 AM.
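For readers who want to check the numbers in this thread, here is a small illustrative sketch (Python with the standard-library `fractions` module; the function name is mine, not from the thread) that evaluates $x = \frac{bh}{b+h}$ exactly for the 5-12-13 triangle discussed above:

```
from fractions import Fraction

def largest_square_side(b, h):
    """Side of the largest square tucked into the right-angle corner of a
    right triangle with legs b and h, using x = b*h / (b + h)."""
    b, h = Fraction(b), Fraction(h)
    return b * h / (b + h)

x = largest_square_side(12, 5)      # legs of the 5-12-13 right triangle
print(x, float(x))                  # 60/17 ~ 3.529, matching johnny's x = 60/17
```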
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9062846302986145, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/216471/show-that-there-exists-a-bijection-from-abc-into-ab-times-c?answertab=active
# Show that there exists a bijection from $(A^B)^C$ into $A^{B \times C}$ Notation: Let A and B be sets. The set of all functions $f:A \rightarrow B$ is denoted by $B^A$. Problem: Let A, B, and C be sets. Show that there exists a bijection from $(A^B)^C$ into $A^{B \times C}$. You should first construct a function and then prove that it is a bijection. Actually this question hasn't been posted by me, but has already been answered and closed as Find a bijection from $(A^B)^C$ into $A^{B \times C}$ I don't agree, since this doesn't seen at least for me to be correct. Maybe I haven't got through the answer but in my view, the correct answer should be, following the same letters for the functions: my Answer Let $f \in (A^B)^C, g \in A^{B \times C}$. Define $\Phi: (A^B)^C \to A^{B \times C}$ by setting $$\Phi(f)(b,c) = f(c)(b)$$ This is a bijection because it has an inverse $\Psi: A^{B \times C} \to (A^B)^C$ $$\Psi(g)(c)(b) = g(b,c)$$ I would like to know if my editions to the functions really answer the question or if the previous answer Find a bijection from $(A^B)^C$ into $A^{B \times C}$ was indeed correct. Thanks. - If you want to define $\Phi: (A^B)^C \to A^{B \times C}$, then $\Phi(f)$ should be an element of $A^{B \times C}$, i.e., a function from $B\times C$ to $A$. So you plug pairs (elements of $B\times C$) into the function $\Phi(f)$. – Martin Sleziak Oct 18 '12 at 19:02 1 I am very confused as for why this question has a duplicate banner. (And generally why is it a copy-paste of math.stackexchange.com/questions/178277 for its first half) – Asaf Karagila Oct 18 '12 at 19:04 1 @user45147 I believe that it would be better if, instead of putting the identical text as the stackexchange software includes in questions which are closed, you would explain that your question is related to the other one and that are in fact asking about clarification of one point of the proof. Otherwise it looks very confusing (as Asaf mentioned in his comment). We are all used to see that banner on closed questions only. – Martin Sleziak Oct 18 '12 at 19:16 ok... i am going to clarify the answerc, and sorry, what banner do you talk about martin ? I'm new to this – user45147 Oct 18 '12 at 20:51 Ok @MartinSleziak but then i think that it should be written $f(c)(b)$ instead of $f(b)(c)$ shouldnt it ? – user45147 Oct 18 '12 at 22:31 show 1 more comment ## 1 Answer Show that there is a bijection between set $X$ and set $Y$. Let $x\in X$, $y\in Y$. Define $\Phi\colon X\to Y$ by setting $$\Phi(x)=y.$$ This is a bijection because it has an inverse $\Psi\colon Y\to X$ $$\Psi(y)=x.$$ See Bijection iff Left and Right Inverse at ProofWiki. If we show that $\Phi$ has an inverse, then $\Phi$ is bijective. This is what is done in the answer to linked question. OP is asking whether definition of $\Phi$ and $\Psi$ suggested there works. (He proposes another maps.) Of course, after we define $\Phi$ and $\Psi$, we must also show that they are inverse to each other. I certainly agree with that. – Martin Sleziak Oct 19 '12 at 6:23 so @MartinSleziak you are saying that the proof in the answer is incomplete, thats it ? Although he has defined a function from, say, x to y, and another from y to x, he has not shown that they are inverse to each other, what is left to be proved on the exercise. is that ? And about that I said above, that it should be $f(c)(b)$ instead of $f(b)(c)$ ? Thanks for your answer – user45147 Oct 19 '12 at 16:03 @user45147 I am not sure what you mean in your comment. 
Who do you mean when you say "he" in your comment? Where do you want to change $f(b)(c)$ to $f(c)(b)$? – Martin Sleziak Oct 19 '12 at 17:32
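To make the maps in the question concrete, here is a small sketch (Python; the sample sets and the test function are my own choices, purely for illustration) of $\Phi$ and $\Psi$ as currying/uncurrying, together with a finite check that they are mutually inverse:

```
from itertools import product

def Phi(f):
    """Send f in (A^B)^C to the map (b, c) -> f(c)(b) in A^(B x C)."""
    return lambda b, c: f(c)(b)

def Psi(g):
    """Send g in A^(B x C) to the map c -> (b -> g(b, c)) in (A^B)^C."""
    return lambda c: (lambda b: g(b, c))

# Tiny finite check with A = {0,1}, B = {0,1,2}, C = {0,1}.
B, C = range(3), range(2)
f = lambda c: (lambda b: (b + c) % 2)        # an element of (A^B)^C
g = Phi(f)
assert all(Psi(g)(c)(b) == f(c)(b) for b, c in product(B, C))
assert all(Phi(Psi(g))(b, c) == g(b, c) for b, c in product(B, C))
print("Phi and Psi are inverse on this finite example")
```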
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 38, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9618949294090271, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/102504/help-with-tensor-product-of-two-bit-operators-in-quantum-computing
# Help with tensor product of two bit operators (in quantum-computing) I'm having trouble understanding how to put together two one-bit operators to get a two-bit operator. For example, suppose I have two electrons in the spin state: $$\frac{1}{\sqrt{2}}(|\text{up},\text{up}\rangle+|\text{down},\text{down}\rangle)$$ If I'm understanding things correctly, then to measure the state of the first one (along the z axis) I would tensor $\sigma_z$ with the identity operator. This seems to work. My problem comes in if I want to measure the state of both of them. I would have thought that I would tensor $\sigma_z$ with itself. But when I do that I get an operator that has degenerate eigenvalues. In other words, I get an eigenvalue of $+1$ for either of the possible outcomes. I'm not sure if the whole approach is wrong, or if I'm just making a mistake somewhere. I hope this description is clear. I'm somewhat of a novice at this. Also I imagine that there might be a better existing tag for this question than "linear-algebra" but I couldn't find it. Any comments appreciated. - 1 There was a time when "two-bit" as an adjective meant cheap. Literally "two bits" was 25 cents. – Michael Hardy Jan 26 '12 at 2:25 Yeah, I was actually thinking about that when I typed it in :-) – Mike Witt Jan 26 '12 at 5:44 ## 1 Answer Measuring the state of a quantum system with an operator $A$ will project the system into an eigenspace of $A$. The operator $\sigma_z\otimes\sigma_z$ has two eigenspaces: one is spanned by $\{|\uparrow>\otimes|\uparrow>, |\downarrow>\otimes|\downarrow>\}$ and has eigenvalue $+1$, and the other is spanned by $\{|\uparrow>\otimes|\downarrow>, |\downarrow>\otimes|\uparrow>\}$ and has eigenvalue $-1$. The state vector $(1/\sqrt{2})(|\uparrow>\otimes|\uparrow>+|\downarrow>\otimes|\downarrow>)$ is already in the first of these eigenspaces, so applying $\sigma_z\otimes\sigma_z$ will leave it unchanged and give a measurement of $+1$. Looking at the spanning sets, you can see that $\sigma_z\otimes\sigma_z$ measures the exclusive OR of the two qubits. If you wanted a measurement with four outcomes, you could either apply an operator with four different eigenvalues (e.g., $(\sigma_z\otimes 1)+\frac12 (1\otimes \sigma_z)$), or perform two successive measurements, each of which has two possible outcomes (e.g., measure $\sigma_z\otimes 1$ and follow this by measuring $1\otimes \sigma_z$.) - That answers my question. Thanks! – Mike Witt Jan 26 '12 at 5:44
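A short numerical sketch of the accepted answer's computation (assuming NumPy and the usual basis convention $|\text{up}\rangle = (1,0)^T$, $|\text{down}\rangle = (0,1)^T$; the variable names are mine):

```
import numpy as np

sz = np.diag([1.0, -1.0])                    # sigma_z in the {up, down} basis
I2 = np.eye(2)
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# The state (|up,up> + |down,down>)/sqrt(2)
psi = (np.kron(up, up) + np.kron(down, down)) / np.sqrt(2)

# sigma_z (x) sigma_z leaves psi unchanged: eigenvalue +1, as in the answer
zz = np.kron(sz, sz)
print(np.allclose(zz @ psi, psi))            # True

# An operator with four distinct outcomes, as suggested at the end of the answer
A = np.kron(sz, I2) + 0.5 * np.kron(I2, sz)
print(np.linalg.eigvalsh(A))                 # [-1.5, -0.5, 0.5, 1.5]
```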
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.961945652961731, "perplexity_flag": "head"}
http://mathhelpforum.com/discrete-math/160631-stuck-proof-induction.html
# Thread: 1. ## Stuck on proof by induction Hello, I start to solve this, but got stacked. Have no idea how to move on Anybody help me ? Code: ```5+8+11+…+ (3n+2) = 1/2n(3n+7) n=1 5+8+11+…+ (3*1+2) = 1/2 *1 (3*1+7) => 5=5 n=k 5+8+11+…+ (3k+2) = 1/2 k(3k+7) n=k+1 5+8+11+…+ (3(k+1)+2) = 1/2(k+1) (3(k+1)+7) 5+8+11+…+ (3k+5) = 1/2 (k+1) (3k+10) 5+8+11+…+ (3k+2) + (3k+5) = 1/2 (k+1) (3k+10) 1/2 k(3k+7) + (3k+5) = 1/2 (k+1) (3k+10) 1/2 [3k^2 + 10k + 5] = 1/2 (k+1) (3k+10) =>``` 2. You are correct until the last line, which should be 1/2 [3k^2 + 7k + 6k + 10] = 1/2 (k+1) (3k+10) 3. Can you give me some explanation about how you came to 6k+10 ? I really don't understand how that can be ... 4. Originally Posted by Scorpy6 Can you give me some explanation about how you came to 6k+10 ? I really don't understand how that can be ... The $\frac{1}{2}$ in the LHS of the line before the last one only applies to $k(3k+7)=3k^2+7k$ , so if you want to keep it instead of $3k+5$ you must put $6k+10$... Tonio 5. Hello everybody I've got some trouble again with exercise like this one .. Code: `http://twitpic.com/4m55bq` 6. 1. Where is your base step? 2. For your inductive step, you need to work on the LHS and show that you get the RHS, you can't keep bringing down the desired RHS. Q.E.D. 7. Originally Posted by Prove It 1. Where is your base step? 2. For your inductive step, you need to work on the LHS and show that you get the RHS, you can't keep bringing down the desired RHS. Q.E.D. 1. I did not mention the base step here. 2. Guess that you did not understand me... I was working on the LHS untill I got k(k+3)(k+3)+4/4(k+1)(k+2)(k+3).... So the problem was that i didn't know how to transform k(k+3)(k+3)+4 to (k+1)(k+4), like on the RHS. But after consultations with the professor, I've learn that Thanks anyway 8. You could also simplify to remove denominators as shown in the attachment. Attached Files • Induction 2.pdf (33.4 KB, 11 views) 9. ## Re: Stuck on proof by induction Hello everybody, again me Now i'm trying to solve or prove this one: 1^2+2^2+3^2+....+n^2=n(n+1)(2n+1)/6 And I came to the third step where i got this one: 2k^3+10k^2+13k+6 on the LHS. I know that to solve this and get the same like on RHS, I need to work with division with polynoms, but even if i try to do that, still can't prove LHS=RHS. Can anybody help me somehow? Thanks 10. ## Re: Stuck on proof by induction I think it should be (2k^3+9k^2+13k+6) / 6. 11. ## Re: Stuck on proof by induction One mistake and everything is screwed up But it's great when there is someone that can fix that Thanks a lot, I solve it 12. ## Re: Stuck on proof by induction Originally Posted by Scorpy6 Hello everybody, again me Now i'm trying to solve or prove this one: 1^2+2^2+3^2+....+n^2=n(n+1)(2n+1)/6 And I came to the third step where i got this one: 2k^3+10k^2+13k+6 on the LHS. I know that to solve this and get the same like on RHS, I need to work with division with polynoms, but even if i try to do that, still can't prove LHS=RHS. Can anybody help me somehow? Thanks Base Step: $\displaystyle n = 1$ $\displaystyle \begin{align*} LHS &= 1^2 \\ \\ RHS &= \frac{1(1 + 1)(2\cdot 1 + 1)}{6} \\ &= \frac{1 \cdot 2 \cdot 3}{6} \\ &= 1 \\ &= LHS \end{align*}$ Inductive Step: Assume the statement is true for $\displaystyle n = k$, so assume $\displaystyle 1^2 + 2^2 + 3^2 + \dots + k^2 = \frac{k(k+1)(2k+1)}{6}$. Now we need to prove the statement is true for $\displaystyle n = k + 1$, i.e. 
show that $\displaystyle 1^2 + 2^2 + 3^2 + \dots + k^2 + (k + 1)^2 = \frac{(k+1)(k + 2)(2k + 3)}{6}$. $\displaystyle \begin{align*} LHS &= 1^2 + 2^2 + 3^2 + \dots + k^2 + (k + 1)^2 \\ &= \frac{k(k + 1)(2k + 1)}{6} + (k + 1)^2 \\ &= \frac{k(k + 1)(2k + 1) + 6(k + 1)^2}{6} \\ &= \frac{(k + 1)\left[k(2k + 1) + 6(k + 1)\right]}{6} \\ &= \frac{(k + 1)(2k^2 + k + 6k + 6)}{6} \\ &= \frac{(k + 1)(2k^2 + 7k + 6)}{6} \\ &= \frac{(k+1)(2k^2 + 4k + 3k + 6)}{6} \\ &= \frac{(k + 1)[2k(k + 2) + 3(k + 2)]}{6} \\ &= \frac{(k + 1)(k + 2)(2k + 3)}{6} \\ &= RHS \end{align*}$ Q.E.D. 13. ## Re: Stuck on proof by induction Or... Show that $1^2+2^2+....+k^2+(k+1)^2=\frac{(k+1)(k+2)(2k+3)}{6 }$ if $1^2+2^2+...+k^2=\frac{k(k+1)(2k+1)}{6}$ Then $\frac{k(k+1)(2k+1)}{6}+\frac{6(k+1)^2}{6}=\frac{(k +1)(k+2)(2k+3)}{6}\;\;\;?$ $[k+1][k(2k+1)+6(k+1)]=[k+1][(2k+3)(k+2)]\;\;\;?$ $k(2k+1)+6(k+1)=(2k+3)(k+2)\;\;\;?$ $2k^2+k+6k+6=2k^2+4k+3k+6\;\;\;?$
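An induction proof is what these exercises ask for, but a quick numerical sanity check of both closed forms discussed in this thread can catch algebra slips like the ones above (Python sketch; not a substitute for the proof):

```
def check(n_max=50):
    for n in range(1, n_max + 1):
        s1 = sum(3*k + 2 for k in range(1, n + 1))       # 5 + 8 + ... + (3n+2)
        assert 2 * s1 == n * (3*n + 7)                   # n(3n+7)/2, kept integral
        s2 = sum(k*k for k in range(1, n + 1))           # 1^2 + 2^2 + ... + n^2
        assert 6 * s2 == n * (n + 1) * (2*n + 1)         # n(n+1)(2n+1)/6
    print("both formulas hold for n = 1 ..", n_max)

check()
```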
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 17, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9465665221214294, "perplexity_flag": "middle"}
http://crypto.stackexchange.com/questions/6023/is-it-safe-to-use-rsa-as-a-proof-of-work-system/6024
# Is it safe to use RSA as a proof-of-work system? Suppose I devised the following challenge-response proof-of-work system: A server generates a 2048-bit RSA modulus, and uses the "public" exponent (usually 65537) to sign a random nonce a fixed number of times, say, 256. The server then sends the signed nonce to the client, along with the "private" exponent (which is much larger) and modulus, asking it to un-sign it using the private key 256 times, sending the answer back for the server to verify. Because the private operation takes about 300 times longer at 2048 bits, the client will take longer to un-sign the key than the server took to sign it. Is this usable/secure? Or does the generation of the modulus outweigh any gains in the difference in signing time? If so, how could I make it usable, if at all? - "uses the public exponent to sign" - You lost me. In RSA, signing always uses the private exponent, not the public exponent, so I'm not sure what you are referring to here. Can you be more precise about exactly what the process is? Are you calculating $((r^d)^d)^{...} = r^{d^{256}} \bmod n$, where $r$ is the random nonce and $d$ is the private exponent? – D.W. Jan 19 at 8:28 I should also ask: Why do you want to use RSA in particular, as opposed to any of the many other proof-of-work systems? It is possible there might be other, even better ways to achieve your requirements, depending upon what your requirements may be. I don't know if that interests you or not. – D.W. Jan 19 at 8:32 @D.W. This is a rather hypothetical question. Any public-key algorithm should work. – Joe Z. Jan 19 at 14:54 I think this scheme is broken; see my answer for details. – D.W. Jan 19 at 20:51 ## 2 Answers Edit: I just realized that the scheme that was proposed has serious security problems, and should not be used. The question is rather confused and ambiguous, but there seems to be a serious security problem. If I understand the proposal correctly, you are giving the client the private exponent $d$ and the modulus $n$ and the iteration count $k$ (say, $k=256$) and a challenge $a$, and it is the client's job to respond with $$\textrm{proof} = a^{d^k} \bmod n.$$ This is not secure. Once the client knows $d$ and $n$, it is trivial to factor $n$ into $n=pq$. Thus the client learns the primes $p$ and $q$. Once the client knows this factorization, the scheme falls apart, as explained below. In particular, the client can use the Chinese remainder theorem and Fermat's little theorem to greatly speed up the computation of the proof. First, the client works out the value of $\textrm{proof} \bmod p$, as follows: $$\textrm{proof} \equiv a^{d^k} \equiv a^{d'} \pmod p,$$ where $d' = d^k \bmod p-1$. Note that $d'$ can be computed efficiently: first we reduce $d$ modulo $p$, then we compute $d'$ using square-and-multiply (or another efficient exponentiation algorithm). Through a similar process, the client can recover the value of $\textrm{proof} \bmod q$. Finally, the client can combine these values using the Chinese remainder theorem to recover the value of $\textrm{proof} \bmod n$ and send it to the server. How much work does the client have to do? If $k=256$ and $e=3$, the client does 8 squarings modulo $p-1$, 8 squarings modulo $q-1$, one modular exponentiation modulo $p$, one modular exponentiation modulo $q$, and then a Chinese remainder computation (which is very fast and basically requires one or two extended Euclidean algorithm computations on $\gcd(p,q)$). 
This is not much more work than the server has to do to create the challenge. In summary: the client can solve the puzzle using not much more computation than it takes the server to create the puzzle in the first place. There are shortcut methods that the client can use to greatly speed up its computation, and as a result, the server has little or no advantage over the client. This is not a good proof-of-work system. There are other ways to build proof-of-work schemes other than using RSA. Some of them are secure, and do give the server a major advantage over the client. For example, one standard scheme is to give the client a random 20-bit string $r$, and ask the client to respond with a value $x$ such that the first 20 bits of $\textrm{SHA256}(x)$ are equal to $r$. It will take the client about $2^{20}$ steps of computation to find such an $r$, but the server can check the client's answer very quickly -- merely by hashing a single value -- which is much faster than RSA. Therefore, you should avoid the RSA scheme described in this question. Instead, use other proof-of-work schemes that have been more carefully vetted. - "However, I wanted to point out that there are other ways to build proof-of-work schemes other than using RSA. Some of them may be superior to RSA-based schemes." I know; I was just wondering whether that thing worked or not. – Joe Z. Jan 19 at 20:26 re: last edit, in fact I suspect the client can compute the proof asymptotically just as fast as the server can as $k$ increases, making the scheme useless. I will update my answer as well as it is now misleading. – Thomas Jan 19 at 21:02 Is this usable? Or does the generation of the modulus outweigh any gains in the difference in signing time? No, it does not work. The generation cost itself isn't too important, because signing any one nonce doesn't help the client sign any other nonce, so you can just use a single key all the time (if you had to create a new keypair for every puzzle, this would be very impractical and nobody would use it). But the problem is that the advantage of the server on the client is asymptotically more or less zero, so this is quite useless as a proof of work scheme. See D.W.'s answer for one way of breaking the scheme. Another way similar to D.W.'s is to realize that your scheme requires the client to compute, given a random nonce $a$ and a work factor $k$, modulus $n$ and private exponent $d$: $$\text{proof} = ((a^d)^d)^{\cdots ~ k ~ \text{times}} ~ \text{mod} ~ n$$ And this is equal to: $$\text{proof} = a^{d^k} ~ \text{mod} ~ n$$ But because the client knows $d$ (you gave it to him!) he can factorize $n$ in constant time and compute $\varphi{(n)} = (p - 1)(q - 1)$, which gives him the ability to reduce the inner exponent: $$\text{proof} = a^{d^k ~ \text{mod} ~ \varphi{(n)} } ~ \text{mod} ~ n$$ The client can calculate this in $\log{k}$ modular multiplications or squarings by $d$, and $\log{n}$ (on average) modular multiplications or squarings by $a$. But the server has to do pretty much the same amount of work - the only difference is it saves some time because $e$ is smaller than $d$. The gain is minimal and very difficult to control, since there's a point where you just can't make $e$ any smaller and need to increase $n$ to compensate. In conclusion, this is not a good proof-of-work scheme, as it provides basically no control over the work factor desired and the server will need to work about just as much as the client for realistic values of $n$ and $k$. 
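A toy illustration of the shortcut just described, with deliberately tiny primes chosen only for the example (a real modulus would be 2048 bits or more); it shows the $k$ repeated private-key operations collapsing to a single modular exponentiation once $\varphi(n)$ is known. Python 3.8+ is assumed for the modular inverse `pow(e, -1, phi)`:

```
# Toy parameters only; a real modulus would be 2048+ bits.
p, q = 1009, 1013
n, phi = p * q, (p - 1) * (q - 1)
e = 65537
d = pow(e, -1, phi)                  # private exponent (Python 3.8+)
a, k = 123456, 256                   # nonce and number of private-key operations

# Naive client: apply the private-key operation k times in a row.
naive = a
for _ in range(k):
    naive = pow(naive, d, n)

# Client who knows d (and hence p, q, phi(n)): collapse the stacked exponent.
shortcut = pow(a, pow(d, k, phi), n)

print(naive == shortcut)             # True: the k-fold puzzle costs two pow() calls
```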
If so, how could I make it usable, if at all? However, it's possible to take your scheme a step further to make it cleaner (and not depend on RSA). The idea is to choose a "difficulty factor" $t$, a random number $a$, a large $n = pq$, and ask the client to compute: $$\text{proof} = a^{2^t} ~ \text{mod} ~ n$$ The client will require $t$ squarings to achieve this (there is no known way to do better, and it cannot be parallelized to any significant degree), while the server, knowing the factorization of $n$ (since it generated it) can just compute: $$e = 2^t ~ \text{mod} ~ \varphi{(n)}$$ And hence, from Euler's Theorem: $$\text{proof} = a^e ~ \text{mod} ~ n$$ Which the server can then compute in logarithmic time. In effect, the server only needs $O(\log{nt})$ time, but the client needs $O(t)$ time. By making $t$ arbitrarily large, you can fine-tune the amount of work you want the client to do, linearly, but the work stays logarithmic for the server, which is practical. The generation cost is also irrelevant, because the server can generate it once and reuse it all the time with different values of $a$ (that's not always true, but in this case it is - the attacker can't use the results of a previous proof of work to quickly solve another assuming $a$ is random and $n$ is large). Note your scheme doesn't care if the client can factorize $n$ since you are basically giving away $d$, but in this scheme it is imperative that $n$ be large enough so that factorization is not possible. But this is easy to address, just create a 2048-bit modulus (or 4096-bit if you are paranoid) and be done with it. For more details and supporting theory on this method, see: Time-lock puzzles and timed-release Crypto, Ron Rivest, Adi Shamir, David Wagner. Puzzle description for the MIT time capsule. - Another option is to have the server generate two primes and give the client the product of the two primes. The client must factor the product and return the smaller prime. – David Schwartz Jan 19 at 6:36 1 @DavidSchwartz Your proof of work scheme can be parallelized. – Thomas Jan 19 at 6:45 In the vast majority of realistic applications, any proof of work system can be parallelized simply by requesting many challenges from the server. – David Schwartz Jan 19 at 8:06 1 There are some math errors in here: $((a^d)^d)^{\cdots}$ is not the same as $a^{kd}$. Rather, it is $a^{d^k}$. – D.W. Jan 19 at 8:31 @D.W. Fixed, thank you. – Thomas Jan 19 at 8:43 show 1 more comment
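A minimal sketch of the Rivest-Shamir-Wagner time-lock construction described in this answer, again with toy parameters that are far too small to be secure (the specific primes, nonce and work factor $t$ are placeholders for illustration):

```
p, q = 10007, 10009                       # toy primes; real n is 2048+ bits
n, phi = p * q, (p - 1) * (q - 1)
a = 123456789 % n                         # the value being repeatedly squared
t = 200_000                               # work factor: number of sequential squarings

# Server shortcut: reduce the exponent 2^t modulo phi(n), then a single pow().
expected = pow(a, pow(2, t, phi), n)

# Client: t sequential modular squarings; no known shortcut without the factorization.
x = a
for _ in range(t):
    x = (x * x) % n

print(x == expected)                      # True
```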
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 70, "mathjax_display_tex": 8, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9388567209243774, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/6810/seifert-surfaces-of-torus-knots/6815
## Seifert surfaces of torus knots ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Does anyone know a nice description of a Seifert surface of a torus knot? I can construct such surfaces in band projection, but what I get is ugly and unwieldy. Is there some elegant description for Seifert surfaces for such knots which I'm missing? (I'm not sure precisely what I mean by elegant...) - ## 3 Answers There's the usual description of the Seifert surface for a general cable obtained by taking copies of a Seifert surface for the knot and a fiber for the cable in the solid torus. See Ken Baker's discussion. - ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. This picture--for a (7,2) torus knot--shows a geometric pattern you can extend to any (n,2) torus knot. The image is part of Figure 18 in the visually rich paper by Jarke van Wijk and Arjeh Cohen: "Visualization of the Genus of Knots." Proceedings IEEE Visualization 2005. You can even download the SeifertView software they used to generate the picture. - Beautiful! Gets my +1. – Joel Fine Nov 25 2009 at 16:20 Torus knot complements fiber over $S^1$. So the minimal Seifert surface for a $(p,q)$-torus knot is a once punctured surface of genus $\frac{(p-1)(q-1)}{2}$. You get it as the Milnor fibre of the map from $\mathbb C^2 \to \mathbb C$ given by $f(z_1,z_2)=z_1^p-z_2^q$. That's pretty elegant to me. The monodromy is an automorphism of the surface of order $pq$, it is a free action except on two orbits -- one orbit has $p$ elements, the other orbit has $q$ elements. These details are mostly in Milnor's "Singular points of complex hypersurfaces", also Eisenbud and Neumann's "Three-dimensional link theory and invariants of plane curve singularities". I also have a sketch of it in my JSJ-decompositions paper, on the arXiv. I got the idea for this computation by fleshing out an example of Paul Norbury's (from Walter Neumann's canonical decompositions paper on his webpage). I'd like to add, Eisenbud and Neumann describe the Seifert surfaces of all knots whose complements are graph manifolds in this way. Well, the ones that fibre. They also characterise the knots with graph manifold complements that fiber over S^1. edit: alternatively you could construct the Seifert surface and monodromy from the Seifert-fiber data, as I sketch in this thread: http://mathoverflow.net/questions/7746/periodic-mapping-classes-of-the-genus-two-orientable-surface/7747#7747 -
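Tying the two answers together with the example pictured above: for the $(7,2)$ torus knot the genus formula from the fibration gives $g = \frac{(p-1)(q-1)}{2} = \frac{(7-1)(2-1)}{2} = 3$, so the Seifert surface in that figure is a once-punctured genus-3 surface, and the monodromy of the fibration has order $pq = 14$.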
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8903761506080627, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Hodge_conjecture
# Hodge conjecture The Hodge conjecture is a major unsolved problem in algebraic geometry which relates the algebraic topology of a non-singular complex algebraic variety and the subvarieties of that variety. More specifically, the conjecture says that certain de Rham cohomology classes are algebraic, that is, they are sums of Poincaré duals of the homology classes of subvarieties. It was formulated by the Scottish mathematician William Vallance Douglas Hodge as a result of a work in between 1930 and 1940 to enrich the description of De Rham cohomology to include extra structure which is present in the case of complex algebraic varieties. It received little attention before Hodge presented it in an address during the 1950 International Congress of Mathematicians, held in Cambridge, Massachusetts, U.S. The Hodge conjecture is one of the Clay Mathematics Institute's Millennium Prize Problems, with a prize of \$1,000,000 for whoever can prove or disprove the Hodge conjecture using "some argument". ## Motivation Let X be a compact complex manifold of complex dimension n. Then X is an orientable smooth manifold of real dimension 2n, so its cohomology groups lie in degrees zero through 2n. Assume X is a Kähler manifold, so that there is a decomposition on its cohomology with complex coefficients: $H^k(X, \mathbf{C}) = \bigoplus_{p+q=k} H^{p,q}(X),\,$ where Hp, q(X) is the subgroup of cohomology classes which are represented by harmonic forms of type (p, q). That is, these are the cohomology classes represented by differential forms which, in some choice of local coordinates z1, ..., zn, can be written as a harmonic function times $dz_{i_1} \wedge \cdots \wedge dz_{i_p} \wedge d\bar z_{j_1} \wedge \cdots \wedge d\bar z_{j_q}$. (See Hodge theory for more details.) Taking wedge products of these harmonic representatives corresponds to the cup product in cohomology, so the cup product is compatible with the Hodge decomposition: $\cup : H^{p,q}(X) \times H^{p',q'}(X) \rightarrow H^{p+p',q+q'}(X).\,$ Since X is a compact oriented manifold, X has a fundamental class. Let Z be a complex submanifold of X of dimension k, and let i : Z → X be the inclusion map. Choose a differential form α of type (p, q). We can integrate α over Z: $\int_Z i^*\alpha.\!\,$ To evaluate this integral, choose a point of Z and call it 0. Around 0, we can choose local coordinates z1, ..., zn on X such that Z is just zk + 1 = ... = zn = 0. If p > k, then α must contain some dzi where zi pulls back to zero on Z. The same is true if q > k. Consequently, this integral is zero if (p, q) ≠ (k, k). More abstractly, the integral can be written as the cap product of the homology class of Z and the cohomology class represented by α. By Poincaré duality, the homology class of Z is dual to a cohomology class which we will call [Z], and the cap product can be computed by taking the cup product of [Z] and α and capping with the fundamental class of X. Because [Z] is a cohomology class, it has a Hodge decomposition. By the computation we did above, if we cup this class with any class of type (p, q) ≠ (k, k), then we get zero. Because H2n(X, C) = Hn, n(X), we conclude that [Z] must lie in Hn-k, n-k(X, C). Loosely speaking, the Hodge conjecture asks: Which cohomology classes in Hk, k(X) come from complex subvarieties Z? ## Statement of the Hodge conjecture Let: $\operatorname{Hdg}^k(X) = H^{2k}(X, \mathbf{Q}) \cap H^{k,k}(X).$ We call this the group of Hodge classes of degree 2k on X. 
The modern statement of the Hodge conjecture is: Hodge conjecture. Let X be a projective complex manifold. Then every Hodge class on X is a linear combination with rational coefficients of the cohomology classes of complex subvarieties of X. A projective complex manifold is a complex manifold which can be embedded in complex projective space. Because projective space carries a Kähler metric, the Fubini–Study metric, such a manifold is always a Kähler manifold. By Chow's theorem, a projective complex manifold is also a smooth projective algebraic variety, that is, it is the zero set of a collection of homogenous polynomials. ### Reformulation in terms of algebraic cycles Another way of phrasing the Hodge conjecture involves the idea of an algebraic cycle. An algebraic cycle on X is a formal combination of subvarieties of X, that is, it is something of the form: $\sum_i c_iZ_i.\,$ The coefficients are usually taken to be integral or rational. We define the cohomology class of an algebraic cycle to be the sum of the cohomology classes of its components. This is an example of the cycle class map of de Rham cohomology, see Weil cohomology. For example, the cohomology class of the above cycle would be: $\sum_i c_i[Z_i].\,$ Such a cohomology class is called algebraic. With this notation, the Hodge conjecture becomes: Let X be a projective complex manifold. Then every Hodge class on X is algebraic. The assumption in the Hodge conjecture that X be algebraic (projective complex manifold) cannot be weakened. In 1977 Zucker showed that it is possible to construct a counterexample to the Hodge conjecture as complex tori with analytic rational cohomology of type (p,p), which is not projective algebraic. (see the appendix B: in Zucker (1977)) ## Known cases of the Hodge conjecture ### Low dimension and codimension The first result on the Hodge conjecture is due to Lefschetz (1924). In fact, it predates the conjecture and provided some of Hodge's motivation. Theorem (Lefschetz theorem on (1,1)-classes) Any element of H2(X, Z) ∩ H1,1(X) is the cohomology class of a divisor on X. In particular, the Hodge conjecture is true for H2. A very quick proof can be given using sheaf cohomology and the exponential exact sequence. (The cohomology class of a divisor turns out to equal to its first Chern class.) Lefschetz's original proof proceeded by normal functions, which were introduced by Henri Poincaré. However, Griffiths transversality theorem shows that this approach cannot prove the Hodge conjecture for higher codimensional subvarieties. By the Hard Lefschetz theorem, one can prove: Theorem. If the Hodge conjecture holds for Hodge classes of degree p, p < n, then the Hodge conjecture holds for Hodge classes of degree 2n − p. Combining the above two theorems implies that Hodge conjecture is true for Hodge classes of degree 2n − 2. This proves the Hodge conjecture when X has dimension at most three. The Lefschetz theorem on (1,1)-classes also implies that if all Hodge classes are generated by the Hodge classes of divisors, then the Hodge conjecture is true: Corollary. If the algebra $\operatorname{Hdg}^*(X) = \sum_k \operatorname{Hdg}^k(X)\,$ is generated by Hdg1(X), then the Hodge conjecture holds for X. ### Abelian varieties For most abelian varieties, the algebra Hdg*(X) is generated in degree one, so the Hodge conjecture holds. In particular, the Hodge conjecture holds for sufficiently general abelian varieties, for products of elliptic curves, and for simple abelian varieties[citation needed]. 
However, Mumford (1969) constructed an example of an abelian variety where Hdg2(X) is not generated by products of divisor classes. Weil (1977) generalized this example by showing that whenever the variety has complex multiplication by an imaginary quadratic field, then Hdg2(X) is not generated by products of divisor classes. Moonen & Zarhin (1999) proved that in dimension less than 5, either Hdg*(X) is generated in degree one, or the variety has complex multiplication by an imaginary quadratic field. In the latter case, the Hodge conjecture is only known in special cases. ## Generalizations ### The integral Hodge conjecture Hodge's original conjecture was: Integral Hodge conjecture. Let X be a projective complex manifold. Then every cohomology class in H2k(X, Z) ∩ Hk, k(X) is the cohomology class of an algebraic cycle with integral coefficients on X. This is now known to be false. The first counterexample was constructed by Atiyah & Hirzebruch (1961). Using K-theory, they constructed an example of a torsion Hodge class, that is, a Hodge class α such that for some positive integer n, n α = 0. Such a cohomology class cannot be the class of a cycle. Totaro (1997) reinterpreted their result in the framework of cobordism and found many examples of torsion classes. The simplest adjustment of the integral Hodge conjecture is: Integral Hodge conjecture modulo torsion. Let X be a projective complex manifold. Then every non-torsion cohomology class in H2k(X, Z) ∩ Hk,k(X) is the cohomology class of an algebraic cycle with integral coefficients on X. This is also false. Kollár (1992) found an example of a Hodge class α which is not algebraic, but which has an integral multiple which is algebraic. ### The Hodge conjecture for Kähler varieties A natural generalization of the Hodge conjecture would ask: Hodge conjecture for Kähler varieties, naive version. Let X be a complex Kähler manifold. Then every Hodge class on X is a linear combination with rational coefficients of the cohomology classes of complex subvarieties of X. This is too optimistic, because there are not enough subvarieties to make this work. A possible substitute is to ask instead one of the two following questions: Hodge conjecture for Kähler varieties, vector bundle version. Let X be a complex Kähler manifold. Then every Hodge class on X is a linear combination with rational coefficients of Chern classes of vector bundles on X. Hodge conjecture for Kähler varieties, coherent sheaf version. Let X be a complex Kähler manifold. Then every Hodge class on X is a linear combination with rational coefficients of Chern classes of coherent sheaves on X. Voisin (2002) proved that the Chern classes of coherent sheaves give strictly more Hodge classes than the Chern classes of vector bundles and that the Chern classes of coherent sheaves are insufficient to generate all the Hodge classes. Consequently, the only known formulations of the Hodge conjecture for Kähler varieties are false. ### The generalized Hodge conjecture Hodge made an additional, stronger conjecture than the integral Hodge conjecture. Say that a cohomology class on X is of level c if it is the pushforward of a cohomology class on a c-codimensional subvariety of X. 
The cohomology classes of level at least c filter the cohomology of X, and it is easy to see that the cth step of the filtration Nc Hk(X, Z) satisfies $N^cH^k(X, \mathbf{Z}) \subseteq H^k(X, \mathbf{Z}) \cap (H^{k-c,c}(X) \oplus\cdots\oplus H^{c,k-c}(X)).$ Hodge's original statement was: Generalized Hodge conjecture, Hodge's version. $N^cH^k(X, \mathbf{Z}) = H^k(X, \mathbf{Z}) \cap (H^{k-c,c}(X) \oplus\cdots\oplus H^{c,k-c}(X)).$ Grothendieck (1969) observed that this cannot be true, even with rational coefficients, because the right-hand side is not always a Hodge structure. His corrected form of the Hodge conjecture is: Generalized Hodge conjecture. Nc Hk(X, Q) is the largest sub-Hodge structure of Hk(X, Z) contained in $H^{k-c,c}(X) \oplus\cdots\oplus H^{c,k-c}(X).$ This version is open. ## Algebraicity of Hodge loci The strongest evidence in favor of the Hodge conjecture is the algebraicity result of Cattani, Deligne & Kaplan (1995). Suppose that we vary the complex structure of X over a simply connected base. Then the topological cohomology of X does not change, but the Hodge decomposition does change. It is known that if the Hodge conjecture is true, then the locus of all points on the base where the cohomology of a fiber is a Hodge class is in fact an algebraic subset, that is, it is cut out by polynomial equations. Cattani, Deligne & Kaplan (1995) proved that this is always true, without assuming the Hodge conjecture. ## References • Atiyah, M. F.; Hirzebruch, F. (1961), "Vector bundles and homogeneous spaces", Proc. Sympos. Pure Math. 3: 7–38 • Cattani, Eduardo; Deligne, Pierre; Kaplan, Aroldo (1995), "On the locus of Hodge classes", 8 (2): 483–506, doi:10.2307/2152824, JSTOR 2152824, MR 1273413 . • Grothendieck, A. (1969), "Hodge's general conjecture is false for trivial reasons", 8 (3): 299–303, doi:10.1016/0040-9383(69)90016-0 . • Hodge, W. V. D. (1950), "The topological invariants of algebraic varieties", Proceedings of the International Congress of Mathematicians (Cambridge, MA) 1: 181–192 . • Kollár, János (1992), "Trento examples", in Ballico, E.; Catanese, F.; Ciliberto, C., Classification of irregular varieties, Lecture Notes in Math. 1515, Springer, p. 134, ISBN 3-540-55295-2 . • Lefschetz, Solomon (1924), L'Analysis situs et la géométrie algébrique, Collection de Monographies publiée sous la Direction de M. Emile Borel (in French), Paris: Gauthier-Villars  Reprinted in Lefschetz, Solomon (1971), Selected papers, New York: Chelsea Publishing Co., ISBN 978-0-8284-0234-7, MR 0299447 . • Moonen, B. J. J.; Zarhin, Yu. G. (1999), "Hodge classes on abelian varieties of low dimension", 315 (4): 711–733, arXiv:math/9901113, doi:10.1007/s002080050333 . • Mumford, D. (1969), "A Note of Shimura's paper "Discontinuous groups and abelian varieties"", 181 (4): 345–351, doi:10.1007/BF01350672 . • Totaro, B. (1997), "Torsion algebraic cycles and complex cobordism", Journal of the American Mathematical Society 10 (2): 467–493, arXiv:alg-geom/9609016, doi:10.1090/S0894-0347-97-00232-4, JSTOR 2152859 . • Voisin, Claire (2002), "A counterexample to the Hodge conjecture extended to Kähler varieties", Int Math Res Notices 2002 (20): 1057–1075, doi:10.1155/S1073792802111135 . • Weil, A. (1977), "Abelian varieties and the Hodge ring", Collected papers III: 421–429 • Zucker, S. (1977), "The Hodge conjecture for cubic fourfolds", Comp. Math 34: 199–209  http://archive.numdam.org/ARCHIVE/CM/CM_1977__34_2/CM_1977__34_2_199_0/CM_1977__34_2_199_0.pdf
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 11, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8562259078025818, "perplexity_flag": "head"}
http://mathoverflow.net/questions/121205/periodicity-of-a-specific-non-linear-ode-of-second-order
## Periodicity of a specific non-linear ODE of second order ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Consider the second-order ODE: $$\ddot{x} + x+x^2=0,$$ here $\ddot{x}$ is the second derivative w.r.t. $t$. Take initial values $x(0)=0.5$ and $\dot{x}(0)=0.$ Question: is the solution periodic or not? Comment: Numerical experiments seems to show that the solution is periodic when $x(0)<0.5$ and if $x(0)>0.5$ then the solution fails to be periodic, in fact, $x(t)\rightarrow -\infty.$ (Asked by Prof. J.E. Björk, Stockholm Univ.) - 1 Since solution trajectories are level sets of the Hamiltonian $H(x,\dot{x}) = \frac{1}{2}\dot{x}^2 + \frac{1}{2}x^2 + \frac{1}{3}x^3$, are you asking about the level set of $H$ running through (.5,0)? – Aaron Hoffman Feb 8 at 16:04 Aaron Hoffman: Yes, that is correct. – Per Alexandersson Feb 9 at 9:37 ## 1 Answer As Aaron Hoffman pointed out, all trajectories lie on the level lines of $p^2+x^2+(2/3)x^3=c$. The LHS has two critical points: the local minimum $(0,0)$ with critical value $c=0$, and the saddle $(-1,0)$ with critical value $c=1/3$. The behavior for large $(x,p)$ is also clear. So it is easy to sketch all these level lines, and the conclusion is that the trajectory starting from $(x_0,0)$ is bounded if and only if $x_0^2+(2/3)x_0^3\leq 1/3$, and closed when this inequality is strict. For $x_0=0.5$ we obtain that the trajectory is not really closed, but tends to $(-1,0)$ as time goes to infinity. You cannot detect this on computer because the point $(-1,0)$ is unstable. It takes infinite time to approach it on the trajectory, but once you miss, no matter how little, you will be either on a closed trajectory or escape to infty. And it will take you long time to find out, if you miss very little. EDIT. By the way, this system is called the classical anharmonic oscillator. An explicit solution in elliptic functions exists, but physicists prefer to consider perturbative expansions. See, for example, L. Landau and E. Lifshitz, Mechanics (Course of Theoretical Physics, vol. I). - Thank you! J.E Bjork tells me he has some ideas as well on how to find the length of the period for initial values <0.5... – Per Alexandersson Feb 11 at 9:24 Finding the length of the period is simple. You write your equation as $(dx/dt)^2=P(x)$. This is separable, and the period $T=\int dx/\sqrt{P},$ where the integration is over the closed trajectory. This is an elliptic integral; it can be brought to a standard form, and this gives an explicit answer. – Alexandre Eremenko Feb 11 at 14:30
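A short numerical sketch of the criterion in the answer and of the period integral from the final comment (Python with SciPy; the variable names and sample initial values are mine):

```
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Invariant from the answer: p^2 + x^2 + (2/3)x^3 = c, with p = dx/dt
V = lambda x: x**2 + (2/3) * x**3

# x0 = 0.5 lands exactly on the critical level c = 1/3 (the separatrix):
print(V(0.5))            # 0.3333... = 1/3, so that orbit limits to (-1, 0), not periodic

# For 0 < x0 < 0.5 the orbit is closed; T = 2 * integral of dx / sqrt(c - V(x))
def period(x0):
    c = V(x0)
    x_min = brentq(lambda x: V(x) - c, -1.0, 0.0)   # other turning point, in (-1, 0)
    # quad copes with the integrable 1/sqrt endpoint singularities
    T, _ = quad(lambda x: 2.0 / np.sqrt(c - V(x)), x_min, x0)
    return T

print(period(0.10))   # close to 2*pi ~ 6.28 (small oscillations)
print(period(0.45))   # longer period as the orbit approaches the separatrix
```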
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9142199754714966, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/70399/list
## Return to Answer 3 added proof that the probabilities form a dense set EDIT to add some details:Given a universal TM $M$ with tape alphabet $A$, and given a subinterval of $[0,1]$, choose an integer $n$ so large that your given interval includes one of the form $[k/|A|^n,(k+1)/|A|^n]$. Let $S$ be a set of $k$ words of length $n$ over $A$, and let $w$ be another such word that is not in $S$. Modify $M$ to $M'$ that works as follows. If the first $n$ symbols on the tape are a word from $S$, then march to the right forever, ignoring everything else. If they are the word $w$, then simulate $M$ on the remainder of the tape (the part after $w$), moving any final answer into the right location, as in my previous edit. Finally, if the word consisting of the tape's first $n$ letters is neither in $w$ nor in $S$, then halt immediately. Then the probability that $M'$ moves infinitely to the right will be at least $k/|A|^n$ (the probability that the initial $n$-word on the tape is in $S$) and at most $(k+1)/|A|^n$ (the probability that this $n$-word is either $w$ or in $S$) and therefore within the originally given interval. 2 added 1209 characters in body EDIT to take into account the revision of the question:Given a universal TM, you can make trivial modifications that maintain universality but change the probability $p$ of going infinitely far to the right. For example, modify your original machine $M$ to an $M'$ that works like this: If the first symbol $x$ on the program tape is 0, then halt immediately; otherwise, move one step to the right and work like $M$ on the program minus the initial symbol $x$ (and, just to guarantee universality, if the computation halts, go back to $x$, erase it, and move $M$'s answer one step to the left so that it's located where answers should be). That modification decreases the probability $p$. You can increase $p$ by having an initial 0 in the program trigger a race to the right by $M'$ --- it just keeps marching to the right regardless of what symbols it sees. You can achieve some control over the amount by which $p$ increases or decreases by having the modification $M'$ begin by checking more than just one symbol at the beginning of the program. As far as I can tell, such modifications, carried out with enough care (which I don't have time for just now) should give you a dense set of $p$'s. 1 As far as I can see, if you consider a single TM, then you get only one specific probability, not a dense set, whereas if you let the TM vary then $\Omega$ will vary also, and the set of probabilities will contain all rational numbers in $[0,1]$ (and some other numbers too). If you fix the number of symbols but let the TM vary, it's not so clear that you'll get all the rationals, but you'll still get a dense set.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 45, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9333786368370056, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/194585/integration-of-int-sqrtx-sqrt2x-dx-and-int-3x-ex-dx?answertab=oldest
# Integration of $\int \sqrt{x\sqrt{2x}} dx$ and $\int 3^x e^x dx$ I am trying to evaluate these two integrals, but I don't know how to proceed: i) $\int \sqrt{x\sqrt{2x}} dx = \int {2^{\frac{1}{4}}\cdot x^{\frac{3}{4}}}\,dx$ ii) $\int 3^x e^x dx$ What's the best way to evaluate them? Substitution or integration by parts? Any hints are appreciated. - Should ii) be as in the title or as in the question? – mrf Sep 12 '12 at 7:53 as in the title, I edited it, thanks. – ulead86 Sep 12 '12 at 7:58 2 You don't solve integrals, you evaluate them. – Stefan Smith Sep 12 '12 at 14:03 ## 1 Answer You’ll do better with the first one if you correct the algebra: $$\sqrt{x\sqrt{2x}}=\left(x(2x)^{1/2}\right)^{1/2}=\left(x\cdot2^{1/2}x^{1/2}\right)^{1/2}=\left(2^{1/2}x^{3/2}\right)^{1/2}=2^{1/4}x^{3/4}\;.$$ Now you have $\displaystyle\int2^{1/4}x^{3/4}~dx=2^{1/4}\int x^{3/4}~dx$, which is just a power rule integration. In the second problem, use the fact that $3^xe^x=(3e)^x$; I’m sure that you’ve been shown how to integrate $a^x$ for a constant $a$. You don’t need any special techniques for either of them. - Thanks a lot, I edited the mistake. And yes, I know how to integrate $a^x$. Thanks for the answer. – ulead86 Sep 12 '12 at 7:59
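If SymPy is available, a quick symbolic check of both reductions in the answer looks like the following sketch (the printed antiderivatives may differ by an additive constant or in how terms are ordered):

```
import sympy as sp

x = sp.symbols('x', positive=True)

# the answer's algebra: sqrt(x*sqrt(2x)) = 2**(1/4) * x**(3/4) for x > 0
f1 = sp.Rational(2)**sp.Rational(1, 4) * x**sp.Rational(3, 4)
f2 = 3**x * sp.exp(x)                          # = (3e)**x

print(sp.integrate(f1, x))    # a constant multiple of x**(7/4), by the power rule
print(sp.integrate(f2, x))    # 3**x*exp(x)/(1 + log(3)), i.e. (3e)**x / ln(3e)

# check the simplification step itself: should print 0
print(sp.simplify(sp.sqrt(x * sp.sqrt(2 * x)) - f1))
```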
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8826068639755249, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/48085/not-universal-operator-and-computational-basis/48087
# NOT Universal Operator and Computational basis This is the relationship between density operator and Bloch vector: $$\rho= \frac{1}{2}({\bf{\hat{1}}}+{\bf{b}}.\boldsymbol{\hat{\sigma}})$$ We define the NOT Universal Operator in the following way: $$U: {\bf{b}}\to -{\bf{b}}/3$$ My question is - How does NOT Universal Operator act on the elements of the computational basis: $|0\rangle, |1\rangle$? - 1 – twistor59 Jan 1 at 17:06 1 I put some bold symbols in your question because the quantities involved are vectors. Can I ask you to confirm 1) the bold is OK, and that 2) you really meant to have the /3? I thought UNOT just mapped antipodally, so there should be no -3 there. – twistor59 Jan 1 at 17:29 This definition of the Universal NOT seems to differ from the one given in the related question: the OP's (current) definition maps states antipodally but introduces a "damping" of the Bloch vector. – Juan Bermejo Vega Jan 1 at 17:45 ## 1 Answer Note As it has been said in the comments, this definition of Universal-NOT gate seems to differ from others discussed in other posts [1]. This answer uses the definition proposed by the OP, i.e. $$\rho=\frac{1}{2}(I+\vec{b}\cdot\vec{\sigma}) \quad \longrightarrow \quad U(\rho)=\frac{1}{2}\left(I-\frac{1}{3}\vec{b}\cdot\vec{\sigma}\right)$$ Where I use the symbol $I$ for the identity matrix to avoid confusion with 1 and $\vec{b}\in \mathbb{R}^3$ denotes the Bloch vector. We write the density matrices of the computational basis states explicitly: $$\rho_a=|a\rangle\langle a |= \frac{1}{2}(I+\vec{b}_a\cdot\vec{\sigma}),$$ where $a\in\{0,1\}$. Expanding this expression readily yields the vectors $b_a$: $$\vec{b}_a=(0,0,\pm1).$$ Applying your definition of $U$ to these density operators, the action of the operator on basis states can be obtained directly: $$U(|0\rangle\langle 0|)= \frac{1}{2}(I- \frac{1}{3}\sigma_z)=\frac{1}{3}|0\rangle\langle 0| +\frac{2}{3}|1\rangle\langle 1 |$$ $$U(|1\rangle\langle 1|)= \frac{1}{2}(I+ \frac{1}{3}\sigma_z)=\frac{2}{3}|0\rangle\langle 0| +\frac{1}{3}|1\rangle\langle 1 |$$ We can observe that, because the factor $1/3$ that "damps" the Bloch vector, pure basis states evolve into mixed states; notice that, intuitively, states get closer to the totally mixed state $I/2$ if you make the Bloch vector $\vec{b}$ go to zero. - 1 +1 Interested to know if the /3 was intentional.. – twistor59 Jan 1 at 17:49 Thank you very much for your answer. – user15940 Jan 1 at 18:42
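A small numerical sketch of the computation in the answer (assuming NumPy; the helper names are mine), recovering the 1/3 and 2/3 mixtures directly from the Bloch-vector map:

```
import numpy as np

I2 = np.eye(2)
sigma = [np.array([[0, 1], [1, 0]]),         # sigma_x
         np.array([[0, -1j], [1j, 0]]),      # sigma_y
         np.array([[1, 0], [0, -1]])]        # sigma_z

def bloch(rho):
    """Bloch vector b with rho = (I + b.sigma)/2."""
    return np.real([np.trace(rho @ s) for s in sigma])

def U(rho):
    """The map b -> -b/3 from the question, applied at the density-matrix level."""
    b = bloch(rho)
    return 0.5 * (I2 + sum(-bi / 3 * s for bi, s in zip(b, sigma)))

rho0 = np.array([[1, 0], [0, 0]], dtype=complex)   # |0><0|
rho1 = np.array([[0, 0], [0, 1]], dtype=complex)   # |1><1|
print(np.round(U(rho0).real, 3))   # diag(1/3, 2/3): 1/3 |0><0| + 2/3 |1><1|
print(np.round(U(rho1).real, 3))   # diag(2/3, 1/3): 2/3 |0><0| + 1/3 |1><1|
```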
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 7, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9097470045089722, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/string?sort=votes&pagesize=15
# Tagged Questions This tag is for non-relativistic material strings, such as, e.g., a guitar string. PLEASE DO NOT USE THIS TAG for relativistic strings and string theory. 5answers 1k views ### Why Won't a Tight Cable Ever Be Fully Straight? I posted this picture of someone on a zipline on Facebook. One of my friends saw it and asked this question, so he could try to calculate the speed at which someone on the zipline would be going ... 1answer 252 views ### How do I find constraints on the Nambu-Goto Action? Let $X^\mu (t,\sigma ^1,\ldots ,\sigma ^p)$ be a $p$-brane in space-time and let $g$ be the metric on $X^\mu$ induced from the ambient space-time metric. Then, the Nambu-Goto action on $X^\mu$ is ... 4answers 4k views ### Rope tension question If two ends of a rope are pulled with forces of equal magnitude and opposite direction, the tension at the center of the rope must be zero. True or false? The answer is false. I chose true though and ... 1answer 262 views ### Lagrangian density for a Piano String So I'm trying to do this problem where I'm given the Lagrangian density for a piano string which can vibrate both transversely and longitudinally. $\eta(x,t)$ is the transverse displacement and ... 2answers 247 views ### What happens when two strings collide? I have a question, that perhaps someone with a much better understanding of physics can help me answer. Please correct me if I'm wrong. From what I understand, a string in string theory is basically ... 2answers 231 views ### Can the study of the quantum information structure in QFT with holographic duals be relevant to string theory? I'm interested in characterizing the behaviour of measures of quantum information in strongly correlated quantum field theories which admit a gravity dual description, e.g through AdS/CFT duality. In ... 1answer 170 views ### About Holographic Model of Magnetism and Superconductor I have a question about this paper http://arxiv.org/abs/1003.0010 In their model, when consider holographic paramagnetic-ferromagnetic phase transition, they need Yang-Mills field itself to ... 1answer 46 views ### Boundary conditions on wave equation I am having trouble understanding the boundary conditions. From the solutions, the first is that $D_1(0, t) = D_2(0, t)$ because the rope can't break at the junction. The second is that ... 1answer 174 views ### Flux compactification How flux compactification solves the moduli space problem in string theory? Please provide some details and, if posible, an example. Thanks in advance. 2answers 770 views ### Does String Theory disagree with General Relativity? I would like to expand on what I mean by the title of this question to focus the answers. Normally whenever a theory (e.g. General Relativity) replaces another (e.g. Newtonian Gravity) there is a ... 3answers 182 views ### Why must the excitation of closed strings in String Theory be spin-2? In String Theory it is predicted that as a result of the closed strings we have spin-2 gravitons. 1) How do we know there must be an excitation of spin-2 particles? 2) Why does a spin-2 particle ... 0answers 168 views ### Shape of a string/chain/cable/rope? The height of a string in a gravitational field in 2-dimensions is bounded by $h(x_0)=h(x_l)=0$ (nails in the wall) and also $\int_0^l ds= l$. ($h(0)=h(l)=0$, if you take $h$ as a function of arc ... 
0answers 103 views ### Polyakov action as broken symmetry effective action I would like to ask if it is possible to regard the Polyakov action as an effective action that describes the broken symmetric phase of a more general model. Could someone draw an analogy with O(N) ... 1answer 123 views ### Other Gross-Neveu like theories? By "Gross-Neveu like" I mean non-supersymmetric QFTs whose partition function/beta-function (or any n-point function) is somehow exactly solvable in the large $N_c$ or $N_f$ or 't Hooft limit. ... 1answer 256 views ### Why are there Gravitons among the modes of oscillation in String Theory? Why are gravitons present among the modes of oscillation of the 'strings' in String Theory? 1answer 149 views ### How can Hilbert spaces be used to study the harmonics of vibrating strings? The overtones of a vibrating string. These are eigenfunctions of an associated Sturm–Liouville problem. The eigenvalues 1,1/2,1/3,… form the (musical) harmonic series. How can Hilbert spaces be ... 1answer 195 views ### Drawbacks of Standard model I am a graduate level student, interested in String theory. I was reading a paper on "String Theory and Einstein's Dream" published in Current Science, vol. 89, No. 12, p 2045, Dec. 25, 2005. and I ... 3answers 301 views ### Does the Fundamental Frequency in a Vibrating String NOT Necessarily Have the Strongest Amplitude? I am doing some experiments on musical strings (guitar, piano, etc.). After performing a Fourier Transform on the sound recorded from those string vibrations, I find that the fundamental frequency is ... 2answers 60 views ### Pressure in waves on a string We know that when we speak sound waves are created. The air particles compress and rarefy and pressure is more at the nodes and less at anti-nodes. But can we say the same thing about waves on a ... 4answers 113 views ### Is it possible to whirl a point mass (attacted to a string) around in a horizontal circular motion *above* my hand? I'm studying circular motion and centripetal force in college currently and there is a very simple question but confuses me (our teacher doesn't know how to explain either :/), so I hope we can sort ... 1answer 189 views ### How much is important the role of Planck length in the strings theory? this is Planck length: $$\ell_p=\sqrt {\frac {G.\hbar}{c^3}} .$$ How much is important the role of this length in the strings theory? is this planck's length or newton's length! or maybe both of ... 1answer 227 views ### Do Neveu-Schwarz conditions make sense? When putting fermions on the string, we have to choose boundary conditions for our spinor fields - Ramond or Neveu-Schwarz. NS conditions on the closed string have antiperiodic conditions such as ... 0answers 242 views ### Neglecting friction on a pulley? So, this is how the problem looks: http://www.aplusphysics.com/courses/honors/dynamics/images/Atwood%20Problem.png Plus, the pulley is suspended on a cord at its center and hanging from the ceiling. ... 1answer 53 views ### The second resonance of string? What is the relationship between "the second resonance " and string and the wavelength. Like in this question: if the length of the string is 2cm with second resonance, then what is wavelength? 2answers 199 views ### How can particles being closed strings in String Theory create solidity in objects? I understand how particles with certain masses can form to make atoms, which create solidity in objects due to Pauli's Exclusion Principle and what have you. These particles actually have mass and to ... 
1answer 76 views ### Finding the acceleration of Block attached using tricky string setup Below shown is a setup, and block B starts from rest and moves towards right with a constant acceleration. Does the acceleration differ for the blocks ? I am a bit confused because of the tricky ... 1answer 683 views ### Understanding tension I'm trying to understand tension. So here it goes: I'll start from the beginning. Let's assume I'm in space and can move around and apply forces. Let's say a rope is attached to a body(which is in ... 1answer 263 views ### How do we find the frequency of wave propagated along the x-axis? I don't know how to solve question like this: A transverse wave is propagated in a string stretched along the x-axis. The equation of the wave, in SI units, is given by:y = 0.006 cos π(46t - 12x). ... 2answers 257 views ### Young's Modulus and Vibrating String Harmonics I was wondering how Young's Modulus effects the resonant harmonics of a vibrating (string instrument) string. I know that the string's fundamental frequency is \frac{1}2 \times \text{Length} \times ... 0answers 80 views ### How to figure out an elastic constant? [closed] Im doing this study and i have this question which I'm not 100% sure on, got me pretty stumped. anyone think they can help me?! When a bowstring is pulled back in preparation in preparation for ... 1answer 198 views ### resonance frequency [closed] A string has a mass per unit length of 9 10–3 kg/m. What must be the tension in the string if its second harmonic has the same frequency as the second resonance mode of a 2m long pipe open at one end? ... 1answer 81 views ### Work done by an ideal, massless, inextensible, non-relativistic string [closed] what is the work done by an ideal, massless, inextensible, non-relativistic string in an isolated system with respect to a reference frame at rest? If it is zero, then why?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9146352410316467, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/56572/homogeneous-system-of-polynomial-equations/56976
## Homogeneous system of polynomial equations Hi all, Previously I asked a question that currently has no satisfactory answer http://mathoverflow.net/questions/55939/least-sum-squares-given-constraints-on-subcomponents It comes from an engineering problem. I was thinking of formulating it differently, in the hope that someone becomes interested and/or knows how to solve it. By formulating it differently, I get the following system: $\mathbf{D} \mathbf{R} \mathbf{\theta} = \mathbf{0}_{2N \times 1}$ D is a $2N \times 6N$ block diagonal matrix that contains unknowns: $\mathbf{D} = \operatorname{diag}(\mathbf{x}^T - a_1 \mathbf{z}^T, \mathbf{y}^T - b_1 \mathbf{z}^T, \dots, \mathbf{x}^T - a_N \mathbf{z}^T, \mathbf{y}^T - b_N \mathbf{z}^T)$ where $\mathbf{x}, \mathbf{y}, \mathbf{z}$ are $3 \times 1$ orthogonal unit vectors that we need to find (i.e., $\mathbf{x}^T \mathbf{x} = 1, \mathbf{x}^T \mathbf{y} = 0, \dots$). $a_i, b_i$ are known parameters. R is a $6N \times M$ matrix that contains only numerical entries (measured and computed). $\mathbf{\theta}$ is a vector of $M$ other unknown parameters. By doing so, I isolated the unknowns in two separate matrices (vectors). However, it is still not trivial. I tried to solve this, but there is no obvious way. One approach I tried is to find $\mathbf{x}, \mathbf{y}, \mathbf{z}$ such that $\mathbf{D} \mathbf{R}$ is rank-deficient. I'm not sure if this can be solved, either exactly or in a least-squares sense, in closed form or numerically. Any ideas or discussion are appreciated. edit: I was not clear. If I set the determinant of any $M \times M$ sub-matrix of $\mathbf{D} \mathbf{R}$ to 0 and express it as a function of the elements of $\mathbf{x}, \mathbf{y}, \mathbf{z}$, then I have a number of homogeneous polynomials. That's where the title comes from. edit2: M is much smaller than 2N. - ## 1 Answer Someone suggested that we can have a good prior for $\theta$ (but not for $\mathbf{D}$). He then proposed that we fix $\theta$ and find $\mathbf{R}$ for the least squares of $|| \mathbf{D} \mathbf{R} \theta||$ given the constraints above; then fix $\mathbf{R}$ and find $\theta$ for the least squares of $|| \mathbf{D} \mathbf{R} \theta||$ given $||\theta|| = 1$. Then keep repeating (fix $\theta$, then $\mathbf{R}$) until they converge. The latter least-squares problem is easy, and for the former I believe there is a closed-form or numerical solution. But my question is whether such a numerical scheme works. Is there any proof for it? What is the name of this method? I could not recall this numerical scheme. Please enlighten. - I don't understand why you can vary $R$. Doesn't that represent some measurement which is fixed? From your question I inferred that $D$ and $\theta$ contain all the unknowns you want to solve for. Assuming you meant $D$ rather than $R$, then the iterative procedure you described is simply partial optimization; for instance to minimize $f(x,y)$, we first fix $x$ and minimize over $y$, and then minimize over $x$ for the fixed $y$. This certainly doesn't converge to the global minimum in general, but will if the graph of $f$ is strictly convex. – John Jiang Mar 14 2012 at 0:55
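To make the $\theta$-half of that alternating scheme concrete: for fixed $\mathbf{D}$, minimizing $\|\mathbf{D}\mathbf{R}\theta\|$ over $\|\theta\|=1$ is a standard homogeneous least-squares problem, solved by the right singular vector of $\mathbf{D}\mathbf{R}$ with the smallest singular value. A small NumPy sketch of that single step (the other step, re-fitting $\mathbf{x},\mathbf{y},\mathbf{z}$ under the orthonormality constraints, depends on how those constraints are parametrized and is not shown):

```python
import numpy as np

def theta_step(D, R):
    """For fixed D, minimize ||D @ R @ theta|| subject to ||theta|| = 1.

    The minimizer is the right singular vector of D @ R associated with
    the smallest singular value (standard homogeneous least squares).
    """
    A = D @ R                               # shape (2N, M), tall since M is much smaller than 2N
    _, _, Vt = np.linalg.svd(A)
    theta = Vt[-1]                          # unit-norm minimizer
    return theta, np.linalg.norm(A @ theta) # theta and the achieved residual

# One would alternate: theta, _ = theta_step(D, R); then re-fit D; and repeat.
# As the comment in the thread notes, this partial-optimization scheme need not
# converge to the global minimum in general.
```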
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 34, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9534894824028015, "perplexity_flag": "head"}
http://nrich.maths.org/7060/solution
# Weekly Challenge 7: Gradient Match ##### Stage: 5 Short Challenge Level: There are various intuitive ways to think about such results, but creating really clear arguments is somewhat more difficult. The descriptions given here do not constitute proofs as such, but they do introduce advanced analytical ways of thinking which are refined at university. Consider the first case; the others are similar. Clearly the special case of the curve joining the two points $(0, 0)$ and $(8, 8)$ with a straight line has gradient $1$ everywhere. Any other curve between these two points can be considered as a deformed version of this line. Wherever the curve is deformed away from the line, bulges will necessarily occur. Imagine dragging the line $y=x$ up or down. It will cut through each bulge but eventually pass out of each bulge. As it passes out of each bulge it will touch the bulge at a single point. These are the points with gradient $1$. Analytical proofs proceed along these sorts of lines: Sketch: Imagine the curve being sketched starting from the origin. Imagine that the gradient of your curve is always less than or equal to some number $M$ which satisfies $M< 1$. Then in $8$ units of $x$ the $y$ value of the curve can increase by at most $8M$, which is less than $8$. Thus, it could not pass through the point $(8, 8)$; therefore the maximum achieved gradient cannot be less than $1$. Similarly the minimum achieved gradient cannot be greater than $1$. It is thus intuitively clear that a gradient of $1$ is achieved somewhere.
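A quick numerical illustration of the sketch above (the curve $f(x)=x+\sin(\pi x/4)$ is just one arbitrary smooth choice passing through $(0,0)$ and $(8,8)$): scanning its derivative on a grid shows the gradient crossing the value $1$, exactly as the bulge picture predicts.

```python
import numpy as np

# An example curve through (0, 0) and (8, 8)
f = lambda x: x + np.sin(np.pi * x / 4)
fprime = lambda x: 1 + (np.pi / 4) * np.cos(np.pi * x / 4)

xs = np.linspace(0, 8, 100001)
print(f(0.0), f(8.0))                       # endpoints match the chord: 0 and (numerically) 8

# Locate sign changes of f'(x) - 1, i.e. points where the gradient equals 1
g = fprime(xs) - 1
crossings = xs[:-1][np.sign(g[:-1]) != np.sign(g[1:])]
print(crossings)                            # near x = 2 and x = 6
```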
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9351791739463806, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/62889?sort=newest
## Question on PDE I am looking for a reference where the following situation or something similar could have been studied. As a foreword, my question may not be very technical since I am from an engineering background. I have tried to provide an intuitive explanation of the problem. I am also looking for a mildly technical reference or answer: The heat equation can be used to study diffusion of heat on a surface. On a plane, the boundary for such a heat equation is a circle. I am looking for a system of three heat-equation-type PDEs (call them $a_{1}$, $a_{2}$ and $a_{3}$) so that some conditions are satisfied. 1) $a_{1}$, $a_{2}$ and $a_{3}$ describe propagation of heat starting at three different points $A_{1}$, $A_{2}$ and $A_{3}$ on the plane. 2) The stopping time of $a_{i}$ is when the intersection of the boundary of $a_{i}$ with the union of the boundaries of $a_{j}$ and $a_{k}$ is non-empty, with $i \ne j \ne k \ne i$, $\forall i \in \{1,2,3\}$. Let the stopping time of $a_{i}$ be $t_{i}$. After giving a simple description of PDEs that could satisfy the above conditions, I also need to find expressions for $t_{i}$, which is what I am truly after, since finding $t_{i}$ could provide the distance from $A_{i}$ to its closest neighbor without using the Euclidean formula. Can one get the PDE to stop diffusing without introducing an artificial stopping time? Can one generalize this to many points in $n$ dimensions (real or complex)? Such a system could capture the closest neighbors to each given point. - When you say "the boundary for such a heat equation is a circle" do you mean the level sets for a fundamental solution are circles? – S. Carnahan♦ Apr 25 2011 at 4:15 I think that is what I am implying. – unknown (google) Apr 25 2011 at 5:46 ## 1 Answer This answer makes some assumptions about what the OP is asking. In particular I am using Scott's interpretation that the boundaries of interest correspond to level sets of the fundamental solution. I suppose you have to specify which level set you are talking about. Since the heat equation has infinite propagation speed, you can make the $t_i$ as close to zero as you like by looking at level sets $a_i = \epsilon$ for $\epsilon$ sufficiently small. If you fix $\epsilon$ (or $\epsilon_i$) then you are looking at a collection of three circles in the plane with radius $r_i(t) = \left(4kt \log(1/\epsilon) + 2kt \log(4 \pi k t)\right)^\frac{1}{2}$ (assuming the standard heat equation with diffusion constant $k$) and you are asking when these circles intersect. The circle about $A_i$ will intersect the circle about $A_j$ when $r_i + r_j \ge |A_i - A_j|$. - Hi Aaron: Thank you for the answer. Can you write explicitly the system of PDEs that will start diffusing and stop at the closest neighbor's level set? – unknown (google) Apr 26 2011 at 1:54 I am wondering what the system would look like for more than $3$ points in $n \ge 3$ dimensions. – unknown (google) Apr 26 2011 at 1:55 Caveat: Maybe I'm not understanding your question. I am imagining $u_t = D\Delta u$ where $u(x,t) \in \mathbb{R}^m$ and $x \in \mathbb{R}^n$ and $D$ is a diagonal matrix with entries $k_i$. This is just a collection of uncoupled heat equations in $\mathbb{R}^n$ whose fundamental solution is just a vector of Gaussians.
The stopping time is introduced artificially by picking levels $\epsilon_i$ and taking logarithms of the Gaussians to solve for the radius of the spherical level set as a function of time ... – Aaron Hoffman Apr 26 2011 at 2:24 (continued) ... and then call a collision any time that $r_i + r_j \ge |A_i - A_j|$. Getting the PDE to stop diffusing without introducing an artificial stopping time would be more subtle. – Aaron Hoffman Apr 26 2011 at 2:27 "Getting the PDE to stop diffusing without introducing an artificial stopping time would be more subtle." Actually I think I am interested in exactly this "automation" of the stopping times, so that the description of the PDEs themselves would capture the closest neighbors. Such a description could provide an analytical gadget for some Euclidean graph problems. – unknown (google) Apr 26 2011 at 2:46
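For anyone who wants to experiment with this construction numerically, here is a small sketch that takes the radius formula quoted in the answer at face value, uses a common diffusion constant $k$ and level $\epsilon$ for every source (so all circles share the same radius $r(t)$ and the collision condition $r_i + r_j \ge |A_i - A_j|$ becomes $2r(t) \ge |A_i - A_j|$), and simply scans a time grid; it works for any number of points in any dimension, since only pairwise distances enter.

```python
import numpy as np

def radius(t, k=1.0, eps=1e-3):
    # Radius of the level set {a_i = eps} at time t, per the formula quoted above
    val = 4 * k * t * np.log(1 / eps) + 2 * k * t * np.log(4 * np.pi * k * t)
    return np.sqrt(max(val, 0.0))

def stopping_times(points, k=1.0, eps=1e-3, t_max=5.0, steps=50000):
    """Approximate t_i: the first time point i's circle touches some other point's circle."""
    pts = np.asarray(points, dtype=float)
    dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    t_grid = np.linspace(t_max / steps, t_max, steps)
    t_stop = {}
    for i in range(len(pts)):
        nearest = min(dists[i, j] for j in range(len(pts)) if j != i)
        for t in t_grid:
            if 2 * radius(t, k, eps) >= nearest:   # first collision is with the closest neighbor
                t_stop[i] = t
                break
    return t_stop

print(stopping_times([[0.0, 0.0], [3.0, 0.0], [0.0, 4.0]]))
```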
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 42, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9311748147010803, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/2604/what-do-we-get-from-having-higher-generations-of-particles/3201
# What Do We Get From Having Higher Generations of Particles? Background: I have written a pop-science book explaining quantum mechanics through imaginary conversations with my dog-- the dog serves as a sort of reader surrogate, popping in occasionally to ask questions that a non-scientist might ask-- and I am now working on a sequel. In the sequel, I find myself having to talk about particle physics a bit, which is not my field, and I've hit a dog-as-reader question that I don't have a good answer to, which is, basically, "What purpose, if any, do higher-generation particles serve?" To put it in slightly more physics-y terms: The Standard Model contains twelve material particles: six leptons (the electron, muon, and tau, plus associated neutrinos) and six quarks (up-down, strange-charm, top-bottom). The observable universe only uses four, though: every material object we see is made up of electrons and up and down quarks, and electron neutrinos are generated in nuclear reactions that move between different arrangements of electrons and up and down quarks. The other eight turn up only in high-energy physics situation (whether in man-made accelerators, or natural occurances like cosmic ray collisions), and don't stick around for very long before they decay into the four common types. So, to the casual observer, there doesn't seem to be an obvious purpose to the more exotic particles. So why are they there? I'm wondering if there is some good reason why the universe as we know it has to have twelve particles rather than just four. Something like "Without the second and third generations of quarks and leptons, it's impossible to generate enough CP violation to explain the matter-antimatter asymmetry we observe." Only probably not that exact thing, because as far as I know, there isn't any way to explain the matter-antimatter asymmetry we observe within the Standard Model. But something along those lines-- some fundamental feature of our universe that requires the existence of muons and strange quarks and all the rest, and would prevent a universe with only electrons and up and down quarks. The question is not "why do we think there are there three generations rather than two or four?" I've seen the answers to that here and elsewhere. Rather, I'm asking "Why are there three generations rather than only one?" Is there some important process in the universe that requires there to be muons, strange quarks, etc. for things to end up like they are? Is there some reason beyond "we know they exist because they're there," something that would prevent us from making a universe like the one we observe at low energy using only electrons, up and down quarks, and electron neutrinos? Any pointers you can give to an example of some effect that depends on the presence of the higher Standard Model generations would be much appreciated. Having it already in terms that would be comprehensible to a non-scientist would be a bonus. - 7 Since you've eliminated CP violation I doubt you are going to get a good answer to this. It's just like having a world with cats and bunnies as well as squirrels. It gives some of us humans more to chase. – pho Jan 7 '11 at 19:12 This is still an open question in particle physics. One of the checks on a theory that unifies gravity with the other three forces is, in fact, whether that theory predicts precisely three generations of particles. 
My understanding is that this constrains the way in which you compactify the extra dimensions in string theories, and it also constrains the way in which you break supersymmetry. But it's also not my field at all, either. – Jerry Schirmer Jan 7 '11 at 19:21 1 I didn't mean to completely eliminate CP violation as a response, if that is a valid response. It was just a guess at something that might be a possible response (since CP violation was discovered in kaons, which contain strange quarks), with a hedge against people replying "no, no, the Standard Model can't provide enough CP violation..." If that's the best example of something along these lines, I'm happy to go with it (though a pointer to a reasonably accessible explanation would be a bonus). – Chad Orzel Jan 7 '11 at 19:42 2 CP violation is the best response I know of, in that you just don't have it in the Standard Model with 1 or 2 generations. I agree though that it seems you need other sources of CP violation to explain the cosmological baryon asymmetry. You might have a look at the Nobel web page and/or lectures when Nambu, Kobayashi and Maskawa got the Nobel Prize for an explanation accessible to your dog and/or readers. – pho Jan 7 '11 at 21:55 1 Nit picked: neutrino oscillation is such that all flavors are present in respectable fractions from all originating reaction flavors, and the muon- and tau- types are not known to "decay" as such. But I think the basic answer is still "Who ordered that?". – dmckee♦ Jan 7 '11 at 23:30 show 5 more comments ## 7 Answers Dear Chad, I thought you were an atheist. Most atheists tend to realize that many things that exist in the Universe have no "purpose". The existence of the Universe has no "purpose" that may be scientifically demonstrated. Even if life could exist in a Universe with 1 generation of quarks and leptons, which I find plausible (although I couldn't instantly produce any string compactification with 1 generation), one could still ask why the Universe only has 1 of them if it can have several generations. The idea that 1 generation is inevitably "qualitatively more likely" or "qualitatively more natural" than 3 generations is just flawed. Life could arguably exist somewhere in a Universe with 1 generation. The parameters and molecules relevant for life - and phase diagrams of QCD etc. - would have to be recalculated but no proof is known that would show that life would be impossible in such a Universe. Otherwise, the fact that there are 3 generations in a particular Universe can be derived from deeper properties of string theory (half of the Euler character of the Calabi-Yau shape, assuming a conventional heterotic compactification for a while), and as I have hinted, even at this very point, it might be possible to show that the number of generations cannot be one, among other forbidden values. While three-generation models are known, it's not fully known at this moment whether 3 generations is a unique solution to some conditions or whether it's a coincidence, as the anthropic reasoning wants us to immediately believe. - At present, I'd have to agree with dmckee's "Who ordered that?" quote, in that the Standard Model must take the list of fundamental particles as an input, i.e., it provides no explanation (just as it does not explain color charge). I'd argue that CP violation isn't so much required by the theory (feel free to correct me here), as it is an observation of reality, like the particles themselves. 
Some theories in development, such as String Theory, do provide a reason for precisely three families (as Jerry mentions). In the case of String Theory, it comes about because of allowable string oscillations, which themselves are dependent on the number of dimensions (compactified and extended). (The number of dimensions and the way they are compacted is more fundamental than the number of particle families, so I would argue that, while we may have gotten the number of dimensions in part to make the particle families work out, many other things, such as predicting the properties of the still theoretical graviton, also depend on the dimensional parameters. This leads me to make the claim that choosing the number of dimensions is more than a parameter dictated by the number of particle families, i.e. that dimensions predict three families, rather than three families being used to choose the number and form of the dimensions.) So, from a pop-sci standpoint, I'd have to say that currently accepted theory can't really explain why we have three families of particles, but theorists are hard at work on new theories, some of which can explain it as a consequence of something deeper (with an appropriate side note that the existing theories are fantastically good at explaining our world, but that we know they have very specific shortcomings in very special cases, and we won't be satisfied until we've cleared those up. I add this because I get tired of arguments of a religious nature that take the very small bit we don't understand and use that to claim we don't understand anything.) - 3 I'm down voting this answer because of several inaccuracies. 1) The massless modes of the string that show up as generations of fermions have nothing to do with the oscillations of the strings, they arise from the zero modes and 2) the answer gives the impression that string theory provides a reason for 3 families, which it does not. The number of families depends on the geometry of the compatification space (the Euler number of the Calabi-Yau manifold for compactifications of the heterotic string) and you can get many answers besides 3. – pho Jan 8 '11 at 15:52 @Jeff Thank you for the clarification. I certainly should have said modes rather than oscillations, but don't the zero modes depend on the number of dimensions? On the second point, the dependence on geometry is a very appropriate addition to a correct answer, but I would still claim that the "choice" of geometry is deeper than simply getting the correct number of observed families, so that it can be claimed as an explanation of that phenomenon. – Mitchell Jan 9 '11 at 1:37 1 The number of generations depends on the Euler number of the CY manifold which is a different concept than the number of dimensions. I think it is fair to say that compactification of the heterotic string on a CY space explains why there are chiral generations since this a generic phenomenon true for any CY with nonzero Euler number, but not the number of such generations which varies from 0 to 480 for currently known CY spaces. – pho Jan 9 '11 at 14:57 The question: "I'm wondering if there is some good reason why the universe as we know it has to have twelve particles rather than just four." The short answer: Our current standard description of the spin-1/2 property of the elementary particles is incomplete. A more complete theory would require that these particles arrive in 3 generations. The medium answer: The spin-1/2 of the elementary fermions is an emergent property. 
The more fundamental spin property acts like position in that the Heisenberg uncertainty principle applies to consecutive measurements of the fundamental spin the same way the HUP applies to position measurements. This fundamental spin is invisible to us because it is renormalized away. What's left is three generations of the particle, each with the usual spin-1/2. When a particle moves through positions it does so by way of an interaction between position and momentum. These are complementary variables. The equivalent concept for spin-1/2 is "Mutually unbiased bases" or MUBs. There are only (at most) three MUBs for spin-1/2. Letting a particle's spin move among them means that the number of degrees of freedom of the particle have tripled. So when you find the long time propagators over that Hopf algebra you end up with three times the usual number of particles. Hence there are three generations. The long answer: The two (more or less classical) things we can theoretically measure for a spin-1/2 particle are its position and its spin. If we measure its spin, the spin is then forced into an eigenstate of spin so that measuring it again gives the same result. That is, a measurement of spin causes the spin to be determined. On the other hand, if we measure its position, then by the Heisenberg uncertainty principle, we will cause an unknown change to its momentum. The change in momentum makes it impossible for us to predict the result of a subsequent position measurement. As quantum physicists, we long ago grew accustomed to this bizarre behavior. But imagine that nature is parsimonious with her underlying machinery. If so, we'd expect the fundamental (i.e. before renormalization) measurements of a spin-1/2 particle's position and spin to be similar. For such a theory to work, one must show that after renormalization, one obtains the usual spin-1/2. A possible solution to this conundrum is given in the paper: Found.Phys.40:1681-1699,(2010), Carl Brannen, Spin Path Integrals and Generations http://arxiv.org/abs/1006.3114 The paper is a straightforward QFT resummation calculation. It assumes a strange (to us) spin-1/2 where measurements act like the not so strange position measurements. It resums the propagators for the theory and finds that the strange behavior disappears over long times. The long time propagators are equivalent to the usual spin-1/2. Furthermore, they appear in three generations. And it shows that the long time propagators have a form that matches the mysterious lepton mass formulas of Yoshio Koide. Peer review: The paper was peer-reviewed through an arduous process of three reviewers. As with any journal article it had a managing editor, and a chief editor. Complaints about the physics have already been made by competent physicists who took the trouble of carefully reading the paper. It's unlikely that someone making a quick read of the paper is going to find something that hasn't already been argued through. The paper was selected by the chief editor of Found. Phys. as suitable for publication in that journal and so published last year. The chief editor of Found. Phys. is now Gerard 't Hooft. His attitude on publishing junk is quite clear, he writes How to become a bad theoretical physicist On your way towards becoming a bad theoretician, take your own immature theory, stop checking it for mistakes, don't listen to colleagues who do spot weaknesses, and start admiring your own infallible intelligence. Try to overshout all your critics, and have your work published anyway. 
If the well-established science media refuse to publish your work, start your own publishing company and edit your own books. If you are really clever you can find yourself a formerly professional physics journal where the chief editor is asleep. http://www.phys.uu.nl/~thooft/theoristbad.html One hopes that 't Hooft wasn't asleep when he allowed this paper to be published. Extensions: My next paper on the subject extends the above calculation to obtain the weak hypercharge and weak isospin quantum numbers. It uses methods similar to the above, that is, the calculation of long time propagators, but uses a more sophisticated method of manipulating the Feynman diagrams called "Hopf algebra" or "quantum algebra". I'm figuring on sending it in to the same journal. It's close to getting finished, I basically need to reread it over and over and add references: http://brannenworks.com/E8/HopfWeakQNs.pdf - Published, peer reviewed article gets two down votes and no comments. – Carl Brannen Jan 19 '11 at 4:15 I did not down vote, but it does seem very convoluted for the request in the question. Maybe a summation in a phrase on the lines :"thus the existence of spin leads mathematically to the existence of three generations". – anna v Feb 16 '11 at 6:37 Okay, so here's a third down-vote and a comment: first four paragraphs don't mention generations at all. This alone would suffice for a down-vote. You also make some pretty weird statements about QM which would need to be clarified (and I would discuss those if the answer was otherwise good; which it isn't). At the end you say that all of the (obviously irrelevant) stuff in first four paragraphs implies three generations. But you don't say anything about how or why. This is one of those answers where I wish to be able to down-vote more than once... – Marek Feb 17 '11 at 22:42 How about instead of discussing why you paper is perfect and can't possibly have any flaw you'd discuss its contents? I suppose that as its author you'd be the most competent to do it. Instead you spend half of your answer talking about trivialities of QM and the other about yourself being perfect. Way to go ;) – Marek Feb 18 '11 at 10:35 By the way, to be more specific: your paper is definitely interesting. Of course I can't tell whether it's correct or not after spending just few minutes with it but if you edited the answer to talk more about the paper itself (e.g. mentioning how precisely does the number 3 arise, mentioning the Koide formula, etc.) rather than talking about the publishing process, I'd consider giving you an up-vote ;) – Marek Feb 18 '11 at 14:17 show 11 more comments I am looking at this question as a particle physicist and as a reader. I suppose you have explained to your dog about potentials and quantum mechanical solutions which allow electrons to be trapped around nuclei, so the dog is familiar with the quantum nature of the world :). You could illustrate with a harmonic oscillator and show that given different strengths the energy levels change accordingly. Then you are ready to do an analogue . Each energy level is a "particle" in potentia, if the right material is there. If you have a hydrogen atom yo have one proton and one electron, and you have only one atom of hydrogen, even though there are many energy levels. If you get a helium you fill two energy levels and the rest of the potential lines are free. You can talk about adding energy to get to an excited state and still have the same atom. 
You can make an analogue of the standard model, see for example the graph in the particle physics book figure 14.4. Energy input raises a nucleon (three quarks) to a higher "quasi stable" excited state, that contains new generations of quarks. This gives the argument that the quarks and leptons that make up our world are the analogue lowest energy levels filled that create the matter we depend on. The extra generations are there in the same way that the extra levels are there in the simple quantum mechanical problem and may be filled and appear given enough energy. They are there because of the form of the "potential" that makes them possible in order and groupings that are necessary given the stable matter solutions we observe, which are still at the frontier of current theoretical studies in physics. It is true that higher order terms in QCD will include all the generations and it might be that the nucleon solution would not be stable if these higher generations were not there, but maybe somebody else could think of an analogue for that. - – Carl Brannen Feb 20 '11 at 0:44 I've decided to elevate my comment on your question to an answer: As a result of dmckee's comment (+1), I Googled "cosmological effects of neutrino mixing". There are relevant results, though I'm not qualified to sift them for you. Less cosmologically, neutrino mixing might modify the star-forming effects of supernovae in gas clouds nearby. Google for "supernova neutrino mixing" seems to me much more interesting. It may even affect whether there are supernovae at all. If there were none of those, there would be no dogs or bunnies, though, who knows, there might still be squirrels. A more specific question about Supernovae and neutrino mixing might be good. This is not to say that neutrino mixing causes Supernovae, even if it were the case that Supernovae would not happen if there were no neutrino mixing. It is contingently the case that we observe Supernovae, and most models take Supernovae to be a principal source of heavy metals, particularly iron, and it is contingently the case that we observe neutrino mixing. There's bound to be someone on Physics SE who knows straight off whether neutrino mixing plays a significant role in current astrophysics models for Supernovae. At the end of the day, however, this is just to say that effects that are very subtle at small scales may have manifest consequences at large scales. In Dog World —which typically doesn't care about butterflies, even if someone speculates that they might cause a hurricane somewhere—, if Emmy doesn't eat anything for 2 hours, she might not notice, but if Emmy doesn't eat for three days, everybody would notice. I do want to change a detail of my comment — if there might be a metaphysical category of things that behave like “squirrels”, even without iron in the world, because, counterfactually, we modified the Universe so that there is no metaphysical category of things that behave like “neutrino quantum fields that mix”, surely there would also be a metaphysical category of things that behave like “dogs”. I can't pull it off, but I'm envious of your dog trope. - The number of fermionic helicity states of the supersymetric standard model with massive neutrinos --if they are right neutrinos, as expected from seesaw and GUT-- and massive gauge bosons is 126. Of course the number of bosonic states is the same :-). You can add another two helicity states if the mass mechanism is the MSSM one. 
And of course, you can add another two helicity states if you put the gravitino in the bag. So, with 3 generations and now that the neutrinos are massive, you win the possibility of fitting the game in a 128 fermion, which happens to be the dimension of a D=11 fermion. With any game of neutrinos, if you put them apart the extant fermions, they amount to 84 helicities. With three generations and a massive top, you can consider neutrinos as in the previous paragraphs and simply put apart the top quark and squark; again the extant "light fermions" amount to 84 helicities, and so they superpartners. Nothing useful has emerged of this, but it comes out from having three generations. Note that in D=11 SUGRA, an 84-component bosonic object is forced to exist, that complements the 44-component graviton. - Would the down-voters care to leave a comment regarding their decisions? This answer appears rock solid to me. It might not qualify the OP's request to be in terms that would be comprehensible to a non-scientist. But then again it is comprehensible to scientists (or at least one scientist, me) and that is who this site is intended to attract. – user346 Jan 16 '11 at 6:52 The first down-voter did it when the answer only had the 3rd paragraph. In any case, the up/down arrows are labeled as "useful/not useful". So I am assuming that it is because the answer, particularly the 3rd p., can be considered "not useful" (of course not useful for the OP bonus goal, as you say, but also not useful for specialists, as this kind of facts do not hint any way to exploit them in model building). Thanks for your +1, space_cadet!. – arivero Jan 16 '11 at 9:59 edited to remove the original 3rd paragraph. – arivero Feb 17 '11 at 21:09 – arivero Feb 21 '11 at 17:43 This question is related to the problem of family structure. I will restrict this to QCD, where the gauge group is $SU(3)$, and there are the gauge multiples. The irreducible representation of $SU(3)$ are ${\bf 3}\times {\bar{\bf 3}}$ and ${\bf 8}~+~{\bf 1}$. The first irrep describes the quark doublet structure or “flavors,” while the second defines the “color scheme,” which is how quarks may carry two of the gauge coupling charges $r,g,b$ and anti $r,g,b$. The “one” defines “white” which is color neutral. So it turns out that the gauge field has a certain group structure and the carriers of these charges also have the same group structure. We might think of the gauge potential $A_\mu$ as written according to $A_\mu~=~A^b_\mu\lambda^b$ where the sum is over the $\bf 8$ color scheme. The currents for the theory $J_\mu~=~{\bar\psi}^i\gamma_\mu\psi^i$ are determined by the quarks. Here the index $i$ is with respect to the $\bf 8$ of $SU(3)$. We have from electromagnetism the Maxwell-Faraday equation $$\nabla\times {\bf H}~=~{\bf J}~+~\frac{\partial {\bf D}}{\partial t}$$ where the time derivative of electric displacement vector is the “displacement current.” In effect the field theory says, “The left hand side sums these up and computes a value for $\bf H$ independently of the particular values of each.” The two play identical roles. With QCD we have a similar gauge covariant form of this expression, and the magnetic intensity analogue in QCD obeys a similar rule. Consequently, the current and gauge field should be interchangeable by changing the irrep --- to put is somewhat loosely I have to admit. This does not constitute a proof, but it is suggestive of why the source for the fields has a similar structure as the fields themselves. 
The one exception is electromagnetism, which is a $U(1)$ gauge group and all particles carry a charge. The connection with particle families and electromagnetism is with hypercharge and the Gell-Mann–Nishijima formula that interchanges the centers of higher groups with $U(1)$. - 1 Except for the hint in the last line, this seems a reply to a different question, the structure of particles as representations of a gauge group. But gauge group happens for each generation, question is: What Do We Get From Having Higher Generations of Particles? – arivero Feb 18 '11 at 14:58 I was specifically addressing the question of generations of particles. It turns out that for an SU(n) gauge group the fermionic source fields have the same symmetry. – Lawrence B. Crowell Feb 19 '11 at 0:50 1 So your short answer should we that "3 generations allow us to arrange them with a SU(3) horizontal flavour group", is it? – arivero Feb 19 '11 at 1:39 1 In a nutshell that is about it. The family structure or group system of flavours is isomorphic to the gauge symmetry they act as a source for by carrying a color charge. This is actually one of those unclosed questions. What I indicate is suggestive, but it is not a proof. It might be that the flavours and the colours form mixed eigenstates in some way, though I am not sure how to make this work in an irrep. – Lawrence B. Crowell Feb 19 '11 at 13:35
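One of the answers above points to Yoshio Koide's charged-lepton mass formula. As background, the numerical statement behind that reference is easy to check (the masses below are approximate charged-lepton masses in MeV, quoted only for illustration): the Koide ratio $Q=(m_e+m_\mu+m_\tau)/(\sqrt{m_e}+\sqrt{m_\mu}+\sqrt{m_\tau})^2$ comes out within about $10^{-5}$ of $2/3$.

```python
from math import sqrt

# Approximate charged-lepton masses in MeV (electron, muon, tau)
m_e, m_mu, m_tau = 0.511, 105.658, 1776.86

Q = (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau)) ** 2
print(Q)               # roughly 0.66666, i.e. very close to 2/3
print(abs(Q - 2 / 3))  # deviation of order 1e-5 with these input values
```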
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9486746788024902, "perplexity_flag": "middle"}
http://mathhelpforum.com/algebra/179762-logarithm-question.html
# Thread: 1. ## Logarithm question If the sum is 2^x = 8, how do I work out x? When I use my calculator, I press 'log', then 2 and then in brackets (8), so it looks like this "log2(8)". Why do I get the answer of 2.4082399..... 2. Originally Posted by yorkey If the sum is 2^x = 8, how do I work out x? When I use my calculator, I press 'log', then 2 and then in brackets (8), so it looks like this "log2(8)". Why do I get the answer of 2.4082399..... There are a lot of different ways to solve this equation. I'm going to show you 2 of them: 1. Change 8 into a power of 2: $2^x = 8~\implies~2^x=2^3$ Two powers of the same base are equal if the exponents are equal too. 2. Use logarithms (but correctly) and the base-change formula: $2^x = 8~\implies~x=\log_2(8) ~\implies~x=\dfrac{\log(8)}{\log(2)} = \dfrac{\ln(8)}{\ln(2)}$ 3. Using logs you can say $\displaystyle 2^x = 8$ $\displaystyle \log_22^x = \log_28$ $\displaystyle x = \log_22^3$ $\displaystyle x = 3\log_22$ $\displaystyle x = 3\times 1$ $\displaystyle x = 3$ Seems like the long way home though, do you agree? 4. Easy peazy - now that I know what to do, and my questions won't get more complicated than this. Thank you! 5. By pressing log2(8) you're not saying base 2, index 8. Calculators use base 10 by default. You'll have to use the base-change formula, which is the log of the index (base 10 by default) divided by the log of the base (base 10 by default).
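As a footnote on the mysterious $2.4082399\ldots$: that value equals $8\log_{10}2$, which suggests the calculator interpreted "log 2 (8)" as $\log_{10}(2)$ multiplied by $8$ rather than as a base-2 logarithm. A quick Python check of the base-change formula:

```python
import math

print(math.log(8, 2))                   # 3.0 (log base 2 of 8), up to floating point
print(math.log10(8) / math.log10(2))    # 3.0 via the base-change formula
print(math.log10(2) * 8)                # 2.4082399..., the value the calculator produced
```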
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8939749002456665, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/79769-solving-differential-equations-using-change-variable.html
Thread: 1. Solving differential equations using a change of variable My book is telling me $$\frac{d}{du}\left(e^u\frac{dy}{dx}\right) = e^u\frac{dy}{dx} + e^u\frac{d^2y}{dx^2}\cdot\frac{dx}{du}$$ I understand where the first part has come from, but I am not sure how the second part was derived. Would anyone kindly provide me with an explanation? 2. Erghhhhhhh....:p Originally Posted by Erghhh My book is telling me $$\frac{d}{du}\left(e^u\frac{dy}{dx}\right) = e^u\frac{dy}{dx} + e^u\frac{d^2y}{dx^2}\cdot\frac{dx}{du}$$ I understand where the first part has come from, but I am not sure how the second part was derived. Would anyone kindly provide me with an explanation? This will make it clear: $\frac{d}{du}\left\{e^u\frac{dy}{dx}\right\}$ $= \frac{dy}{dx}\frac{d}{du}(e^u) +e^u \frac{d}{du}\left(\frac{dy}{dx}\right)$ Differentiation of $\frac{dy}{dx}$ with respect to $u$ is given by (using the chain rule) $\frac{d}{du}\left(\frac{dy}{dx}\right) = \frac{d^2y}{dx^2} \cdot \frac{dx}{du}$ ------------------------------------------ So the derivative is $\frac{dy}{dx}e^u + e^u \frac{d^2y}{dx^2} \cdot \frac{dx}{du}$
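The product-rule-plus-chain-rule step can also be checked symbolically. In the SymPy sketch below, `yp` is a stand-in for $\frac{dy}{dx}$ regarded as a function of $x$, and $x$ is left as an unspecified function of $u$; differentiating $e^u\,\mathrm{yp}(x(u))$ with respect to $u$ reproduces the book's formula, with the derivative of `yp` playing the role of $\frac{d^2y}{dx^2}$.

```python
import sympy as sp

u = sp.symbols('u')
x = sp.Function('x')      # x = x(u), left unspecified
yp = sp.Function('yp')    # yp(x) stands for dy/dx viewed as a function of x

expr = sp.exp(u) * yp(x(u))
print(sp.diff(expr, u))
# Expected shape of the result:
#   exp(u)*yp(x(u)) + exp(u)*Derivative(x(u), u)*yp'(x(u)),
# where SymPy writes yp'(x(u)) as a Subs(Derivative(...)) term; in the book's
# notation this is e^u dy/dx + e^u (d^2y/dx^2)(dx/du).
```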
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9249181151390076, "perplexity_flag": "middle"}
http://crypto.stackexchange.com/questions/2572/how-can-we-find-public-key-have-only-8-or-16bits-how-many-messages-does-eve-nee/2573
# How can we find a public key that has only 8 or 16 bits? How many messages does Eve need in order to recover the public key in RSA? Suppose Alice sends messages to Bob by encrypting the messages with Bob's public key. Eve knows that the data is encrypted using RSA, but does not know the public key. Can Eve figure out the public key just by observing the encrypted messages? And if so, approximately how much data would it take for Eve to discover the public key? Suppose we know that the public key has only 16 bits. There are over 6500 primes smaller than $2^{16}$. How long would it take to find the public key at a speed of 1,000,000 calculations per second? - One can't even apply padding to 16-bit RSA. So do you want to use textbook RSA? – CodesInChaos May 9 '12 at 9:55 RSA below 512 bits is ridiculously broken, and RSA below 1024 bits is still pretty weak. If you want small keys/blocks, go with elliptic curves, but even they become weak below 160 bits. – CodesInChaos May 9 '12 at 9:58 A public key is actually two numbers: $(e, N)$. While small $e$ is o.k., small $N$ implies small $d$, the private key. This is very weak. – yarek May 11 '12 at 18:50 ## 1 Answer If the plaintext is easily recognizable, one message is sufficient. Simply brute force all 16-bit RSA keys and decrypt the ciphertext. If the result "looks" like plaintext, you have found the key. A 16-bit RSA keyspace should be easy to brute force. - To be technical, one probably isn't enough. After all, the public key consists of a modulus and a public exponent; even if you're given both P and C, for a candidate modulus M, there is a significant probability that there will be a value e such that $P^e \equiv C \mod M$, and this value $e$ has a good chance to be relatively prime to $\phi(M)$. Hence, there are likely to be multiple $(M, e)$ pairs, and so you'll need a second encrypted message to have a chance to figure out which one it might be. – poncho May 9 '12 at 2:58 Also, for a 16-bit message there is not much of "recognizing". But the key point is: don't use 16-bit RSA. – Paŭlo Ebermann♦ May 9 '12 at 7:20
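A sketch of the brute-force search described in the answer and in poncho's comment (the function name is invented, and to keep the run time short it only tries a handful of small public exponents, whereas a full attack would try every $e$ coprime to $\varphi(N)$): given one known plaintext/ciphertext pair it lists every consistent $(N, e)$ candidate, which also illustrates why a single message can leave several candidates standing.

```python
from math import gcd
from sympy import primerange

def candidate_keys(P, C, modulus_bits=16, exponents=(3, 5, 7, 11, 13, 17)):
    """List (N, e) with N = p*q < 2**modulus_bits, gcd(e, phi) = 1 and pow(P, e, N) == C."""
    primes = list(primerange(2, 2 ** modulus_bits))
    found = []
    for i, p in enumerate(primes):
        for q in primes[i + 1:]:
            N = p * q
            if N >= 2 ** modulus_bits:
                break                      # primes are sorted, so later q only grow
            if P >= N or C >= N:
                continue                   # the true modulus must exceed both values
            phi = (p - 1) * (q - 1)
            for e in exponents:
                if gcd(e, phi) == 1 and pow(P, e, N) == C:
                    found.append((N, e))
    return found

# Example: the true key is N = 241 * 251 = 60491, e = 17; Eve only sees (P, C).
P = 4242
C = pow(P, 17, 60491)
print(candidate_keys(P, C))
```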
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9237948656082153, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/21781?sort=newest
## Orientation-Reversing Diffeomorphisms of a Manifold

I am trying to figure out when a closed, oriented manifold admits an orientation-reversing diffeomorphism. My naive argument that the orientation cover should allow you to switch orientations is apparently wrong, since not every manifold admits such a diffeomorphism. Can anyone give me some criteria for when such a morphism should exist, or why some of the standard counterexamples (such as $\mathbb{P}^{2n}$) fail to admit one? Thanks

-

9 Dude, if you turn in any answers you get off of here as your solution to a homework problem, I am totally turning you in. – Charlie Frohman Apr 18 2010 at 22:09

When you say $\mathbb{P}^{2n}$, do you mean complex projective space? I think real projective spaces of even dimension are very rarely orientable. – S. Carnahan♦ Apr 18 2010 at 22:37

Yes, I mean $\mathbb{C}\mathbb{P}^{2n}$. Sorry if this question is actually easy, but I am not a differential geometer so I'm unsure of how to approach this. I checked Google and noticed a few theses on when manifolds admit such a morphism, so I assumed it wasn't completely trivial. – Randy Reddick Apr 18 2010 at 22:45

6 Can you down-vote a comment? – Makhalan Duff Apr 19 2010 at 17:56

3 I accidentally wrote "orientational" instead of orientation in the title, so I apologize for that. I'm not sure how you deduce from that, and from the fact that I used the common abbreviation $\mathbb{P}^n$ for complex projective space, that I had no idea what I was talking about / that this is a homework question. As I said, I'm not a topologist, and since every complex manifold is orientable it seemed natural to ask when you can reverse the orientation. I know we don't want to answer calculus questions here, but it seems rather silly that I can't ask questions outside my area of specialty. – Randy Reddick Apr 26 2010 at 16:46

## 4 Answers

Such an endomorphism of $M$ gives an automorphism of the cohomology ring that acts by $-1$ on top cohomology. The cohomology ring of your example $M = {\mathbb C \mathbb P}^{2n}$ doesn't have such automorphisms.

-

Yes, thank you. My definition of orientation was in terms of homology, so I was missing out on the ring structure. – Randy Reddick Apr 19 2010 at 2:30

A large number of manifolds of dimension $4k$ can't admit an orientation-reversing diffeomorphism just because of their cobordism type. That is, if $f: M\rightarrow \overline{M}$ is an orientation-preserving diffeomorphism, then the cobordism class $[M^n]$ is a 2-torsion element of the cobordism group of oriented $n$-manifolds: since $M\sqcup M \cong M\sqcup\overline{M}$ and $M\sqcup\overline{M}$ bounds the cylinder $M\times[0,1]$, we get $2[M] = [M\sqcup \overline{M}] = 0 \in \Omega^{\rm SO}_n$. By the Thom-Pontryagin theorem, if $M$ has a nonzero Pontryagin number (which requires the dimension of $M$ to be a multiple of 4), then $[M]$ generates a free abelian subgroup of $\Omega^{\rm SO}_n$ and is not a 2-torsion element. Thus, $M$ will not admit an orientation-reversing diffeomorphism. In particular, this applies if the signature of $M$ is nonzero, since by Hirzebruch's signature theorem the signature is computable in terms of Pontryagin numbers.
The previously mentioned examples of $\mathbb{CP}^{2k}$ and $\mathbb{HP}^k$ are special cases of this statement, since both have nonzero signature and hence do not represent 2-torsion elements of the oriented cobordism group.

-

You might want to have a look at the paper "Orientation reversal of manifolds" by Daniel Muellner.

-

The same technique Allen mentioned also shows that $\mathbb{H}P^{2n}$ doesn't admit any orientation-reversing diffeomorphisms. However, it's also true that $\mathbb{H}P^{2n+1}$ doesn't admit any orientation-reversing diffeomorphisms unless $n = 0$. This is because the first Pontryagin class is $p_1 = 2(n-1)x$, where $x\in H^4(\mathbb{H}P^n)$ is a generator. Any diffeomorphism must take $p_1$ to itself, so it must take $x$ to itself (unless $n=1$). The ring structure of $\mathbb{H}P^n$ then implies that the diffeomorphism preserves orientation. (By contrast, the map $[z_0:\dots:z_{2n+1}]\rightarrow [\overline{z_0}:\dots:\overline{z_{2n+1}}]$ is an orientation-reversing map for $\mathbb{C}P^{2n+1}$.)

Another class of (perhaps surprising) examples is exotic spheres: many of them don't admit orientation-reversing diffeomorphisms, though, of course, they admit orientation-reversing homeomorphisms. This is because the collection of oriented diffeomorphism classes of the $n$-sphere ($n\neq 4$) forms an abelian group under connected sum. The inverse of an oriented diffeomorphism class of a sphere is the same diffeomorphism class equipped with the opposite orientation. Thus, exotic spheres with orientation-reversing diffeomorphisms correspond to elements of order 1 or 2 in this group. For example, in dimension 7 the group is isomorphic to $\mathbb{Z}/28\mathbb{Z}$, so there are precisely two spheres which admit orientation-reversing diffeomorphisms.

-
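To spell out the cohomology-ring argument in the first answer (my expansion, not part of the original thread): the ring $H^*(\mathbb{CP}^{2n};\mathbb{Z}) \cong \mathbb{Z}[x]/(x^{2n+1})$ with $\deg x = 2$ is too rigid to admit an automorphism acting by $-1$ on the top class.

```latex
% Any ring automorphism f^* of Z[x]/(x^{2n+1}) sends the generator x of H^2 = Z to +x or -x, so
\[
  f^*(x) = \pm x
  \quad\Longrightarrow\quad
  f^*\bigl(x^{2n}\bigr) = (\pm 1)^{2n}\, x^{2n} = x^{2n},
\]
% i.e. f^* is the identity on the top group H^{4n}(CP^{2n}; Z),
% whereas an orientation-reversing diffeomorphism would have to act by -1 there.
```

Note that the even exponent $2n$ is what makes the sign disappear; for $\mathbb{CP}^{2n+1}$ the same computation gives $(\pm 1)^{2n+1} = \pm 1$, which is consistent with complex conjugation reversing orientation there.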
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 35, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.939746618270874, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-statistics/203330-query-about-implementing-linear-discriminant-analysis.html
# Thread:

1. ## Query about implementing Linear Discriminant Analysis

Hi,

I am trying to implement LDA on some fMRI data. I am trying to do it myself rather than using a package, as I'd like to understand what's going on under the bonnet, as it were.

I have obtained a weight vector, $\mathbf{w}$, according to the formula

$\mathbf{w} \propto \mathbf{\Sigma}^{-1}_{w} (\mathbf{m}_{2} - \mathbf{m}_{1})$

where $\mathbf{\Sigma}^{-1}_{w}$ is the (inverse of the) total within-class covariance matrix and $\mathbf{m}$ denotes the mean of a class (classes 2 and 1 in this case).

I can renormalise $\mathbf{w}$ to counter any numerical issues, since it is the direction and not the magnitude of this vector that is important. So far so good.

I should then be able to find a discriminant $c$, such that a new datum $\mathbf{x}$ is classified as belonging to class 1 if $\mathbf{w} \cdot \mathbf{x} > c$ and class 2 otherwise. I see intuitively that if the prior on each class is the same then (I think)

$c = 1/2 \cdot (\mathbf{w} \cdot \mathbf{m}_{1} + \mathbf{w} \cdot \mathbf{m}_{2})$

Is that correct? But more importantly, is there a simple but principled way to establish $c$ in the case of asymmetric priors?

Many thanks in advance,

MD
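The thread has no reply, so here is a sketch (my addition, not from the original post) of the standard way to get the threshold, under the usual LDA assumption that each class is Gaussian with shared covariance $\mathbf{\Sigma}_{w}$ and prior $\pi_k$. Comparing log-posteriors and cancelling the common quadratic term gives a linear rule in $\mathbf{w} = \mathbf{\Sigma}^{-1}_{w}(\mathbf{m}_{2}-\mathbf{m}_{1})$:

```latex
\[
  \ln\pi_2 - \tfrac12(\mathbf{x}-\mathbf{m}_2)^{\top}\boldsymbol{\Sigma}_w^{-1}(\mathbf{x}-\mathbf{m}_2)
  \;>\;
  \ln\pi_1 - \tfrac12(\mathbf{x}-\mathbf{m}_1)^{\top}\boldsymbol{\Sigma}_w^{-1}(\mathbf{x}-\mathbf{m}_1)
\]
\[
  \Longleftrightarrow\qquad
  \mathbf{w}\cdot\mathbf{x} \;>\;
  \underbrace{\tfrac12\,\mathbf{w}\cdot(\mathbf{m}_1+\mathbf{m}_2) \;+\; \ln\frac{\pi_1}{\pi_2}}_{\textstyle c},
  \qquad
  \mathbf{w} = \boldsymbol{\Sigma}_w^{-1}(\mathbf{m}_2-\mathbf{m}_1).
\]
```

With equal priors the log-ratio vanishes and this reduces to the $c = \tfrac12(\mathbf{w}\cdot\mathbf{m}_1 + \mathbf{w}\cdot\mathbf{m}_2)$ guessed in the post, so the intuition is right; asymmetric priors simply shift the cutoff by $\ln(\pi_1/\pi_2)$. Two caveats: with this choice of $\mathbf{w}$, large $\mathbf{w}\cdot\mathbf{x}$ favours class 2 rather than class 1, and the log-prior term is only correct for the unscaled $\mathbf{w}$ above, so any renormalisation of $\mathbf{w}$ must be matched by rescaling that term.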
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9352680444717407, "perplexity_flag": "head"}
http://mathhelpforum.com/discrete-math/2853-induction-proof-print.html
# Induction Proof

• May 6th 2006, 08:51 PM
AfterShock

Induction Proof

Here is the problem:

"Let S_n stand for the sum of all the products of the integers, taken two at a time, from 1 to n. For example, S_4 means:

S_4 = 1*2 + 1*3 + 1*4 + 2*3 + 2*4 + 3*4

Prove that S_n is given explicitly by:

S_n = [n*(n^2-1)*(3*n+2)]/24".

So, I was thinking this was the perfect candidate for an induction proof. I thought it would be best to derive a recurrence first:

S_(n+1) = S_n + 1(n+1) + 2(n+1) + ... + n(n+1) = S_n + n(n + 1)^2 / 2

Now what? The base case of course works out. How do I work toward the induction step?

• May 6th 2006, 11:32 PM
CaptainBlack

Quote:

Originally Posted by AfterShock
Here is the problem: "Let S_n stand for the sum of all the products of the integers, taken two at a time, from 1 to n. For example, S_4 means: S_4 = 1*2 + 1*3 + 1*4 + 2*3 + 2*4 + 3*4. Prove that S_n is given explicitly by: S_n = [n*(n^2-1)*(3*n+2)]/24." So, I was thinking this was the perfect candidate for an induction proof. I thought it would be best to derive a recurrence first: S_(n+1) = S_n + 1(n+1) + 2(n+1) + ... + n(n+1) = S_n + n(n + 1)^2 / 2. Now what? The base case of course works out. How do I work toward the induction step?

Suppose that for some $k \ge 2$ we have $S_k = \frac{k(k^2-1)(3k+2)}{24}$.

Now you need to show that:

$S_{k+1}=\frac{(k+1)\left((k+1)^2-1\right)\left(3(k+1)+2\right)}{24}=\frac{(k+1)(k^2+2k)(3k+5)}{24}\ \ \ \dots (1)$.

But you already know that:

$S_{k+1}=S_k + \frac{k(k + 1)^2}{2}=\frac{k(k^2-1)(3k+2)}{24} + \frac{k(k + 1)^2}{2}\ \ \ \dots (2)$

So you need to show that the RHS of $(1)$ is equal to the RHS of $(2)$. Then you will have shown that if the formula for $S_k$ is true for some $k \ge 2$, it is true for $k+1$, which together with the base case completes the proof.

RonL
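To finish the step CaptainBlack leaves to the reader (this algebra is my addition, not part of the thread), one checks directly that the right-hand side of (2) simplifies to the right-hand side of (1):

```latex
\[
  \frac{k(k^2-1)(3k+2)}{24} + \frac{k(k+1)^2}{2}
  = \frac{k(k+1)\bigl[(k-1)(3k+2) + 12(k+1)\bigr]}{24}
  = \frac{k(k+1)(3k^2+11k+10)}{24}
  = \frac{k(k+1)(k+2)(3k+5)}{24},
\]
```

which equals $\frac{(k+1)(k^2+2k)(3k+5)}{24}$, the RHS of (1). As a quick sanity check of the closed form itself, $S_4 = 1\cdot2+1\cdot3+1\cdot4+2\cdot3+2\cdot4+3\cdot4 = 35$ and $\frac{4(4^2-1)(3\cdot4+2)}{24} = \frac{4\cdot15\cdot14}{24} = 35$.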
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9245322942733765, "perplexity_flag": "middle"}
http://quantumweird.wordpress.com/category/quantum-mechanics/
Explanation of the (formanlly) Weird Quantum World # Category Archives: Quantum mechanics ## Thought Experiment – Photons up Close Posted on January 3, 2008 Recently I published a paper on radio frequency photons:  Thought Experiment- Photons at Radio Frequencies in which I described a photon from the time of emission from a radio antenna as it propagated outward until it separated into photons and was later captured by an antenna. What I found was that the photon started as a whorl or vortex, if you wish, traveling initially in patterns of counter-rotating fields that eventually became identified as individual photons.  These whorls/vortexes have a specific size (diameter) and energy defined by the frequency of the emission.   A point on the rotating photon describes sinusoidal patterns that fall behind the photon in the classic electromagnetic patterns.   The thought experiment allowed me to calculate the maximum diameter of the photon at 105 mhz to be about 0.9 meters and a visible-light blue photon to have a maximum diameter of 143 nm. Having learned from that, I decided to do some more thinking about photons in general.  What applies at radio frequencies should also apply to photons of light and higher energies.   It occurs to me that we can learn a lot about photons by experimenting with them at radio frequencies.   We know that radio signals diffract around sharp structures and even exhibit double slit diffraction if passed between sets of tall structures with sharp edges.   I don’t know of any single-photon experiments at radio frequencies but I suspect that the results would be the same; diffraction still occurs in which the photon interferes with itself. Having looked at it from a whorl or vortex photon standpoint (as opposed to a wave standpoint), it is easy to imagine a photon nearly 1 meter in diameter passing around both sides of a telephone pole or being pulled around a corner of a building as one edge drags on the sharp edge there. The same thing should happen to a red, blue or green photon encountering superfine wires or sharp edges of a razor blade or slit. Not having the equipment nor the results of any such experiments at radio frequencies, I’m going to move this into a thought experiment and follow a photon up close, drawing on the earlier radio frequency thought experiment and adding details that agree with what we know about light photons and see where we go.  In this case I’ll consider a 450 nm blue photon.   I mention a blue photon only to help differentiate it from a radio frequency photon in the following discussion.  It doesn’t matter what it is, they should behave the same. ### Blue Photon by James Tabb  (ripples greatly exaggerated) A blue photon is emitted when a source (the emitter) such as, for example an electron that changes energy levels from a higher level to a lower one, shedding the excess energy as a photon.     I imagine it like a sudden elastic-like release of energy in which the energy packet moves away instantly to light speed.  If the packet follows Einstein’s equations (see graphic below) for space distortion, then a blue photon is immediately flattened into a disk of 143 nm diameter (see graphic above) because the lengthwise direction shrinks to zero at velocity c.   (This diameter was derived as d = λ/Π from my previous article and depends on the wavelength) In my description of a radio photon, the energy in the packet is rotating around the perimeter of the packet at c as well as moving away from the emitter at c.   
The limit of c in the circular direction also limits the diameter of the packet. I can picture photons that slosh back and forth left to right or up and down or in elliptical shapes.   All of these shapes and directional sloshing, and rotation are equivalent to various polarization modes – vertical, horizontal, elliptical and circular.   I can also imagine that these shapes/polarizations are created as photons are beaten into these modes while passing though lattices or slits that encourage the photon to go into one mode or the other or to filter out those going in the wrong direction.   I can begin to see that when photons at light wavelengths are thought of as rotating whorls, it becomes easier to think of how this all works.   None of the modes involve back and forth motion because to do so, the portion going backward would never catch up to the forward mode or it would exceed c. Now that the photon has been emitted and begins its flight, we are purely in a relativistic mode.  Einsteins equations for space distortion and time dilation tell us that the path in front of the photon shrinks to zero and the time of flight shrinks to zero as well.   This has always raised a troubling problem because we know that some photons take billions of years to fly across the universe and move about 1 nanosecond a foot of travel. In order to resolve this problem, I’m now imagining an experiment in which an excellent clock is built into a special photon that starts when the photon is emitted and stops when it arrives. (Good luck reading it, but this is only a thought experiment, so I’m good to go.)  Perhaps the path is a round trip by way of a mirror or some sort of light pipe such that a timer triggered at the start point also stops again when the photon comes back. If the round trip is about 100 feet then you might expect the timer and the photon’s clock to both register about 100 nanoseconds more or less for the trip. When the experiment is run, the photon’s clock is still zero when it arrives and the other timer does indeed read very close to 100 nanoseconds. The photon seems to have made the trip instantly whereas we measured a definite trip time that turns out to agree with the velocity of c for the photon throughout its trip.  I decided that is the correct outcome based on the time dilation equations of Einstein when using velocity = c. So we see that Einstein’s time dilation equation applies to the photon in its reference frame, not ours.  There are nuances here that we should consider for the photon: (1) Since the distance the photon travels is zero, the time it takes is zero as well.  That is why the photon’s clock does not change.   Therefore, I claim that the space/time jump is instantaneous and therefore the landing point is defined at the moment the photon is created regardless of the distance between the two points. (2) Since we know that the photon packet cannot go faster than c and by experiment, it does not arrive faster than c, it appears obvious to me that the instantaneous space jump is not completed instantly, only defined and virtually connected.  I visualize that for one brief moment, both ends of the path are (almost) connected; emitter to photon, photon to its destination through a zero length virtual path. The photon does not transfer its energy to the destination at that moment because the path is only a virtual one. 
(3) I visualize the photon’s forward path shortened to zero, an effect which has everything forward to it virtually plastered to its nose, like a high powered telescope pulling an image up with infinate zoom capability.   All of space in front of it is distorted into a zero length path looking at a dot, its future landing point. (4) The photon immediately moves away from the emitter at light speed. As it does so, the path beside and behind the photon expands to its full length (the distance already traveled, not the total path) with a dot representing the destination and the entire remaining path virtually plastered to its nose.   A zero-length path separates the nose of the photon from the landing point. The path already traveled expands linearly as the photon moves away from the emitter along that path at a velocity of c. (5) I claim that the photon’s zero-length virtual path is effectively connected all the way through, including all the mediums such as glass, water, vacuum, etc.  However, the photon only experiences the various mediums as the path expands as it moves along.  I make this claim because it explains all of the quantum weird effects that we see described in the literature and thus appears to be verified by experimental results.  My next paper will detail this for the reader. The landing point only experiences the photon after the entire path is expanded to its full length. In the example, the starting and ending points are 100 feet apart with a mirror in between, but the entire distance between (for the photon) is zero and the time duration (for the photon) is also zero (with maybe a tiny tiny bump when it reverses at the mirror). For one brief instant, the emitter is connected to the photon and the photon to the mirror and back to the timer through two zero-length paths, but it is a virtual connection, not yet actually physically connected. The mirror and landing point remains virtually attached to the nose of the photon which moves away from the emitter at light speed, c. The photon’s clock does not move and the photon does not age during the trip, but the photon arrives at the timer after 100 nanoseconds (our time) and transfers its energy to the timer’s detector. (6) I also claim that all the possible paths to the destination are conjoined into one path that is impossibly thin and impossibly narrow, much like a series of plastic light pipes all melted into one path that has been drawn into a single extremely thin fiber.   This is a result of the fact that the distances to every point in the forward path is of zero length, and therefore all the paths are zero distance apart. In effect the entire path is shrunk to zero length at the time of emission due to a severe warp in space. Zero length implies zero duration for the trip as well, and the photon is in (virtual) contact with the mirror (and also with the finish line) instantly, but the space it is in expands at the rate of c as it moves away from the emitter. Everything in front of the photon is located as a dot in front of it. It experiences the mirror after 50 nanoseconds of travel time. The reflected photon is still stuck to the finish point as the space behind it expands throughout a second 50 nanosecond time lapse and the finish line timer feels the impact at the correct total 100 nanosecond time while the photons clock never moves. 
The major point learned in this thought experiment is that the photon’s path and landing point is perfected at the time it is emitted whether the path is a few inches or a billion light years long due to the relativistic space/time warp. This is a major point in explaining why quantum weirdness is not really weird, as I will discuss later in a followup paper that clarifies the earlier posts on this subject. ### Wormhole Concept I visualize the photon as entering a sort of wormhole, the difference is that the photon “sees” the entire path through the wormhole but does not crash through to the other side until the wormhole expands to the full length of what I call the “Long Way Around (LWA)” path. Unlike a wormhole, it is not a shortcut as it merely (as I call it) Defines the Path and Destination (DPD).  This concept also applies to any previously described wormhole – see my previous paper, Five Major Problems with Wormholes Here is the important point: The photon in this wormhole punches through whatever path it takes instantly at the moment of creation and defines the DPD. Every point in the DPD is some measurable LWA distance that is experienced by the photon as the path expands during its transition along the path. The LWA includes any vacuum and non vacuum matter in its path such as glass, water or gas. So now we have a real basis for explaining why quantum weirdness is not weird at all – it is all a matter of relativity, as I will explain in my followup paper. Oldtimer Copyright 2007  - James A. Tabb   (may be reproduced in full with full credits) → 1 Comment Posted in DPD, Einstein, graphic, image, LWA, photon, picture, Quantum mechanics, quantum weird, radio, Relativity, science, space distortion, thought experiment, time dilation, warp, wormhole Tagged DPD, Einstein, graphic, image, LWA, photon, picture, Quantum mechanics, quantum weird, radio, Relativity, science, thought experiment, time and space warp, time dilation, wormhole ## Five Major Problems with Wormholes Posted on December 31, 2007 # Five Major Problems with Wormholes Wormholes are supposed to be shortcuts from one time and place to another time and place.   For example, drive your spaceship into one end and exit near some other star, perhaps 1000 light years away.   Drive back through and return to earth.  Simple enough. Wormhole drawing from Wikipedia If a wormhole is ever created for passage of man or machine by some future civilization, then there will be some major problems to overcome other than the biggie… creating the wormhole in the first place.  I believe this is the first time most, if not all, of these problems have been identified. Although the wormhole supposedly bends/warps time and space, there is a fundamental limit to how fast you can get from here to there, no matter how much time and space are warped.   That limit is c and it applies to the Long Way Around (LWA) path length.  First let me tell you why I think so as it is key to the some of the rest of my list of problems. A common wormhole is created by every photon that exists.   For example, a photon does a space/time warp from Proxima Centauri (the nearest star to our sun) to our eye.  The distance and time the photon experiences is zero.  It does not age during the trip and the total distance is zero at the moment of creation.   However, it still takes 4.22 years to get here, the time light takes to travel the total distance from that star to ours. 
Einstein’s equations say that the photon traveling at c has a total path length of zero and travel time of zero duration.  I believe that applies to every photon.   However, we know that the photon takes 4.22 light years and travels about 28 trillion miles from that star to our eye as we measure or calculate it.   Even though the path the photon sees is zero length and the time it ages is zero time during the trip, it still does not arrive until the entire 4.22 light years elapses. It is my theory that this is because the space/time warp of our photon wormhole connects the emission point on Proxima Centauri and the landing point in our eye only in a virtual sense and only in the first instant of its creation. After that first instance, the photon moves away from the emitter at light speed and the path behind it expands as the photon travels along it at c.   The photon’s path to our eye always remains zero length, but it traverses the path at c, leaving an expanded path behind until the entire path is traversed.   The photon never transfers its energy until the entire path is completed at the maximum velocity of c. My first wormhole problem is that the time required is no less than the long way around travel time at c.   Anything entering the wormhole is imposibly close to the other end (as for our photon example), but cannot actually get there until the path from the entry point expands behind the object moving at c throughout the entire trip, the LWA, just as it does for the photon wormhole. Even if the wormhole spans a time/space warp of 1000 light years, it will still take no less than 1000 years to get from here to there even if the wormhole appears to be of zero length.   The crew of the space ship that manages to get into a wormhole would not age during the trip, a distinct advantage for the crew and the ship’s lifetime.  It would seem to be instantaneous and if it were indeed reversible, then the return trip would be just as fast.  Drive into one end and return immediately and likely not be but a few hours older.   However any companions that were left behind on earth would be dead nearly 2000 years.   All this assumes the problems that follow can be solved. The second problem is that a wormhole cannot be established before it is created at each end.  If  one end is created today and the other is somehow created on a distant star, the wormhole would not be operable until the second wormhole is created, presumably at least the normal space ship travel time from one construction site to the other, even if the construction crew travels at c.    Unless the wormhole acts like a reversible time machine, a much more difficult arrangement, it will take the same amount of time each way through the wormhole with the arrow of time aging both ways and it cannot begin to be used as a shortcut until both ends are finished.    It would take a very patient civilization to plan for such a feat. My third problem involves getting into any wormhole that moves you along at light speed.  The nose of the ship would presumably be accelerated to light speed even before the crew compartment made it into the opening.   The result would be powdered spaceship and crew with photons leading the way, larger particles and atoms dragging behind, but no survivors or anything recognizable. The fourth problem is getting out of the wormhole.  Let’s say somehow you can get your space ship in and up to speed.   Everything going out the other end arrives there at light speed.   
A huge blast of various rays and light burping out the other end, frying anything loitering near the exit.  A great light show, but hardly useful for the crew wanting to get from here to there in a hurry, or their greeting party for that matter.  The wormhole turns out to be a great ray gun! My fifth problem involves reversibility.  We assume that entering the wormhole at either end establishes the direction of travel.  However, it appears to me that it is very likely that the arrow of time exists only in the direction of the creation of the wormholes.  That is, from the first wormhole to the second.  Items entering the first one created would be moving in an arrow of time from the earliest time to the latest.   Items trying to enter the second wormhole to come back would be rejected in a smoldering heap or blast of rays.   If that logic is reversed, the problem still exists:  One way only! ### Arrow of Time Established? I believe this applies to photons and particles in general.  The equations for physics always seem to allow collisions to be reversable and there are no laws that would not allow any set of particle interactions to be reversible.   However, it is my opinion that photons are not reversible for the reasons listed above.  They are zipping through non reversible wormholes.   Energy is transferred from point of creation to some other point where it is absorbed or transferred to another particle and can’t go back though the wormhole as it is a one way street, from first end created to the second end and never the other way around.  That means the arrow of time always moves forward and is never reverseable.  It can be stopped but never reversed. ### SuperLumal Transmission? As a side note, for the reasons listed in the problems listed above, there will be no speedup of communications through a wormhole.  No superlumal transmissions, no advantage over sending it across space the normal way, and very likely, no two way communications.   I hope these revelations do not stop any projects in progress as science will advance no matter what.  8>)  Photon wormholes are the best anyone will be able to do. Oldtimer PS – check out my earlier wormhole article Copyright 2007, James A. Tabb  (may be reproduced with full credits) ## Location or Momentum Posted on August 9, 2007 Bruster Rockit: Space Guy!                           by Tim Rickard A key element of quantum mechanics is Heisenberg’s uncertainty principle, which forbids the simultaneous measurement of the position and momentum of a particle along the same direction, as so aptly illustrated by Tim Rickard above. $E = c \, p \!$  for a photon, where E is the energy, c is the speed of light and p is the momentum.    So the momentum of a photon is equivalent to the energy of the photon divided by the speed of light or p =  E/c  where E is also related to the frequency of the photon by Planck’s Constant E = hf.   h is Planck’s constant and f is the frequency assigned to the photon.   f is also related to the wavelength of the photon by f = c/λ. So E = hc/λ = cp       Therefore    p = h/λ But we know the values for both h (6.26×10^-34 joules sec.) and for λ if we know the color of the photon.  Usually if we are dealing with coherent light (red laser for example) then we know the wavelength λ very accurately.   Thus we know the momentum very accurately. There is another factor in this equation – spin angular momentum of the photon which is independent of its frequency.  
Spin angular momentum is essentially circular polarization for a photon.  Angular momentum is ±h/2π.   It is the helical momentum of the photon along its flight path.   In order to pin down the momentum we also need to know its angular momentum, but it is a constant that is either spinning one way or the other, no half spins no quarter spins just +h/2π or -h/2π. The key for this discussion is that we know the momentum for any photon if we know its wavelength.   p = h/λ and the direction of its spin ±h/2π.   According to Heisenberg’s principle we cannot know the location of the photon if we know its momentum.  Since we do know its momentum we are at a loss to try to pin the location to a particular spot such as through a narrow slot or pinhole. Whenever we try to fit a photon through a slot, we are trying to pin down the location as it goes through the slot.  The narrower we make the slot the closer we are trying to pin it down.   Nature resists by causing havoc with our measurements – fuzzy behavior/weird effects. ## Pair Production Pair production is a possible way for nature to slip one by us – putting a photon through both slots simultaneously, thus confounding our measurements completely.   When a photon hits an obstacle such as the thin barrier between the two slots, it melds through the slots around the barrier as in my earlier posts or possibly down-converts to a lower frequency pair of photons (or up-converts to a higher frequency) through pair production (conserving energy by the frequency change).  These pairs recombine on the far side of the barrier through an up (or down) conversion process causing an effective interference due to jiggling in the conversion process. Our barrier strip knocks the photon silly, and it responds by splitting up, zipping through the two slits independently, then recombining in a way that looks like interference. Virtual Photons Another type of pair production would be through creation of a virtual photon – a pair with one real and one virtual as also mentioned in an earlier post.   The scenario is the same – barrier knocks photon silly, virtual photon forms, passes through other side, then effectively recombines while interfering with the “real” one.   The original and virtual photons could actually be down converted or up converted photon pairs that recombine by up or down conversion causing interference-like behavior. In either case, blocking one slit or the other would prevent melding and also prevent pair production as well as the formation of virtual photons. Pair production through down/up conversion and/or virtual pairs would fit better with particles with mass acting like waves that cause interference when passed through slits.  Even bucky balls and cats could potentially form virtual pairs if moving close to the speed of light.   Well, again, maybe not cats. Oldtimer ## Quantum Weirdness in Entangled Particles Posted on June 9, 2007 # Entangled Particles Selecting which atom we use with careful attention to its excitation states can create entangled particles.  Some atoms emit two photons at a time or very closely together, one in one direction, the other in the opposite direction.  These photons also have a property that one spins or is polarized in one direction and the other always spins or is polarized at right angles to the first.  They come in pairs such that if we conduct an experiment on one to determine its orientation, the other’s orientation becomes known at once.   They are “entangled”. 
## Figure 10 – Entangled Particles All of this was involved in a famous dispute between Einstein and Bohr where Einstein devised a series of thought experiments to prove quantum measurement theory defective and Bohr devised answers. The weirdness, if you want to call it that, is the premise that the act of measurement of one actually defines both of them and so one might be thousands of miles away when you measure the first and the other instantly is converted, regardless of the distance between them, to the complement of the first.   Action-at-a-distance that occurs faster than the speed of light? Some would argue (me for instance) that this is more of a hat trick, not unlike where a machine randomly puts a quarter under one hat or the other, and always a nickel under a second one.  You don’t know in advance which contains which.  Does the discovery that one hat has a quarter actually change the other into a nickel or was it always that way?  Some would say that since it is impossible to know what is under each hat, the discovery of the quarter was determined by the act of measuring (lifting the hat) and the other coin only became a nickel at that instant.   Is this action at a distance? It is easy to say that the measurement of the first particle only uncovers the true nature of the first particle and the deduction of the nature of the second particle is not a case of weirdness at all.   They were that way at the start. However, this is a hotly debated subject and many consider this a real effect and a real problem.  That is, they consider the particles (which are called Einstein‑‑ Podolsky‑Rosen (EPR) pairs) to have a happy-go-lucky existence in which the properties are undetermined until measured.   Measure the polarization of one – and the second instantly takes the other polarization. A useful feature of entangled particles is the notion that you could encrypt data using these particles such that if anyone attempted to intercept and read them somewhere in their path, the act of reading would destroy the message. So there you have it – Weird behavior at a distance, maybe across the universe. Next:  Some Random Thoughts About Relativity ## Quantum Weirdness for Tilted Glass Posted on June 9, 2007 # Photons That Hit Tilted Glass Individual photons directed at tilted glass have an option of being reflected or going through.  They can’t do both because they can’t be divided, or so we are told.  Yet some experiments seem to imply that they sometimes take both paths unless a detector is in place. Tilted glass acts like a sort of beam splitter.  It either goes through or bounces off  (or sometimes absorbed). QED can easily compute the probability dependent on the angle.  Some go through and some reflect and the angle makes the difference.  You can adjust the angle to get a 50-50 chance of reflect or go through. If you use other beam splitters to put the two beams back together you can get an interference pattern, not unlike the one depicted in the double slit experiment.   The beam goes both ways, but one path is longer and so when they come back together, they interfere with each other. However, if you turn the light down so only one photon at a time goes through you still see the same effect, implying the photons go both ways.   If you leave the single-photon-at-a-time beam on long enough and have a good film in an exceptionally dark room, the outcome will be a well defined interference pattern. How can single photons being emitted minutes apart interfere with each other?   
How can a photon that can only go one way or the other interfere with itself?   QED cannot explain this quantum weirdness for single photons.  It can predict the pattern but cannot explain it.  Every indication is that when no detectors are present, the individual photons somehow split. There are some very sophisticated delayed choice experiments involving beam splitters.  There are super fast detectors that can be switched into the photon beam after it goes through the splitter. In other words, spit a photon at the splitter, calculate when it reaches it (about 1 nanosecond per foot of travel) and then switch the detector into the path behind the splitter. The idea is to try to trick the photon into “thinking” there is no detector so it is ok to split, then turning on the detector at the last moment and try to catch the photon doing something it is not supposed to do, breaking laws along the way.  If it arrived at a detector in the reflected path and was also seen by the detector behind the splitter, some law has been broken and the mystery solved – figure out a new law. You do this randomly. If the photon goes both ways, it can be caught by the detectors.   It never does. The physics says that if you try to make the measurement, it will disturb the experiment. And so every test seems to verify that fact. Whenever a detector is present there is no interference pattern. Whenever the detector is absent, the pattern reappears. There is an argument that the photon must go through the glass whole since the photons transmitted through the glass are actually retransmissions within the glass, not the same photon that impacts it.  That argument then says that the other path has to either have had no photon or a whole one also (creation of energy not allowed).   It also says that the photon must retain a whole packet of energy.   Yet single photons seem to interfere with each other.  QED cannot explain why.   I hope to do so. Next:  Entangled Particles ## Polarized Light Weirdness Posted on June 6, 2007 # Polarized Light Weirdness The same weirdness problem arises when we pass light through polarized devices as in the figure at the left.  The devices are calcite crystals in which the light is split into two parts, a horizontal (H) and a vertical (V) channel.  If we try to send individual photons through, they go through only one channel or the other, never through both, and those that come out of the H channel are always horizontally polarized, those that come out of the V channel are always vertically polarized as we might expect. It is possible to orient photons to other angles at the input.  One such arrangement is to adjust them polarized so that they are tilted 45 degrees right or left as illustrated in the same figure.   If we orient the input to 45 degrees, tilted right, we get half of the photons coming out the H channel and half out of the V channel, one at a time, but these are always horizontal and vertical polarized, no longer polarized at 45 degrees right. Now comes the weird part.  See the figure at the left.  If we put a second calcite crystal in line with the first one, but reversed so that the H channel output of the first goes into the H channel of the second and the V channel output of the first goes into the V channel of the second, we expect the output to consist of one photon at a time (and it is), but since the first crystal only outputs horizontal or vertical polarized photons we expect only horizontal or vertical polarized photons out of the second crystal. 
## Quantum Weirdness at work. However, if we test the polarization of the output, we find that the photons coming out are oriented to 45 degrees right, exactly like the input.  Individual photons go in at 45 degrees right at the input, are still individual photons but horizontal or vertical oriented in the middle, but come out oriented 45 degrees right again at the output!  Somehow the two channels combine as if the individual photons go through both channels at the same time, despite rigorous testing that detects only one at a time.  Quantum Weirdness at work. The polarization problem, like the double slit problem, is often called a quantum measurement problem.  An often-quoted theory is that the photon does go both ways, but any attempt to detect/measure one of the paths disturbs the photon such that the measurement results in a change in the path of the photon. My theory reafferms the idea that it does go both ways, but in a manner you would not expect.  We will get to that later.  Next I want to mention  Quantum Weirdness in Glass ## Introduction to Quantum Weirdness Posted on June 6, 2007 # Quantum Weirdness Quantum Electrodynamics (QED) theory has developed to be the theory that defines almost all of the understanding of our physical universe.    It is the most successful theory of our time to describe the way microscopic, and at least to some extent, macroscopic things work. Yet there is experimental evidence that all is not right.  Some weird things happen at the photon and atomic level that have yet to be explained.  QED gives the right answers, but does not clear up the strange behavior – some things are simply left hanging on the marvelous words “Quantum Weirdness”.   A few examples of quantum weirdness include the reflection of light from the surface of thick glass by single photons, dependent on the thickness of the glass; the apparent interference of single photons with themselves through two paths in double slit experiments; the reconstruction of a polarized photon in inverted calcite crystals, among others. This paper introduces some ideas that may explain some of the weirdness. I want to introduce the subject in a way that appeals to the non-scientist public, but also introduce some ideas about what is going on, ideas that may explain some of the weirdness and include a few thoughts about the speed of light and relativity that should stimulate thought on the subject.  Hopefully a few physicists will look in and not be too annoyed with my thoughts.   This will not be a mathematical treatment other than some basic equations from Einstein that most of us are already familiar with.   The later chapters will be more theoretical, but easily understood if I do it justice.   I will include some experimental diagrams and discussion of results. First let’s review a few facts about one of our most commonly known quantum objects.   Light is a quantum object.  When you see the light from a light bulb it is likely you do not realize that the light you see comes in very tiny packets called photons that are arriving in really huge numbers.   Your nearby 100 watt bulb emits around 250 billion billion photons a second!  A photon can travel unchanged completely across our universe from some distant star or across a few feet from a nearby lamp.   Once emitted, it continues until it hits something that stops it.  It lives a go-splat existence. 
When we read this page, we are intercepting some of the billions of photons of light bouncing off the page, those that come off at just the right angle to illuminate rods in the back of our eyes.    Physicists tell us that photons are tiny bits of massless energy that travel at the speed of light.   These bits are indivisible; you can’t split them up into smaller pieces.   In transit they are invisible. Here are some tidbits of information you will need to know later: Every photon of a particular frequency has the same intensity (energy). If you make the light brighter, you are just making more photons, not changing the energy of the individual photons.  If you make the light very dim, only a few photons are being emitted.  Reduce intensity enough and you can adjust the source to emit one photon at a time, even minutes or hours apart. The energy and frequency of blue light is higher than that of red light. The energy of each photon is dependent on the frequency of the light but not dependent on the intensity.   A brighter (more intense) light of a particular color is the result of more photons per second, not higher energy in the photons. Maybe I can illustrate some of the above this way.  Bird shot is a very small pellet load for a shotgun.  It is small and used for hunting birds.    If you drop a single bird shot pellet from a porch onto a pie pan below, it would make a small sound when it hit.  It would have a certain energy when it hit and every pellet of that size dropped from the same height would have the same energy.  The sound each makes at impact would have the same intensity.  If you dropped a hundred at a time, the energy of each pellet would be the same, but the combined impact and sound intensity would be much higher and louder.  Similarly all red photons hit your eyes with the same energy.  If you step up the current to the light source, the number that hits your eyes goes up accordingly, so you see a higher brightness as the number hitting the rods in your eye each moment is increased. Changing from a red photon (light) to a blue one is somewhat like changing from bird shot to buck shot, a much larger pellet.  The blue photon hits harder, as does the buck shot, no matter where it comes from. Regardless of color, if you make a light very dim, you can get it down to one photon at a time, sort of like dropping one pellet at a time.   Getting a photon down to one at a time is a bit tricky, much harder than getting a single pellet to pour out of a barrel of pellets, but not impossible. Photons, unlike shotgun pellets have no mass, but they still have energy.  This energy is transmitted from whatever emitted it to whatever it finally hits.   Thus the photon is an energy carrier in a hurry, always moving at the speed of light. Next I’ll tell you a little about an easily duplicated experiment using double slits that can be used to prove that light is a wave but also can be used to prove that light is a particle.  It is a good illustration of quantum weirdness. → 2 Comments Posted in about, blue, energy, frequency, light, photon, physics, Quantum mechanics, Quantum Physics, Quantum Weirdness, red, Relativity, science, speed of light, theory, Uncategorized Tagged energy, frequency, photon, physics, Quantum mechanics, quantum weird, Relativity, science
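A quick back-of-the-envelope check of the numbers quoted in these posts (my addition, not part of the blog): the $d=\lambda/\pi$ "photon diameter" formula is the blog's own claim, and $h$ and $c$ are the standard constants. The script below only verifies the arithmetic, not the physical interpretation.

```python
import math

h = 6.626e-34          # Planck's constant, J*s
c = 2.998e8            # speed of light, m/s

# Claimed "maximum photon diameter" d = wavelength / pi from the earlier post
print(450e-9 / math.pi)            # blue 450 nm photon: ~1.43e-7 m, i.e. the quoted 143 nm
print((c / 105e6) / math.pi)       # 105 MHz radio photon: ~0.91 m, matching "about 0.9 meters"

# Photon energy E = h*f = h*c/wavelength: blue photons carry more energy than red
E_blue, E_red = h * c / 450e-9, h * c / 650e-9
print(E_blue > E_red)              # True

# "Your nearby 100 watt bulb emits around 250 billion billion photons a second"
E_green = h * c / 550e-9           # treat the output as ~550 nm photons
print(100 / E_green)               # ~2.8e20 photons per second, the right order of magnitude
                                   # (a real bulb radiates mostly infrared, so this is rough)
```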
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9226370453834534, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/23071/find-the-net-force-the-southern-hemisphere-of-a-uniformly-charged-sphere-exerts?answertab=oldest
# “Find the net force the southern hemisphere of a uniformly charged sphere exerts on the northern hemisphere” This is Griffiths, Introduction to Electrodynamics, 2.43, if you have the book. The problem states Find the net force that the southern hemisphere of a uniformly charged sphere exerts on the northern hemisphere. Express your answer in terms of the radius $R$ and the total charge $Q$. Note: I will say its uniform charge is $\rho$. My attempt at a solution: My idea is to find the field generated by the southern hemisphere in the northern hemisphere, and use the field to calculate the force, since the field is force per unit charge. To do this I start by introducing a Gaussian shell with radius $r < R$ centered at the same spot as our sphere. Then in this sphere, $$\int E\cdot\mathrm{d}a = \frac{1}{\epsilon_0}Q_{enc}$$ Now what is $Q_{enc}$? I feel like $Q_{enc} = \frac{2}{3}\pi r^3\rho$ , since we're just counting the charge from the lower half of the sphere (the part thats in the southern hemisphere of our original sphere). (Perhaps here is my error, should I count the charge from the entire sphere?, if so why?) Using this we get $$\left|E\right|4\pi r^2 = \frac{2\pi r^3\rho}{3},$$ so $$E = \frac{r\rho}{6\epsilon_0}.$$ Using these I calculate the force per unit volume as $\rho E$ or $$\frac{\rho^2 r}{6\epsilon_0}$$ Then by symmetry, we know that any net force exerted on the top shell by the bottom must be in the $\hat{z}$ direction, so we get $$F = \frac{\rho^2}{6\epsilon_0} \int^{2\pi}_0\int^{\frac{\pi}{2}}_0\int^R_0 r^3\sin(\theta)\cos(\theta) \mathrm{d}r\mathrm{d}\theta\mathrm{d}\phi$$ integrating we get $F = \frac{1}{4}\frac{R^4\rho^2\pi}{6\epsilon_0}$. Now Griffiths requests us to put this in terms of the total charge, and to do so we write $\rho^2 = \frac{9Q^2}{16\pi^2R^6}$ Plugging this back into $F$ we get $$F = (\frac{1}{8\pi\epsilon_0})(\frac{3Q^2}{16R^2})$$ Now the problem is that this is off by a factor of $2$ ... I tried looking back through and the only place I see where I could somehow gain a factor of $2$ is the spot I mentioned in the solution, where I could include the entire charge, however, I can't see why I should include the entire charge, so if that's the reason I would be very grateful if someone could explain to me why I need to include the entire charge. If that is not the reason, and perhaps this attempt at a solution is just complete hogwash, I would appreciate if you could tell me how I should go about solving this problem instead. (but you don't need to completely solve it out for me.) - Your choice for Qenc would be correct if you consider only half the shell (E*da -> 2*pi*r^2) – user12642 Sep 30 '12 at 17:46 ## 3 Answers The factor of two is coming from the place you identified. Think about throwing out that factor of two, so you're considering only the bottom hemisphere. When you make your Gaussian shell and have it enclose charge in the bottom hemisphere only, the charge is no longer uniformly distributed inside your Gaussian shell. Thus, the electric field created by the charge you're considering is not the same at all parts of the shell, so you can't find the magnitude of the electric field in the way you described. That only works when the charge distribution has some sort of symmetry you're exploiting. You'd have to do a difficult integral instead. However, if you don't throw out that factor of two, you're simply finding the electric field inside the shell. Suppose you carry out the rest of your calculation. 
Then you've found the net force in the z-direction on the north half of the sphere. However, the north half cannot exert any net force on itself, so this entire net force must be the same as the net force from the southern hemisphere.

So you're including all the charge when you make your Gaussian surface because you need to find the true electric field in the shell. The true electric field, when integrated, gives you the net force, which by basic mechanics arguments must be due to the southern hemisphere.

-

This is essentially what I was thinking after proposing the question, and I'm glad to have the thoughts in my head confirmed by someone who knows what they're doing! Thank you very much – Deven Ware Mar 31 '12 at 4:42

If you are off by a factor of two, it's probably because the volume of a sphere is $\frac{4}{3}\pi r^3$ and not $\frac{2}{3}\pi r^3$

-

Perhaps I did not understand the question correctly, but it seems to me that you cannot use a Gaussian shell in this case, because the intensity of the field $E$ would be different at different points of the shell. If you want the following expression to hold, $$\int E\cdot da = E \int da$$ then $E$ must be equal to the same value all over the Gaussian shell. I believe this might be the source of your mistake.

-

Hi Marc, welcome. I don't precisely know what the question is about, but while reviewing your answer, I think I can see that it is perhaps already stated by the currently accepted answer. Is that correct? (If that is the case, perhaps you should consider deleting your answer. If it is not correct, then ignore this comment.) – Gugg May 13 at 14:16
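Not part of the thread: a symbolic check (using sympy) of the accepted answer's bookkeeping. Taking the field of the whole ball, $E = \rho r/(3\epsilon_0)$, inside the Gaussian surface and integrating $\rho E_z$ over the northern hemisphere reproduces Griffiths' answer $\frac{1}{4\pi\epsilon_0}\frac{3Q^2}{16R^2}$, i.e. exactly twice the value the question obtained with only the half-sphere's charge enclosed.

```python
import sympy as sp

r, theta, phi = sp.symbols('r theta phi', positive=True)
R, rho, eps0, Q = sp.symbols('R rho epsilon_0 Q', positive=True)

E = rho * r / (3 * eps0)                      # field of the FULL ball at radius r < R
f_z = rho * E * sp.cos(theta)                 # z-component of the force per unit volume

F = sp.integrate(f_z * r**2 * sp.sin(theta),  # spherical volume element
                 (r, 0, R), (theta, 0, sp.pi / 2), (phi, 0, 2 * sp.pi))

F_Q = sp.simplify(F.subs(rho, 3 * Q / (4 * sp.pi * R**3)))
print(F_Q)   # expect 3*Q**2/(64*pi*R**2*epsilon_0), i.e. (1/(4*pi*eps0)) * 3Q^2/(16R^2)
```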
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 7, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9638559222221375, "perplexity_flag": "head"}
http://mathhelpforum.com/pre-calculus/121731-maximum-distance-normal-center.html
Thread:

1. Maximum distance of normal from the center

A normal is drawn at a variable point 'P' of the ellipse

$\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$

Find the maximum distance of the normal from the centre of the ellipse.

2. 1) The question makes no sense. If a = b, all the normals, extended in both directions, CONTAIN the center.

2) Okay, I guess it is sufficiently clear that a and b are not equal. I still don't like it.

3) Can you build the normals? You'll need a few tools.

1) Find the general derivative at all points on the ellipse.
2) Write the general equation of the tangent line at all points on the ellipse.
3) If you can do #2, you really should be able to write the general equation of all normals.
4) Calculate the distance of a line from the origin.

Where shall we start?

Note: The derivative is WAY easier using an implicit method, rather than first solving for y.

3. Originally Posted by wolfyparadise
A normal is drawn at a variable point 'P' of the ellipse $\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$. Find the maximum distance of the normal from the centre of the ellipse.

That ellipse can also be written as $b^2x^2+ a^2y^2= a^2b^2$. Then $2b^2x+ 2a^2y\frac{dy}{dx}= 0$, so $\frac{dy}{dx}= -\frac{b^2x}{a^2y}$. The normal to the ellipse at any point $(x_0,y_0)$ therefore has slope $\frac{a^2y_0}{b^2x_0}$ and can be written $y= \frac{a^2y_0}{b^2x_0}(x- x_0)+ y_0$. Any line perpendicular to that normal, passing through the origin, is $y= -\frac{b^2x_0}{a^2y_0}x$. Since the perpendicular to a line, through a point, is the shortest distance to that line, find the point where those two lines intersect and find the distance from that point to the origin.

4. Or, of course, use the standard distance formula (point to line) and just write it down. At (0,0), it simplifies quite a bit:

$\frac{|(a^{2}-b^{2})\,x_{0}y_{0}|}{\sqrt{(a^{2}y_{0})^{2}+(b^{2}x_{0})^{2}}}$
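Carrying the point-to-line distance to the actual maximum (this part is my addition and not in the original posts): parametrise the point as $(x_0,y_0)=(a\cos\theta,\, b\sin\theta)$ with $a>b$ and $0<\theta<\pi/2$, and substitute into $d = \frac{(a^2-b^2)|x_0y_0|}{\sqrt{(a^2y_0)^2+(b^2x_0)^2}}$:

```latex
\[
  d(\theta)
  = \frac{(a^2-b^2)\,ab\,\sin\theta\cos\theta}{ab\sqrt{a^2\sin^2\theta+b^2\cos^2\theta}}
  = \frac{a^2-b^2}{\sqrt{a^2\sec^2\theta+b^2\csc^2\theta}},
  \qquad
  a^2\sec^2\theta+b^2\csc^2\theta
  = a^2+b^2+a^2\tan^2\theta+b^2\cot^2\theta
  \;\ge\; (a+b)^2
\]
```

by the AM-GM inequality, with equality when $\tan^2\theta = b/a$. Hence the maximum distance of the normal from the centre is $d_{\max} = \dfrac{a^2-b^2}{a+b} = a-b$.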
http://mathoverflow.net/questions/117401?sort=oldest
## Is every functor inducing a homotopy equivalence a composition of adjoint functors?

It was asked here whether every functor is a composition of adjoint functors. The answer is no, because all adjoint functors induce homotopy equivalences on the nerve, and we can construct functors that do not induce homotopy equivalences. My question is the following: can all functors inducing homotopy equivalences on the nerve be expressed as compositions of adjoint functors?

- One thing to note: in order to make this question precise, one needs a notion of homotopy equivalence when the category is not small. – Lunasaurus Rex Dec 28 at 12:05 Side note, here is the meta: meta.mathoverflow.net/discussion/1502/… – David Corwin Dec 29 at 22:24

## 1 Answer

The answer is no. Let $C$ be a category such that the unique map from $C$ to the terminal category is a composition of $n$ adjoints. Then $C$ has an object $x_0$ such that every other object of $C$ can be connected to $x_0$ by a zigzag of length at most $n$; this is easy to prove by induction on $n$. In particular, let $R$ be the "infinite zigzag", the unique poset such that $|BR|$ is homeomorphic to $\mathbb{R}$. Then $BR$ is contractible, but the unique map from $R$ to the terminal category cannot be a composition of adjoints. Note, however, that this map is a transfinite composition of adjoints (of length $\omega$). It seems plausible to me that any functor between (small) categories which is an equivalence on nerves could be a transfinite composition of adjoints.

- 7 A totally different way to get a counterexample: note that any composition of adjoints must have a functor going the other way which provides a homotopy inverse on nerves (namely, the composition of all the adjoint functors going the other way). It's easy to construct examples of finite posets $X$ and $Y$ such that there is a map $X\to Y$ inducing an equivalence but no map $Y\to X$ inducing an equivalence. – Eric Wofsey Dec 28 at 13:45
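For readers who want the premise spelled out (a standard argument, not given in the thread): an adjunction $F \dashv G$ induces a homotopy equivalence on nerves because a natural transformation between functors $C \to D$ is the same thing as a functor $C \times [1] \to D$, and the nerve preserves products, so it yields a simplicial homotopy $N(C)\times\Delta^1 \to N(D)$. Applying this to the unit $\mathrm{id}_C \Rightarrow GF$ and the counit $FG \Rightarrow \mathrm{id}_D$ exhibits $N(F)$ and $N(G)$ as mutually inverse homotopy equivalences.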
http://math.stackexchange.com/questions/18902/intuitive-explanation-of-nakayamas-lemma?answertab=votes
# Intuitive explanation of Nakayama's Lemma

Nakayama's lemma states that given a finitely generated $A$-module $M$, and $J(A)$ the Jacobson radical of $A$, with $I\subseteq J(A)$ some ideal, then if $IM=M$, we have $M=0$. I've read the proof, and while being relatively simple, it doesn't give much insight on why this lemma should be true. For example, is there some way to see how the fact that $J(A)$ is the intersection of all maximal ideals is related to the result? Any intuition on the conditions and the result would be of great help.

- Have you read the Wikipedia article? – Qiaochu Yuan Jan 25 '11 at 14:15 @Qiaochu Yuan - I've looked at it, but there doesn't seem to be any actual intuition given there (at least none that I can see). – Pandora Jan 25 '11 at 14:27

## 4 Answers

Suppose your module is of finite length. Then you can consider on it the so-called radical filtration, which organises the module into an onion-like thing, with elements of the maximal ideal pushing elements of the module farther in from their starting layer to one right below and, moreover, each layer obtained from the one above it in this way. Now, the condition $\mathfrak m M=M$ tells you that the outermost layer of the module is actually empty: obviously, then, there is not much in the whole thing and $M=0$. We have just discovered Nakayama's lemma! If your module is arbitrary, exactly the same happens.

- I am talking about the local case, with $I$ equal to the maximal ideal. The general case is just a natural generalization. – Mariano Suárez-Alvarez♦ Jan 25 '11 at 14:51 Reading a bit about path algebras and their modules, and especially their interpretation as representations of the underlying quiver, IMO, is a great way to develop an intuition about this, for it allows you to draw modules and algebras in a very explicit way, making statements such as Nakayama's pretty much self-evident. – Mariano Suárez-Alvarez♦ Jan 25 '11 at 15:27 Thank you for your answer! Could you refer me to some source where I could read more about this radical filtration? I can't find anything relevant and it seems like such a background could really help me see things more clearly. – Pandora Jan 26 '11 at 18:53 1 One place where you'll find it is in the Skowronski+Assem+Simson book about representation theory of finite dimensional algebras, vol. 1. – Mariano Suárez-Alvarez♦ Jan 26 '11 at 21:05

Here's something that might or might not make sense to you. You know that every ideal $I$ of a commutative ring $R$ gives rise to an $R$-module $R/I$; these are precisely the $R$-modules on one generator. The $R$-modules of the form $R/m$ where $m$ is maximal are special among these. In this language, the elements of the Jacobson radical are precisely the ones that act trivially on all $R$-modules of the form $R/m$. It follows, for example, that if $I$ is in the Jacobson radical, then we cannot have $IM = M$ for any module of the form $R/m^k$. Nakayama's lemma asserts that a similar statement is true for all finitely generated $R$-modules. This should be reasonable if you are aware of, for example, the classification of finitely generated $R$-modules when $R$ is a PID. The Wikipedia article describes a geometric interpretation of this, but I'm not familiar enough with it to say more.

- First, welcome back. This may be a really naive question: The version I have in Reid "Undergrad. Comm. Alg" is stated over a local ring $(A,m)$. I was wondering if this is correct: since $m$ is maximal, it has no units.
Thus for $M$ to equal $mM$, I would think $M$ has to be $0$. If you think this is worthy of being a question, I will be happy to post it where you can answer it. Thanks, regards. – Andrew Dec 21 '12 at 19:10 @Andrew: that is a special case of other statements of Nakayama's lemma. See the Wikipedia article. It would be fine to post a separate question if you are still confused. – Qiaochu Yuan Dec 21 '12 at 22:07

This is probably not the answer you're looking for, but here's another proof: Suppose $M \neq 0$ and let $u_1,\ldots,u_n$ be a minimal set of generators for $M$. Then $u_n \in IM$, so we may write $u_n=a_1u_1+\ldots+a_nu_n$ with $a_i \in I$. Hence $(1-a_n)u_n=a_1u_1+\ldots+a_{n-1}u_{n-1}$. Since $I$ is contained in the Jacobson radical, it follows that $1-a_n$ is a unit. Hence $u_n$ belongs to the submodule generated by $u_1,\ldots,u_{n-1}$, a contradiction.

- this question was just referenced in MathOverflow.Net - you might want to check out the answers! http://mathoverflow.net/questions/61446/how-to-memorise-understand-nakayamas-lemma-and-its-corollaries
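A small example may make the finite-generation hypothesis more vivid (my own illustration, not from the thread). Take $A = \mathbb{Z}_{(p)}$, the integers localized at a prime $p$, so that $J(A) = p\mathbb{Z}_{(p)}$, and let $M = \mathbb{Q}$. Then $pM = M$ even though $M \neq 0$; Nakayama's lemma is not contradicted because $\mathbb{Q}$ is not finitely generated as a $\mathbb{Z}_{(p)}$-module. In the onion picture above, $M/J(A)M = \mathbb{Q}/p\mathbb{Q} = 0$: the module has no outermost layer from which to peel.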
http://mathoverflow.net/questions/70710?sort=newest
## Continuous Measurement in Quantum Mechanics

Let $\mathcal{P}(S^{\infty})$ denote the set of probability measures on the unit sphere $S^{\infty} \subset \mathcal{H}$ in the Hilbert space of states of a quantum mechanical system. Measurement of an observable $\Omega$ corresponds to orthogonal projection, sending $\delta_{|\psi \rangle}$ to a particular measure supported on the eigenvectors of $\Omega$, thus inducing a map $T_{\Omega}: \mathcal{P}(S^{\infty}) \rightarrow \mathcal{P}(S^{\infty})$. If we think of $T_{\Omega}$ as the transition matrix of a Markov chain, we can say that a continuum $\lbrace \Omega_t \rbrace_{t \in \mathbb{R}_{\geq 0}}$ of observables induces a stochastic process on $S^{\infty}$.

• If we let $\Omega_t = H(t)$, the time-dependent Hamiltonian of our system, is the associated stochastic process the deterministic one described by the Schrödinger equation?
• What does this construction have to do with the time-energy uncertainty relation?

- This question is not well phrased--- what you can ask is "what is the effect of measuring an operator again and again, when the operator varies continuously in time". – Ron Maimon Aug 1 2011 at 18:39

## 2 Answers

The question is muddled regarding quantum mechanics: the projection operator T takes measures to measures, but it does not define a Markov chain, or if you like, the resulting Markov chain is not an interesting one, because it is deterministic most of the time. The reason is that T is idempotent: a second application of T does nothing. Even if omega varies continuously, applying T to a continuously varying omega is deterministic in the limit of continuous measurement (see below). The specific questions are not the right questions, but here is an answer:

1. There is no deterministic stochastic process associated with the Schrödinger equation, so this question is meaningless. There is a stochastic process associated with the analytic continuation of the Schrödinger equation to imaginary times, but this has nothing to do with measurement (or anything else in the physics in real time).
2. This construction has nothing to do with the time-energy uncertainty principle.

Stripping away the pointless formalism, what you are asking in the question is: "What happens if you measure an observable again and again, so that the measurements become very dense?" What happens is called the "quantum Zeno effect". If you keep measuring a state that tries to change, you prevent the state from changing; instead you constrain it to stay in the same eigenvector of the observable you measure.

But here you have a continuous family of observables: you are asking what happens if you measure an observable which varies with time. The result is that you follow the eigenvectors of the observable in a deterministic way. So the first operation will project you to one of the eigenvectors at random, then the remaining continuum of measurements will make the state change continuously to follow the changing eigenvectors of the operator. The reason there is no stochasticity is that if you measure after a time "epsilon", the probability of being found in a different eigenvector goes like "epsilon-squared", so that in the continuum limit the process becomes completely deterministic, with 100% probability of following the corresponding eigenvector of H(t).
The only subtlety is when the eigenvectors collide (have the same eigenvalue at some time t), in which case a continuous measurement will have to follow the eigenvector through the collision. So if two eigenvectors of H(t) coincide at time t, and afterwards come out in different directions, then there will be some stochasticity associated with the collision. The original direction of the eigenvector (assuming the generic case that only two eigenvectors collided) will have to be expanded in the directions of the two new vectors, and the square of the expansion parameters will tell you the probability of going off in different directions. You might be able to make a Markov chain by colliding again and again, but this is not in the spirit of the original question.

- Thank you for your suggested steps towards a more subtle construction, and for pointing me to the "Quantum Zeno effect". I'm reluctant to accept your answer to my second question above: quoting from the Wikipedia entry for the Quantum Zeno effect, "It is still an open question how closely one can approach the limit of an infinite number of interrogations due to the Heisenberg uncertainty involved in shorter measurement times. In 2006, Streed et al. at MIT observed the dependence of the Zeno effect on measurement pulse characteristics.[29]" – Alexander Moll Aug 2 2011 at 4:55

A measurement can be a failure of interaction. If you have an atom in its first excited state, it will decay to the ground state normally. You can apply an arbitrarily strong laser whose frequency is tuned to the energy difference between the ground state and the second excited state, and this measures when the atom transitioned to the ground state. This will prevent the atom from ever making the transition to the ground state, and there is no theoretical limit to the density of measurements from the uncertainty principle, because the different measurements are free photons. – Ron Maimon Aug 2 2011 at 14:06

It took me a little thinking to understand why people are saying that the quantum Zeno effect is limited by time-energy uncertainty, because it isn't at all. The wrong idea is that a measurement of energy to accuracy "delta" will have to take 1/delta seconds, so that the density of measurements can never exceed 1/delta. The first clause is true--- a measurement will have to take a long time, but the measurements can overlap, so that the density is unlimited. – Ron Maimon Aug 2 2011 at 14:48

Also, keep in mind that measurements can be pushed up to the level of psychology: the measurement can be thought to happen only at the last step, when you look at the measuring device. Then one can ask what physical reason allows a strong laser tuned to 0-2 transitions to prevent 1-0 transitions. The reason is that the transition from 1-0 is accompanied by high-frequency amplitudes for transitions to 2, which are entangled with incoming photon phases and therefore incoherent, so the down-transition amplitudes get scrambled. The ground state in the presence of the photons is not stationary. – Ron Maimon Aug 2 2011 at 15:08

You have two questions. The first one is whether the induced stochastic process is described by the Schrödinger equation. The answer is no. To see this, notice that the stochastic process you describe takes a pure state $\psi$ to a mixed state $\rho$.
The Schrödinger equation preserves purity. What does this have to do with the time-energy uncertainty relation? You need to go to the Heisenberg picture to see whether there is a connection. Notice that $\Omega_{t_0}$, say, is then going to evolve with time. So measuring $\Omega_{t_0}$ at time $t_1$ is quite different from measuring it at time $t_2$. However, they remain constant if $\Omega_t$ is independent of $t$, which is just a statement of the conservation of energy. However, even in such a case the time-energy uncertainty principle still applies (you can have uncertainty in the energy of a state even if the energy operator is constant). For this reason I don't see much of a connection here, though maybe I am wrong. I'm afraid that's all I could think of. Hopefully it will be at least of some help :).

- Thanks, Sebastian - this certainly settles my first question, and you're right to point out the obvious difficulties with this construction in the Heisenberg (or interaction) picture. I do think that the "Quantum Zeno effect" (see Ron's answer below) is what I was after, and I'd need to read a lot of what's out there before trying to get back on the horse. Thanks again! – Alexander Moll Aug 2 2011 at 5:08

This answer is not accurate--- the stochastic process (on every step after the first, ignoring collisions) is entirely deterministic, and takes pure states to pure states. – Ron Maimon Aug 3 2011 at 19:10

I didn't see your comment earlier, Ron. This is not usually deterministic. If $H_t$ is not constant then the spectrum may well change. Most importantly, if the eigenvectors change then even at a later step a pure state which was before an eigenvector of the Hamiltonian will be mapped to a mixed state due to it now not being an eigenvector anymore. That is, if I am understanding the question correctly, the distribution gets mapped to the mixed state corresponding to the distribution of outcomes rather than a particular outcome. – Sebastian Meznaric May 4 2012 at 16:18

@SebastianMeznaric: This is usually deterministic--- the "mixed state" stops being mixed as the time-steps become small, as the probability of deviating from the eigenvector path goes as "epsilon squared", while the number of timesteps only goes as 1/epsilon, so that in the continuous measurement limit, you have deterministic evolution. This only fails when the eigenvalues collide, which is measure zero, but at this point, you get a definite stochastic splitting depending only on the eigenvectors of the Hamiltonian just after and just before the collision. – Ron Maimon Jan 5 at 16:16
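To see the epsilon-squared argument from the first answer in action, here is a small numerical sketch (my own illustration, not from the thread): a two-level system with constant Hamiltonian $H = \tfrac{\Omega}{2}\sigma_x$ is projectively measured $N$ times at equal intervals over a total time $T$; the probability that every measurement finds it still in the initial state is $[\cos^2(\Omega T/2N)]^N$, which tends to $1$ as $N \to \infty$.

```python
import numpy as np

def survival_probability(N, T=1.0, omega=np.pi):
    """Probability that all N equally spaced projective measurements find the
    system still in |0>, for H = (omega/2) * sigma_x acting over total time T.

    Between measurements the amplitude to remain in |0> is cos(omega*dt/2),
    so each step keeps the state with probability cos^2(omega*dt/2).
    """
    dt = T / N
    return np.cos(omega * dt / 2) ** (2 * N)

for N in [1, 10, 100, 1000, 10000]:
    print(N, survival_probability(N))
# The per-step escape probability scales like dt^2 while the number of steps
# only grows like 1/dt, so the survival probability tends to 1 as the
# measurements become dense -- the quantum Zeno effect.
```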
http://mathhelpforum.com/trigonometry/208259-trigonometric-hyperbolic-identities.html
# Thread: 1. ## Trigonometric and Hyperbolic identities

Question is as follows:- Hence solve for t for values in the range 0 ≤ t ≤ 2 π rad: 5.5 Cos t + 7.8 Sin t = 4.5. Any help would be greatly appreciated, thanks

2. ## Re: Trigonometric and Hyperbolic identities

Originally Posted by boza100: Question is as follows:- Hence solve for t for values in the range 0 ≤ t ≤ 2 π rad: 5.5 Cos t + 7.8 Sin t = 4.5. Any help would be greatly appreciated, thanks

Well it's a bit ugly but one approach would be to recall that $cos(t) = \sqrt{1 - sin^2(t)}$ and plug that in... $5.5~\sqrt{1 - sin^2(t)} + 7.8~sin(t) = 4.5$ Isolate the radical and square it. This will give you a quadratic in sin(t) which you can solve using the quadratic formula. Check for extra, but not valid, solutions. Also since in reality $cos(t) = \pm \sqrt{1 - sin^2(t)}$ you should also work through the negative solution as well. And again check your solutions with the original equation. -Dan

3. ## Re: Trigonometric and Hyperbolic identities

Another approach would be to use a linear combination identity to write the equation as: $\sqrt{5.5^2+7.8^2}\sin\left(t+\tan^{-1}\left(\frac{5.5}{7.8} \right) \right)=4.5$ $\sin\left(t+\tan^{-1}\left(\frac{55}{78} \right) \right)=\frac{45}{\sqrt{9109}}$ Now, after finding the quadrant IV solution, use the identity $\sin(\pi-x)=\sin(x)$ to get the quadrant II solution.
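As a quick numerical cross-check of both approaches (my own sketch; the printed roots are approximate), one can scan $[0, 2\pi]$ for sign changes of $5.5\cos t + 7.8\sin t - 4.5$ and refine them, then compare with the closed form from the linear-combination identity:

```python
import numpy as np
from scipy.optimize import brentq

f = lambda t: 5.5 * np.cos(t) + 7.8 * np.sin(t) - 4.5

# Bracket roots by scanning [0, 2*pi] for sign changes, then refine with brentq.
ts = np.linspace(0, 2 * np.pi, 2001)
roots = [brentq(f, a, b) for a, b in zip(ts, ts[1:]) if f(a) * f(b) < 0]
print(roots)  # roughly t = 2.04 and t = 6.16

# Closed form from R*sin(t + phi) = 4.5 with R = sqrt(5.5^2 + 7.8^2), tan(phi) = 5.5/7.8
R, phi = np.hypot(5.5, 7.8), np.arctan2(5.5, 7.8)
print((np.arcsin(4.5 / R) - phi) % (2 * np.pi),           # ~ 6.16 (quadrant IV)
      (np.pi - np.arcsin(4.5 / R) - phi) % (2 * np.pi))   # ~ 2.04 (quadrant II)
```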
http://math.stackexchange.com/questions/94012/sum-of-reciprocal-prime-numbers?answertab=active
# Sum of reciprocal prime numbers How can the following equation be proven? $$\forall n > 2 : \sum_{p \le n}{\frac1{p}} = C + \ln\ln n + O\left(\frac1{\ln n}\right),$$ where $p$ is a prime number. It's not homework; I just don't understand from where should I start. - ## 2 Answers Apostol gives a proof of this in his book. Here's a more-or-less condensed version: Letting $[p]$ be an Iverson bracket ($1$ if condition $p$ is true, and $0$ if $p$ is false), we have $\sum\limits_{p \le n}\frac1{p}=\sum\limits_{k \le n}\frac{[k\in\mathbb P]}{k}$ Introduce the function $\ell(n)=\sum\limits_{p \le n}\frac{\log\,p}{p}=\sum\limits_{k \le n}\frac{[k\in\mathbb P]\log\,k}{k}$. Making use of (a special case of) Abel's identity, $$\sum_{y < n \le x}\frac{a(n)}{\log\,n}=\frac{A(x)}{\log\,x}-\frac{A(y)}{\log\,y}+\int_y^x \frac{A(t)}{t(\log\,t)^2}\mathrm dt$$ where for this case $a(n)=\frac{[n\in\mathbb P]\log\,n}{n}$ and $A(x)=\sum\limits_{k \le x}a(k)$. Taking $y=2$, we have $$\sum_{p \le n}\frac1{p}=\frac{\ell(n)}{\log\,n}+\int_2^n \frac{\ell(t)}{t(\log\,t)^2}\mathrm dt$$ Since $\ell(n)=\log\,n+O(1)$, we then have $$\begin{align*}\sum_{p \le n}\frac1{p}&=1+O\left(\frac1{\log\,n}\right)+\int_2^n \frac1{t\log\,t}\mathrm dt+\int_2^n \frac{\mathfrak{R}(t)}{t(\log\,t)^2}\mathrm dt\\&=1+O\left(\frac1{\log\,n}\right)+\log\log\,n-\log\log\,2+\int_2^n \frac{\mathfrak{R}(t)}{t(\log\,t)^2}\mathrm dt\end{align*}$$ where $\mathfrak{R}(t)=O(1)$. Since $$\int_2^n \frac{\mathfrak{R}(t)}{t(\log\,t)^2}\mathrm dt=\int_2^\infty \frac{\mathfrak{R}(t)}{t(\log\,t)^2}\mathrm dt+O\left(\frac1{\log\,n}\right)$$ making the appropriate replacements gives $$\sum_{p \le n}\frac1{p}=\color{blue}{1-\log\log\,2+\int_2^\infty \frac{\mathfrak{R}(t)}{t(\log\,t)^2}\mathrm dt}+\log\log\,n+O\left(\frac1{\log\,n}\right)$$ where the blue part is the constant term $C$ in the OP. - Note that this proof derives the OP's result from an asymptotic formula for $\ell(n)$, but that asymptotic formula itself is nontrivial. – Greg Martin Dec 26 '11 at 6:20 He really should look at Apostol anyway... :) – J. M. Dec 26 '11 at 6:54
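The asymptotic is easy to test numerically; the constant $C$ above is the Meissel–Mertens constant, approximately $0.2615$. Here is a rough sketch (the sieve, bounds, and printed comparison are my own choices):

```python
import numpy as np

def prime_reciprocal_sum(n):
    """Sum of 1/p over primes p <= n, via a simple Sieve of Eratosthenes."""
    sieve = np.ones(n + 1, dtype=bool)
    sieve[:2] = False
    for k in range(2, int(n ** 0.5) + 1):
        if sieve[k]:
            sieve[k * k :: k] = False
    primes = np.nonzero(sieve)[0]
    return np.sum(1.0 / primes)

C = 0.2614972128  # Meissel-Mertens constant
for n in [10 ** 3, 10 ** 5, 10 ** 7]:
    s = prime_reciprocal_sum(n)
    print(n, s, s - (np.log(np.log(n)) + C))  # the gap should shrink like 1/log n
```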
http://www.physicsforums.com/showthread.php?t=119052
Physics Forums

## Is it intuitive that the Energy levels...

For a 1D infinite well, the energy levels of an electron trapped inside are dependent on the length of the well. The longer the length, the lower its energy will be for each state. I am aware how the formula is derived. The main form of the formula is a solution of Schrodinger's equation, which books say is not derived from anything more fundamental. But is the fact that the energy levels are dependent on L intuitive? If so why? Could you say that a longer well would mean that the energy of the electron is distributed more evenly for each position x in the well? Hence the energy of the electron is lower at each x in the well for a particular state in a longer well?

Recognitions: Homework Help Science Advisor
Well you know that the physicists always say that to probe smaller distances requires higher energy particles. Carl

Recognitions: Gold Member Science Advisor Staff Emeritus
Depends on your intuition. If you are stuck with a classical intuition, it will not help you any. Here's one kind of intuition: the smaller the box, the greater the momentum uncertainty...

## Is it intuitive that the Energy levels...

yeah right. only if your "box" happens to be an atom - in which case - what are you putting in it again? if not, any basic QM text will tell you that for the same potential you could choose position or momentum eigenstates (or eigenstates of any other operator) which would have, respectively, 0 uncertainty in position and momentum. (moral: math works even if intuition runs awry)

Quote by yeahright: yeah right. only if your "box" happens to be an atom - in which case - what are you putting in it again? if not, any basic QM text will tell you that for the same potential you could choose position or momentum eigenstates (or eigenstates of any other operator) which would have, respectively, 0 uncertainty in position and momentum. (moral: math works even if intuition runs awry)

What?!? Can you explain this a little more...

Recognitions: Homework Help Science Advisor
Quote by pivoxa15: For a 1D infinite well, the energy levels of an electron trapped inside are dependent on the length of the well. The longer the length, the lower its energy will be for each state. I am aware how the formula is derived. The main form of the formula is a solution of Schrodinger's equation, which books say is not derived from anything more fundamental. But is the fact that the energy levels are dependent on L intuitive? If so why? Could you say that a longer well would mean that the energy of the electron is distributed more evenly for each position x in the well? Hence the energy of the electron is lower at each x in the well for a particular state in a longer well?

The kinetic energy is a measure of the curvature of the wavefunction, right? (since $p^2/2m = - \frac{\hbar^2}{2m} {\partial^2 \over \partial x^2}$). If you narrow the well, the wavefunction has to "bend" more (recall that it must be zero at the two endpoints), which explains why the energy is larger.

So the reason why E is dependent on L is because of the uncertainty principle. When I said intuitive, I meant classically intuitive. Obviously, since the uncertainty principle is needed, the answer is that it is not intuitive. Classically, wouldn't it be the case that the energy of an electron is fixed from the start, no matter what the size of the well it is in?
Hence intuitively E should not depend on L. E=E until the electron is given potential or kinetic energy via a force. Looks like my explanation... 'Could you say that a longer well would mean that the energy of the electron is distributed more evenly for each position x in the well? Hence the energy of the electron is lower at each x in the well for a particular state in a longer well?' is wrong in the classical sense. Is it wrong in a QM sense as well?
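To make the dependence on the well width concrete, here is a small numerical sketch (my own, not from the thread) using the standard infinite-well levels $E_n = \frac{n^2\pi^2\hbar^2}{2mL^2}$ for an electron; doubling $L$ lowers every level by a factor of four.

```python
import numpy as np

hbar = 1.054571817e-34  # J*s
m_e = 9.1093837015e-31  # kg (electron mass)
eV = 1.602176634e-19    # J

def energy_levels(L, n_max=3):
    """Infinite square well energies E_n = n^2 pi^2 hbar^2 / (2 m L^2), in eV."""
    n = np.arange(1, n_max + 1)
    return n ** 2 * np.pi ** 2 * hbar ** 2 / (2 * m_e * L ** 2) / eV

for L in [1e-10, 2e-10, 1e-9]:  # well widths in metres
    print(L, energy_levels(L))
# A wider well means the wavefunction needs less curvature to fit n
# half-wavelengths between the walls, so the kinetic energy drops as 1/L^2.
```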