http://mathoverflow.net/questions/39406/how-do-you-handle-numerical-issues-when-converting-optimization-problems-to-decis/39414
## How do you handle numerical issues when converting optimization problems to decision problems?

The trick of converting an optimization problem to a decision problem is well known: you add a real number input to the decision problem for thresholding. For example, this is taken from Wikipedia's article on the traveling salesman problem: "The problem has been shown to be NP-hard (more precisely, it is complete for the complexity class FP^NP; see function problem), and the decision problem version ('given the costs and a number x, decide whether there is a round-trip route cheaper than x') is NP-complete."

However, it is not clear to me how we actually tackle numerical issues with this kind of thresholding. For example, given our finite accuracy in the algorithm, how can we be sure that we are not slightly off from x? More concretely, if we set x = 1.0 and we managed to show with our finite accuracy that the property holds for 0.999..., how can we tell it is not actually holding for x = 1.0? (0.999... is a representation of 1 as well.) This problem does not arise when x is an integer. I would appreciate any response.

Of course, there are many algorithms that assume we have "access" to a Turing machine that can compute any real number... But I find this case especially important to distinguish from the others, because it could affect the complexity of the algorithm, and our whole point is to show hardness of some sort. So I don't think we can avoid this issue simply by assuming we have such a powerful Turing machine.

## 1 Answer

For most NP-complete problems, you can without loss of generality work with rational numbers, in which case you don't run into issues of precision. There are a few problems which do run into difficulties of precision; e.g., geometry problems where you have to compare sums of square roots. One famous example is the minimum length triangulation problem: given a set of points in the plane, what is the minimum length of a triangulation of these points? Since nobody knows how to efficiently tell whether a sum of square roots is larger or smaller than a given number, it's not even clear that you can find a witness showing that a set of points has a triangulation smaller than some integer $k$. However, it's still possible to show that this problem is NP-hard: you just have to use instances in your reduction which are constructed so that this precision issue doesn't occur. The precision issues, however, have so far prevented anyone from showing that the decision problem is in NP.
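To make the sum-of-square-roots issue concrete, here is a small Python experiment (my own illustration, not part of the original exchange): among the sums $\sqrt{a}+\sqrt{b}$ with $1 \le a \le b \le 100$, it finds the closest pair of genuinely distinct values. The cutoff used to discard exact coincidences such as $\sqrt{1}+\sqrt{16}=\sqrt{4}+\sqrt{9}$ is a heuristic, itself a tiny instance of the precision problem being discussed.

```python
# Illustration only: how close can two *distinct* sums of square roots get?
from itertools import combinations_with_replacement
from math import sqrt

N = 100
sums = sorted((sqrt(a) + sqrt(b), (a, b))
              for a, b in combinations_with_replacement(range(1, N + 1), 2))

# Scan adjacent sorted values; skip (near-)exact ties like 1+16 vs 4+9.
best = min(((s2 - s1, p1, p2)
            for (s1, p1), (s2, p2) in zip(sums, sums[1:])
            if s2 - s1 > 1e-12),
           key=lambda t: t[0])
print(best)  # the surviving gap is tiny, so a fixed-precision comparison
             # against a given threshold can easily misjudge
```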
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9485448598861694, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/tagged/gravity+quantum-gravity
# Tagged Questions

### Why is gravity such a unique force?
My knowledge on this particular field of physics is very sketchy, but I frequently hear of a theoretical "graviton", the quantum of the gravitational field. So I guess most physicists' assumption is ...

### Is the quantization of gravity necessary for a quantum theory of gravity? Part II
(At the suggestion of the user markovchain, I have decided to take a very large edit/addition to the original question, and ask it as a separate question altogether.) Here it is: I have since ...

### Is the quantization of gravity necessary for a quantum theory of gravity?
The other day in my string theory class, I asked the professor why we wanted to quantize gravity, in the sense that we want to treat the metric on space-time as a quantum field, as opposed to, for ...

### Attractiveness of spin 2 gauge theories [duplicate]
Possible Duplicate: Why is gravitation force always attractive? I have heard that the attractiveness of gravitation is due to the fact that it is a spin 2 gauge theory. Why is this so? I ...

### Formation of a black hole and Hawking radiation
From the perspective of an outside observer it takes infinitely long for the black hole to form. But if the black hole is not an extremal black hole, it emits Hawking radiation. So the outside observer ...

### Special relativity paradox and gravitation/acceleration equivalence
One of the features of black hole complementarity is the following: according to an external observer, the infinite time dilation at the horizon itself makes it appear as if it takes an ...

### Thermal gravitational radiation and its detection
To my poor knowledge of the topic, the gravitational waves that are most likely to be detected by LIGO or other experiments do not have a thermal spectrum. But I'm not certain. I know that Hawking's ...

### Laws of gravity for a universe that only consists of two objects?
So, we know that when two objects of normal matter move away from each other, the gravitational pull they feel from each other decreases. I wanted to see how that would work. And in my ...

### How do we end up with a gravity-dominant macroscopic universe from a quantum world having weakest gravity?
At the quantum scale, gravity is the weakest force. It is even negligible next to the weak force, electromagnetic force, and strong force. At the macroscopic scale, we see gravity everywhere. It's actually ruling ...

### Why did the Standard Model never sense a requirement to include a gravitational quantum?
The Standard Model is an advanced version of quantum physics. It tried to include everything which came in the way while understanding the quantum world. It didn't even bother to include the Higgs boson which ...

### What happens to matter in extremely high gravity?
Though I am a software engineer, I have a bit of interest in the sciences as well. I was reading about black holes and I wondered if there are any existing research results on how matter gets affected because of ...

### Gravity and Planck scale
What is the connection between Planck's constant and gravity? Why is the Planck scale the natural scale for quantum gravity? I would have thought the scale would be related to G, not h.

### Why does the force of gravity get weaker as it travels through the dimensions?
Some theories predict that the graviton exists in a dimension that we of course can't see, and that is why the force of gravity is so weak. Because by the time gravity has got from the dimension in ...

### How does gravity force get transmitted?
How does gravity force get transmitted? It is not transmitted by particles, I guess. Because if it were, then its propagation speed would be limited by the speed of light. If it is not transmitted by ...

### Why is gravity weak at the quantum level?
Why is gravity stronger than other forces at the macroscopic level, yet weaker than other forces at the quantum level? Is there an explanation?

### Why are there Gravitons among the modes of oscillation in String Theory?
Why are gravitons present among the modes of oscillation of the 'strings' in String Theory?

### What results from particle collision would ensure the existence of the graviton?
I understand that particles are smashed together to try to enable us to detect some sort of graviton presence, but we can't actually detect a graviton due to the fact that it 'exists' in some extra ...

### Quantum mechanical gravitational bound states
The quantum mechanics of Coulomb-force bound states of atomic nuclei and electrons leads to the extremely rich theory of molecules. In particular, I think the richness of the theory is related to the ...

### Instantons and Non Perturbative Amplitudes in Gravity
In perturbative QFT in flat spacetime the perturbation expansion typically does not converge, and estimates of the large order behaviour of perturbative amplitudes reveal ambiguity of the ...

### Possibility of "graviballs"?
Looking at the relevant Wikipedia page, one can read that the graviton should be massless. Is it 100% certain that it is massless, or is there room in any "nonstandard" models for a tiny non-zero mass ...

### Sun-Earth Virtual Gravitons?
How many virtual gravitons do the Sun and Earth exchange in one year? What are their wavelengths?

### Three-Dimensional Gravity
Does anyone have any references that discuss gravity in three dimensions? I'm trying to make my way through some papers by Witten relating $SL(2,\mathbb{C})$ Chern-Simons theory and gravity in three ...

### Quantization of Gravitational Field: Quantization conditions
I'm beginning to study quantization of fields with the second quantization formalism. I've studied the phononic field, the electromagnetic field in the vacuum, and a generic relativistic scalar field. I ...

### Quantum Gravity and Calculations of Mercury's Perihelion
In an astronomy forum that I frequent, I have been having a discussion where the state of quantum gravity research came up. I claimed that Loop Quantum Gravity theories couldn't prove GR in the ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9309842586517334, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/65396-integral.html
# Thread:

1. ## Integral

I want to evaluate the volume inside $r=\cos(\theta)$. Thanks.

2. $\begin{aligned}r & = \cos \theta \\ r^2 & = r\cos \theta \\ x^2 + y^2 & = x \\ (x-\tfrac{1}{2})^2 + y^2 & = \tfrac{1}{4} \end{aligned}$

Revolving it around the x-axis produces a sphere with radius $\tfrac{1}{2}$.

3. That is just a circle of radius 1/2 centered at (1/2, 0); a sphere, if you revolve it. If you must do this with integration, you can just move it over to the origin and revolve a circle of radius 1/2:

$2{\pi}\int_{0}^{\frac{1}{2}}\left(\tfrac{1}{4}-x^{2}\right)dx$

or leave it where it is and:

${\pi}\int_{0}^{1}\left(\tfrac{1}{4}-(x-\tfrac{1}{2})^{2}\right)dx$

4. I want the volume within $\sqrt{x^2+y^2+z^2} = \frac{x}{\sqrt{x^2+y^2}}$. Here is a picture of the solid. [attached thumbnail of the solid]

5. I suggest you check that you have the right definition of $\theta$, because my book uses it for the other angle. Anyway, using your notation:

$\int\!\!\int\!\!\int dV=\int_0^{\pi}\int_0^{2\pi}\int_0^{\cos(\theta)}r^2\sin(\phi)\,dr\,d\theta\,d\phi$

6. Physicists and mathematicians use opposite definitions, right? If you use the physicists' you get a sphere. If you use the mathematicians' you get the weird figure I've shown above.
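As a sanity check (my addition, not part of the thread), both integrals in post 3 should equal the volume of a sphere of radius $\tfrac{1}{2}$, namely $\tfrac{4}{3}\pi\left(\tfrac{1}{2}\right)^3 = \tfrac{\pi}{6}$. A quick numerical check in Python:

```python
from math import pi

def midpoint_quad(f, a, b, n=100_000):
    """Simple midpoint-rule quadrature of f on [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

v1 = 2 * pi * midpoint_quad(lambda x: 0.25 - x**2, 0.0, 0.5)
v2 = pi * midpoint_quad(lambda x: 0.25 - (x - 0.5)**2, 0.0, 1.0)
print(v1, v2, pi / 6)  # all three agree closely
```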
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8815084099769592, "perplexity_flag": "middle"}
http://quant.stackexchange.com/questions/103/how-does-the-risk-neutral-pricing-framework-work
# How does the "risk-neutral pricing framework" work?

I've struggled for a long time to understand this. What is it, and how does it affect you? Yes, I mean risk-neutral pricing; the Wilmott Forums were not clear about that.

- Can you please elaborate on your question a little? "how does it affect you?" Also, maybe provide a link or two to relevant resources? – Shane Feb 1 '11 at 15:30
- @ChloeRadshaw: You can accept one of the answers if you are satisfied by it :-) – vonjd Jan 28 at 16:01

## 6 Answers

I assume you mean risk-neutral pricing? Think of it this way (beware, oversimplification ahead ;-)

You want to price a derivative on gold, a gold certificate. The product just pays the current price of an ounce in \$. Now, how would you price it? Would you think about your risk preferences? No, you wouldn't; you would just take the current gold price and perhaps add some spread. Therefore the risk preferences did not matter (= risk neutrality), because this product is derived (= derivative) from an underlying product (= underlying).

This is because all of the different risk preferences of the market participants are already included in the price of the underlying, and the derivative can be hedged with the underlying continuously (at least this is what is often taken for granted). As soon as the price of the gold certificate diverges from the original price, a shrewd trader would just buy/sell the underlying and sell/buy the certificate to pocket a risk-free profit, and the price will soon come back again...

So, you see, the basic concept of risk neutrality is quite natural and easy to grasp. Of course, the devil is in the details... but that is another story.

- So risk-neutral pricing means pricing an instrument which can be immediately hedged? Isn't that the same as the no-arbitrage law? – Jack Kada Feb 1 '11 at 16:55
- @ChloeRadshaw: It is in fact all connected: from the law of one price follows risk-neutral pricing; the reason is the possibility of hedging. – vonjd Feb 1 '11 at 17:22
- I like the explanation +1 – Kinderchocolate Feb 1 '12 at 5:32
- @WilliamS.Wong: The question was about risk-neutral pricing in general, i.e. understanding the concept and getting an intuition. Yet, please feel free to give another answer :-) – vonjd Mar 7 at 7:14

Suppose that you and other bettors participate in a lottery with $N$ possible outcomes; outcome $n$ will occur with probability $\pi_n$. There are $N$ basic contracts available for purchase. Contract $n$ costs $p_n$ and entitles you to one dollar if outcome $n$ occurs, zero otherwise.

Now, imagine that you have a contingent claim that pays a complex payoff based on the outcome, say $f(n)$. The expected value of the payoff is $$E(f(n))=\sum_n \pi_n f(n) =E(f)$$

Now, consider a portfolio of $f(1)$ units of basic contract $1$, $f(2)$ units of basic contract $2$, etc. This portfolio has exactly the same random payoff as the contingent claim. Because of the law of one price, it must have the same price as the contingent claim. Hence, the contingent claim has price equal to $$\text{price}(f)=\sum_n p_n f(n)$$

Define $r= 1/(\sum_{i=1}^N p_i)$ and set $\tilde p_n := r p_n$, which is a probability measure, and you can rewrite $$\text{price}(f)=r^{-1} \sum_n \tilde p_n f(n)=r^{-1} E^*(f)$$

So the risk-neutral probabilities are essentially the normalized prices of "state-contingent claims", i.e., outcome-specific bets. And the price of any claim is the discounted expectation according to this probability distribution.
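A toy numerical version of this construction may help (my sketch; the state prices below are made-up numbers, not from the answer): price a claim directly from state prices, then re-derive the same price as a discounted expectation under the normalized, risk-neutral probabilities.

```python
p = [0.28, 0.47, 0.20]   # hypothetical state prices, one per outcome
f = [110.0, 95.0, 40.0]  # payoff of some contingent claim in each outcome

price = sum(pn * fn for pn, fn in zip(p, f))  # price(f) = sum_n p_n f(n)

discount = sum(p)                # price of a sure dollar, i.e. 1/r
q = [pn / discount for pn in p]  # risk-neutral probabilities, sum to 1
expectation = sum(qn * fn for qn, fn in zip(q, f))

assert abs(price - discount * expectation) < 1e-12  # same number, two views
print(price, q)
```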
$r$ is easy to identify: if the contingent claim is 1 dollar for any outcome, then its price is the discounted value of a dollar using the risk-free interest rate. Hence $r$ is the risk-free interest rate.

Where do these prices come from? There are three ways to think about price determination:

1. They are determined by a no-arbitrage condition, where no bettor can make something for nothing almost surely;
2. They are determined by an equilibrium condition, where all bettors optimize their utility;
3. They are determined by a single-agent utility optimization problem.

All conditions imply that the prices are strictly positive. For more information, Duffie's Dynamic Asset Pricing is still the standard reference. The basic intuition behind this framework goes back 35 years, to Cox-Rubinstein. Harrison-Kreps extended the result, and since then it has been further extended. The most general forms, to a useless level of technicality, are by Delbaen and Schachermayer.

We bet on a fair coin toss: heads you get $\$100$, tails you get $\$0$. So the expected value is $\$50$. But it is unlikely that you'll pay $\$50$ to play this game, because most people are risk averse. If you were risk neutral, then you WOULD pay $\$50$, for an expected value of $\$50$ and an expected net payoff of $\$0$. A risk-neutral player will accept risk and play games with expected net payoffs of zero. Or equivalently, a risk-neutral player doesn't need a positive expected net payoff to accept risk.

Let's say that you would pay $\$25$ to play this game. That means, if you were risk-neutral, that you'd be assigning probabilities of 1/4 to heads and 3/4 to tails, for an expected value of $\$25$ and an expected net payoff of $\$0$. So if we can convert from the real-world probability measure $(1/2, 1/2)$ to a risk-neutral probability measure $(1/4, 3/4)$, then we can price this asset with a simple expectation. So if you can find the risk-neutral measure for an asset based on a set of outcomes, then you can use this measure to easily price other assets as an expected value.

The risk-neutral measure has a massively important property which is worth making very clear: the price of any trade is equal to the expectation of the trade's winnings and losses under the risk-neutral measure. This property gives us a scheme for pricing derivatives:

1. take a collection of prices of trades that exist in the market (e.g. swap rates, bond prices, swaption prices, cap/floor prices),
2. back out the set of risk-neutral probabilities that these prices imply,
3. calculate the expectation of the derivative trade's payoff under these risk-neutral probabilities,
4. that is the price of the derivative.

The risk-neutral measure is in some sense the flip-side of the concept of risk premium. Without even getting mixed up with stock and bond prices and suchlike, we can get a good sense of the risk-premium concept at work in a simple betting game. The classic example, a game of coin tossing:

1. a player hands over some money, say £X, to play,
2. the host tosses an unbiased coin,
3. if it comes up heads then the player is given £2,
4. but if it comes up tails then nothing is given back.

A textbook on probability will tell you that the price of £1 per go is fair for this game, because the concept of fair is defined in probability textbooks to mean that the price paid should equal the value of the expected winnings. Clearly it does for this example. But let's get savvy, step back from the theory, and ask how much different players would be prepared to pay for this game.
Consider two different players:

1. person A, who has £1.50 in their pocket but is under pressure from a traffic warden to pay £2 for a parking ticket (and nothing less than £2 will do),
2. person B, who has £10 in their pocket and doesn't really need anything more than that.

Don't you think you could convince person A to pay up to their whole £1.50 for this game? Person B might be a harder sell, but perhaps they'd come around if we charged something like 50p a go and advertised the game as 'potential 4 times returns on your investment'?

The important point is that the theoretical fair price may well be £1 for this game, but the actual price at which we sell the game may be something different, since it will depend on the circumstances of the players we are selling it to. The difference between the actual and theoretical price is called the risk premium. Throwing in a bit of market language, we can write this as: the risk premium is the amount of premium (or discount) that needs to be added to the theoretical fair price in order to match the actual price of the trade in the market.

Remark: the risk-neutral measure is risk-neutral because in this alternative reality the price paid by player A for the game contains no risk premium; the price is exactly equal to the value of the expected winnings of the game. I have written a little bit more on this in my blog if you want to go see.

A market is said to be complete if any contingent claim can be replicated by an admissible (i.e. with value process bounded from below), self-financing (i.e. all gains and losses exactly offset each other) trading strategy, a so-called replicating strategy. Since this strategy is constructed from primary securities, the market prices of which are unique, its price must be identical for everyone, and the strategy is therefore independent of any assumptions on risk aversion. Any discrepancy between the replicating strategy's price and its underlying primary securities would be wiped out by arbitrage trades by market participants, regardless of their risk preferences.

Now, suppose you want to price a contingent claim, e.g. a European option on an equity security. Assuming the market is complete, the payoff of this security can be perfectly replicated using existing securities. Again, by the same arguments as above, the market price of the option and of the replicating strategy must be exactly the same under a no-arbitrage condition, regardless of risk preferences.
Therefore, neither a positive nor a negative risk premium can be embedded into the equilibrium market price of the option, or equivalently of the replicating strategy (actually, a sort of "aggregate" risk premium is already included in the prices of the replicating strategy's primary securities, but no further risk premium is added when pricing the contingent claim).

We have shown that if the market has no arbitrage opportunities and is complete, then the option's market price must be exactly equal to that of the replicating strategy, and this price is in fact unique. This is essentially what the (Second) Fundamental Theorem of Asset Pricing (FTAP) says. Since the replicating strategy does not depend on any assumptions concerning risk preferences, it does not matter what assumptions are made about the risk preferences of the market participants. Therefore, the price in the real-world market (where risk-averse, risk-neutral and risk-seeking participants meet) must equal that in a risk-neutral market. Since it is much more convenient (and mathematically powerful, e.g. martingale theory) to work in a risk-neutral world, this is the standard pricing approach used in mathematical finance.

I like this point of view on risk-neutral pricing: the risk-neutral probability $q$ is the probability under which the expected value of the option at $t=T$, discounted, gives you today's price at $t=t_0$. It is derived from today's price under the assumption that, holding at all times a portfolio of the option (bought) and the instrument (sold), you are delta hedged, so the portfolio's value is known and the same in each case (rise, fall).

Another nice view is: the future value of the option (grown at the risk-free rate) is equal to its expected value. That is, if today's price is $V$, the option price tomorrow might be $V^+$ or $V^-$, and the risk-free rate is $r$, then you can retrieve $q$ from this equation: $(1+r\,dt)V=qV^+ + (1-q)V^-$
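A minimal sketch of that last equation in code (my numbers are made up): solve the one-step relation for $q$ and check that discounting the $q$-expectation recovers today's price.

```python
def risk_neutral_q(V, V_up, V_dn, r, dt=1.0):
    # From (1 + r*dt) * V = q*V_up + (1 - q)*V_dn:
    return ((1 + r * dt) * V - V_dn) / (V_up - V_dn)

V, V_up, V_dn, r = 10.0, 12.0, 9.0, 0.05
q = risk_neutral_q(V, V_up, V_dn, r)
print(q)                                      # 0.5 for these inputs
print((q * V_up + (1 - q) * V_dn) / (1 + r))  # 10.0: today's price recovered
```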
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 40, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9422040581703186, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/37130?sort=oldest
## Integrability of distributions close to a given one.

In two papers, Thurston proves that every distribution is homotopic to an integrable one (in the first for codimension greater than one, and in the other for codimension one). Recently, I came across a nice paper by Burago and Ivanov which, under some additional hypotheses, manages to start with a distribution and construct a foliation tangent to an arbitrarily small cone field around it.

I am far from foliations, but I was wondering if there are examples of distributions which are not integrable and cannot be perturbed in the $C^0$ topology in order to become integrable. (Edit: That is, a distribution is $\epsilon$-close to another if it is contained pointwise in a cone of angle $\epsilon$ around the original one.) References appreciated!

- I am no expert, but do you mean $C^0$ topology, or do you mean something completely different that I'm ignorant of? – Willie Wong Aug 30 2010 at 8:51
- I corrected it; I had problems with my keyboard. Thanks. – rpotrie Aug 30 2010 at 9:19

## 1 Answer

No smooth non-integrable distribution can be $C^0$ approximated by integrable ones. For example, consider the following 2-dimensional distribution in $\mathbb R^3$: the plane at $(x,y,z)\in\mathbb R^3$ is spanned by the vectors $(1,0,0)$ and $(0,1,x)$. Perturb this distribution within a small $C^0$ distance $\varepsilon\ll 1$.

Consider the square in $\mathbb R^2$ with vertices $(0,0)$, $(1,0)$, $(1,1)$, $(0,1)$ and let $\gamma$ be its boundary (counter-clockwise). This $\gamma$ has a "lift", that is, a curve $\tilde\gamma$ in $\mathbb R^3$ which is tangent to the distribution and whose projection to the horizontal plane is $\gamma$. The lift is found by solving an o.d.e., so it is unique if the distribution is smooth but may be non-unique if it is only $C^0$. In the non-perturbed case, the unique lift ends at $(0,0,1)$; hence in the perturbed case all lifts end near $(0,0,1)$. This implies that the perturbed distribution is not integrable: if it were integrable, there would be at least one lift (the one contained in a leaf of a foliation) that ends near the origin. The proof in the general case is similar.

- Thanks. From what I could understand, this means that a smooth non-integrable distribution can't be $C^0$ approximated by SMOOTH integrable ones. How about being approximated by non-smooth integrable ones? – rpotrie Aug 30 2010 at 10:31
- I addressed this too. All lifts end up near (0,0,1) because the o.d.e. is a $C^0$ perturbation of a smooth one. – Sergei Ivanov Aug 30 2010 at 10:42
- I understand now. Thanks. – rpotrie Aug 30 2010 at 11:01
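The answer's holonomy computation can be checked numerically (my sketch, not part of the original answer): tangency to the plane spanned by $(1,0,0)$ and $(0,1,x)$ forces $dz = x\,dy$ along the lift, so one can just integrate $z' = x(t)\,y'(t)$ around the square.

```python
# Integrate dz = x * dy around the boundary of the unit square
# (counter-clockwise), starting at the origin with z = 0.
def lift_square(n=10_000):
    edges = [  # (x(t), y(t)) for t in [0, 1], one edge at a time
        (lambda t: t,       lambda t: 0.0),      # (0,0) -> (1,0): dy = 0
        (lambda t: 1.0,     lambda t: t),        # (1,0) -> (1,1): dz = dy
        (lambda t: 1.0 - t, lambda t: 1.0),      # (1,1) -> (0,1): dy = 0
        (lambda t: 0.0,     lambda t: 1.0 - t),  # (0,1) -> (0,0): x = 0
    ]
    z = 0.0
    for x, y in edges:
        for i in range(n):
            t_mid = (i + 0.5) / n
            z += x(t_mid) * (y((i + 1) / n) - y(i / n))
    return z

print(lift_square())  # ~1.0: the lift ends at (0,0,1), not back at the origin
```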
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 26, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9466025829315186, "perplexity_flag": "head"}
http://mathhelpforum.com/trigonometry/93825-solved-few-beg-trig-probs-angles.html
# Thread:

1. ## [SOLVED] Few beg. Trig Probs Angles

So when you have 90-x in a right triangle that you are trying to solve with cos, sin, tan, etc., how do you include angle degrees in your calculations? For instance:

[ASCII sketch, hard to reproduce online: a right triangle with sides labelled h = 3, 4, and 5, and angle x at the base]

I know how to simply find these functions by dividing one another, but I get confused when it comes to including angles.

I also have a question regarding "clinometer" use. Well, we can use such an instrument (called a "clinometer") to measure the angle A. Then we know that the angle between the ground and the building is a right angle, so we know the measure of 3 angles, because they all add up to 180°. Now all we have to do is find a similar triangle anywhere. Then we can measure the ratio of the sides that correspond to the height and the ground length, and then we can multiply by the actual ground length to get the height.

So once you have the three angles, how do you find the measurement on the structure you are trying to identify? I am given a question like this: given the angle you measure to the top of the building, calculate the height given the distance. I am unsure how to find the height of the building; could someone help? Here is a sample problem I am given: angle 71°, distance 20 meters.

[ASCII sketch: a right triangle with the right angle marked in a box at the base of the building, the 71° angle at ground level, and a horizontal distance of 20]

Any potential help appreciated.

2. Originally Posted by KevinVM20: "I am unsure how to find the height of the building; could someone help? Here is a sample problem I am given: angle 71°, distance 20 meters."

Since $\tan{\theta}=\frac{o}{a}$, let $\theta=71^\circ$, let $a=20$, and let $o$ be the side opposite $\theta$ (which is the height of the building).

$\tan{71^\circ}=\frac{o}{20}$

Therefore, $(20)\tan{71^\circ}=o$

Does that help? Note that in this problem we didn't have to find any of the other angles to find the height of the building.
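Carrying out the reply's computation numerically (a small addition for completeness):

```python
from math import tan, radians

angle = 71.0     # degrees, measured to the top of the building
distance = 20.0  # metres from the base

height = distance * tan(radians(angle))
print(round(height, 1))  # about 58.1 metres
```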
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9177870154380798, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/27224/list
## Answer

In my dissertation I developed an algorithm for finding bounds on $H_2$ of a finitely presented group with finite field coefficients. I was motivated by a conjecture of Quillen on the (co)homology of linear groups. As such, I included an appendix with presentations of several linear groups and the homology calculations using my algorithm. I didn't include the list of presentations for publication, but if these types of groups are of interest I could get it to you.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.972689151763916, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?p=4181344
Physics Forums

## Variant of Brocard's Problem

[All numbers are assumed to be integers] Brocard's problem asks to find the solutions to the Diophantine equation $n!=m^2-1$. The only known $(n,m)$ pairs are $(4,5),(5,11),(7,71)$, and it is conjectured that there are no more. A generalization of this problem would be $n!=m^2-A$. I've been playing around with this formula, and while I do not have a general proof right now, it seems that one could prove that for any non-square $A$ there are finitely many solutions (usually none).

As an example, consider the solutions to $n!=m^2-5$. To show it has a finite number of solutions, consider the equation mod 3. Assuming $n\geq3$, then $n!\bmod3=0$, and if it can be shown that $m^2 \equiv 2 \pmod{3}$ has no solutions, then $n!=m^2-5$ cannot have a solution with $n\geq3$. It is simple enough to show that the solution set of $m^2 \equiv 2 \pmod{3}$ is empty. As $3x$, $3x+1$, and $3x+2$ include all integers, showing that $(3x)^2=9x^2\equiv 0 \pmod{3}$, $(3x+1)^2=9x^2+6x+1 \equiv 1 \pmod{3}$, and $(3x+2)^2=9x^2+12x+4\equiv 1 \pmod{3}$ demonstrates that $m^2\bmod3$ will be 0 or 1, but never 2. Therefore, $n$ can only equal 1 or 2; since neither provides a solution, the equation at $A=5$ has no solutions.

Making a quick list of $a^2\bmod b$ cycles (I say cycle since the first $b$ terms repeat ad infinitum) and determining which residues mod $b$ do not appear gives a method for finding arithmetic progressions of $A$ for which the equation has finitely many solutions. For example, $a^2\bmod 3$ will never be 2; therefore, for all integers $k$, $A=3k+2$ will have a finite solution set. It's easy enough to write a program to loop through the first several $b$'s, and it seems that only square values $A=s^2$ (i.e., the case $n!=m^2-s^2$) cannot be proven finite by this method; this is simply an observed pattern without a general proof.

Does anyone have any thoughts on this (definitely let me know if I've come to incorrect conclusions)? Is there a good way to go about proving that for all non-square $A$'s there is a finite number of solutions?

Edit: My program confirms that non-square numbers from $A=$ 2 to 224 have finite solutions, with an upper bound $B$ for $n$ at $B\leq13$. This is what leads me to wonder if this extends to infinity.

So by your argument, given A, you need to find a modulus N such that A is not a square modulo N. If you can find such an N then you can repeat your argument, replacing 3 with N. It might help you to read up on quadratic residues. http://en.wikipedia.org/wiki/Quadratic_residue Also, you might check out http://en.wikipedia.org/wiki/Law_of_...ic_reciprocity That provides the fastest way to check whether or not A is a square modulo N.

Quote by Someone2841 (the original post, quoted in full; trimmed here)
Your post is interesting and I thought it warranted comment but had none to offer. Glad to see that it is now getting a few comments.

## Variant of Brocard's Problem

Quote by Vargo (the quadratic-residue advice, quoted above)

Thanks for your reply! I was not familiar with quadratic residues before my original post, though I came across them just a few days ago. I think they are definitely key to the problem.

Quote by ramsey2879 (quoted above)

Thanks :)
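The OP's program is not shown in the thread, but here is a sketch (my reconstruction) of the kind of search being described: for each $A$, look for a modulus $N$ such that $A$ is not a quadratic residue mod $N$; if one exists, then $n! = m^2 - A$ has no solutions with $n \ge N$, since $N \mid n!$ would force $m^2 \equiv A \pmod N$.

```python
def squares_mod(N):
    """All quadratic residues mod N (including 0)."""
    return {(x * x) % N for x in range(N)}

def finiteness_witness(A, max_modulus=200):
    """Smallest modulus N proving n < N for n! = m^2 - A, or None."""
    for N in range(2, max_modulus):
        if A % N not in squares_mod(N):
            return N
    return None

for A in range(2, 30):
    print(A, finiteness_witness(A))
# Non-square A all get small witnesses (e.g. A = 5 -> N = 3), while the
# perfect squares 4, 9, 16, 25 get None, matching the observed pattern.
```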
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 68, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9388704299926758, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-statistics/83591-likelihood-ratio-test-likelihood-function.html
# Thread:

1. ## Likelihood Ratio Test & Likelihood Function

Suppose that $X_1,X_2,\ldots,X_{n_1}$, $Y_1,Y_2,\ldots,Y_{n_2}$, and $W_1,W_2,\ldots,W_{n_3}$ are independent random samples from normal distributions with respective unknown means $\mu_1$, $\mu_2$, $\mu_3$ and variances $\sigma_1^2$, $\sigma_2^2$, $\sigma_3^2$. Find the likelihood ratio test for $H_0: \sigma_1^2 = \sigma_2^2 = \sigma_3^2$ against the alternative of at least one inequality. [source: Wackerly #10.108a]

I am clueless... can someone please go through the approach/steps to solving this problem? In particular, I am not sure how to find the likelihood function. If we simply have the random variables $X_1,X_2,\ldots,X_n$, then the likelihood function is just the joint density $f_{X_1,X_2,\ldots,X_n}(x_1,x_2,\ldots,x_n)$, but here we have three sets of different random variables (X, Y, and W); what would be the likelihood function, then? Thanks for helping!

2. I almost assigned that problem two weeks ago. I can work it out, but it's a lot of typing. Basically, and I'm sure you'll keep asking me the same question repeatedly, the MLEs of the population means under BOTH hypotheses are the sample means. The estimates of the population variances are of the form

$\hat\sigma_i^2={\sum_{k=1}^{n_i} (X_{ik}-\bar X_i)^2\over n_i}$

for each of these sets. UNDER the null hypothesis, the estimate of the COMMON variance is the pooled estimator, JUST like in the two-sample t case, giving us...

$\hat\sigma^2 ={n_1\hat\sigma_1^2 +n_2\hat\sigma_2^2 +n_3\hat\sigma_3^2 \over n_1+n_2+n_3}$

You reject the null hypothesis when the ensuing ratio is small...

$\lambda ={(\hat\sigma_1^2)^{n_1/2} (\hat\sigma_2^2)^{n_2/2} (\hat\sigma_3^2)^{n_3/2} \over (\hat\sigma^2)^{(n_1+n_2+n_3)/2}}$

3. That's a tough problem from Wackerly...

1) In general, is it always true that the maximum likelihood estimator (MLE) of a population mean is the sample mean?

2) To apply the likelihood ratio test, an important step is to find the likelihood function, but what is the likelihood function in this case??? If we simply have the random variables $X_1,X_2,\ldots,X_n$, then I know that the likelihood function is just the joint density, but here we have THREE sets of different random variables (X's, Y's, and W's), which makes me confused...

Thanks!

4. 1) No. The average is the MLE for the expected value in the normal distribution. It may or may not be the MLE for other distributions. It depends on the likelihood function and which values of the parameters maximize it.

2) The log likelihood is the sum of the logs of the normal pdfs. Each normal pdf involves its own RV, its own mu, and its own variance. It just so happens that the mu is the same for all the X's ($\mu_X$), the same for all the Y's ($\mu_Y$), and the same for all the W's ($\mu_W$). Under the null, the variance happens to be the same for all the pdfs. Under the alternate, it is, just like the mean, grouped by X, Y, and W. Hope this helps.

5. But in this case, is the likelihood function going to be a product or a sum? Should we just multiply the 3 likelihood functions for the X's, Y's, and W's, or add them?

6. A likelihood is a pdf. It is a joint pdf of all of the RV's that are under consideration. It's a single function. If the RV's involved are independent, then the pdf can be rewritten as a product of other pdfs. It makes the problem very convenient. However, it's often inconvenient dealing with a large product, so usually, instead of working with the likelihood (which is a product), people work with the logarithm of the likelihood, which is a sum.

7.
Originally Posted by JohnQ: "A likelihood is a pdf. It is a joint pdf of all of the RV's that are under consideration. It's a single function."

Is it always ALL (X's, Y's, W's) of the RV's under consideration? Is this the definition of a likelihood function? Somehow I have trouble understanding this, because in all of the problems I have seen, the likelihood only deals with one sample (i.e. only $X_1,X_2,\ldots,X_n$).

Originally Posted by JohnQ: "If the RV's involved are independent, then the pdf can be rewritten as a product of other pdf's. It makes the problem very convenient."

I have a technical question. The X's are iid, the Y's are iid, the W's are iid, and the three samples are independent; does this mean all the RV's (X's, Y's, W's) are mutually independent? The likelihood in our problem is $f(x_1,\ldots,x_{n_1}, y_1,\ldots,y_{n_2}, w_1,\ldots,w_{n_3})$; how can we break this into a product?

Thanks a lot!
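For concreteness, here is a sketch (my own, following matheagle's formulas above, with simulated data) of computing the likelihood-ratio statistic:

```python
import numpy as np

rng = np.random.default_rng(0)
samples = [rng.normal(0, 1, 30), rng.normal(2, 1, 40), rng.normal(5, 1, 50)]

ns = np.array([len(s) for s in samples])
vars_mle = np.array([np.mean((s - s.mean()) ** 2) for s in samples])
pooled = np.sum(ns * vars_mle) / ns.sum()   # common-variance MLE under H0

# lambda = prod_i (sigma_i^2)^(n_i/2) / pooled^(n/2); reject H0 when small.
log_lam = 0.5 * np.sum(ns * np.log(vars_mle)) - 0.5 * ns.sum() * np.log(pooled)
print(np.exp(log_lam))
# Asymptotically -2 * log(lambda) is chi-square with 2 degrees of freedom.
```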
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9218547940254211, "perplexity_flag": "middle"}
http://mathhelpforum.com/statistics/88730-placing-identical-distinct-objects.html
# Thread:

1. ## Placing Identical/Distinct Objects

Hello: Suppose there are 10 identical red balls and 10 different blue balls. I am wondering if the following is correct. Suppose we want to put those balls into 4 boxes such that:

(a) there is a box containing exactly 2 red balls. I assume this means that the box contains 2 red balls only and no other balls. So this means: (4 choose 1) ways to choose the special box. Then the rest of the red balls get distributed among the other 3 boxes in (10 choose 8) ways, and the blue balls get distributed in $3^{10}$ ways. So the total is: $4 \times \binom{10}{8} \times 3^{10}$.

(b) there is a box containing exactly 2 blue balls. To distribute the red balls, we have 3 boxes, so we have (10 + 3 - 1 choose 10), or (12 choose 10), ways. There are $10 \times 9 / 2! = \binom{10}{2}$ ways of putting balls in the special box that gets 2 blue balls, and 4 ways of choosing the box. Then the rest of the blue balls can be distributed in $3^8$ ways. So the total is: $\binom{12}{10} \times \binom{10}{2} \times 4 \times 3^8$ ways.

Thanks.

2. Are the boxes identical or distinct?

3. Hello: they are distinct boxes... I think; the problem doesn't say. Or, if they are the same boxes, then I don't have the factor of 4 for choosing the one special box?

4. The reason I deleted the answer that I quickly gave is that it is an over-count. Here is the difficulty: once we pick the box to have the two red balls, we do not want another box to have only two red balls, because that arrangement would be counted more than once. So we count that at least one box has exactly two red balls:

$\sum\limits_{k = 1}^3 {\left( { - 1} \right)^{k + 1}\binom{4}{k}\binom{10-3k+3}{10-2k} \left( {4 - k} \right)^{10} }$

5. Hello: So the logic would be: count for one box with 2 red balls, minus count for 2 boxes with 2 red balls, plus count for 3 boxes with 2 red balls, minus the count for 4 boxes with 2 red balls (impossible, so that count is 0)?

Does that mean that for the one with one box with 2 blue balls, we would also do: count for one box with 2 blue balls, minus count for 2 boxes with 2 blue balls, plus count for 3 boxes with 2 blue balls? So this would be $4 \times \binom{10}{2} \times 3^8 - \binom{4}{2} \times \binom{10}{2} \times \binom{8}{2} \times 2^6 + \binom{4}{3} \times \binom{10}{2} \times \binom{8}{2} \times \binom{6}{2}$?

Thanks.
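Plato's inclusion-exclusion sum can be cross-checked by brute force on a smaller instance (my sketch, under the reading "exactly 2 red balls and nothing else", with $R$ identical red balls, $B$ distinct blue balls, and $X$ distinct boxes):

```python
from itertools import product
from math import comb

def by_formula(R, B, X):
    return sum((-1) ** (k + 1) * comb(X, k)
               * comb(R - 2 * k + X - k - 1, R - 2 * k)  # stars and bars
               * (X - k) ** B                            # blue placements
               for k in range(1, min(X, R // 2) + 1))

def by_brute_force(R, B, X):
    count = 0
    for reds in product(range(R + 1), repeat=X):   # red counts per box
        if sum(reds) != R:
            continue
        for blues in product(range(X), repeat=B):  # box index per blue ball
            if any(reds[i] == 2 and i not in blues for i in range(X)):
                count += 1
    return count

print(by_formula(4, 3, 3), by_brute_force(4, 3, 3))  # both give 69
```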
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9205725789070129, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?t=42390
Physics Forums

## Understanding set theory

I'm trying to prove something small with set theory and, since I'm new to it, I've run into a problem. I can't understand what the following means exactly and how to proceed further. Or where the mistake is, if there is one. I think there is, because it seems... freaky.

$$x\notin\left(\left(\left(A\cup B\right)\setminus\left(A\cap B\right)\right)\cap C\right)$$
$$x\notin\left(\left(A\cup B\right)\setminus\left(A\cap B\right)\right)\wedge x\notin C$$
$$\left(x\notin\left(A\cup B\right)\wedge x\in\left(A\cap B\right)\right)\wedge x\notin C$$
$$\left(\left(\left(x\notin A\right)\vee\left(x\notin B\right)\right)\wedge\left(\left(x\in A\right)\wedge\left(x\in B\right)\right)\right)\wedge x\notin C$$

I'd post the entire thing of which this is a small part, but that's my homework and I don't want to get into the habit of having other people do my homework for me. Plus I want to learn how and why it works, not just do it.

Consider this:

x is not an element of $U\cap V$

x is not an element of U AND x is not an element of V

You've said those two statements are equivalent (I think, since you've not actually said what your deductions are from line to line). Find a counterexample to show this is false. Negation switches conjunction and disjunction, or union and intersection. Similar observations hold for the other steps in your reasoning.
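The rule being pointed at is De Morgan: negating a union gives a conjunction over complements, while negating an intersection gives a disjunction. A brute-force check in Python (my addition), which also exposes a faulty step of this kind:

```python
A, B, universe = {1, 2}, {2, 3}, {1, 2, 3, 4}

for x in universe:
    # De Morgan: x not in (A | B)  <=>  (x not in A) and (x not in B)
    assert (x not in A | B) == ((x not in A) and (x not in B))
    # De Morgan: x not in (A & B)  <=>  (x not in A) or (x not in B)
    assert (x not in A & B) == ((x not in A) or (x not in B))
    # The faulty equivalence uses "and" where "or" belongs:
    if (x not in A & B) != ((x not in A) and (x not in B)):
        print(x, "is a counterexample to the faulty step")
```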
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9445735216140747, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/75795?sort=oldest
## In an arbitrary abelian category, does chain complex homology commute with coproduct?

On page 55 of Weibel's Introduction to homological algebra the following passage appears: "Here are two consequences that use the fact that homology commutes with arbitrary direct sums of chain complexes."

I understand why homology commutes with arbitrary direct sums when the direct sum of a collection of monics is a monic (i.e. the direct sum functor is exact), but I was under the impression that there were abelian categories where the direct sum functor is not exact. After a bit of thought, I realised that I don't know an example of an abelian category in which the coproduct functor is not exact. Sheaves of abelian groups on a fixed topological space give an example of an abelian category in which the product functor is not exact.

Question 1: Is the passage from Weibel's book correct? If so, then why?

Question 2: Is there an example of an abelian category where the direct sum functor is not exact?

- Or are you really after the result that homology is a functor which commutes with direct sums? Consider the category $S$ of chain complexes $A_i\to B_i \to C_i$ (yes, only three terms) (say of R-modules, which should be enough by standard embedding theorems). Homology is a functor $S \to Mod_R$. Does this preserve sums? – David Roberts Sep 18 2011 at 23:49
- @David: The "standard embeddings" do not work here because they do not preserve infinite direct sums. As I said above, it is clear that homology commutes with direct sums in categories where coproducts are exact. (R-modules are such a category.) – Daniel Barter Sep 19 2011 at 0:38

## 1 Answer

I couldn't think of a natural example of an abelian category in which direct sums are not exact (I think this is called axiom AB4). For example, sheaves of abelian groups and R-modules both have this property. However, there are natural examples of abelian categories where direct products are not exact (i.e. not satisfying AB4*), for example the category of abelian sheaves on a space. Taking the opposite category of such a category will then give an example of a category not satisfying AB4 (albeit not a very nice one).

Once you have such an example, homology of chain complexes in this category will not commute with direct sums: if $f_i : A_i \to B_i$ is a sequence of monos such that $\bigoplus f_i$ is not a mono, then consider the sequence of two-term complexes $A_i \to B_i$. $H^0$ of each of these complexes is zero, but $H^0$ of the direct sum is the kernel of $\bigoplus f_i$, which is nonzero.

Here is one way to see that Sh(X) does not satisfy AB4* (probably not the easiest). Assume for simplicity X = [0,1]. Take a finite open cover $\mathcal U_i$ of X by balls of radius $1/i$. Let $A_i$ be the sheaf $\prod _{U \in \mathcal U_i} j_{U!} \mathbb Z_U$. This has an epimorphism to $\mathbb Z_X$, but the direct product of all of them together is not epimorphic: taking sections over any open set $V$ will kill off any $A_i$ when no $1/i$-ball contains $V$. I hope this is correct!

- "Once you have such an example, homology of chain complexes in this category will not commute with direct sum". OK, this is what I was looking for. Thanks Sam! – Daniel Barter Sep 19 2011 at 2:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9248877167701721, "perplexity_flag": "head"}
http://www.reference.com/browse/Vacuum+bubble
Feynman diagram

In quantum field theory a Feynman diagram is an intuitive graphical representation of the transition amplitude or other physical quantity of a quantum system. Within the canonical formulation of quantum field theory a Feynman diagram represents a term in the Wick expansion of the perturbative S-matrix. The transition amplitude is the matrix element of the S-matrix between the initial and the final states of the quantum system. Alternatively, the path integral formulation of quantum field theory represents the transition amplitude as a weighted sum of all possible paths of the system from the initial to the final state. A Feynman diagram is then identified with a particular path of the system contributing to the transition amplitude. Feynman diagrams are named after Richard Feynman.

Motivation and history

When calculating scattering cross sections in particle physics, the interaction between particles can be described by starting from a free field which describes the incoming and outgoing particles, and including an interaction Hamiltonian to describe how the particles deflect one another. The amplitude for scattering is the sum of each possible interaction history over all possible intermediate particle states. The number of times the interaction Hamiltonian acts is the order of the perturbation expansion, and the time-dependent perturbation theory for fields is known as the Dyson series. When the intermediate states at intermediate times are energy eigenstates, collections of particles with a definite momentum, the series is called old-fashioned perturbation theory.

The Dyson series can be alternately rewritten as a sum over Feynman diagrams, where at each interaction vertex both the energy and momentum are conserved, but where the length of the energy-momentum four-vector is not equal to the mass. The Feynman diagrams are much easier to keep track of than old-fashioned terms, because the old-fashioned way treats the particle and antiparticle contributions as separate. Each Feynman diagram is the sum of exponentially many old-fashioned terms, because each internal line can separately represent either a particle or an antiparticle. In a non-relativistic theory, there are no antiparticles and there is no doubling, so each Feynman diagram includes only one term.

Feynman gave a prescription for calculating the amplitude for any given diagram from a field theory Lagrangian: the Feynman rules. Each internal line corresponds to a factor of the corresponding virtual particle's propagator; each vertex where lines meet gives a factor derived from an interaction term in the Lagrangian; and incoming and outgoing lines carry an energy, momentum, and spin.

In addition to their value as a mathematical tool, Feynman diagrams provide deep physical insight into the nature of particle interactions. Particles interact in every way available; in fact, intermediate virtual particles are allowed to propagate faster than light. The probability of each final state is then obtained by summing over all such possibilities. This is closely tied to the functional integral formulation of quantum mechanics, also invented by Feynman (see path integral formulation).

The naïve application of such calculations often produces diagrams whose amplitudes are infinite, because the short-distance particle interactions require a careful limiting procedure, to include particle self-interactions.
The technique of renormalization, pioneered by Feynman, Schwinger, and Tomonaga, compensates for this effect and eliminates the troublesome infinite terms. After such renormalization, calculations using Feynman diagrams often match experimental results with very good accuracy. Feynman diagram and path integral methods are also used in statistical mechanics.

Alternative names

Murray Gell-Mann always referred to Feynman diagrams as Stückelberg diagrams, after a Swiss physicist, Ernst Stückelberg, who devised a similar notation many years earlier. Stückelberg was motivated by the need for a manifestly covariant formalism for quantum field theory, but did not provide as automated a way to handle symmetry factors and loops, although he was first to find the correct physical interpretation in terms of forward and backward in time particle paths, all without the path integral. Historically they were sometimes called Feynman-Dyson diagrams or Dyson graphs, because when they were introduced the path integral was unfamiliar, and Freeman Dyson's derivation from old-fashioned perturbation theory was easier for physicists trained in earlier methods to follow.

Description

A Feynman diagram represents a perturbative contribution to the amplitude of a quantum transition from some initial quantum state to some final quantum state. For example, in the process of electron-positron annihilation the initial state is one electron and one positron, and the final state is two photons. The initial state is often assumed to be at the right of the diagram and the final state at the left (although other conventions are also used quite often).

A Feynman diagram consists of points, called vertices, and lines attached to the vertices. The particles in the initial state are depicted by lines sticking out in the direction of the initial state (e.g. to the right), and the particles in the final state are represented by lines sticking out in the direction of the final state (e.g. to the left). In QED there are two types of particles: electrons/positrons (called fermions) and photons (called gauge bosons). They are represented in Feynman diagrams as follows:

1. An electron in the initial state is represented by a solid line with an arrow pointing toward the vertex (•←).
2. An electron in the final state is represented by a line with an arrow pointing away from the vertex (←•).
3. A positron in the initial state is represented by a solid line with an arrow pointing away from the vertex (•→).
4. A positron in the final state is represented by a line with an arrow pointing toward the vertex (→•).
5. A photon in the initial and the final state is represented by a wavy line (•~ and ~•).

In a gauge theory (of which QED is a fine example) a vertex always has three lines attached to it: one bosonic line, one fermionic line with arrow toward the vertex, and one fermionic line with arrow away from the vertex. The vertices might be connected by a bosonic or fermionic propagator. A bosonic propagator is represented by a wavy line connecting two vertices (•~•). A fermionic propagator is represented by a solid line (with an arrow in one or another direction) connecting two vertices (•←•). The number of vertices gives the order of the term in the perturbation series expansion of the transition amplitude.
$e^+e^- \to 2\gamma$

[Diagram: two photon lines (~•) in the final state at the left, joined through an internal fermion line to an incoming electron (•←) and an incoming positron (•→) at the right.]

For example, this second order Feynman diagram contributes to a process (called electron-positron annihilation) where in the initial state (at the right) there is one electron (•←) and one positron (•→) and in the final state (at the left) there are two photons (~•).

Canonical quantization formulation

Perturbative S-matrix

The probability amplitude for a transition of a quantum system from the initial state $|i\rangle$ to the final state $|f\rangle$ is given by the matrix element

$$S_{fi}=\langle f|S|i\rangle\,,$$

where $S$ is the S-matrix. In the canonical quantum field theory the S-matrix is represented within the interaction picture by the perturbation series in the powers of the interaction Lagrangian,

$$S=\sum_{n=0}^{\infty}\frac{i^n}{n!}\int\prod_{j=1}^n d^4 x_j\; T\prod_{j=1}^n L_v(x_j)\equiv\sum_{n=0}^{\infty}S^{(n)}\,,$$

where $L_v$ is the interaction Lagrangian and $T$ signifies the time-product of operators. A Feynman diagram is a graphical representation of a term in the Wick expansion of the time product in the $n$-th order term $S^{(n)}$ of the S-matrix,

$$T\prod_{j=1}^n L_v(x_j)=\sum_{\mathrm{all\;possible\;contractions}}(\pm)\, N\prod_{j=1}^n L_v(x_j)\,,$$

where $N$ signifies the normal-product of the operators and $(\pm)$ takes care of the possible sign change when commuting the fermionic operators to bring them together for a contraction (a propagator).

Feynman rules

The diagrams are drawn according to the Feynman rules, which depend upon the interaction Lagrangian. For the QED interaction Lagrangian, $L_v=-g\bar\psi\gamma^\mu\psi A_\mu$, describing the interaction of a fermionic field $\psi$ with a bosonic gauge field $A_\mu$, the Feynman rules can be formulated in coordinate space as follows:

1. Each integration coordinate $x_j$ is represented by a point (sometimes called a vertex);
2. A bosonic propagator is represented by a curvy line connecting two points;
3. A fermionic propagator is represented by a solid line connecting two points;
4. A bosonic field $A_\mu(x_i)$ is represented by a curvy line attached to the point $x_i$;
5. A fermionic field $\psi(x_i)$ is represented by a solid line attached to the point $x_i$ with an arrow toward the point;
6. A fermionic field $\bar\psi(x_i)$ is represented by a solid line attached to the point $x_i$ with an arrow from the point.

Example: second order processes in QED

The second order perturbation term in the S-matrix is

$$S^{(2)}=\frac{(ie)^2}{2!}\int d^4x\, d^4x'\; T\,\bar\psi(x)\gamma^\mu\psi(x)A_\mu(x)\,\bar\psi(x')\gamma^\nu\psi(x')A_\nu(x')\,.$$

Scattering of fermions

The Wick expansion of the integrand gives (among others) the following term

$$N\,\bar\psi(x)\gamma^\mu\psi(x)\,\bar\psi(x')\gamma^\nu\psi(x')\,\underline{A_\mu(x)A_\nu(x')}\,,$$

where

$$\underline{A_\mu(x)A_\nu(x')}=\int\frac{d^4k}{(2\pi)^4}\,\frac{ig_{\mu\nu}}{k^2+i0}\,e^{-ik(x-x')}$$

is the electromagnetic contraction (propagator) in the Feynman gauge.
This term is represented by the Feynman diagram at the right. This diagram gives contributions to the following processes: $e^-e^-$ scattering (initial state at the right, final state at the left of the diagram); $e^+e^+$ scattering (initial state at the left, final state at the right of the diagram); $e^-e^+$ scattering (initial state at the bottom/top, final state at the top/bottom of the diagram).

Compton scattering and annihilation/generation of $e^-e^+$ pairs

Another interesting term in the expansion is

$$N\,\bar\psi(x)\gamma^\mu\,\underline{\psi(x)\bar\psi(x')}\,\gamma^\nu\psi(x')\,A_\mu(x)A_\nu(x')\,,$$

where

$$\underline{\psi(x)\bar\psi(x')}=\int\frac{d^4p}{(2\pi)^4}\,\frac{i}{\gamma p-m+i0}\,e^{-ip(x-x')}$$

is the fermionic contraction (propagator).

Path integral formulation

In a path integral, the field Lagrangian, integrated over all possible field histories, defines the probability amplitude to go from one field configuration to another. In order to make sense, the field theory should have a good ground state, and the integral should be performed a little bit rotated into imaginary time.

Scalar Field Lagrangian

A simple example is the free relativistic scalar field in d dimensions, whose action integral is:

$$S = \int \frac{1}{2}\,\partial_\mu \phi\, \partial^\mu \phi\; d^dx .$$

The probability amplitude for a process is:

$$\int_A^B e^{iS}\, D\phi$$

where A and B are space-like hypersurfaces which define the boundary conditions. The collection of all the $\phi(A)$ on the starting hypersurface gives the initial value of the field, analogous to the starting position for a point particle, and the field values $\phi(B)$ at each point of the final hypersurface define the final field value, which is allowed to vary, giving a different amplitude to end up at different values. This is the field-to-field transition amplitude.

The path integral gives the expectation value of operators between the initial and final state:

$$\int_A^B e^{iS}\, \phi(x_1) \cdots \phi(x_n)\, D\phi = \langle A|\, \phi(x_1) \cdots \phi(x_n)\, |B \rangle$$

and in the limit that A and B recede to the infinite past and the infinite future, the only contribution that matters is from the ground state (this is only rigorously true if the path integral is defined slightly rotated into imaginary time). The path integral should be thought of as analogous to a probability distribution, and it is convenient to define it so that multiplying by a constant doesn't change anything:

$$\frac{\int e^{iS}\, \phi(x_1) \cdots \phi(x_n)\, D\phi}{\int e^{iS}\, D\phi} = \langle 0 |\, \phi(x_1) \cdots \phi(x_n)\, |0\rangle$$

The normalization factor on the bottom is called the partition function for the field, and it coincides with the statistical mechanical partition function at zero temperature when rotated into imaginary time.

The initial-to-final amplitudes are ill-defined if you think of things in the continuum limit right from the beginning, because the fluctuations in the field can become unbounded. So the path integral should be thought of as on a discrete square lattice, with lattice spacing $a$, and the limit $a\rightarrow 0$ should be taken carefully.
If the final results do not depend on the shape of the lattice or the value of a, then the continuum limit exists. On a lattice, the field can be expanded in Fourier modes:

$$\phi(x) = \int \frac{dk}{(2\pi)^d}\, \phi(k)\, e^{ik\cdot x} = \int_k \phi(k)\, e^{ikx}$$

where the integration domain is over k restricted to a cube of side length $2\pi/a$, so that large values of k are not allowed. It is important to note that the k measure contains the factors of $2\pi$ from Fourier transforms; this is the standard convention for k integrals in QFT. The lattice means that fluctuations at large k are not allowed to contribute right away; they only start to contribute in the limit $a\rightarrow 0$. Sometimes, instead of a lattice, the field modes are just cut off at high values of k instead.

It is also convenient from time to time to consider the space-time volume to be finite, so that the k modes are also a lattice. This is not strictly as necessary as the space-lattice limit, because interactions in k are not localized, but it is convenient for keeping track of the factors in front of the k-integrals and the momentum-conserving delta functions which will arise.

On a lattice, the action needs to be discretized:

$$S = \sum_{\langle x,y \rangle} \frac{1}{2}\, \bigl(\phi(x) - \phi(y)\bigr)^2$$

where $\langle x,y\rangle$ means that x and y are nearest lattice neighbors. The discretization should be thought of as defining what the derivative $\partial_\mu \phi$ means. In terms of the lattice Fourier modes, the action can be written:

$$S= \int_k \Bigl((1-\cos(k_1)) + (1-\cos(k_2)) + \cdots + (1-\cos(k_d)) \Bigr)\,\phi^*_k\, \phi_k$$

which for k near zero is:

$$S = \int_k \frac{1}{2}\, k^2\, |\phi(k)|^2$$

which is the continuum Fourier transform of the original action. In finite volume, the quantity $d^dk$ is not infinitesimal, but becomes the volume of a box made by neighboring Fourier modes, which is $(2\pi)^d/V$.

The field $\phi$ is real valued, so the Fourier transform obeys:

$$\phi(k)^* = \phi(-k)\,.$$

In terms of real and imaginary parts, the real part of $\phi(k)$ is an even function of k, while the imaginary part is odd. The Fourier transform avoids double-counting, so that it can be written:

$$S = \int_k \frac{1}{2}\, k^2\, \phi(k)\, \phi(-k)$$

over an integration domain which integrates over each pair (k,-k) exactly once. For a complex scalar field with action

$$S = \int \frac{1}{2}\, \partial_\mu\phi^*\, \partial^\mu\phi\; d^dx$$

the Fourier transform is unconstrained:

$$S = \int_k \frac{1}{2}\, k^2\, |\phi(k)|^2$$

and the integral is over all k.

Integrating over all different values of $\phi(x)$ is equivalent to integrating over all Fourier modes, because taking a Fourier transform is a unitary linear transformation of field coordinates. When you change coordinates in a multidimensional integral by a linear transformation, the value of the new integral is given by the determinant of the transformation matrix. If

$$y_i = A_{ij} x_j\,,$$

then

$$\det(A) \int dx_1\, dx_2 \cdots dx_n = \int dy_1\, dy_2 \cdots dy_n\,.$$

If A is a rotation, then $A^T A = I$, so that $\det A = \pm 1$, and the sign depends on whether the rotation includes a reflection or not. The matrix which changes coordinates from $\phi(x)$ to $\phi(k)$ can be read off from the definition of a Fourier transform:

$$A_{kx} = e^{ikx}\,,$$

and the Fourier inversion theorem tells you the inverse:

$$A^{-1}_{kx} = e^{-ikx}\,,$$

which is the complex conjugate-transpose, up to factors of $2\pi$.
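As a small aside (my own illustration, not from the article; numpy assumed available): one can check numerically that the discrete Fourier matrix, once normalized, is unitary, so the Jacobian of this change of variables is trivial, as the next paragraph uses.

```
import numpy as np

N = 8
x = np.arange(N)
# Fourier matrix A_{kx} = e^{ikx} with k = 2*pi*j/N; normalized by
# 1/sqrt(N) it is unitary, so the Jacobian has |det| = 1.
A = np.exp(2j * np.pi * np.outer(x, x) / N) / np.sqrt(N)

print(np.allclose(A.conj().T @ A, np.eye(N)))   # True: A is unitary
print(abs(np.linalg.det(A)))                    # 1.0 up to rounding
```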
On a finite volume lattice, the determinant is nonzero and independent of the field values:

$$\det A = 1\,,$$

and the path integral is a separate factor at each value of k:

$$\int \exp\Bigl(\frac{i}{2}\int_k k^2\, \phi^*(k)\, \phi(k)\Bigr)\, D\phi = \prod_k \int_{\phi_k} e^{\frac{i}{2} k^2 |\phi_k|^2\, d^dk}\,,$$

and each separate factor is an oscillatory Gaussian. In imaginary time, the Euclidean action becomes positive definite, and can be interpreted as a probability distribution. The probability of a field having values $\phi_k$ is

$$e^{-\int_k \frac{1}{2} k^2\, \phi^*_k \phi_k} = \prod_k e^{-\frac{1}{2} k^2 |\phi_k|^2\, d^dk}\,.$$

The expectation value of the field is the statistical expectation value of the field when chosen according to the probability distribution:

$$\langle \phi(x_1) \cdots \phi(x_n) \rangle = \frac{\int e^{-S}\, \phi(x_1) \cdots \phi(x_n)\, D\phi}{\int e^{-S}\, D\phi}$$

Since the probability of $\phi_k$ is a product, the value of $\phi(k)$ at each separate value of k is independently Gaussian distributed. The variance of the Gaussian is $1/(k^2\, d^dk)$, which is formally infinite, but that just means that the fluctuations are unbounded in infinite volume. In any finite volume, the integral is replaced by a discrete sum, and the variance of each mode is $V/k^2$.

Monte-Carlo

The path integral defines a probabilistic algorithm to generate a Euclidean scalar field configuration. Randomly pick the real and imaginary parts of each Fourier mode at wavenumber k to be a Gaussian random variable with variance $1/k^2$. This generates a configuration $\phi_C(k)$ at random, and the Fourier transform gives $\phi_C(x)$. For real scalar fields, the algorithm must generate only one of each pair $\phi(k),\phi(-k)$, and make the second the complex conjugate of the first.

To find any correlation function, generate a field again and again by this procedure, and find the statistical average:

$$\langle \phi(x_1) \cdots \phi(x_n) \rangle = \lim_{|C|\rightarrow\infty}\frac{ \sum_C \phi_C(x_1) \cdots \phi_C(x_n) }{ |C| }$$

where $|C|$ is the number of configurations, and the sum is of the product of the field values on each configuration. The Euclidean correlation function is just the same as the correlation function in statistics or statistical mechanics. The quantum mechanical correlation functions are an analytic continuation of the Euclidean correlation functions. For free fields with a quadratic action, the probability distribution is a high dimensional Gaussian, and the statistical average is given by an explicit formula. But the Monte-Carlo method also works well for bosonic interacting field theories where there is no closed form for the correlation functions.

Scalar Propagator

Each mode is independently Gaussian distributed. The expectation of field modes is easy to calculate:

$$\langle\phi(k)\phi(k')\rangle = 0$$

for $k\ne k'$, since then the two Gaussian random variables are independent and both have zero mean. In finite volume V,

$$\langle\phi(k)\phi(k)\rangle = \frac{V}{k^2}$$

when the two k-values coincide, since this is the variance of the Gaussian.
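Before passing to infinite volume, here is a minimal Python sketch of the sampling algorithm just described (my own illustration, not from the article), on a one-dimensional periodic lattice. A small mass term $m^2$ is my addition, to regulate the $k=0$ mode which the massless text glosses over; the lattice $k^2$ is $2(1-\cos k)$ per the discretized action above.

```
import numpy as np

rng = np.random.default_rng(2)
N, m2, nconf = 64, 0.25, 20000
j = np.arange(N)
lam = 2.0 * (1.0 - np.cos(2.0 * np.pi * j / N)) + m2   # lattice k^2 + m^2

def sample():
    # Independently Gaussian Fourier modes with variance N/lam_k,
    # subject to c(-k) = c(k)* so that the field phi(x) is real.
    c = np.zeros(N, dtype=complex)
    c[0] = rng.normal(scale=np.sqrt(N / lam[0]))
    c[N // 2] = rng.normal(scale=np.sqrt(N / lam[N // 2]))
    for q in range(1, N // 2):
        sd = np.sqrt(N / (2.0 * lam[q]))
        c[q] = rng.normal(scale=sd) + 1j * rng.normal(scale=sd)
        c[N - q] = np.conj(c[q])
    return np.fft.ifft(c).real               # one configuration phi_C(x)

acc = np.zeros(N)
for _ in range(nconf):                       # statistical average over configs
    phi = sample()
    acc += phi * phi[0]
G_mc = acc / nconf

G_exact = np.fft.ifft(1.0 / lam).real        # (1/N) sum_k e^{ikx} / lam_k
for x in (0, 1, 2, 5):
    print(x, round(G_mc[x], 3), round(G_exact[x], 3))
```

The Monte-Carlo column converges to the exact mode sum as the number of configurations grows; with 20000 configurations the two agree to a couple of decimal places.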
In the infinite volume limit,

$$\langle\phi(k)\phi(k')\rangle = \frac{\delta(k-k')}{k^2}\,.$$

Strictly speaking, this is an approximation: the lattice propagator is

$$\langle\phi(k)\phi(k')\rangle = \frac{\delta(k-k')}{2(1-\cos(k_1))+2(1-\cos(k_2))+\cdots+2(1-\cos(k_d))}\,.$$

But near k=0, for field fluctuations long compared to the lattice spacing, the two forms coincide. It is important to emphasize that the delta functions contain factors of $2\pi$, so that they cancel out the $2\pi$ factors in the measure for k integrals:

$$\delta(k) = (2\pi)^d\, \delta_D(k_1)\,\delta_D(k_2) \cdots \delta_D(k_d)\,,$$

where $\delta_D(k)$ is the ordinary one-dimensional Dirac delta function. This convention for delta functions is not universal; some authors keep the factors of $2\pi$ in the delta functions (and in the k-integration) explicit.

Equation of Motion

The form of the propagator can be more easily found by using the equation of motion for the field. From the Lagrangian, the equation of motion is:

$$\partial_\mu \partial^\mu \phi = 0\,,$$

and in an expectation value, this says:

$$\partial_\mu\partial^\mu \langle \phi(x)\, \phi(y)\rangle = 0\,.$$

Here the derivatives act on x, and the identity is true everywhere except when x and y coincide, where the operator order matters. The form of the singularity can be understood from the canonical commutation relations to be a delta function. Defining the (Euclidean) Feynman propagator $\Delta$ as the Fourier transform of the time-ordered two-point function (the one that comes from the path integral):

$$\partial^2 \Delta(x) = i\delta(x)\,,$$

so that:

$$\Delta(k) = \frac{i}{k^2}\,.$$

If the equations of motion are linear, the propagator will always be the reciprocal of the quadratic-form matrix which defines the free Lagrangian, since this gives the equations of motion. This is also easy to see directly from the path integral. The factor of i disappears in the Euclidean theory.

Wick Theorem

Because each field mode is an independent Gaussian, the expectation values for the product of many field modes obey Wick's theorem:

$$\langle \phi(k_1)\, \phi(k_2) \cdots \phi(k_n)\rangle$$

is zero unless the field modes coincide in pairs. This means that it is zero for an odd number of $\phi$'s, while for an even number of $\phi$'s it is equal to a contribution from each pair separately, with a delta function:

$$\langle \phi(k_1) \cdots \phi(k_{2n})\rangle = \sum \prod_{i,j} \frac{\delta(k_i - k_j)}{k_i^2}$$

where the sum is over each partition of the field modes into pairs, and the product is over the pairs. For example,

$$\langle \phi(k_1)\, \phi(k_2)\, \phi(k_3)\, \phi(k_4) \rangle = \frac{\delta(k_1 -k_2)}{k_1^2}\frac{\delta(k_3-k_4)}{k_3^2} + \frac{\delta(k_1-k_3)}{k_3^2}\frac{\delta(k_2-k_4)}{k_2^2} + \frac{\delta(k_1-k_4)}{k_1^2}\frac{\delta(k_2 -k_3)}{k_2^2}$$

An interpretation of Wick's theorem is that each field insertion can be thought of as a dangling line, and the expectation value is calculated by linking up the lines in pairs, putting in a delta-function factor that ensures that the momenta of the partners in each pair are equal, and dividing by $k^2$ (the propagator).

Higher Gaussian moments: completing Wick's theorem

There is a subtle point left before Wick's theorem is proved: what if more than two of the $\phi$'s have the same momentum?
If it is an odd number, the integral is zero, since negative values cancel with the positive values. But if the number is even, the integral is positive. The previous demonstration assumed that the $\phi$'s would only match up in pairs. But the theorem is correct even when arbitrarily many of the $\phi$'s are equal, and this is a notable property of Gaussian integration:

$$I = \int e^{-ax^2/2}\, dx = \sqrt{\frac{2\pi}{a}}$$

$$\frac{\partial^n}{\partial a^n} I = (-1)^n\int \frac{x^{2n}}{2^n}\, e^{-ax^2/2}\, dx = (-1)^n\,\frac{1\cdot 3 \cdot 5 \cdots (2n-1)}{2 \cdot 2 \cdot 2 \cdots 2}\, \sqrt{2\pi}\; a^{-\frac{2n+1}{2}}$$

Dividing by I,

$$\langle x^{2n}\rangle=\frac{\int x^{2n}\, e^{-a x^2/2}\,dx}{\int e^{-a x^2/2}\,dx} = 1 \cdot 3 \cdot 5 \cdots (2n-1)\; \frac{1}{a^n}$$

$$\langle x^2 \rangle = \frac{1}{a}$$

If Wick's theorem were applied naively, the higher moments would be given by all possible pairings of a list of 2n x's:

$$\langle x_1 x_2 x_3 \cdots x_{2n} \rangle$$

where the x's are all the same variable; the index is just to keep track of the number of ways to pair them. The first x can be paired with 2n-1 others, leaving 2n-2. The next unpaired x can be paired with 2n-3 different x's, leaving 2n-4, and so on. This means that Wick's theorem, uncorrected, says that the expectation value of $x^{2n}$ should be:

$$\langle x^{2n} \rangle = (2n-1)\cdot(2n-3) \cdots 5 \cdot 3 \cdot 1\; \bigl(\langle x^2\rangle\bigr)^n$$

and this is in fact the correct answer. So Wick's theorem holds no matter how many of the momenta of the internal variables coincide.
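As a quick statistical check of these coincident moments (my own illustration in Python, not from the article; numpy assumed available), sampling a Gaussian of variance $1/a$ reproduces $(2n-1)!!\,\langle x^2\rangle^n$ up to sampling noise:

```
import math
import numpy as np

rng = np.random.default_rng(4)
a = 2.0
# Samples weighted by e^{-a x^2 / 2}: a Gaussian with variance 1/a.
x = rng.normal(scale=1.0 / math.sqrt(a), size=2_000_000)

for n in (1, 2, 3, 4):
    sampled = (x ** (2 * n)).mean()
    paired = math.prod(range(1, 2 * n, 2)) / a ** n   # (2n-1)!! <x^2>^n
    print(n, round(sampled, 3), round(paired, 3))     # agreement is statistical
```

The larger moments converge more slowly, as their sample variance is dominated by rare large fluctuations.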
Interaction

Interactions are represented by higher order contributions, since quadratic contributions are always Gaussian. The simplest interaction is the quartic self-interaction, with an action:

$$S = \int \frac{1}{2}\,\partial^\mu \phi\, \partial_\mu\phi + \frac{\lambda}{4!}\, \phi^4.$$

The reason for the combinatorial factor 4! will be clear soon. Writing the action in terms of the lattice (or continuum) Fourier modes:

$$S = \int_k \frac{1}{2}\, k^2\, |\phi(k)|^2 + \frac{\lambda}{4!}\int_{k_1k_2k_3k_4} \phi(k_1)\, \phi(k_2)\, \phi(k_3)\,\phi(k_4)\; \delta(k_1+k_2+k_3 + k_4) = S_F + X,$$

where $S_F$ is the free action, whose correlation functions are given by Wick's theorem. The exponential of S in the path integral can be expanded in powers of $\lambda$, giving a series of corrections to the free action:

$$e^{-S} = e^{-S_F}\Bigl(1 + X + \frac{1}{2!}\, X X + \frac{1}{3!}\, X X X + \cdots \Bigr)$$

The path integral for the interacting action is then a power series of corrections to the free action. The term represented by X should be thought of as four half-lines, one for each factor of $\phi(k)$. The half-lines meet at a vertex, which contributes a delta function ensuring that the momenta sum to zero.

To compute a correlation function in the interacting theory, there is a contribution from the X terms now. For example, the path-integral for the four-field correlator:

$$\langle \phi(k_1)\, \phi(k_2)\, \phi(k_3)\, \phi(k_4) \rangle = \frac{\int e^{-S}\, \phi(k_1)\phi(k_2)\phi(k_3)\phi(k_4)\, D\phi}{Z}$$

which in the free field was only nonzero when the momenta k were equal in pairs, is now nonzero for all values of the k. The momenta of the insertions $\phi(k_i)$ can now match up with the momenta of the X's in the expansion. The insertions should also be thought of as half-lines, four in this case, which carry a momentum k, but one which is not integrated.

The lowest order contribution comes from the first nontrivial term $e^{-S_F} X$ in the Taylor expansion of the action. Wick's theorem requires that the momenta in the X half-lines, the $\phi(k)$ factors in X, should match up with the momenta of the external half-lines in pairs. The new contribution is equal to:

$$\lambda\; \frac{1}{k_1^2}\; \frac{1}{k_2^2}\; \frac{1}{k_3^2}\; \frac{1}{k_4^2}\,.$$

The 4! inside X is canceled because there are exactly 4! ways to match the half-lines in X to the external half-lines. Each of these different ways of matching the half-lines together in pairs contributes exactly once, regardless of the values of the k's, by Wick's theorem.

Feynman Diagrams

The expansion of the action in powers of X gives a series of terms with progressively higher numbers of X's. The contribution from the term with exactly n X's is called n-th order. The n-th order term has:

- 4n internal half-lines, which are the factors of $\phi(k)$ from the X's. These all end on a vertex, and are integrated over all possible k.
- external half-lines, which come from the $\phi(k)$ insertions in the integral.

By Wick's theorem, each pair of half-lines must be paired together to make a line, and this line gives a factor of

$$\frac{\delta(k_1 + k_2)}{k_1^2}$$

which multiplies the contribution. This means that the two half-lines that make a line are forced to have equal and opposite momentum. The line itself should be labelled by an arrow, drawn parallel to the line, and labeled by the momentum in the line k. The half-line at the tail end of the arrow carries momentum k, while the half-line at the head end carries momentum -k. If one of the two half-lines is external, this kills the integral over the internal k, since it forces the internal k to be equal to the external k. If both are internal, the integral over k remains.

The diagrams which are formed by linking the half-lines in the X's with the external half-lines, representing insertions, are the Feynman diagrams of this theory. Each line carries a factor of $\frac{1}{k^2}$, the propagator, and either goes from vertex to vertex, or ends at an insertion. If it is internal, it is integrated over. At each vertex, the total incoming k is equal to the total outgoing k. The number of ways of making a diagram by joining half-lines into lines almost completely cancels the factorial factors coming from the Taylor series of the exponential and the 4! at each vertex.
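To make the combinatorics concrete, here is a small Python sketch (my own illustration, not from the article): counting the ways to attach four distinct external half-lines to the four half-lines of one X reproduces the 4! that cancels the vertex factor, and counting pairings of 2n half-lines reproduces the $(2n-1)!!$ of Wick's theorem.

```
from itertools import permutations

# Ways to attach 4 labelled external half-lines to the 4 half-lines
# of a single X vertex: one bijection per attachment.
print(sum(1 for _ in permutations(range(4))))    # 24 = 4!

def matchings(lines):
    # Number of ways to join an even list of half-lines in pairs.
    if not lines:
        return 1
    rest = lines[1:]
    return sum(matchings(rest[:i] + rest[i + 1:]) for i in range(len(rest)))

for n in (1, 2, 3, 4):
    print(2 * n, matchings(list(range(2 * n))))  # (2n-1)!!: 1, 3, 15, 105
```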
Loop Order

A tree diagram is one where all the internal lines have momentum which is completely determined by the external lines and the condition that the incoming and outgoing momenta are equal at each vertex. The contribution of these diagrams is a product of propagators, without any integration. An example of a tree diagram is the one where each of four external lines ends on an X. Another is when eight external lines end on two X's. A third is when three external lines end on an X, and the remaining half-line joins up with another X, and the remaining half-lines of this X run off to external lines. It is easy to verify that in all these cases, the momenta on all the internal lines are determined by the external momenta and the condition of momentum conservation at each vertex.

A diagram which is not a tree diagram is called a loop diagram, and an example is one where two lines of an X are joined to external lines, while the remaining two lines are joined to each other. The two lines joined to each other can have any momentum at all, since they both enter and leave the same vertex. A more complicated example is one where two X's are joined to each other by matching the legs one to the other. This diagram has no external lines at all.

The reason loop diagrams are called loop diagrams is because the number of k-integrals which are left undetermined by momentum conservation is equal to the number of independent closed loops in the diagram, where independent loops are counted as in homology theory. The homology is real-valued (actually $R^d$ valued); the value associated with each line is the momentum. The boundary operator takes each line to the sum of the end-vertices with a positive sign at the head and a negative sign at the tail. The condition that the momentum is conserved is exactly the condition that the boundary of the k-valued weighted graph is zero. A set of k-values can be relabeled whenever there is a closed loop going from vertex to vertex, never revisiting the same vertex. Such a cycle can be thought of as the boundary of a 2-cell. The k-labelings of a graph which conserve momentum (which have zero boundary) up to redefinitions of k (up to boundaries of 2-cells) define the first homology of a graph. The number of independent momenta which are not determined is then equal to the number of independent homology loops. For many graphs, this is equal to the number of loops as counted in the most intuitive way.
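For a graph with E lines, V vertices, and C connected components, this count of independent loops is the cycle rank E - V + C. A small Python helper (my own illustration, not from the article) computes it with a union-find, checked against the examples above:

```
# First Betti number of a Feynman graph: independent loops = E - V + C,
# i.e. the number of internal momenta left undetermined by conservation.
def loop_count(vertices, edges):
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    components = len(vertices)
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            components -= 1
    return len(edges) - len(vertices) + components

# One X with two half-lines joined to each other (a self-loop): 1 loop.
print(loop_count([1], [(1, 1)]))                 # -> 1
# Two X's joined by all four legs (a vacuum bubble): 3 loops.
print(loop_count([1, 2], [(1, 2)] * 4))          # -> 3
# A tree: two vertices joined by a single line: 0 loops.
print(loop_count([1, 2], [(1, 2)]))              # -> 0
```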
Symmetry factors

The number of ways to form a given Feynman diagram by joining together half-lines is large, and by Wick's theorem, each way of pairing up the half-lines contributes equally. Often, this completely cancels the factorials in the denominator of each term, but the cancellation is sometimes incomplete. The uncancelled denominator is called the symmetry factor of the diagram. The contribution of each diagram to the correlation function must be divided by its symmetry factor.

For example, consider the Feynman diagram formed from two external lines joined to one X, and the remaining two half-lines in the X joined to each other. There are 4*3 ways to join the external half-lines to the X, and then there is only one way to join the two remaining lines to each other. The X comes divided by 4!=4*3*2, but the number of ways to link up the X half-lines to make the diagram is only 4*3, so the contribution of this diagram is divided by two.

For another example, consider the diagram formed by joining all the half-lines of one X to all the half-lines of another X. This diagram is called a vacuum bubble, because it does not link up to any external lines. There are 4! ways to form this diagram, but the denominator includes a 2! (from the expansion of the exponential, there are two X's) and two factors of 4!. The contribution is multiplied by 4!/(2*4!*4!) = 1/48.

Another example is the Feynman diagram formed from two X's where each X links up to two external lines, and the remaining two half-lines of each X are joined to each other. The number of ways to link an X to two external lines is 4*3, and either X could link up to either pair, giving an additional factor of 2. The remaining two half-lines in the two X's can be linked to each other in two ways, so that the total number of ways to form the diagram is 4*3*4*3*2*2, while the denominator is 4!4!2!. The total symmetry factor is 2, and the contribution of this diagram is divided by two.

The symmetry factor theorem gives the symmetry factor for a general diagram: the contribution of each Feynman diagram must be divided by the order of its group of automorphisms, the number of symmetries that it has. An automorphism of a Feynman graph is a permutation M of the lines and a permutation N of the vertices with the following properties:

1. If a line l goes from vertex v to vertex v', then M(l) goes from N(v) to N(v'). If the line is undirected, as it is for a real scalar field, then M(l) can go from N(v') to N(v) too.
2. If a line l ends on an external line, M(l) ends on the same external line.
3. If there are different types of lines, M(l) should preserve the type.

This theorem has an interpretation in terms of particle paths: when identical particles are present, the integral over all intermediate particles must not double-count states which only differ by interchanging identical particles.

Proof: To prove this theorem, label all the internal and external lines of a diagram with a unique name. Then form the diagram by linking a half-line to a name and then to the other half-line. Now count the number of ways to form the named diagram. Each permutation of the X's gives a different pattern of linking names to half-lines, and this is a factor of n!. Each permutation of the half-lines in a single X gives a factor of 4!. So a named diagram can be formed in exactly as many ways as the denominator of the Feynman expansion. But the number of unnamed diagrams is smaller than the number of named diagrams by the order of the automorphism group of the graph.

Connected diagrams: linked cluster theorem

A diagram is connected when it is connected as a graph, meaning that there is a sequence of attached lines and vertices which link any line or vertex to any other. The connected diagrams suffice to reconstruct the full Feynman series, and this is the linked cluster theorem. The full series is the sum over all diagrams, each of which may include several connected components, each one occurring multiple times. The automorphism group of the full graph consists of the automorphisms of the connected components, and an extra factor of $n_i!$ for permutations of $n_i$ identical copies of one connected component:

$$\sum \prod_i \frac{C_{i}^{n_i}}{n_i!}$$

But this can be seen to be a product of separate factors, one for each connected graph:

$$\prod_i \sum_{n_i} \frac{C_i^{n_i}}{n_i!} = \prod_i \exp(C_i) = \exp\Bigl(\sum_i C_i\Bigr).$$

This is the linked cluster theorem: the sum of all diagrams is the exponential of the connected ones.
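Before turning to vacuum bubbles, the exponentiation identity is easy to check numerically; a tiny Python sketch (my own illustration) with two species of connected diagram of values C1 and C2:

```
import math

# Two species of connected diagram with values C1, C2; summing over all
# disconnected combinations with the 1/n! symmetry factors exponentiates.
C1, C2 = 0.3, 0.7
total = sum(C1**n1 / math.factorial(n1) * C2**n2 / math.factorial(n2)
            for n1 in range(30) for n2 in range(30))
print(total, math.exp(C1 + C2))   # both ~ e^{1.0} = 2.71828...
```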
Vacuum Bubbles

An immediate consequence of the linked cluster theorem is that all vacuum bubbles, diagrams without external lines, cancel when calculating correlation functions. A correlation function is given by a ratio of path-integrals:

$$\langle \phi_1(x_1) \cdots \phi_n(x_n)\rangle = \frac{\int e^{-S}\, \phi_1(x_1) \cdots\phi_n(x_n)\, D\phi}{\int e^{-S}\, D\phi}.$$

The top is the sum over all Feynman diagrams, including disconnected diagrams which do not link up to external lines at all. In terms of the connected diagrams, the numerator includes the same contributions of vacuum bubbles as the denominator:

$$\int e^{-S}\,\phi_1(x_1)\cdots\phi_n(x_n)\, D\phi = \Bigl(\sum_i E_i\Bigr)\exp\Bigl(\sum_i C_i\Bigr),$$

where the sum over E diagrams includes only those diagrams each of whose connected components ends on at least one external line. The vacuum bubbles are the same whatever the external lines, and give an overall multiplicative factor. The denominator is the sum over all vacuum bubbles, and dividing gets rid of the second factor.

The vacuum bubbles then are only useful for determining Z itself, which from the definition of the path integral is equal to:

$$Z= \int e^{-S}\, D\phi = e^{-HT} = e^{-\rho V}$$

where $\rho$ is the energy density in the vacuum. Each vacuum bubble contains a factor of $\delta(k)$ zeroing the total k at each vertex, and when there are no external lines, this contains a factor of $\delta(0)$, because the momentum conservation is over-enforced. In finite volume, this factor can be identified as the total volume of space-time. Dividing by the volume, the remaining integral for the vacuum bubble has an interpretation: it is a contribution to the energy density of the vacuum.

Sources

Correlation functions are the sum of the connected Feynman diagrams, but the formalism treats the connected and disconnected diagrams differently. Internal lines end on vertices, while external lines go off to insertions. Introducing sources unifies the formalism, by making new vertices where one line can end. Sources are external fields, fields which contribute to the action, but are not dynamical variables. A scalar field source is another scalar field h which contributes a term to the (Lorentz) Lagrangian:

$$\int h(x)\, \phi(x)\, d^dx = \int h(k)\, \phi(k)\, d^dk\,.$$

In the Feynman expansion, this contributes H terms with one half-line ending on a vertex. Lines in a Feynman diagram can now end either on an X vertex, or on an H vertex, and only one line enters an H vertex. The Feynman rule for an H vertex is that a line from an H with momentum k gets a factor of h(k).

The sum of the connected diagrams in the presence of sources includes a term for each connected diagram in the absence of sources, except now the diagrams can end on the source. Traditionally, a source is represented by a little "x" with one line extending out, exactly as an insertion.

$$\log(Z[h]) = \sum_{n,C} h(k_1)\, h(k_2) \cdots h(k_n)\; C(k_1,\ldots,k_n)\,,$$

where $C(k_1,\ldots,k_n)$ is the connected diagram with n external lines carrying momenta as indicated. The sum is over all connected diagrams, as before. The field h is not dynamical, which means that there is no path integral over h: h is just a parameter in the Lagrangian which varies from point to point. The path integral for the field is:

$$Z[h] = \int e^{iS + i\int h\phi}\, D\phi\,,$$

and it is a function of the values of h at every point.
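The statement that derivatives of $\log Z[h]$ with respect to the source give connected correlation functions can be checked in a zero-dimensional toy model, where the "path integral" is an ordinary integral. A minimal Python sketch (my own illustration, not from the article; scipy assumed available, and the Euclidean weight $e^{-x^2/2-\lambda x^4/4!+hx}$ stands in for $e^{-S+\int h\phi}$):

```
import numpy as np
from scipy.integrate import quad

lam = 0.4                                  # quartic coupling

def Z(h):
    # zero-dimensional "path integral" with a source term h*x
    f = lambda x: np.exp(-0.5 * x**2 - lam * x**4 / 24.0 + h * x)
    return quad(f, -np.inf, np.inf)[0]

def moment(n):
    f = lambda x: x**n * np.exp(-0.5 * x**2 - lam * x**4 / 24.0)
    return quad(f, -np.inf, np.inf)[0] / Z(0.0)

# <x^2>_connected = d^2/dh^2 log Z at h=0 (second central difference)
eps = 1e-2
d2logZ = (np.log(Z(eps)) - 2 * np.log(Z(0.0)) + np.log(Z(-eps))) / eps**2
print(d2logZ, moment(2))                   # the two agree
```

The second h-derivative of log Z at h = 0 matches the two-point function to several decimal places; here it equals the connected one, since the x → -x symmetry makes the one-point function vanish.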
One way to interpret the expression $Z[h]$ is that it is taking the Fourier transform in field space. If there is a probability density on $R^n$, the Fourier transform of the probability density is:

$$\int \rho(y)\, e^{i h\cdot y}\; d^n y = \langle e^{i h\cdot y} \rangle = \Bigl\langle \prod_{i=1}^{n} e^{i h_i y_i}\Bigr\rangle\,.$$

The Fourier transform is the expectation of an oscillatory exponential. The path integral in the presence of a source h(x) is:

$$Z[h] = \int e^{iS}\, e^{i\int_x h(x)\phi(x)}\, D\phi = \langle e^{i h \phi} \rangle\,,$$

which, on a lattice, is the product of an oscillatory exponential for each field value:

$$\Bigl\langle \prod_x e^{i h_x \phi_x}\Bigr\rangle$$

The Fourier transform of a delta function is a constant, which gives a formal expression for a delta function:

$$\delta(x-y) = \int e^{ik(x-y)}\, dk$$

This tells you what a field delta function looks like in a path integral. For two scalar fields $\phi$ and $\eta$,

$$\delta(\phi - \eta) = \int e^{i\int h(x)(\phi(x) -\eta(x))\, d^dx}\, Dh\,,$$

which integrates over the Fourier transform coordinate, over h. This expression is useful for formally changing field coordinates in the path integral, much as a delta function is used to change coordinates in an ordinary multi-dimensional integral.

The partition function is now a function of the field h, and the physical partition function is the value when h is the zero function: $Z = Z[h]\big|_{h=0}$. The correlation functions are derivatives of the path integral with respect to the source:

$$\langle\phi(x)\rangle = \frac{1}{Z}\, \frac{\partial}{\partial h(x)}\, Z[h] = \frac{\partial}{\partial h(x)}\, \log(Z[h])\,.$$

Spin 1/2: Grassmann integrals

The preceding discussion can be extended to the Fermi case, but only if the notion of integration is expanded.

Particle-Path Interpretation

A Feynman diagram is a representation of quantum field theory processes in terms of particle paths. In a Feynman diagram, particles are represented by lines, which can be squiggly or straight, with an arrow or without, depending on the type of particle. A point where lines connect to other lines is referred to as an interaction vertex, or vertex for short. There are three different types of lines: internal lines connect two vertices, incoming lines extend from "the past" to a vertex and represent an initial state, and outgoing lines extend from a vertex to "the future" and represent the final state.

There are several conventions for where to represent the past and the future. Sometimes, the bottom of the diagram represents the past and the top of the diagram represents the future. Other times, the past is to the left and the future to the right. When calculating correlation functions instead of scattering amplitudes, there is no past and future and all the lines are internal. Then the particle lines begin and end on small x's, which represent the positions of the operators whose correlation is being calculated. The LSZ reduction formula is the standardized argument that shows that the correlation functions and scattering diagrams are the same. Feynman diagrams are a pictorial representation of a contribution to the total amplitude for a process which can happen in several different ways.
When a group of incoming particles are to scatter off each other, the process can be thought of as one where the particles travel over all possible paths, including paths that go backward in time. The amplitude is then computed as a perturbative expansion for the experiment defined by the incoming and outgoing lines. In some quantum field theories (notably quantum electrodynamics), one can obtain an excellent approximation of the scattering amplitude from a few terms of the perturbative expansion, corresponding to a few simple Feynman diagrams with the same incoming and outgoing lines connected by different vertices and internal lines.

The method, although originally invented for particle physics, is useful in any part of physics where there are statistical or quantum fields. In condensed matter physics, there are many-body Feynman diagrams with dashed lines which represent an instantaneous potential interaction, while phonons take the place of photons. In statistical physics, there are statistical Feynman diagrams which represent the way in which correlations travel along paths.

Feynman diagrams are often confused with spacetime diagrams and bubble chamber images because they all seek to represent particle scattering. Feynman diagrams are graphs that represent the trajectories of particles in intermediate stages of a scattering process. Unlike a bubble chamber picture, only the sum of all the Feynman diagrams represents any given particle interaction; particles do not choose a particular diagram each time they interact. The law of summation is in accord with the principle of superposition: every diagram contributes a factor to the total amplitude for the process.

Scattering

The correlation functions of a quantum field theory describe the scattering of particles. The definition of "particle" in relativistic field theory is not self-evident, because if you try to determine the position so that the uncertainty is less than the Compton wavelength, the uncertainty in energy is large enough to produce more particles and antiparticles of the same type from the vacuum. This means that the notion of a single-particle state is to some extent incompatible with the notion of an object localized in space.

In the 1930s, Wigner gave a mathematical definition for single-particle states: they are a collection of states which form an irreducible representation of the Poincaré group. Single-particle states describe an object with a finite mass, a well defined momentum, and a spin. This definition is fine for protons and neutrons, electrons and photons, but it excludes quarks, which are permanently confined, so the modern point of view is more accommodating: a particle is anything whose interaction can be described in terms of Feynman diagrams, which have an interpretation as a sum over particle trajectories.

A field operator can act to produce a one-particle state from the vacuum, which means that the field operator $\phi(x)$ produces a superposition of Wigner particle states. In the free field theory, the field produces one-particle states only. But when there are interactions, the field operator can also produce 3-particle and 5-particle states too (and, if there is no +/- symmetry, also 2-, 4-, and 6-particle states).
To compute the scattering amplitude for single-particle states only requires a careful limit, sending the fields to infinity and integrating over space to get rid of the higher-order corrections. The relation between scattering and correlation functions is the LSZ theorem: the scattering amplitude for n particles to go to m particles in a scattering event is given by the sum of the Feynman diagrams that go into the correlation function for n+m field insertions, leaving out the propagators for the external legs.

For example, for the $\lambda \phi^4$ interaction of the previous section, the order $\lambda$ contribution to the (Lorentz) correlation function is:

$$\langle \phi(k_1)\phi(k_2)\phi(k_3)\phi(k_4)\rangle = \frac{i}{k_1^2}\,\frac{i}{k_2^2}\,\frac{i}{k_3^2}\,\frac{i}{k_4^2}\; i\lambda\,.$$

Stripping off the external propagators, that is, removing the factors of $i/k^2$, gives the invariant scattering amplitude M:

$$M = i\lambda\,,$$

which is a constant, independent of the incoming and outgoing momenta. The interpretation of the scattering amplitude is that the sum of $|M|^2$ over all possible final states is the probability for the scattering event. The normalization of the single-particle states must be chosen carefully, however, to ensure that M is a relativistic invariant.

Non-relativistic single-particle states are labeled by the momentum k, and they are chosen to have the same norm at every value of k. This is because the nonrelativistic unit operator on single-particle states is:

$$\int dk\, |k\rangle\langle k|\,.$$

In relativity, the integral over the k states for a particle of mass m integrates over a hyperbola in E,k space defined by the energy-momentum relation:

$$E^2 - k^2 = m^2\,.$$

If the integral weighs each k point equally, the measure is not Lorentz invariant. The invariant measure integrates over all values of k and E, restricting to the hyperbola with a Lorentz-invariant delta function:

$$\int \delta(E^2-k^2 - m^2)\, |E,k\rangle\langle E,k|\; dE\, dk = \int \frac{dk}{2 E}\, |k\rangle\langle k|\,,$$

so the normalized k-states differ from the relativistically normalized k-states by a factor of

$$\sqrt{E} = (k^2+m^2)^{\frac{1}{4}}\,.$$

The invariant amplitude M is then the probability amplitude for relativistically normalized incoming states to become relativistically normalized outgoing states. For nonrelativistic values of k, the relativistic normalization is the same as the nonrelativistic normalization (up to a constant factor $\sqrt{m}$). In this limit, the $\phi^4$ invariant scattering amplitude is still constant. The particles created by the field $\phi$ scatter in all directions with equal amplitude.

The nonrelativistic potential which scatters in all directions with an equal amplitude (in the Born approximation) is one whose Fourier transform is constant: a delta-function potential. The lowest order scattering of the theory reveals the non-relativistic interpretation of this theory: it describes a collection of particles with a delta-function repulsion. Two such particles have an aversion to occupying the same point at the same time.
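Returning to the invariant measure used above, the reduction of the mass-shell delta function can be spelled out (a standard identity, my own addition, using $\delta(g(E)) = \sum_{\text{roots}} \delta(E-E_*)/|g'(E_*)|$ with $g(E)=E^2-k^2-m^2$ and the positive-energy root $E_k$):

$$\delta(E^2 - k^2 - m^2)\Big|_{E>0} = \frac{\delta(E - E_k)}{2E_k}\,, \qquad E_k = \sqrt{k^2 + m^2}\,,$$

so that

$$\int \delta(E^2 - k^2 - m^2)\, |E,k\rangle\langle E,k|\; dE\, dk = \int \frac{dk}{2E_k}\, |k\rangle\langle k|\,,$$

which is the $1/(2E)$ weight quoted above and the source of the $\sqrt{E}$ relative normalization of the states.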
Nonperturbative effects

Thinking of Feynman diagrams as a perturbation series, nonperturbative effects like tunneling do not show up, because any effect which goes to zero faster than any polynomial does not affect the Taylor series. Even bound states are absent, since at any finite order particles are only exchanged a finite number of times, and to make a bound state, the binding force must last forever.

But this point of view is misleading, because the diagrams not only describe scattering, but they are also a representation of the short-distance field theory correlations. They encode not only asymptotic processes like particle scattering, they also describe the multiplication rules for fields, the operator product expansion. Nonperturbative tunneling processes involve field configurations which on average get big when the coupling constant gets small, but each configuration is a coherent superposition of particles whose local interactions are described by Feynman diagrams. When the coupling is small, these become collective processes which involve large numbers of particles, but where the interactions between each of the particles is simple. This means that nonperturbative effects show up asymptotically in resummations of infinite classes of diagrams, and these diagrams can be locally simple. The graphs determine the local equations of motion, while the allowed large-scale configurations describe non-perturbative physics.

But because Feynman propagators are nonlocal in time, translating a field process to a coherent particle language is not completely intuitive, and has only been explicitly worked out in certain special cases. In the case of nonrelativistic bound states, the Bethe-Salpeter equation describes the class of diagrams to include to describe a relativistic atom. For quantum chromodynamics, the Shifman-Vainshtein-Zakharov sum rules describe non-perturbatively excited long-wavelength field modes in particle language, but only in a phenomenological way.

The number of Feynman diagrams at high orders of perturbation theory is very large, because there are as many diagrams as there are graphs with a given number of nodes. Nonperturbative effects leave a signature on the way in which the number of diagrams and resummations diverge at high order. It is only because non-perturbative effects appear in hidden form in diagrams that it was possible to analyze nonperturbative effects in string theory, where in many cases a Feynman description is the only one available.

Mathematical details

A Feynman diagram can be considered a graph. When considering a field composed of particles, the edges will represent (sections of) particle world lines; the vertices represent virtual interactions. Since only certain interactions are permitted, the graph is constrained to have only certain types of vertices. The type of field of an edge is its field label; the permitted types of interaction are interaction labels. The value of a given diagram can be derived from the graph; the value of the interaction as a whole is obtained by summing over all diagrams.

Mathematical interpretation

Feynman diagrams are really a graphical way of keeping track of deWitt indices, much like Penrose's graphical notation for indices in multilinear algebra. There are several different types for the indices, one for each field (this depends on how the fields are grouped: for instance, if the up quark field and down quark field are treated as different fields, then a separate type is assigned to each of them, while if they are treated as a single multicomponent field with "flavors", there is a single type).
The edges (i.e., propagators) are tensors of rank (2,0) in deWitt's notation (i.e., with two contravariant indices and no covariant indices), while the vertices of degree n are rank n covariant tensors which are totally symmetric among all bosonic indices of the same type and totally antisymmetric among all fermionic indices of the same type. The contraction of a propagator with a rank n covariant tensor is indicated by an edge incident to a vertex (there is no ambiguity in which "slot" to contract with, because the vertices correspond to totally symmetric tensors). The external vertices correspond to the uncontracted contravariant indices.

A derivation of the Feynman rules using Gaussian functional integrals is given in the functional integral article. Each Feynman diagram on its own does not have a physical significance. It is only the infinite sum over all possible (bubble-free) Feynman diagrams which gives physical results, and this infinite sum is usually only asymptotically convergent.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 153, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9060658812522888, "perplexity_flag": "head"}
http://mathhelpforum.com/differential-geometry/112144-integrate-using-complex.html
# Thread:

1. ## Integrate using complex

I'm stuck on this question and need a hint; thanks in advance. Show that $\int_{0}^{\pi} \sin ^{2n} \theta d\theta =\frac{(2n)! \pi}{2^{2n} (n!)^2}$

My work: let $z=e^{i\theta}$, so $d\theta = \frac{dz}{iz}$ and $\sin \theta = \frac{1}{2i}\left(z-\frac{1}{z} \right)$, giving

$\int_{\mid z \mid =1 } \left(\frac{1}{2i}\right)^{2n}\left(z-\frac{1}{z} \right)^{2n} \frac{dz}{zi}$

$\int_{\mid z \mid =1 }\left(\frac{1}{2i}\right)^{2n} \left(\frac{z^2-1}{z}\right)^{2n}\frac{dz}{zi}$

$\int_{\mid z \mid =1 } \frac{(z^2-1)^{2n}}{z^{2n+1}} dz$

The function is not analytic at $z=0$. Let $g(z) =(z^2-1)^{2n}$, so $\int_{\mid z \mid =1 } \frac{(z^2-1)^{2n}}{z^{2n+1}} dz= \frac{2\pi i g^{(2n)}(0)}{(2n)!}$, where $g^{(2n)}$ is the $2n$-th derivative of $g(z)$. But I can't find the $2n$-th derivative; it is long. I think there is a mistake in my work; any help will be appreciated.

2. I did this quick so check it well. If you're going to use the Residue Theorem, go all the way round. Note: $\int_0^{\pi} \sin^{2n}(t)dt=1/2 \int_0^{2\pi} \sin^{2n}(x)dx$ Now let $z=e^{it}$ and when I do that I get: $-\frac{i}{2}\mathop\oint\limits_{|z|=1} \frac{(z^2-1)^{2n}}{z^{2n}} \frac{1}{2^{2n}}\frac{1}{i^{2n}} \frac{1}{z} dz$ For now, suppose we only wanted to solve: $\mathop\oint\limits_{|z|=1}\frac{(z^2-1)^{2n}}{z^{2n+1}}dz$ That's still a messy derivative but we can simplify it if we make a change of variables. What happens to both the integrand and contour if I let $u=z^2-1$? If $z=e^{it}$, then $u=-1+e^{2it}$ right? Isn't that a circular contour around the point $u=-1$, going around twice as $t$ goes from 0 to $2\pi$? Make all those substitutions and try to show: $\mathop\oint\limits_{|z|=1} \frac{(z^2-1)^{2n}}{z^{2n+1}}dz=\frac{1}{2}\mathop\oint\limits_{\tiny\begin{array}{c}u=-1+e^{2it} \\ 0\leq t \leq 2\pi\end{array}} \frac{u^{2n}}{(u+1)^n}du$

3. Originally Posted by shawsend [quoted in full above] Thanks very much

4. Originally Posted by shawsend $\mathop\oint\limits_{|z|=1} \frac{(z^2-1)^{2n}}{z^{2n+1}}dz=\frac{1}{2}\mathop\oint\limits_{\tiny\begin{array}{c}u=-1+e^{2it} \\ 0\leq t \leq 2\pi\end{array}} \frac{u^{2n}}{(u+1)^n}du$ After checking it, I think this should be: $\mathop\oint\limits_{|z|=1} \frac{(z^2-1)^{2n}}{z^{2n+1}}dz=\frac{1}{2}\mathop\oint\limits_{\tiny\begin{array}{c}u=-1+e^{2it} \\ 0\leq t \leq 2\pi\end{array}} \frac{u^{2n}}{(u+1)^{n+1}}du$ Here's a numeric check of this relationship with n=6 (because this sort of thing is new for me too).
Remember, we're going around twice in the second integral so I just multiplied it by two and went around once:

Code:
```
In[65]:= NIntegrate[(u^(2*n)/(u + 1)^(n + 1))*I*Exp[I*t] /. {n -> 6, u -> -1 + Exp[I*t]}, {t, 0, 2*Pi}]
N[2*Pi*I*((-1)^n*(2*n)!/n!^2) /. n -> 6]
NIntegrate[((z^2 - 1)^(2*n)/z^(2*n + 1))*I*Exp[I*t] /. {n -> 6, z -> Exp[I*t]}, {t, 0, 2*Pi}]

Out[65]= -9.379164112033322*^-13 + 5805.66322740536*I
Out[66]= 0. + 5805.663223833938*I
Out[67]= -1.3515644212702682*^-6 + 5805.663226625173*I
```

5. Originally Posted by shawsend [the corrected relation quoted above] Ya, I figured that, but what I wanted was just the idea, and you gave me it. Thanks.

6. Originally Posted by Amer [original question quoted above] You are really near the solution! First, one small mistake in the calculations: take into account $\left(\frac{1}{2i}\right)^{2n}\frac{1}{i}=\frac{(-1)^n}{i2^{2n}}$. This factor is missing from your integral. Second, as shawsend observed, the integral you want is one half of the integral around the circle. Third, observe that your $g(z)=b_0+b_1z+\cdots+b_{4n}z^{4n}$ is a polynomial of degree $4n$. Thus, $g^{(2n)} (0)=(2n)!b_{2n}$. By the binomial formula applied to $(z^2+(-1))^{2n}$ we have $b_{2n}=\binom{2n}{n}(-1)^n=\frac{(2n)!}{n!^2}(-1)^n$. All together this leads you to the desired formula!
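For readers without Mathematica, here is a quick editorial sanity check of the target identity (a small Python sketch; SciPy assumed available):

```
from math import factorial, pi, sin
from scipy.integrate import quad

def closed_form(n):
    # (2n)! * pi / (2^(2n) * (n!)^2)
    return factorial(2 * n) * pi / (2 ** (2 * n) * factorial(n) ** 2)

for n in range(1, 6):
    # Numerically integrate sin(t)^(2n) over [0, pi] and compare.
    numeric, _ = quad(lambda t: sin(t) ** (2 * n), 0, pi)
    print(n, numeric, closed_form(n))
```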
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 47, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9501043558120728, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/27164/why-is-fourier-analysis-so-handy-for-proving-the-isoperimetric-inequality/27192
## Why is Fourier analysis so handy for proving the isoperimetric inequality?

I have just completed an introductory course on analysis, and have been looking over my notes for the year. For me, although it was certainly not the most powerful or important theorem which we covered, the most striking application was the Fourier analytic proof of the isoperimetric inequality. I understand the proof, but I still have no feeling for why anyone would think to use Fourier analysis to approach this problem. I am looking for a clear reason why someone would look at this and think "a Fourier transform would simplify things here". Even better would be a physical interpretation. Could this somehow be related to "hearing the shape of a drum"? Is there any larger story here?

1 By the way, there are several different proofs of the isoperimetric inequality using Fourier series (Hurwitz alone gave two). – Victor Protsak Jun 5 2010 at 22:43

1 The proposed connection with hearing the shape of the drum is very nice. It is known that you can hear the volume of a drum (but, in general, not the shape). – Gil Kalai Jun 8 2010 at 14:52

## 5 Answers

Experience with Fourier analysis and representation theory has shown that every time a problem is invariant with respect to a group symmetry, the representation theory of that group is likely to be relevant. If the group is abelian, the representation theory is given by the Fourier transform on that group. In this case, the relevant symmetry group is that of reparameterising the arclength parameterisation of the perimeter by translation. This operation does not change the area or the perimeter. When combined with the observation (from Stokes theorem) that both the area and perimeter of a body can be easily recovered from the arclength parameterisation, this naturally suggests to use Fourier analysis in the arclength variable.

It seems to me that it is worth mentioning that only the 2-d isoperimetric inequality is easily proved using Fourier analysis, but not higher-dimensional geometric inequalities. In addition to the group invariance property cited by Terry Tao, there is the simple fact that a closed curve can be represented by a pair of periodic functions, and, if the curve is parameterized properly, its length and area are nicely represented by integrals of quadratic polynomials of the periodic functions and their derivatives. All in all, a nice setup for Fourier series. If the integrands were higher degree polynomials, a proof might still be possible but I'm not sure it would be as easy. And is there an analogous proof in higher dimensions?

1 Just about the only way in higher dimensions is via the Brunn-Minkowski inequality. Does it have any Fourier analytic meaning? – Victor Protsak Jun 6 2010 at 1:29

1 Arguably, the Brunn-Minkowski inequality is itself an isoperimetric inequality, so it's best to talk about how it is proved. Every proof I know involves some form of symmetrization via rearrangement (the fashionable term is mass transportation). There are definitely deep connections between Brunn-Minkowski and Fourier analysis, but I will let people who understand this much better than me explain this.
– Deane Yang Jun 6 2010 at 1:35

I will also be very interested to learn a Fourier proof of the isoperimetric inequality for d>2. I was always very curious about why Fourier analysis is NOT handy for proving isoperimetric inequalities in dimension 3 and higher, given the beautiful d=2 proof. – Gil Kalai Jun 8 2010 at 14:49

This doesn't answer the question of why Fourier analysis works, but it certainly is an answer to how one might think "Hmm ... perhaps we're in the domain of Fourier analysis here." It's that the surface area of a shape X is defined in terms of the volume of X+B when B is a small ball. There is a close relationship between sumsets and convolutions (the sumset is precisely the set of points where the convolution of the characteristic functions of the two sets is non-zero), and every time you have a convolution, thoughts of Fourier analysis should be triggered. The reason I say this doesn't answer the question of why Fourier analysis works is that there is a difference between the convolution and the support of the convolution, and the latter does not transform nicely. But that shouldn't stop one thinking of Fourier analysis and attempting to find some way of relating surface area to convolutions.

That's a nice answer. For some reason I had never heard that interpretation of surface area before, and it's nice to hear. – Peter Samuelson Jun 9 2010 at 15:46

I'd like to add a few words on what happens in higher dimensions. First, a convexity assumption becomes essential (as in the second proof of Hurwitz, which works only for convex domains). The isoperimetric inequalities in $\mathbb R^n$, $n>2$, are much easier to deal with in the case of convex bodies, and the whole problem in some sense looks most natural under the convexity assumption. Second, there are many different isoperimetric inequalities in higher dimensions. And Fourier analysis (or rather harmonic analysis on a sphere) can be successfully applied to prove at least some of them. There is a classical approach to isoperimetric problems based on Steiner's theorem. Let $K$ be a convex body in $\mathbb R^n$ and let $K_r$ denote the "parallel" body $$K_r=\{x\in\mathbb R^n|\ dist(x, K)\leq r \},\quad r>0.$$ Then, by Steiner's theorem, there exist $n+1$ numbers $W_0^n(K),W_1^n(K),\dots,W_n^n(K)$, such that $$V(K_r)=Vol(K_r)=\sum\limits_{i=0}^{n}{n \choose i}W^n_i(K)r^{i}.$$ It can be shown that $$W^n_0(K)=V(K),\quad W^n_1(K)=\frac{S(K)}{n},\qquad(1)$$ where $S(K)$ is the surface area of $\partial K$. Moreover, $W^n_n(K)$ is equal to the volume $\pi_n$ of the unit ball in $\mathbb R^n$ and $$W^n_{n-1}(K)=\frac{\pi_n}{2}w(K),\qquad\qquad\qquad\quad (2)$$ where $w(K)$ is the mean width of $K$. Note that for $n=2$ the perimeter $P(K)$ equals $\pi w(K).$ The numbers $W_i^n(K)$ give some information on how the convex body $K$ is different from a ball (for the unit ball in $\mathbb R^n$, obviously $W^n_i(K)=\pi_n$ for all $i$). A convex body is completely determined by its support function $$h(x)=\sup\{x\cdot y|\ y\in K\}$$ which measures the directed distance of the origin to the tangent plane of $K$ at direction $x\in S^{n-1}.$ Now, the second proof of Hurwitz deals with the Fourier decomposition of the support function of a convex 2D domain. The problem is that in dimension $n>2$ the formulas for volume and surface area in terms of the support function cannot be expressed nicely by means of spherical harmonics.
However, it is still possible to derive an isoperimetric inequality for the numbers $W^n_{n-2}$ and $W^n_{n-1}$ via harmonic analysis, namely $$W^n_{n-1}\geq\sqrt{\pi_n W^n_{n-2}}. \qquad\qquad\qquad(3)$$ When $n=2$ this is the standard isoperimetric inequality $P^2\geq 4\pi A$. If $n=3$ $(3)$ gives the isoperimetric inequality between the mean width and surface area of a convex body $$\pi [w(K)]^2\geq S(K).\qquad\qquad\qquad (4)$$ The proof is a straightforward extension of the second Hurwitz proof (using a decomposition of the support function into a series of spherical harmonics) and can be found here. Update (concerning the question in Victor's comment below). If we assume as known the inequality $$W_{1}^3\geq \sqrt{W_{0}^3W_{2}^3},$$ then together with (1),(2) and (4) it implies that $S^3\geq 36\pi V^2$. ("Known" means that I don't know how to obtain the inequality using only harmonic analysis. It follows from the Alexandrov-Fenchel inequality for mixed volumes.) - 3 Awesome! Do you know whether $S^3\geq 36\pi V^2$ has $\textit{any}$ proof based on harmonic analysis (not necessarily of Hurwitz type)? – Victor Protsak Jun 7 2010 at 3:49 My two cents. It is known since Antiquity that the circle is the curve of minimal length maximizing inscribed area. So you know that you are looking for the circle, from the start. A series in cos(x) and sin(x) is certainly a good way to represent circles, and composition of circles. Circles turning around circles, turning around circles, were called epicycles in the terminology of Middle Ages and Renaissance. They were used to approximate the trajectories of planets, which are close to circles. So using trigonometric functions to approximate curves close to circle is an idea which is quite old, and from the historical viewpoint, quite natural. Now in modern courses in Fourier analysis, justifications for introducing these series usually come from the heat equation, faithful to the original application Fourier had in mind, or from purely mathematical considerations about the nice properties of orthogonal basis. So the link with curves close to circles is somehow lost. -
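For reference, here is the planar Hurwitz/Wirtinger argument alluded to throughout this thread, as an editorial sketch (standard normalizations assumed). Rescale the curve to have length $L=2\pi$ and parameterize by arclength, $z(s)=x(s)+iy(s)=\sum_{n\in\mathbb{Z}} c_n e^{ins}$. Then
$$1=\frac{1}{2\pi}\int_0^{2\pi}\vert z'(s)\vert^2\,ds=\sum_n n^2\vert c_n\vert^2,$$
while the enclosed area is
$$A=\frac{1}{2}\left|\,\mathrm{Im}\int_0^{2\pi}\overline{z(s)}\,z'(s)\,ds\right|=\pi\left|\sum_n n\vert c_n\vert^2\right|\leq\pi\sum_n n^2\vert c_n\vert^2=\pi=\frac{L^2}{4\pi}.$$
Equality forces all coefficients except $c_0$ and $c_1$ (or $c_0$ and $c_{-1}$, depending on orientation) to vanish, i.e. the curve is a circle; undoing the rescaling gives $L^2\geq 4\pi A$ in general.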
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 34, "mathjax_display_tex": 8, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.951981246471405, "perplexity_flag": "head"}
http://mathoverflow.net/questions/25911/random-walks-in-z2-z2-intrinsic-characterization-of-euclidean-distance-part/25947
Random Walks in $Z^2$/$Z^2$-intrinsic characterization of Euclidean distance Part II

For some context see http://mathoverflow.net/questions/25846/random-walks-in-z2-z2-intrinsic-characterization-of-euclidean-distance

As per Noah's answer and JBL's comment this was false as stated. However, I think the following reformulation is interesting. As before we consider a random walk on $\mathbb{Z}^2$ where a particle either stays at its vertex or moves to one of its neighbors, each with probability 1/5. We start the process with a particle at the origin. For $x \in \mathbb{Z}^2$ we let $p_n(x)$ denote the probability that we find the particle at $x$ after $n$ iterations. Let $|\cdot|$ denote the Euclidean distance in $\mathbb{Z}^2$ via the standard embedding $\mathbb{Z}^2 \subset \mathbb{R}^2$.

Now for the reformulated question: For each $n$, let $C_n$ be the supremum over all $C > 0$ so that for all $x,y \in \mathbb{Z}^2$ with $|x|,|y| \leq C$ we have $|x| \leq |y| \Rightarrow p_n(x) \geq p_n(y)$. Does $\lim_{n\to\infty} C_n = \infty$? If so, how fast does this diverge?

EDIT: As per George Lowther's comment, I now find it quite probable that $\liminf_{n\to\infty} C_n \leq 5$, if not $C_n = 5$ for all large $n$. A natural attempt to salvage the question is the following: For each $n$, let $\tilde{C}_n$ be the supremum over all $C > 0$ so that for all $x,y \in \mathbb{Z}^2$ with $|x|,|y| \leq C$ we have $|x| < |y| \Rightarrow p_n(x) > p_n(y)$. Again we ask if $\lim_{n\to\infty} \tilde{C}_n = \infty$ and if so, how fast this diverges.

What you are suggesting implies that the probability of being at (3,4) is the same as being at (5,0) for all large n. That seems unlikely, and I would guess that $C_n=5$ for n large. – George Lowther May 25 2010 at 23:25

That is a very good point! I will try to modify the question. – Yakov Shlapentokh-Rothman May 26 2010 at 0:30

Do you want to add this as an answer so I can accept it? – Yakov Shlapentokh-Rothman May 26 2010 at 0:50

Just a remark to recall that your $p_n(j,k)$ is the coefficient of $x^j y^k$ in the expansion of $(x + 1/x + 1 + y + 1/y)^n$. – Pietro Majer May 26 2010 at 6:53

3 Answers

As I mentioned in my comment, what you are suggesting implies that the probability of being at (3,4) is the same as being at (5,0) for all large n. That seems unlikely, and I would guess that $C_n=5$ for n large.

The answer to your modified question is yes! $\tilde C_n$ tends to infinity as n goes to infinity. (Phew! It took me a couple of revisions to prove this, but hopefully the calculations below are now correct). In fact, $\tilde C_n\ge c\sqrt{n}$ for some positive constant c. I think that you can also show that $\tilde C_n\le C\sqrt{n}$ for some other constant C but I'm not completely sure yet, although it should follow from a closer examination of my expression below for $p_n(x)$.

You can derive an asymptotic expansion for $p_n(x)$ in 1/n. Evaluating this to second order is enough to answer your question. After n steps the distribution of the particle will be approximately normal with variance 2n/5 in both dimensions, so we expect to get $p_n(x)=\frac{5}{4\pi n}e^{-\frac{5}{4n}\vert x\vert^2}$ to leading order. The idea is to note that you are repeatedly applying a linear operator, $$p_{n+1}=Lp_n,\ Lp(x) \equiv (p(x)+p(x-e_1)+p(x+e_1)+p(x-e_2)+p(x+e_2))/5$$ where $e_1=(1,0)$, $e_2=(0,1)$.
In finite dimensional spaces, you would solve this by decomposing $p_0$ into a sum of eigenvectors and for large n, the dominant term of $L^np_0$ will be that corresponding to the largest eigenvalue. In this case, the infinite dimensional operator L has a continuous spectrum, and is diagonalized by a Fourier transform. $$p_0(x)=1_{\lbrace x=0\rbrace}=\int_{-[\frac12,\frac12]^2}e^{2\pi ix\cdot u}\,du.$$ Noting that $e^{2\pi ix\cdot u}$ (as a function of x) is an eigenvector of L, $$Le^{2\pi ix\cdot u}=\left(\frac15+\frac25\cos(2\pi u_1)+\frac25\cos(2\pi u_2)\right)e^{2\pi ix\cdot u}$$ gives the following for $p_n$, $$p_n(x)=L^np_0(x)=\int_{[-\frac12,\frac12]^2}\left(\frac15+\frac25\cos(2\pi u_1)+\frac25\cos(2\pi u_2)\right)^ne^{2\pi ix\cdot u}\,du.$$ The term inside the parentheses is less than 1 in absolute value everywhere away from the origin, so looks like a Dirac delta when raised to a high power n. Using a Taylor expansion to second order, $$\left(\frac15+\frac25\cos(2\pi u_1)+\frac25\cos(2\pi u_2)\right)^n =e^{-\frac45\pi^2n\vert u\vert^2}\left(1+\frac{8\pi^4n}{75}(7\vert u\vert^4-20u_1^2u_2^2)+O(n\vert u\vert^6)\right).$$ This expansion is valid over any domain on which $n\vert u\vert^6$ is bounded. Say, $\vert u\vert\le n^{-1/6}$. Outside of this domain, the integrand above is bounded by $e^{-cn(n^{-1/6})^2}=e^{-cn^{2/3}}$ for a constant c, which is much smaller than O(1/n^3) and can be neglected. Then, $$p_n(x)=\int_{\mathbb{R}^2}\left(1+\frac{8\pi^4n}{75}(7\vert u\vert^4-20u_1^2u_2^2)+O(n\vert u\vert^6)\right)e^{-\frac45\pi^2n\vert u\vert^2+2\pi ix\cdot u}\,du.$$ Here I not only substituted in the second order approximation to the integrand, but also extended the range of integration out to infinity. This is fine, because it can be shown that the value of this integral over $\vert u\vert\ge n^{-1/6}$ has size of the order of no more than $e^{-cn^{2/3}}$, so vanishes much faster than $O(1/n^3)$. Substituting in $v=\sqrt{\frac{8n}{5}}\pi u$ also shows that the $O(nu^6)$ term in the integrand vanishes at rate $1/n^3$, giving the following. $$p_n(x)=\frac{5}{8\pi^2n}\int_{\mathbb{R}^2}\left(1+\frac{1}{24n}(7\vert v\vert^4-20v_1^2v_2^2)\right)e^{-\frac12\vert v\vert^2+i\sqrt{\frac{5}{2n}}x\cdot v}\,dv+O(n^{-3}).$$ This integral can be computed, $$p_n(x)=\frac{5}{4\pi n}e^{-\frac{5}{4n}\vert x\vert^2}\left(1+\frac{1}{24n}\left(36-\frac{90}{n}\vert x\vert^2+\frac{175}{4n^2}\vert x\vert^4-\frac{125}{n^2}x_1^2x_2^2\right)\right)+O(n^{-3}).$$ This is a bit messy, but the exact coefficients are not too important. What matters is the general form of the expression. The leading order term also agrees with the guess above based on it being approximately normal. Also, for any fixed $\vert x\vert \lt\vert y\vert$, the leading order term in $p_n(x)-p_n(y)$ will dominate for large n, giving $p_n(x)\gt p_n(y)$. So, $\tilde C_n\to\infty$. Consider $\vert x\vert\le c\sqrt{n}$ for some $c\le1$. Then, $$p_n(x)=\frac{5}{4\pi n}e^{-\frac{5}{4n}\vert x\vert^2}\left(1+\frac{3}{2n}\right)+O(c^2n^{-2}).$$ If $\vert x\vert\lt\vert y\vert\le c\sqrt{n}$ then $\vert y\vert^2-\vert x\vert^2\ge 1$ (as it is a nonzero integer) $$\begin{align} p_n(x)-p_n(y)&=\frac{5}{4\pi n}\left(1+\frac{3}{2n}\right)e^{-\frac{5}{4n}\vert x\vert^2}\left(1-e^{-\frac{5}{4n}(\vert y\vert^2-\vert x\vert^2)}\right)+O(c^2n^{-2})\\ &\ge\frac{5}{4\pi n}e^{-\frac{5}{4n}\vert x\vert^2}(1-e^{-\frac{5}{4n}})+O(c^2n^{-2})\\ &=\frac{25}{16\pi n^2}e^{-\frac{5}{4n}\vert x\vert^2}\left(1+O(c^2)\right). 
\end{align}$$ As long as c is chosen small enough that the $O(c^2)$ term is always greater than -1, this expression will be positive. So $p_n(x)\gt p_n(y)$ for all $\vert x\vert\lt\vert y\vert\le c\sqrt{n}$, giving $\tilde C_n\ge c\sqrt{n}$.

Thanks a lot! This is exactly the kind of thing I was hoping was true. – Yakov Shlapentokh-Rothman May 29 2010 at 0:22

Nice! – Robby McKilliam May 29 2010 at 5:17

1 Also, my formula shows that $n^2(p_n(0)-p_n(x))\to\frac{25}{16\pi}\vert x\vert^2$. This characterizes the Euclidean distance in terms of $p_n$. – George Lowther May 29 2010 at 19:22

By Donsker's theorem, this should converge to a Brownian motion in the scaling limit. This means that the shapes Robby McKilliam plotted will converge to a circle (when properly scaled), since the distribution of Brownian motion is rotationally invariant. Since the probability of moving from the current position is only 1/5 instead of 1, the time of the process will be slowed by a factor of 5, hence the radius of your limiting shapes will grow like $\sqrt{t/5}$ instead of $\sqrt{t}$.

Sorry if this is a silly question but my probability knowledge is minimal. Are you saying that $\lim_{n\to\infty}\frac{C_n}{\sqrt{n/5}} = 1$ and that this follows easily from Donsker's theorem? – Yakov Shlapentokh-Rothman May 25 2010 at 23:00

I could be wrong about the constants, but I would bet that some statement like that is true. For a good presentation of Donsker's theorem (much better than the Wikipedia article), see Section 7.6 of Durrett's Probability. – Tom LaGatta May 25 2010 at 23:41

Thanks, I will check out Durrett's book. – Yakov Shlapentokh-Rothman May 26 2010 at 0:35

I don't have the answer but I figured I would give you the results of a few quick experiments. Here is what things look like for $n = 5$, $n = 10$, $n = 50$ and $n = 1000$ (plots omitted). The colour represents the probability, red being large, blue being small. The actual colours are assigned according to the log of the probability. To generate these I used the following Matlab:

````
M = [ 0 1/5 0; 1/5 1/5 1/5; 0 1/5 0 ];
B = [1];
n = 50;
for i = 1:n
    B = conv2(B,M);
end
colormap(jet(256));
imagesc([-n, n], [-n, n], log(B));
````

Provided that the 'shape' close to the origin becomes sufficiently circular, the answer to your question is positive.

Thanks! These are really nice. – Yakov Shlapentokh-Rothman May 25 2010 at 22:52
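As an editorial cross-check (not part of the original thread; Python with SciPy assumed), the exact distribution from the convolution experiment can be compared against the leading term of George Lowther's expansion:

```
import numpy as np
from scipy.signal import convolve2d

# One-step kernel of the lazy walk on Z^2 (stay or move, prob 1/5 each).
kernel = np.array([[0.0, 0.2, 0.0],
                   [0.2, 0.2, 0.2],
                   [0.0, 0.2, 0.0]])

n = 200
p = np.array([[1.0]])
for _ in range(n):
    p = convolve2d(p, kernel)  # exact distribution after each step

# After n 'full' convolutions the array is (2n+1)x(2n+1) and the
# origin sits at index (n, n).
exact = p[n, n]
approx = 5 / (4 * np.pi * n) * (1 + 3 / (2 * n))  # p_n(0) to second order
print(exact, approx)
```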
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 71, "mathjax_display_tex": 10, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9422081112861633, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/112467/is-the-domain-of-an-one-to-one-function-a-set-if-the-target-is-a-set?answertab=oldest
# Is the domain of a one-to-one function a set if the target is a set?

This is probably very naive, but suppose I have an injective map from a class into a set; may I conclude that the domain of the map is a set as well?

2 Perhaps you should clarify what is meant by "map" in this question. The answers using replacement seem to be assuming that the map is defined by a formula in the language of set theory. – ccc Feb 23 '12 at 15:19

## 3 Answers

If a function $f:A\to B$ is injective, we can assume without loss of generality that $f$ is surjective too (by passing to a subclass of $B$), so that $f^{-1}:B\to A$ is also a bijection. If $B$ is a set then every subclass of $B$ is a set, so $f^{-1}:B\to A$ is a bijection from a set, and by the axiom of replacement $A$ is a set.

I say you can, since the domain of an injective map $f:A\to B$ is in bijection with the image of the map (which is a set).

Define $g$ on the range of $f$ such that $g(f(x)) = x$. This is well-defined because $f$ is injective. The domain of $g$ is equal to the range of $f$, which is a set. Therefore by the axiom of replacement, or maybe the axiom of union applied to $\bigcup_{y\in{\rm Range}f} \{g(y)\}$, the range of $g$ is a set. But the range of $g$ is precisely the domain of $f$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9717344045639038, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/46653-laurent-s-theorem.html
# Thread:

1. ## Laurent's Theorem

Hi, just a bit of help needed here, as I don't know where to start.

Part (A): Suppose $f(z) = u(x,y) + iv(x,y)$ and $g(z) = v(x,y) + iu(x,y)$ are analytic in some domain D. Show that both u and v are constant functions. I guess we have to use the CRE here but I'm not really sure how to approach this.

Part (B): Let f be a holomorphic function on the punctured disk $D'(0,R) = \left\{ {z \in C:0 < |z| < R} \right\}$ where R>0 is fixed. What is the formula for $c_n$ in the Laurent expansion $f(z) = \sum\limits_{n = - \infty }^\infty {c_n z^n }$? Using this formula, prove that if f is bounded on D'(0,R), it has a removable singularity at 0. Well, I know that $c_n = \frac{1}{2\pi i}\int\limits_{\gamma _r } {\frac{f(s)}{(s - z_0 )^{n + 1} }} ds = \frac{f^{(n)} (z_0 )}{n!}$. Any suggestions from here?

PART (C): Find the maximal radius R>0 for which the function $f(z) = (\sin z)^{ - 1}$ is holomorphic in D'(0,R), and find the principal part of its Laurent expansion about $z_0=0$. Any help would be greatly appreciated. Thanks a lot.

2. A) Write the CRE for both functions. B) See here.

3. PART (A): CRE's for f(z): $u_x=v_y$ and $u_y=-v_x$. CRE's for g(z): $v_x=u_y$ and $v_y=-u_x$. SO: $u_x = v_y = -u_x$ AND $u_y = -v_x = v_x$, so u and v are constant because $u_x = -u_x$ and $-v_x = v_x$. Is that correct?

4. If $v_x=-v_x$ then $2v_x=0\Rightarrow v(x,y)=f(y)$. But you can also use the CR equations to show $2v_y=0\Rightarrow v(x,y)=f(x)$. The only way both statements are true is if $v(x,y)=k$. Then do these two steps for $u(x,y)$ as well.

5. For part (c), the Laurent series for $\csc(z)$ is kind of messy. Could certainly use long division to get the first few terms. Here's one way to get the rest of the terms: First look at Mathworld under Bernoulli numbers. There they show how to derive: $z\cot(z)=\sum_{n=0}^{\infty}\frac{(-1)^n B_{2n}(2z)^{2n}}{(2n)!}$ Now, if: $\csc(z)=\sum_{n=0}^{\infty}\frac{(-1)^{n+1}2(2^{2n-1}-1)B_{2n}}{(2n)!}z^{2n-1}$ then split up the series for $\csc(z)$ into two sums, and figure out how the series for $z\cot(z)$ could be modified to represent those two series. Here's a start: what happens when I substitute $z\to z/2$ in the series for $z\cot(z)$? Probably an easier way to get the terms though. Also, the radius of convergence extends to the nearest singularity, which is $\pi$.

6. Originally Posted by mathfied PART (C): Find the maximal radius R>0 for which the function $f(z) = (\sin z)^{ - 1}$ is holomorphic in D'(0,R) and find the principal part of its Laurent expansion about $z_0=0$. As shawsend has said, R = π. To find the principal part of the Laurent expansion, you might guess that since sin(z) is close to z when z is small, the principal part of 1/sin(z) ought to be 1/z. You can then justify that guess as follows. Let $g(z) = \frac1{\sin z} - \frac1z = \frac{z-\sin z}{z\sin z}$. Since $z-\sin z$ has a zero of order 3 at the origin and $z\sin z$ a zero of order 2, it follows that g(z) has a removable singularity at the origin and is therefore holomorphic in a neighbourhood of the origin. Therefore $\frac1{\sin z} = \frac1z + g(z)$ has the same principal part as 1/z (namely 1/z).

7. I just realized I over-killed it: you only wanted the singular part. Do what Opalg said or just use long division: $\csc(z)=\frac{1}{z-\frac{z^3}{3!}+\cdots}$; the first term when you do that is $1/z$, and all the rest are positive powers of $z$.
But hey Mathfied, thanks a bunch. I hadn't known about all that stuff I posted above until I researched it and worked it out myself today. The Bernoulli stuff is interesting I think.
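A quick editorial check of the principal part in a modern CAS (SymPy assumed; not part of the original thread):

```
import sympy as sp

z = sp.symbols('z')
# Laurent expansion of 1/sin(z) about z = 0: the principal part is 1/z,
# and the positive powers match the Bernoulli-number series quoted above.
print(sp.series(1 / sp.sin(z), z, 0, 6))
# 1/z + z/6 + 7*z**3/360 + 31*z**5/15120 + O(z**6)
```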
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 24, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9441723227500916, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/138792/cardinality-does-ab-2b-if-a-le-2b/138810
# Cardinality: does $|A^B|=|2^B|$ if $|A| \le |2^B|$?

Taking finite powers of a countably infinite set still yields a countable set: $$|\mathbb{N}|=|\mathbb{N}^k|.$$ It's also known that countable powers of the continuum still have the same cardinality as the continuum: $$|2^\mathbb{N}|=|\mathbb{R}|=|\mathbb{R}^k|=|\mathbb{R}^\mathbb{N}|.$$ Furthermore, going one step higher and raising $X:=2^\mathbb{R}$ to powers up to the continuum doesn't seem to change its cardinality either (at least according to Wikipedia): $$|2^\mathbb{R}|=|X|=|X^k|=|X^\mathbb{N}|=|X^\mathbb{R}|$$ See a pattern? It seems that when taking one set to the power of another set, it doesn't really matter how big the exponent is so long as it is smaller than the base. Or, thinking about it in terms of the other variable, if the exponent isn't too big then you might as well replace the base by 2. I.e., $$|A| \le |2^B| \Rightarrow |A^B|=|2^B|.$$ Is there a way in which this pattern can be stated and proved rigorously, or is it just a coincidence?

## 2 Answers

If $B$ is infinite, then yes: $|A^B|\leq |(2^B)^B|=|2^{B\times B}|=|2^B|$ while the other direction is trivial.

Could you elaborate a little bit on that? What is the set $B \times B$ in the formula $2^{B \times B}$ (cartesian product?), and how do we know $|2^{B \times B}|=|2^B|$? If this is too simplistic, a reference or link to something explaining it would be great! – Nick Alger Apr 30 '12 at 8:44

1 – Alex Becker Apr 30 '12 at 8:53

Thanks, much appreciated. – Nick Alger Apr 30 '12 at 8:58

4 – Martin Sleziak Apr 30 '12 at 9:21

Interesting that it is equivalent to the axiom of choice... I tried to construct a proof just now but was getting stuck, so that link was very useful. – Nick Alger Apr 30 '12 at 9:40

We must require $A$ to not be $0$ or $1$! In the first case, $|A^B|$ is 0, while in the second case, $|A^B|$ is 1. (Note there are no functions from a nonempty $B$ to the empty set, and there is one function from $B$ to a set with one element.) Then $|A^B|$ is strictly less than $|2^B|$ when $B$ is $1$ or greater. Categorize this under "silly but true" :)

Yes, I realized this and debated whether or not to mention it. I'm always at a loss whether to include little edge-case technicalities in my posts. If I do, it clutters things up and obscures the main idea. If I don't, someone always comes in and "corrects" me for leaving it out... Thank you for the post though; perhaps it will be useful to others reading in the future. – Nick Alger May 4 '12 at 10:04
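An editorial note spelling out the step asked about in the comments: the middle equality combines currying with the idempotence of infinite cardinals,
$$\left(2^B\right)^B \cong 2^{B\times B}, \qquad |B\times B|=|B| \ \text{ for infinite } B,$$
so $|2^{B\times B}|=|2^B|$. In ZFC the second fact holds for every infinite $B$; without choice it still holds for well-orderable $B$, and the statement "$|B\times B|=|B|$ for all infinite $B$" is in fact equivalent to the axiom of choice (a theorem of Tarski), which is what the linked discussion in the comments is about.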
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.950706958770752, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/38132/calculate-intersection-of-2-points
# Calculate intersection of 2 points

I have 2 points with bearings coming from them. I need to calculate a 3rd point (the intersection of the bearings from points 1 and 2) but am unsure of the maths required to do this. Could someone give me an example of how to do this?

Edit - The bearings I am referring to are degrees from north. So I have point A + point B and an imaginary line coming from each point on a particular bearing. I wish to know at what point the imaginary lines would cross. To explain further - I have point X,Y on a map and a bearing to an object (point 3) from point 1; I also have point 2 on the map and a bearing from point 2 to the same object (point 3). What I need to do is to calculate the X,Y for point 3 using points 1,2. If it helps, I would imagine the max distances between point 3 and points 1,2 would be a mile or so.

Maths was never my strongest point, so if someone could explain how to do this in basic steps that would be great. Thanks, Colin

1 What do you mean by "bearings"? Rays? Spheres? – El'endia Starman May 9 '11 at 22:05

On a sphere/globe or on a plane? – Isaac May 9 '11 at 23:00

1 "bearing" is usually, I think, direction given as an angle clockwise from North. Doing the problem, however, requires knowing whether we're working on a sphere/globe or on a plane. – Isaac May 9 '11 at 23:21

2 Points do not intersect. Rays from points or lines through points intersect. – Ross Millikan May 10 '11 at 0:32

1 – t.b. May 10 '11 at 12:17

## 1 Answer

In the plane, if you are given two points $(x_1,y_1), (x_2,y_2)$ and the angles between the vertical and the vector to a third point $(x_3,y_3)$ as $\theta_1, \theta_2$, we have that the slope of the line through $(x_1,y_1)$ and $(x_3,y_3)$ is $m_1=\tan(\theta_1+\frac{\pi}{2})$ and the slope of the line through $(x_2,y_2)$ and $(x_3,y_3)$ is $m_2=\tan(\theta_2+\frac{\pi}{2})$. Then $y_3-y_1=m_1(x_3-x_1)$ and $y_3-y_2=m_2(x_3-x_2)$. This gives two equations in two unknowns.

Added in response to comment: I used the point-slope form for the two lines. Some further discussion is at PurpleMath and at Mathwords. The slopes are given by your bearings. Normally the slope of a line is the tangent of the angle measured from the horizontal, but I assumed that your bearings are measured from the vertical (as they are usually taken from North). That accounts for the addition of $\frac{\pi}{2}=90^{\circ}$. Given two lines, the intersection is found by finding a point $(x_3,y_3)$ that lies on both. This gives two simultaneous equations to solve for the two coordinates.

Could you explain in terms an idiot could understand? Thanks, Colin – meee May 10 '11 at 9:18

1 @meee Colin, may I suggest making a drawing of what you want to do; that may vastly help us to explain the mathematics you need. – J. M. May 10 '11 at 9:49

@J.M. I'm pretty sure Ross Millikan interpreted it right. You have two points and two slopes given as angles from the vertical ($\pi/2$) and want the intersection between the two lines thus determined. – Ben Alpert May 10 '11 at 14:52

@Ben: I know Ross did it correctly; my point was that Ross's solution might have been more transparent to Colin had Colin sketched out his situation first... – J. M. May 10 '11 at 15:09

@Ross: Given that bearings are angles clockwise from North, if the $\theta_i$ are bearings, then would we want $m_i=\tan(\frac{\pi}{2}-\theta_i)$? – Isaac May 10 '11 at 16:35
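To turn the answer's two-equations-in-two-unknowns recipe into the basic steps Colin asked for, here is an editorial sketch in Python. It parameterizes each sighting as a ray rather than using slopes (which sidesteps vertical lines, and matches Isaac's convention that a bearing is measured clockwise from North); the function name and the example points are made up for illustration.

```
import math

def intersect_bearings(p1, b1, p2, b2):
    """Intersect two sightings given as (x, y) map points and bearings
    in degrees clockwise from north; returns the crossing point (x, y).

    Plane geometry only, x increasing eastward and y northward.
    No check that the crossing lies in front of both observers.
    """
    # Bearing theta (clockwise from north) -> unit direction
    # (sin theta, cos theta) in east/north coordinates.
    d1 = (math.sin(math.radians(b1)), math.cos(math.radians(b1)))
    d2 = (math.sin(math.radians(b2)), math.cos(math.radians(b2)))
    # Solve p1 + t*d1 = p2 + s*d2 for t via Cramer's rule (2x2 cross product).
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-12:
        raise ValueError("bearings are parallel; no unique intersection")
    t = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Example: from (0, 0) the object bears 45 degrees (NE); from (10, 0)
# it bears 315 degrees (NW). The lines y = x and y = 10 - x cross at (5, 5).
print(intersect_bearings((0, 0), 45.0, (10, 0), 315.0))
```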
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9573635458946228, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/67039/decomposition-of-l-02gl-2-mathbbq-backslash-gl-2a-psi/67064
Decomposition of $L_0^2(GL_2({\mathbb{Q}}) \backslash GL_2(A), \psi)$

Two questions concerning the decomposition of $L_0^2(GL_2({\mathbb{Q}}) \backslash GL_2(A), \psi)$, where $\psi$ is a Hecke character on the adelic ring $A$:

1. It is known that when $\psi$ is trivial on $\mathbb{R}^\times_{+}$, there is an explicit correspondence between classical modular forms and irreducible constituents of the regular representation of $L_0^2(GL_2({\mathbb{Q}}) \backslash GL_2(A), \psi)$. Does a similar correspondence (with other generalized automorphic forms) exist when $\psi$ is non-trivial on $\mathbb{R}^\times_{+}$?

2. What is the role played by Maass forms in such a decomposition? Can we also construct a correspondence just as we did for modular forms?

Thank you!

1 For 1. I am a bit confused. Classical cuspidal eigenforms (not modular forms) give rise to irreducible subrepresentations of the space you're interested in, but there are other irreducible subrepresentations not coming from classical eigenforms -- namely those coming from Maass forms. Does this answer 2. for you? For 1. why can't you just twist to reduce to the situation where $\psi$ is trivial on the positive reals? – Kevin Buzzard Jun 6 2011 at 18:45

Thank you for your response, I didn't think of twisting the character. – Hsueh-Yung Lin Jun 6 2011 at 20:20

2 Answers

Originally, I wrote: Assuming that I understand the intent of the question properly, it is intended that psi be a Hecke character on the center of GL(2, adeles). The question seems to grant that we understand the situation with non-trivial Hecke characters subject only to the requirement that they be trivial on the "ray" ("the non-compact part" of the idele class group) denoted R-times-sub-plus. Then all other cases can be reduced to this by tensoring with characters of the form |det|^{it} for suitable real t. (Note that for L^2 to make sense the central character should be unitary.) That is, nothing really new comes up.

Edit: Upon further consideration, I think that perhaps the question had some implicit questions in it, which can be answered: The whole space of L^2 automorphic forms, even with trivial central character, includes many things that are not immediately classical holomorphic automorphic/modular forms, nor Maass/wave-forms. The (relatively easy) structure theory of representations of GL(2,R) or SL(2,R) shows that a Gamma-invariant K-finite Casimir-Laplacian eigenfunction is either a holomorphic/anti-holomorphic automorphic form, a Maass waveform, or... significantly... a Lie algebra derivative of one of those. The central character issue is somewhat secondary to this classification, which itself is not so hard. In particular, perhaps we have been lucky that historical events presented us with "vectors" which generate all the relevant representations.

Thank you for your answer. – Hsueh-Yung Lin Jun 6 2011 at 20:17

Dear Paul Garrett, I do not understand what you mean by the last sentence. Could you please elaborate on how you would generate all the Maass forms by concrete "vectors"? What do you mean by relevant representations?
– Marc Palm Jun 7 2011 at 7:11

2 @pm: If you apply the Maass raising and lowering operators on a Maass cusp form, and you lift these to SL(2,R) appropriately, then you obtain an orthonormal basis of an automorphic representation: these are exactly those vectors (up to scalar multiple) in the representation which transform by a character under SO(2,R). By "relevant representations" Paul meant that the automorphic representations so obtained from classical modular forms (holomorphic, Maass, Eisenstein) exhaust all infinite-dimensional automorphic representations of SL(2,R); we don't miss anything. – GH Jun 8 2011 at 23:03

Perhaps it should be noted that this is a more general phenomenon: Schur's lemma tells you that the restriction of an irreducible representation of a locally compact group $G$ to the center $Z$ is always isomorphic to a character. Assuming a fixed central character is a technical requirement, but an important one, since $GL_2(F) \backslash GL_2(A)$ does not have finite quotient measure, while $GL_2(F) Z(A) \backslash GL_2(A)$ does. Alternatively, you can look at $GL_2(A)^1$, i.e. the kernel of $g \mapsto | \det g|_A$, since $GL_2(F) \backslash GL_2(A)^1$ has finite volume as well. This is a necessary point if you want to analyse the representations via the Arthur trace formula, which makes sense for finite volume quotients only.

Maass (cusp) forms and modular (cusp) forms both give (cuspidal) automorphic representations, and the cuspidal ones give vectors in $L_0^2$. The main point is the observation: $$GL_2(\mathbb{Q}) Z(A) \backslash GL_2(A) /\prod_p GL_2(\mathbb{Z}_p) O(2)$$ $$= PSL_2( \mathbb{Z}) \backslash \mathbb{H}.$$ An explanation of how to see this is in Gelbart's "Adele Groups ..." book, if I recall correctly, and there is an article of Kudla in Cogdell et al., "Introduction to Langlands program". I think you should consider these as different objects only if you want to argue with algebraic geometry, which applies to the holomorphic stuff only, but not if you want to do representation theory.

Edit due to the comments: Maass forms and modular forms seen as representations look different at the archimedean primes, i.e. have a "different" representation of $GL_2(\mathbb{R})$ there, namely principal series vs. discrete series.

2 Dear pm, There are significant differences between Maass forms and modular forms from a rep'n theoretic perspective as well, namely the very different behaviour of their local factors at $\infty$. Regards, Matthew – Emerton Jun 7 2011 at 7:52

Dear Matthew, you mean that the gamma factors look different and hence the analytic conductor is different? – Marc Palm Jun 7 2011 at 7:57

3 I believe Matthew is referring to the fact that the $(\mathfrak{g},K)$-module (i.e. the infinite component) of the automorphic representation attached to a cusp form is a discrete series representation (if $k\geq2$, limit of discrete series for $k=1$), whereas that for a Maass form is a principal series. These are two quite different objects from a representation theory point of view. – Rob Harron Jun 7 2011 at 13:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9329110980033875, "perplexity_flag": "middle"}
http://mathhelpforum.com/math-software/146743-maple-logical-question.html
# Thread:

1. ## Maple / a logical question

My lecturer gave us the following question: $f(x):=\frac{x}{x^{\sin(x)}-1}$

Using Maple, find the limit of f(x) as $x \to 0^+$. In order to check Maple's answer, find the solutions of the following equations:

* $f(x)=-0.1$ (I got something of order $10^{-4}$)
* $f(x)=-0.01$ (I got something of order $10^{-44}$)
* $f(x)=-0.001$ (I got something of order $10^{-435}$)

According to your answers, how many digits should Maple work with in order to find the solution of $f(x)=-10^{-10}$?

Now, I think I got the idea; I just want you to confirm or support it. I believe that if you form a series out of the exponents in the answers, you get $a_1=-4,\ a_2=-44,\ a_3=-435, \dots$ Therefore $a_{10}$ should be something like $-435 \cdot 10^{7}$. Is my 'logic' right?
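For what it's worth, the pattern can be explained analytically (an editorial sketch, using only standard asymptotics). As $x \to 0^+$ we have $\sin x \, \ln x \to 0$, so
$$x^{\sin x}-1 = e^{\sin x \, \ln x}-1 \sim \sin x \, \ln x \sim x\ln x, \qquad \text{hence} \qquad f(x) \sim \frac{1}{\ln x} \to 0^-,$$
so the limit is $0$. Solving $1/\ln x = -10^{-k}$ gives $x \approx e^{-10^k} = 10^{-10^k/\ln 10} \approx 10^{-0.434\cdot 10^k}$, matching the computed roots ($k=1$: $x \approx 4.5\cdot10^{-5}$; $k=2$: $x \approx 10^{-43.4}$; $k=3$: $x \approx 10^{-434.3}$). For $k=10$ the root is near $10^{-4.34\cdot 10^{9}}$, so Maple would need on the order of $4\cdot 10^{9}$ digits, consistent with extrapolating the exponents $-4, -44, -435, \dots$ by a factor of roughly ten per step.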
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9391676187515259, "perplexity_flag": "middle"}
http://micromath.wordpress.com/2008/04/14/donald-knuth-calculus-via-o-notation/?like=1&_wpnonce=5e79d40c83
# Mathematics under the Microscope

Atomic objects, structures and concepts of mathematics

Posted by: Alexandre Borovik | April 14, 2008

## Donald Knuth: Calculus via O notation

Continuing the theme of alternative approaches to teaching calculus, I take the liberty of posting a letter sent by Donald Knuth to the Notices of the American Mathematical Society in March, 1998 (TeX file).

Professor Anthony W. Knapp, P O Box 333, East Setauket, NY 11733

Dear editor,

I am pleased to see so much serious attention being given to improvements in the way calculus has traditionally been taught, but I'm surprised that nobody has been discussing the kinds of changes that I personally believe would be most valuable. If I were responsible for teaching calculus to college undergraduates and advanced high school students today, and if I had the opportunity to deviate from the existing textbooks, I would certainly make major changes by emphasizing several notational improvements that advanced mathematicians have been using for more than a hundred years.

The most important of these changes would be to introduce the $O$ notation and related ideas at an early stage. This notation, first used by Bachmann in 1894 and later popularized by Landau, has the great virtue that it makes calculations simpler, so it simplifies many parts of the subject, yet it is highly intuitive and easily learned. The key idea is to be able to deal with quantities that are only partly specified, and to use them in the midst of formulas.

I would begin my ideal calculus course by introducing a simpler "$A$ notation," which means "absolutely at most." For example, $A(2)$ stands for a quantity whose absolute value is less than or equal to $2$. This notation has a natural connection with decimal numbers: Saying that $\pi$ is approximately $3.14$ is equivalent to saying that $\pi=3.14+A(.005)$. Students will easily discover how to calculate with $A$:

$10^{A(2)}=A(100)$

$\bigl(3.14+A(.005)\bigr)\bigl(1+A(.01)\bigr)$ $\qquad = 3.14+A(.005)+A(.0314)+A(.00005)$ $\qquad=3.14+A(.03645)=3.14+A(.04)\,.$

I would of course explain that the equality sign is not symmetric with respect to such notations; we have $3=A(5)$ and $4=A(5)$ but not $3=4$, nor can we say that $A(5)=4$. We can, however, say that $A(0)=0$. As de Bruijn points out in [1, 1.2], mathematicians customarily use the $=$ sign as they use the word "is" in English: Aristotle is a man, but a man isn't necessarily Aristotle.

The $A$ notation applies to variable quantities as well as to constant ones. For example,

$\sin x=A(1);$ $A(x) =xA(1)\,;$ $A(x)+A(y) =A(x+y)$ if $x\geq 0$ and $y\geq 0\,;$ $\bigl(1+A(t)\bigr){}^2 =1+3A(t)$ if $t=A(1)\,.$

Once students have caught on to the idea of $A$ notation, they are ready for $O$ notation, which is even less specific. In its simplest form, $O(x)$ stands for something that is $CA(x)$ for some constant $C$, but we don't say what $C$ is. We also define side conditions on the variables that appear in the formulas. For example, if $n$ is a positive integer we can say that any quadratic polynomial in $n$ is $O(n^2)$.
If $n$ is sufficiently large, we can deduce that $\bigl(n+O(\sqrt{n}\,)\bigr)\bigl(\ln n+\gamma+O(1/n)\bigr)$ $\quad=n\ln n+\gamma n+O(1)$ $\qquad\null+O(\sqrt{n}\ln n)+O(\sqrt{n}\,)+O(1/\sqrt{n}\,)$ $\quad=n\ln n+\gamma n+O(\sqrt{n}\ln n)\,.$ I would define the derivative by first defining what might be called a “strong derivative”: The function $f$ has a strong derivative $f'(x)$ at point $x$ if $f(x+\epsilon)=f(x)+f'(x)\epsilon+O(\epsilon^2)$ whenever $\epsilon$ is sufficiently small. The vast majority of all functions that arise in practical work have strong derivatives, so I believe this definition best captures the intuition I want students to have about derivatives. We see immediately, for example, that if $f(x)=x^2$ we have $(x+\epsilon)^2=x^2+2x\epsilon+\epsilon^2\,,$ so the derivative of $x^2$ is $2x$. And if the derivative of $x^n$ is $d_n(x)$, we have $(x+\epsilon)^{n+1}=(x+\epsilon)\bigl(x^n+d_n(x)\epsilon+O(\epsilon^2)\bigr)$ $\qquad=x^{n+1}+\bigl(xd_n(x)+x^n\bigr)\epsilon+O(\epsilon^2)\,;$ hence the derivative of $x^{n+1}$ is $xd_n(x)+x^n$ and we find by induction that $d_n(x)=nx^{n-1}.$ Similarly if $f$ and $g$ have strong derivatives $f'(x)$ and $g'(x)$, we readily find $f(x+\epsilon)g(x+\epsilon)=f(x)g(x)+\bigl(f'(x)g(x)+f(x)g'(x)\bigr)\epsilon +O(\epsilon^2)$ and this gives the strong derivative of the product. The chain rule $f\bigl(g(x+\epsilon)\bigr)=f\bigl(g(x)\bigr)+f'\bigl(g(x)\bigr)g'(x)\epsilon +O(\epsilon^2)$ also follows when $f$ has a strong derivative at point $g(x)$ and $g$ has a strong derivative at $x$. Once it is known that integration is the inverse of differentiation and related to the area under a curve, we can observe, for example, that if $f$ and $f'$ both have strong derivatives at $x$, then $f(x+\epsilon)-f(x)=\int_0^{\epsilon}f'(x+t)\,dt$ $\qquad=\int_0^{\epsilon}\bigl(f'(x)+f''(x)\,t+O(t^2)\bigr)\,dt$ $\qquad=f'(x)\epsilon+f''(x)\epsilon^2\!/2+O(\epsilon^3)\,.$ I’m sure it would be a pleasure for both students and teacher if calculus were taught in this way. The extra time needed to introduce $O$ notation is amply repaid by the simplifications that occur later. In fact, there probably will be time to introduce the “$o$ notation,” which is equivalent to the taking of limits, and to give the general definition of a not-necessarily-strong derivative: $f(x+\epsilon)=f(x)+f'(x)\epsilon+o(\epsilon)\,.$ The function $f$ is continuous at $x$ if $f(x+\epsilon)=f(x)+o(1)\,;$ and so on. But I would not mind leaving a full exploration of such things to a more advanced course, when it will easily be picked up by anyone who has learned the basics with $O$ alone. Indeed, I have not needed to use “$o$” in 2200 pages of The Art of Computer Programming, although many techniques of advanced calculus are applied throughout those books to a great variety of problems. Students will be motivated to use $O$ notation for two important reasons. First, it significantly simplifies calculations because it allows us to be sloppy — but in a satisfactorily controlled way. Second, it appears in the power series calculations of symbolic algebra systems like Maple and Mathematica, which today’s students will surely be using. For more than 20 years I have dreamed of writing a calculus text entitled O Calculus, in which the subject would be taught along the lines sketched above. More pressing projects, such as the development of the TeX system, have made that impossible, although I did try to write a good introduction to $O$ notation for post-calculus students in [2, Chapter 9]. 
Perhaps my ideas are preposterous, but I’m hoping that this letter will catch the attention of people who are much more capable than I of writing calculus texts for the new millennium. And I hope that some of these now-classical ideas will prove to be at least half as fruitful for students of the next generation as they have been for me. Sincerely, Donald E. Knuth Professor [1] N. G. de Bruijn, Asymptotic Methods in Analysis (Amsterdam: North-Holland, 1958). [2] R. L. Graham, D. E. Knuth, and O. Patashnik, Concrete Mathematics (Reading, Mass.: Addison-Wesley, 1989). Posted in Uncategorized ## Responses 1. Sorry, I forgot to activate the comments. It is fixed now. Your comments are welcome! By: Alexandre Borovik on April 14, 2008 at 3:20 pm 2. Very interesting article. Also, since I’m the first to comment I thought I’d mention you’ve hit the front page of Reddit! Congrats! By: Ben Gotow on April 14, 2008 at 3:33 pm 3. Congratulations should be directed to Donald Knuth, but he, apparently, does not read e-mails. By: Alexandre Borovik on April 14, 2008 at 3:38 pm 4. Perhaps it’s just me, but this idea seems as though it would needlessly obfuscate an already difficult-to-grasp concept. It’s a hand-waving exercise in an area where actual understanding of limits and infinitesimals is desired. For example, what would one gain by saying sin(x) = A(1)? This is known from the definition of the sin function, but seldom in regular calculus classes does a student need to use the bounds of the function in a calculation. As another, specifying pi = 3.14 + A(0.005) is an interesting way to approximate its value descriptively, but it’s necessarily imprecise. It doesn’t help the final calculations, in which case the student will use as many decimals as they see fit, or the predefined Pi variable in their calculators, and when writing equations, the symbol for pi is exact. I can see introducing this notation and its concepts in a calculus course designed specifically as a mathematical primer for computer scientists, for whom it will be eventually useful, but I would have strong objection to the use of this idea in an introduction to calculus, or even forays into Real Analysis, where delta-epsilon style proofs have significant meaning which could potentially be lost by stating O(1). Just my two cents. By: Kevin on April 14, 2008 at 4:01 pm 5. It strikes me as very domain specific. Who outside of computer science majors would need it? Most other engineers and science majors don’t see it often as far as I know, and if you understand limits, O(f(x)) is not hard to understand. I think that, if anything, spending less time on limits and more time on an introduction to differential equations would benefit students more. By: Kevin on April 14, 2008 at 4:55 pm 6. Interesting idea… but I hate it when CS folks overload the ‘=’ sign when ϵ (U+03F5 or $\in$ if it doesn’t print here) is what’s really meant. It’s much simpler to be clear about what is an element and what is a set. By: Michael R. Head on April 14, 2008 at 5:20 pm 7. Wow. So that explains Calc I. Where were you three semesters ago? 
@ Kevin: You make some good points; however, my guess is that teaching it this way would probably result in more people “getting it.” Those that should have an actual understanding of limits and infinitesimals (assuming that this will result in a pseudo-understanding of limits and infinitesimals) will probably be able to pick up that understanding regardless of which teaching approach is taken in the intro to Calc course. By: Sam on April 14, 2008 at 5:25 pm 8. It is also easy to explain all of those other functions, like w(x), W(x), which would take, at most, one two-hour class, and that is exaggerating. Note that these functions are also important, mainly when working with big fields. Thus, I approve of Knuth’s idea, and I’d be happy to see a text book with this kind of fundamentals. Breno By: Breno Leitao on April 14, 2008 at 5:30 pm 9. It’s fine to tout the benefits of big-oh notation for future computer scientists who will never encounter a badly-behaved function in their lives, and who will constantly be using big-oh notation. For future mathematicians, however, the fundamental concept of limit will have far more general applicability down the road. I would have been most distressed in my later math courses had limits been left “for a more advanced course” by my introductory calculus teacher. It is also telling that Knuth says, “Once it is known that integration is the inverse of differentiation [...]”. How would one prove the Fundamental Theorem of Calculus using big-oh notation instead of limits? I’m imagining being presented with a proof of the Fundamental Theorem of Calculus later in my mathematical career, and thinking, “Oh, so that’s what calculus was about. Why didn’t they just tell me?” By: Karl Juhnke on April 14, 2008 at 5:31 pm 10. To Karl Juhnke: Here is a proof of the Fundamental Theorem of Calculus in o notation. Let $F(x)=\int_a^x f(u)du$ and $f$ continuous at $b$. Then, because $f(x)-f(b)=o(1)$, $F(x)-F(b)-f(b)(x-b)=o(|x-b|)$, which is the same as $F'(b)=f(b)$. By: misha on April 14, 2008 at 6:06 pm 11. A proof of the Fundamental Theorem of Calculus for a Lipschitz function in O notation and with Knuth’s definition of the derivative. Let $f$ be Lipschitz, i.e., $f(x)-f(u)=O(|x-u|)$ and $F(x)=\int_a^x f(u)du$. Then, because $f(x)-f(b)=O(|x-b|)$, $F(x)-F(b)=f(b)(x-b)+O((x-b)^)$, and this is the same as $F'(b)=f(b)$ according to Knuth’s definition. By: misha on April 14, 2008 at 6:26 pm 12. Oops! the penultimate formula in my previous comment should be: $F(x)-F(b)=f(b)(x-b)+O((x-b)^2)$ By: misha on April 14, 2008 at 6:29 pm 13. To Kevin: The O notation in fact gives us the explicit instances of the epsilon-delta definitions, where delta is a function of epsilon of a specific form, instead of the claim that “for every epsilon there is delta, such that…” Let’s take a look at the O-definition of the derivative suggested by Knuth. It says that $f(x+h)=f(x)+f'(x)h+O(h^2)$, which can be rewritten as $(f(x+h)-f(x))/h-f'(x)=O(|h|)$, i.e., we can take $\delta=\epsilon /K$ with some constant $K$ when we say that $f'(x)$ is the limit of $(f(x+h)-f(x))/h$ for $h \rightarrow 0$. Come to think about it, the phrase “for every epsilon there is delta” is very much the same as “delta is a function of epsilon.” This whole tradition of cramming this epsilon-delta mantra into every definition looks like a throwback to the times when the abstract notion of a function was not widely known and universally accepted yet. We can do better today. 
In any case, the translations between the O-definitions, explicit estimates, and epsilon-delta mantras are totally straightforward and should not cause any trouble. On the other hand, anybody who has trouble with these translations should think twice before trying to become a mathematician; he may not be up to it. The O-definitions and “strong derivative” suggested by Knuth will allow exactly what you want, i.e., spending less (=zero) time on limits and spending the saved time on differential equations. By: misha on April 14, 2008 at 7:41 pm 14. Is there a prize for pointing out an error in Knuth’s calculations? In the first example, Knuth reduces A(.005) + A(0.0314) + A(.00005) to A(0.3645). I think there is a zero missing (or the decimal is misplaced); it should be A(0.03645). By: Rob Leslie on April 14, 2008 at 7:43 pm 15. @Rob There is a prize if he printed it in a book. See this wikipedia entry to try to claim your “hexadollar” check: http://en.wikipedia.org/wiki/Knuth_reward_check By: Andrew Parker on April 14, 2008 at 8:16 pm 16. Using Big-O notation SERIOUSLY speeds up the process for finding divergence or convergence for series and sequences. You go from around 10 operations to 2-3 operations. Super-quick. By: Chaos Motor on April 14, 2008 at 8:47 pm 17. omg yes!!! I’m doing this right now in my class By: john on April 14, 2008 at 9:36 pm 18. “mathematicians customarily use the = sign as they use the word “is” in English” News to me. I think it’s computer scientists who are careless with the = symbol. I think if this set of ideas were to be expressed without this very bad one it might be more compelling. By: Tonio Loewald on April 14, 2008 at 10:34 pm 19. I like sin(x) = A(1). I think it would have been clarifying to me at a certain stage. While it does not confer a great deal of information about the sine function the information it does convey is definitely useful to the beginner. To write -1 <= sin(x) <= 1 forces the reader to digest more symbols. Symbol overload is a problem for most people in studying math. I say reduce it whenever possible. By: Ralph on April 15, 2008 at 12:12 am 20. Despite all the computational simplifications delivered by O notation, Knuth doesn’t go far enough in my opinion. He still sticks with the pointwise notion of differentiation. His constant $C$, implicitly entering into his definition of “strong derivative at point $x$”, is allowed to depend on $x$ in an uncontrolled manner. When he is talking about continuity, he is still talking about pointwise continuity. It means that getting from his definitions to any practical results, such as the fact that a function with a positive derivative is increasing, requires a good deal of subtle reasoning that involves completeness of the reals, such as the existence of the least upper bound. He still would need the uniform continuity of pointwise continuous functions on a closed interval to build a definite integral, and that involves compactness, i.e., Bolzano-Weierstrass lemma and such. On the other hand, if we strengthen his definition of “strong differentiability” even further, by requiring the estimate $|f(x+h)-f(x)-f'(x)h| \leq Kh^2$ that is uniform in $x$, we will end up with derivatives that are automatically Lipschitz, and very simple proofs of the basic facts of calculus, that do not use the heavy machinery of classical analysis, such as completeness and compactness. This approach to calculus has been systematically developed. 
See the posting Calculus without limits on the previous reincarnation of this blog. If Lipschitz estimates are too restrictive, Holder comes to the rescue, all the proofs stay the same. Any other modulus of continuity can be used as well, and since any function, continuous on a segment has a modulus of continuity, this approach captures all the results about continuously differentiable functions. But we hardly need anything beyond Lipschitz in an undergraduate calculus course. By: misha on April 15, 2008 at 1:37 am 22. The example “=3.14 + A(0.3645)” should be “=3.14 + A(0.03645)”. Without this, I was looking at that example and going “huh? I don’t get it, even addition doesn’t work correctly, so how can it be called simple?” … Then I realised the example was wrong. By: Nick J on April 15, 2008 at 4:16 am 23. Kevin and Karl: I don’t think the article proposes elimination of limits from the curriculum. Much less any kind of “fudging” or harmful imprecision. Rather, it discusses intermediate, yet meaningful and strictly defined, concepts. I see nothing wrong with that. Defining little-oh indeed is equivalent to defining limits, hence continuity, derivatives and integrals. Using big-oh as an important stepping stone may be worth a try. Michael, Tonio: Look at recent publications in analysis. Big- and little-oh are used, and with the “abused” equality sign. Most of the time this is fine, as there is no possibility for confusion. By: Arnold Layne on April 15, 2008 at 5:05 am 24. In my opinion Knuth doesn’t go far enough. 
He still sticks with pointwise notions of differentiability and continuity that require some heavy tools from classical analysis, such as completeness and compactness, to get any practical results. If his estimates had been uniform in x, he would have ended up with a much simpler theory, based on uniform notions and not requiring these heavy tools. See Calculus without limits in the previous reincarnation of this blog. By: misha on April 15, 2008 at 9:37 am 25. I’m in Calc2 and perusing this is giving me a headache. I think I’ll stick with how I’m currently being taught :p Integrating from 1 to 2 is simple enough, why throw in another layer of thinking? Besides, college is easy enough these days, why allow something that simplifies it even more. And btw, I am a CS student. I know the importance of big O in CS but personally the mathematician side of me is offended to see it being “integrated” into calculus By: Michael on April 15, 2008 at 1:02 pm 26. To misha: I don’t see how you can control your error term in your proof of the Fundamental Theorem of Calculus without unpacking the definition of integration. Come to think of it, integration wasn’t defined at all in Knuth’s letter. Maybe he just didn’t have time. Instead of jumping ahead to wonder how the big theorem was proved with big-oh notation and without limits, I should have first asked how integration was defined with big-oh notation and without limits. Perhaps with that answer under my belt it would all become clear to me. By: Karl Juhnke on April 16, 2008 at 3:55 pm 27. On further consideration, perhaps the magic is not in the notation at all, but rather in assuming all functions are sufficiently well-behaved so that the notation is adequate? I can see a strong case for teaching “Calculus of Friendly Functions” to the majority of people who study calculus. For me, however, real analysis was full of intuition-breaking functions that forced me to go back to the definitions. It seems that having some practice with limits prior to real analysis helped me get in synch with all the mind-bending concepts, whereas had I been taught calculus with big-oh notation, my intuition would be under-developed. Are fans of Knuth’s proposal suggesting it is better for mathematicians as well as for computer scientists? By: Karl Juhnke on April 16, 2008 at 4:45 pm 28. Lipschitz function — got it now. By: Karl Juhnke on April 16, 2008 at 4:50 pm 29. To Karl Juhnke: Look, I’m not entirely against continuity, limits, pointwise differentiability and such. I’m just against starting with them. If you make the estimate in Knuth’s strong derivative definition uniform in $x$, you will end up with calculus of Lipschitz functions. Scroll down to Calculus without limits on the previous reincarnation of this blog for details. A lot of the intuition-breaking functions are artifacts of the weak “classical” definitions and are irrelevant in the vast majority of applications. Also “Calculus of Friendly Functions” can be a good stepping stone to the classical analysis even for math majors, who will have to learn about Lipschitz functions and moduli of continuity anyway. By: misha on April 16, 2008 at 7:41 pm 30. Karl Juhnke said: I can see a strong case for teaching “Calculus of Friendly Functions” to the majority of people who study calculus. I cannot agree more. The only issue is an elementary definition of an appropriate class of functions. 
By the way, there is a well-developed theory of “o-minimal structures”, part of model theory, where every definable function (that is, “friendly” function) has a wonderful property: it is piecewise monotonous-and-taking-all-intermediate-values. Unfortunately, it is difficult to deal with general monotonous-and-taking-all-intermediate-values functions – they are continuous (and, moreover, obviously so — these are archetypal continuous functions, fitting into our intuition of continuity), but the class is not closed under addition. Finding a narrower, explicitly and elementarily defined subclass with good natural properties is an interesting problem — but it is unclear even whether it has a solution. The theory of o-minimal structures originates in a classical work by Alfred Tarski on decision procedure for Euclidean geometry. By: Alexandre Borovik on April 17, 2008 at 6:16 am 31. Coming back to calculus, we actually can define an increasing function on an interval to be continuous if it doesn’t skip any values, and then we can define a function $f$ to be continuous at $a$ if there is an increasing continuous function $h$, defined for $x \geq 0$, such that $h(0)=0$ and $|f(x)-f(a)| \leq h(|x-a|)$. This is the way it is done in a ground-breaking book “An Infinite Series Approach to Calculus” by Susan Bassein (Publish or Perish, 1993), page 67. A short note on continuity I wrote a while ago, especially exercises 3 to 8, may clarify the matter (since then I have abandoned continuity in favor of explicit uniform estimates). Now, the class of increasing continuous functions is closed under addition (and multiplication by positive constants), it provides an adequate tool to develop the o-notation systematically. Pushing this approach a bit further by requiring the estimate on $|f(x)-f(a)|$ to be uniform in both $x$ and $a$, we arrive naturally at “Calculus without limits” from the previous reincarnation of this blog. Since any continuous function on a closed interval has a global modulus of continuity, all the continuous functions become “friendly functions,” and we can look at differentiation as division of $f(x)-f(a)$ by $x-a$ in the ring of friendly functions. Of course, it is natural to start with polynomials and then move to Lipschitz and maybe Holder functions as “friendly,” before exploring the general continuity. This is the approach that I love. By: misha on April 17, 2008 at 9:20 am 32. With all apologies to Knuth (and a lot of reverence) … I’m not a mathematician, but I tutored a lot of calculus to reluctant business majors to make ends meet in college. When I tried to get a student to understand what calculus “is” I often found Leibniz’s notation to be superior to Lagrangian notation. It stresses, simply and visually, the fact that we’re talking about slope when we’re talking about derivatives. While this notation looks like fun for CS majors (as noted above) it also appears like it gets away from what is fundamentally being emphasized: just take the slope. Add in the confusion of saying “=” no longer denotes transitive equality and you might have a bigger mess than the one you started with. By: b8sell on April 17, 2008 at 8:20 pm 33. I’m a big Knuth fan, but the thought of teaching calculus this way gives me a combination of a headache and fits of laughter. Some of my calculus students cannot remember how to add integer fractions, cannot solve 3x+2=0, and want to use the Product Rule to differentiate ln(x) (because it’s “ln” times x). 
Something tells me Knuth wasn’t entirely serious in pushing this as a “calculus reform” effort. By: Robert on April 18, 2008 at 1:45 am 34. Alexandre, I realize this may come across as nitpicking, but I’m not sure I’d agree that the theory of o-minimal structures originates in the Tarski-Seidenberg theorem. Certainly that work, and particularly their method of quantifier elimination, has been a source of inspiration for many, but the result is restricted to just one particular structure: that of semi-algebraic sets (sets which are definable by means of polynomial inequalities). My understanding is that only later did it dawn on model theorists that the o-minimality condition on a general structure had such powerful consequences — and due to a paucity of examples (the main one being Tarski’s), the subject didn’t really take off until Wilkie came out with his remarkable result, that the expansion of semi-algebraic sets that results from adding an exponential function is o-minimal. You may be more expert than I in this area, but I’d be more inclined to say that the theory really originates in those two developments, even conceding that Tarski-Seidenberg has always played an archetypal role. You can do a lot of fun calculus (from the big O point of view) with Wilkie’s structure and further expansions, but the uncomfortable fact remains that with o-minimality, you can never incorporate the sine function in this setting. Is this something that Shiota’s X-sets can handle? By: Todd Trimble on April 18, 2008 at 1:45 am 35. Pushing Knuth’s idea to its limit, take any modulus of continuity, i.e. a convex increasing function $m(\epsilon)$, defined for $\epsilon \geq 0$, $m(0)=0$ and not skipping any values (i.e., continuous at 0, continuity for strictly positive $\epsilon$ being automatic). Then Knuth’s “strong derivative” concept with $O(\epsilon)$ replaced by $O(m(|\epsilon|))$ and $O$ uniform in $x$, will give you calculus without limits. By: misha on April 19, 2008 at 3:36 am 36. It should have been …$O(\epsilon ^2)$ replaced by $O(|\epsilon|m(|\epsilon|))$“… in comment 30, sorry for messing up. To Robert: your remark only indicates that any “calculus reform” will not work without a reform of the rest of mathematical education. By: misha on April 19, 2008 at 3:49 am 37. Todd Trimble said: You can do a lot of fun calculus (from the big O point of view) with Wilkie’s structure and further expansions, but the uncomfortable fact remains that with o-minimality, you can never incorporate the sine function in this setting. Is this something that Shiota’s X-sets can handle? Yes, Shiota’s universe looks like a good idea. But in any case, a lot of serious mathematical work will be needed before a reasonable concept of “friendly functions” is born. By: Alexandre Borovik on April 19, 2008 at 6:19 pm 38. I claim that Calculus of Friendly Functions is already here for everybody to enjoy. Here is how. Take your favorite modulus of continuity $m$, then uniform $m$-differentiability can be defined by the inequality $|f(x+h)-f(x)-f'(x)h| \leq K|h|m(|h|)$ uniform in $x$. The derivative will be uniformly $m$-continuous, i.e., it will satisfy the inequality $|f'(x+h)-f'(x)| \le 2Km(|h|)$ uniform in $x$. Any $m$-continuous function $f$ has a primitive $F$ that is uniformly $m$-differentiable. All the proofs are clean and simple, on the level of high school algebra. How can it be friendlier? By: misha on April 19, 2008 at 7:39 pm 39. 
Misha: My dream is to get rid of inequalities and work with piecewise monotonous-and-taking-all-intermediate-values functions. This is a class of functions sufficient for doing most of mathematical economics (but not models of derivatives trading, I have to admit — but perhaps derivatives trading will eventually be outlawed). By: Alexandre Borovik on April 19, 2008 at 7:48 pm 40. To Alexandre: I must admit that getting rid of inequalities looks a bit too radical to me. What is left then? Aren’t inequalities implicitly present in O and o notations and semi-algebraic and semi-analytic sets? What kind of problems in mathematical economics can be treated, or you would like to treat, without inequalities? Can you give a reference maybe? It’s probably a topic for another posting, since you and Todd clearly went off on a tangent here. I’m not sure whether complete formalization and axiomatization of O and o notations will help make calculus more widely understandable. By: misha on April 20, 2008 at 1:25 am 41. Misha, the “tangent” where I was a commenter was about friendly functions, where you were a participant as well. Alexandre was pointing out o-minimal structures as giving classes of friendly functions, but part of my point (and his too) was that these classes may not be general enough. The “o” stands for “order”, that is the binary relation < which is assumed to be part of the structure we are considering — the theory of o-minimal structures is therefore in the direction opposite to getting rid of inequalities. (“O-minimal” means that the only subsets of $\mathbb{R}$ which are definable in the structure are the ones already guaranteed to be in the structure: finite unions of points and intervals. The book by van den Dries, Tame Topology and O-minimal Structures, is a very good introduction — the word “tame” meaning friendly in the sense of being free of pathology; cf. Grothendieck’s Esquisse d’un Programme.) By: Todd Trimble on April 20, 2008 at 12:05 pm 42. Thanks for the explanations and the references, Todd. As for the tension between friendliness as the absence of pathologies (or amenability to explicit or numerical calculations) and generality of our axioms, I think it will always be with us. How to resolve this tension of course depends on the problem that the theory is applied to. My attitude is that theories are mostly the means to the ends (of solving problems), not the ends in themselves. By: misha on April 20, 2008 at 4:52 pm 43. I cannot resist quoting the concluding remarks of the presidential address to the London Mathematical Society by Michael Atiyah (Bull. London Math. Soc., 10, 1978, 69-76), called “The Unity of Mathematics,” that still ring true today, maybe even more so than in 1976: The main theme of my lecture has been to illustrate the unity of mathematics by discussing a few examples that range from Number Theory through Algebra, Geometry, Topology and Analysis. This interaction is, in my view, not simply an occasional interesting incident, but rather it is of the essence of mathematics. Finding analogies between different phenomena and developing the techniques to exploit these analogies is the basic mathematical approach to the physical world. It is therefore hardly surprising that it should also figure prominently internally within mathematics itself. I feel that this has to be emphasized because the axiomatic era has tended to divide mathematics into special branches, each restricted to developing the consequences of a given set of axioms. 
Now I am not entirely against the axiomatic approach so long as it is regarded as a convenient temporary device to concentrate the mind, but it should not be given too high a status. A secondary theme implicit in my lecture has been the importance of simplicity in mathematics. The most useful piece of advice I would give to a mathematics student is always to suspect an impressive sounding Theorem if it doesn’t have a special case which is both simple and nontrivial. I have tried to select examples that satisfy these conditions. Both unity and simplicity are essential, since the aim of mathematics is to explain as much as possible in simple basic terms. Mathematics is still after all a human activity, not a computer programme, and if our accumulated experience is to be passed on from generation to generation we must continually strive to simplify and unify. By: misha on April 21, 2008 at 3:15 am 44. To Alexandre: you can “get rid of inequalities” in differentiation, by viewing differentiation as division in your favorite class of (globally defined) “friendly” functions (uniformly continuous functions will do). This will work for many variables as well, because you have to divide by polynomials only, and polynomials can vanish only on sets without any interior points. I tried to explain it elsewhere, but encountered some misunderstanding and resistance. So you can look at differentiation as a purely algebraic matter. But still inequalities sneak in through the back door, so to speak, because you have to describe your friendly functions, and some restrictions on the variation $f(x)-f(u)$ in terms of inequalities will be necessary (you have mentioned monotonicity, for example). Also at some point you will have to explain why tangents look like tangents, why they cling to the graphs, and you will need inequalities again to explain the very meaning of differentiation as linear approximation. A correction: In comment #30, line 2, it should be concave, not convex. By: misha on April 21, 2008 at 4:46 pm 45. Robert wrote: I’m a big Knuth fan, but the thought of teaching calculus this way gives me a combination of a headache and fits of laughter. Some of my calculus students cannot remember how to add integer fractions, cannot solve 3x+2=0, and want to use the Product Rule to differentiate ln(x) (because it’s “ln” times x). That is the most ridiculous thing. Why are these students even taking calculus? They should be taking remedial math. By: Al Bumen on February 24, 2009 at 7:16 pm 46. I moved my web page to http://www.mathfoolery.com a couple of days ago; sorry for any inconvenience. Also, my not quite finished article at http://arxiv.org/abs/0905.3611 may be of interest By: misha on September 5, 2009 at 6:22 am 47. I have some problems understanding the big-O notation; I need help with applying it to functions in maths practice. By: younous on October 30, 2009 at 7:21 am
http://mathoverflow.net/questions/44295/a-property-of-continuous-maps-with-respect-to-compact-subsets
## A property of continuous maps with respect to compact subsets I'm interested in continuous maps between topological spaces $f:X\to Y$ such that for any compact subset $L$ of $Y$ contained in $f(X)$, there is a compact subset $K$ of $X$ such that $L$ is contained in $f(K)$. Proper maps satisfy this, but there are examples of continuous maps which don't, for example with discrete spaces, taking a non-stationary convergent sequence extended at infinity. I would like to know if there are characterizations for those topological spaces which have enough compact subsets in the sense that: each real-valued continuous function satisfies this property. - ## 2 Answers Every path-connected space $X$ satisfies this property. The image $f(X)$ is a connected subset of $\mathbb R$ and is hence a (possibly infinite, or trivial) interval. Every compact sub-interval $[y_0,y_1]$ is contained in the image of a compact path in $X$ connecting a point in $f^{-1}(y_0)$ to another point in $f^{-1}(y_1)$. Every compact subspace $L$ of $f(X)$ is contained in one such sub-interval, so you are done. Note however that it is easy to construct an $X$ with two path-connected components which does not satisfy the property. For instance, take $X$ as two copies of $\mathbb R$ and define $f$ as constantly $\pi/2$ on one copy and as $f(x) = \arctan x$ on the other. Added. If $Y = \mathbb C$ then this is no longer true. For instance, a (non-embedding) immersion $f:\mathbb R \to \mathbb C$ with compact image is a counterexample: simply take $L= f(\mathbb R)$. - I don't understand why a continuous image of a path-connected set has to be an interval. $X={\bf R}^2$ is path-connected, the identity map $f:X\to X$ is continuous, and its image is $X$, which is not an interval. – Gerry Myerson Oct 31 2010 at 10:30 Truly asks about real-valued maps, so $Y = \mathbb R$. – Bruno Martelli Oct 31 2010 at 10:47 Ah, in the last clause of the question. I missed that, just saw that in the rest of the question things were more general. Thanks. – Gerry Myerson Oct 31 2010 at 11:43 (I edited the answer to avoid ambiguities, thanks) – Bruno Martelli Oct 31 2010 at 11:45 Thanks. So there's a large class of spaces whose real-valued continuous functions satisfy this. What if we require this for all complex-valued continuous functions? It looks much more restrictive. – Truly Oct 31 2010 at 16:46 Such maps are called compact-covering maps, and are a somewhat well-known and well-studied object. They come up naturally in many contexts, and if you look for that keyword in MathSciNet, you will find many matches. As Bruno mentions, it is very easy to construct examples of maps in very ordinary settings that fail to be CC. -
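To spell out why the two-copy example fails (my own elaboration, using the answer's notation): here $f(X)=(-\pi/2,\pi/2]$, so the compact set $L=[0,\pi/2]$ lies in $f(X)$; but any compact $K\subset X$ meets the $\arctan$ copy in a subset of some interval $[a,b]$, so $$f(K)\subseteq[\arctan a,\arctan b]\cup\{\tfrac{\pi}{2}\},$$ which misses every value strictly between $\arctan b$ and $\pi/2$; hence $L\not\subseteq f(K)$.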
http://mathoverflow.net/revisions/104051/list
## Return to Answer 1 [made Community Wiki] I like the phrase "counting in two different ways" -- which I learned in a math camp. If you're not sure what it means, I suggest the exercise of trying to prove that $2^n = \sum_{i=0}^n {n\choose i}$ by looking for a proof which could be aptly described by this phrase.
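For readers who want to check their answer to the suggested exercise (an editorial note, not part of the original revision): both sides count the subsets of an $n$-element set. The left side decides, for each element, whether it is in or out; the right side groups the subsets by size: $$2^n \;=\; \#\{\text{subsets of }\{1,\dots,n\}\} \;=\; \sum_{i=0}^{n} \#\{\text{subsets of size } i\} \;=\; \sum_{i=0}^{n} \binom{n}{i}.$$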
http://mathhelpforum.com/algebra/21466-logs.html
# Thread: 1. ## Logs Evaluate the expression without using a calculator. log3^3^11 2. Originally Posted by soly_sol Evaluate the expression without using a calculator. log3^3^11 is this log to the base 10? $\log 3^{3^{11}} = 3^{11} \log 3$ if you can't use a calculator, you'd have to use a log table or something to find $\log_{10}3$ 3. Originally Posted by Jhevon is this log to the base 10? $\log 3^{3^{11}} = 3^{11} \log 3$ if you can't use a calculator, you'd have to use a log table or something to find $\log_{10}3$ The problem might be $\log_3 3^{11}$. In which case, no calculator would be necessary. The 11 can come out in front alone, making $11\times\log_3 3 = 11\times1 = 11$. 4. Originally Posted by Soltras The problem might be $\log_3 3^{11}$. In which case, no calculator would be necessary. The 11 can come out in front alone, making $11\times\log_3 3 = 11\times1 = 11$. indeed. after reviewing his/her posts, i see that soly_sol has a hard time posting questions in an understandable manner. the same question was posted as log3311 in another post. but your interpretation makes more sense, i was merely trying to get a response from the poster 5. Originally Posted by Jhevon indeed. after reviewing his/her posts, i see that soly_sol has a hard time posting questions in an understandable manner. the same question was posted as log3311 in another post. but your interpretation makes more sense, i was merely trying to get a response from the poster Wow, I'd be impressed if anyone could do log3311 in his head. All I know is that it's three-point-something...
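For what it's worth, a rough mental estimate of $\log_{10} 3311$ (assuming base 10, as the thread does) goes like this: $$\log_{10} 3311 = \log_{10}\bigl(3.311\times 10^{3}\bigr) = 3+\log_{10} 3.311 \approx 3 + 0.52 = 3.52,$$ using $\log_{10} 3 \approx 0.477$ and nudging it up slightly; so "three-point-something" is indeed about $3.52$.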
http://math.stackexchange.com/questions/31239/poincare-bendixson-theorem
# Poincare-Bendixson Theorem Can someone sketch some ideas of how to use the Poincare-Bendixson Theorem to prove that there must be a fixed point contained inside a periodic orbit? - I have a feeling that that statement is false. I do not have a counterexample though. – picakhu Apr 6 '11 at 2:43 I can't see the question, the image link isn't showing either. – Kate Apr 6 '11 at 3:22 1 This certainly doesn't hold for discrete or higher dimensional dynamical systems (ex: billiards in polygons), so I take it we can assume the system is continuous and 2 dimensional? – Alex Becker Apr 6 '11 at 5:13 @Alex: is there even a Poincare-Bendixson theorem in more than 2 dimensions? In any case, certain topological assumptions must be made about the domain, else just consider the rotations on $\mathbb{S}^1\times \mathbb{R}$. – Willie Wong♦ Apr 6 '11 at 13:21 @Brad: please flesh out your question. Your statement is presumably true under certain assumptions, none of which you stated in the question. Please put more effort into describing what exactly it is that you want to know. – Willie Wong♦ Apr 6 '11 at 13:22 ## 2 Answers What is meant here by inside is inside in the sense of the Jordan curve theorem. The proof can be done using it as well as the Schauder fixed-point theorem - see for example the proof in the book by G. Teschl (to be found on his homepage). - I interpreted the question as follows: Let $f : \mathbb{R}^2 \to \mathbb{R}^2$ be a $\mathcal{C}^1$ vector field. Suppose there exists a periodic orbit $\gamma$. According to the Jordan curve theorem, $\mathbb{R}^2 \backslash \gamma$ admits only one bounded connected component $D_{\gamma}$. Show that $D_{\gamma}$ contains a fixed point. In fact, you only need a weak version of Poincaré-Bendixson theorem: Theorem: Let $f : \mathbb{R}^2 \to \mathbb{R}^2$ be a $\mathcal{C}^1$ vector field and $x \in \mathbb{R}^2$. If the $\omega$-limit $\omega(x)$ is nonempty, compact and does not contain any fixed point, then $\omega(x)$ is a periodic orbit. Notice that it is also true for $\alpha$-limits, since by reversing time an $\alpha$-limit becomes an $\omega$-limit while the phase portrait is unchanged. By contradiction, suppose $D_{\gamma}$ does not contain any fixed point. Then, by Poincaré-Bendixson theorem, for all $x \in D_{\gamma}$, $\omega(x)$ and $\alpha(x)$ are periodic orbits. In particular, there are infinitely many periodic orbits in $D_{\gamma}$. For any periodic orbit $\tau$ in $D_{\gamma}$, let $K_{\tau}=\tau \cup D_{\tau}$ (where $D_{\tau}$ is defined like $D_{\gamma}$ but for $\tau$). We get a family of compacts $\{K_i : i \in I \}$ linearly ordered by inclusion, indexed by some unbounded set $I \subset \mathbb{R}_+$. Without loss of generality, we can suppose the family $\{K_i : i \in I\}$ nonincreasing (otherwise, take $\tilde{K}_i= \bigcap\limits_{j \leq i} K_j$). Because any $x \in \mathbb{R}^2$ is between its $\omega$-limit and its $\alpha$-limit, $\bigcap\limits_{i \in I} K_i=\emptyset$. Now, take $i_n \in I$ such that $i_n \underset{n\to + \infty}{\longrightarrow} + \infty$; then $\bigcap\limits_{n \geq 0} K_{i_n}=\emptyset$. So we find a nonincreasing sequence of compacts converging to the empty set, a contradiction (since $\mathbb{R}^2$ is complete). -
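A concrete planar system makes the statement tangible (my own illustration, not from the thread): in polar coordinates take $r' = r(1-r^2)$, $\theta' = 1$, whose unit circle is a periodic orbit enclosing the fixed point at the origin. The Python sketch below integrates the Cartesian form with a crude Euler step just to exhibit both features numerically.

```python
# x' = x - y - x*(x^2 + y^2),  y' = x + y - y*(x^2 + y^2):
# the unit circle is a periodic orbit, and the origin is the fixed point inside it.

def field(x, y):
    r2 = x * x + y * y
    return x - y - x * r2, x + y - y * r2

x, y, dt = 0.1, 0.0, 1e-3
for _ in range(200_000):            # the trajectory spirals out toward r = 1
    dx, dy = field(x, y)
    x, y = x + dt * dx, y + dt * dy

print((x * x + y * y) ** 0.5)       # ~1.0: converged to the periodic orbit
print(field(0.0, 0.0))              # (0.0, 0.0): the enclosed fixed point
```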
http://mathoverflow.net/questions/27794?sort=newest
## Relative canonical sheaf Hi. I want to know if, for $f:X\to S$ a proper flat holomorphic map with n-dimensional fibers over a reduced complex space S, the relative canonical sheaf $w_{X/S}:=H^{-n}(f^{!}O_{S})$ is a dualizing sheaf, which implies that the two functors on Coh(S), $G\to H^{-n}(f^{!}G)$ and $G \to f^{*}G\otimes w_{X/S}$, agree... Thanks. - 1 Thank you for the answer. But it seems to me that a relative dualizing sheaf is necessarily flat over S and then compatible with any base change (Kleiman, "Relative duality", Compositio Math. 41, Prop. 9, p. 9). We know that for Cohen-Macaulay morphisms (Conrad: Grothendieck duality) or morphisms with Du Bois singularities (Kollar-Kovacs, J. Amer. Math. Soc. 23, p. 791) it is the case. In general, the recent paper of Zsolt (arXiv, 28 May 2010, "Base change for relative canonical sheaf in families of normal varieties") shows us that it is not true. – kaddar Jun 11 2010 at 10:43 ## 2 Answers If the relative canonical sheaf $w_{X/S}:=H^{-n}(f^{!}O_{S})$ is a dualizing sheaf then necessarily the two functors $G\mapsto H^{-n}(f^{!}G)$ and $G\mapsto f^{*}G\otimes w_{X/S}$ agree as well. Remark that the first functor is left exact and the second is right exact, which implies immediately that $w_{X/S}$ is flat over $S$ and then commutes with any base change! But it is not true in general. For this, we can see the simple example of Zsolt in arXiv AG-2008: Base change for relative canonical sheaf. In this paper, he considers a flat family of normal varieties over a smooth curve. I have another very easy example which shows us that $w_{X/S}$ is compatible with base change but the two functors don't agree. This example is given by a surjective finite morphism between normal Gorenstein complex spaces. Of course, this morphism is neither flat nor of finite Tor dimension, but it defines an analytic family of zero-cycles (and then we have a nice holomorphic trace map). Important remark: 1) In these two examples, Kunz relative sheaf (relative regular meromorphic forms) and relative canonical sheaf agree. 2) Kunz relative sheaf is not a dualizing sheaf in general. - If $f\colon X \to S$ is a proper flat map of schemes with $n$-dimensional fibers over a noetherian scheme $S$, the relative canonical sheaf $\omega_{X/S}:=H^{-n}(f^!{\mathcal{O}_S})$ is a dualizing sheaf. I guess that this should imply what you want by a GAGA-type argument. Indeed, since $f$ is flat we have that $f^!G = f^*G \otimes f^!{\mathcal{O}_S}$ [Lipman, SLN 1960, Theorem 4.9.4] for $G \in D^b_c(X)$ (or, more generally, for $G \in D^+_{qc}(X)$). Now, $f^!{\mathcal{O}_S}$ is concentrated between degrees 0 and $-n$ by looking at the fibers of $f$ and its description via residual complexes. Then, taking $-n$-th cohomology on both sides, one obtains the desired result. Unfortunately, I am not familiar enough with the analytic version of the story as in Ramis-Ruget-Verdier "Dualité relative en géométrie analytique complexe", but I guess the algebraic version may give you a clue for how to transpose the result to your setting. I would bet you do not need $S$ reduced as long as $f$ is flat. Addendum There was a confusion on my part. I was speaking about the dualizing complex while the original question was about the dualizing sheaf. 
If we agree that $\omega_X = H^{-n}(f^!O_B)$ for an equidimensional family (with $n$ the dimension of the fibers), then consider the base change isomorphism in the derived category for a square with base $u \colon P \to B$ ($B$ for "base scheme"), a base change of the map $f \colon X \to B$, completing the square with $v \colon F \to X$ and $g \colon F \to P$, respectively. Then, by the formula in the derived category $Lv^* f^! G \cong g^! Lu^* G$, we get an isomorphism of sheaves $H^{-n}(Lv^* f^!O_B) \cong H^{-n}(g^! Lu^*O_B) = H^{-n}(g^!O_P)$ so we see that the dualizing sheaf over $F$ is the abutment of a spectral sequence involving higher $Tor$ sheaves related to the embedding of the fiber into the space $X$. But, of course, one needs some information on this embedding to get the collapse of the spectral sequence. -
http://math.stackexchange.com/questions/295379/why-is-the-moduli-space-of-flat-connections-a-symplectic-orbifold/295505
# Why is the moduli space of flat connections a symplectic orbifold? In her Lectures on Symplectic Geometry on page 159, Ana Cannas da Silva writes "It turns out that $\mathcal{M}$ is a finite-dimensional symplectic orbifold." Can somebody give me a reference for this result, preferably including a detailed definition of symplectic orbifold? I'm trying to become familiar with orbifolds, and I am familiar with symplectic manifolds, but I imagine a symplectic orbifold could have a more stringent definition than just an orbifold whose "smooth part" is equipped with a symplectic form (I expect maybe there is some condition on how the symplectic structure behaves near the singular part). - ## 1 Answer The original reference for this fact is the paper Yang-Mills Equations over Riemann Surfaces by Atiyah and Bott. The idea is as follows. Let $P \longrightarrow \Sigma$ be a principal $G$-bundle over a Riemann surface and let $\mathcal{A}$ denote the space of connections on $P$. For a connection $A \in \mathcal{A}$, we define $$\omega_A : T_A \mathcal{A} \times T_A \mathcal{A} \longrightarrow \Bbb R,$$ $$\omega_A(\alpha, \beta) = \int_\Sigma \langle \alpha \wedge \beta \rangle.$$ Here we have identified $T_A \mathcal{A}$ with $\Omega_\Sigma^1(\mathrm{Ad}(P))$ since $\mathcal{A}$ is an affine space modeled on $\Omega_\Sigma^1(\mathrm{Ad}(P))$. $\langle \alpha \wedge \beta \rangle$ is the composition $$\Omega_\Sigma^1(\mathrm{Ad}(P)) \times \Omega_\Sigma^1(\mathrm{Ad}(P)) \xrightarrow{~\wedge~} \Omega_\Sigma^2(\mathrm{Ad}(P)) \xrightarrow{~\langle \cdot, \cdot \rangle ~} \Omega_\Sigma^2,$$ where $\langle \cdot, \cdot \rangle$ is an $\mathrm{Ad}$-invariant inner product on the Lie algebra $\mathfrak{g}$. Now we have the following. Theorem. $\omega$ is a symplectic form on $\mathcal{A}$, and the action of the group of gauge transformations $\mathcal{G}$ on $\mathcal{A}$ is Hamiltonian with respect to this symplectic structure and has moment map $\mu(A) = -F_A$. Here we consider the curvature map $$F: \mathcal{A} \longrightarrow \Omega_\Sigma^2(\mathrm{Ad}(P))$$ as a map $$F: \mathcal{A} \longrightarrow \mathrm{Lie}(\mathcal{G})^\ast$$ via the identification $$\Omega_\Sigma^2(\mathrm{Ad}(P)) = \Omega_\Sigma^0(\mathrm{Ad}(P))^0 \cong \mathrm{Lie}(\mathcal{G})^\ast.$$ When we have a moment map, we can form the Marsden-Weinstein quotient $$\mathcal{A} /\!\!/ \mathcal{G} = \mu^{-1}(0)/\mathcal{G},$$ which is a stratified symplectic space (a symplectic orbifold if $0$ is a regular value of $\mu$, and a symplectic manifold if $0$ is a regular value of $\mu$ and the action of $\mathcal{G}$ is free). Since $\mu$ is minus the curvature, we see that $\mu^{-1}(0) = \mathcal{A}^\flat$, the space of flat connections on $P$. Therefore $$\mathcal{A} /\!\!/ \mathcal{G} = \mathcal{M}^\flat,$$ where $\mathcal{M}^\flat$ denotes the moduli space of flat connections on $P$. There are some caveats here. The usual Marsden-Weinstein quotient is defined for finite-dimensional symplectic manifolds with a Hamiltonian action; here $\mathcal{A}$ is infinite-dimensional. Nevertheless, one can show that the formal process above works and $\mathcal{M}^\flat$ is finite-dimensional. 
As for the definition of symplectic orbifold, recall that an orbifold $\mathcal{O}$ has an orbifold atlas $\{(U_i, \tilde{U}_i, \phi_i, \Gamma_i)\}$, where $U_i \subset \mathcal{O}$ is open, $\tilde{U}_i \subset \Bbb R^n$ is open and connected, $\phi_i: U_i \longrightarrow \tilde{U}_i$ is a continuous map, and $\Gamma_i$ is a finite group of diffeomorphisms of $\tilde{U}_i$. A symplectic form on $\mathcal{O}$ is specified in terms of the orbifold atlas by a family of symplectic forms $\{\omega_i\}$, where $\omega_i$ is a symplectic form on $\tilde{U}_i \subset \Bbb R^n$ that is invariant under the action of $\Gamma_i$. We require the following compatibility condition between the $\omega_i$. Recall that the overlap condition for an orbifold goes as follows: if $x \in \tilde{U}_i$ and $y \in \tilde{U}_j$ are such that $\phi_i(x) = \phi_j(y)$, then there is a neighborhood $V_x$ of $x$ and $V_y$ of $y$ and a diffeomorphism $$\psi: V_x \longrightarrow V_y$$ such that $$\phi_i(z) = \phi_j(\psi(z)) \text{ for all } z \in V_x.$$ Then our compatibility condition is that $$\omega_i = \psi^\ast \omega_j.$$ -
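A remark on the finite-dimensionality claim, stated here from memory and worth double-checking against Atiyah and Bott: away from singular points, $\mathcal{M}^\flat$ can be identified with (a component of) $\operatorname{Hom}(\pi_1\Sigma, G)/G$, and counting parameters for a genus $g \geq 2$ surface gives $$\dim \mathcal{M}^\flat = \underbrace{2g \dim G}_{2g \text{ generators}} - \underbrace{\dim G}_{\text{one relation}} - \underbrace{\dim G}_{\text{conjugation}} = (2g-2)\dim G.$$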
http://physics.stackexchange.com/questions/6098/reynolds-number-turbulence-regime-and-drag-force/6156
# Reynolds number, turbulence regime, and drag force I am trying to model a system in which cubes of about 2 cm in size are floating in a circular water tank of about 30 cm in diameter. The cubes move around under the influence of the fluid flow induced by four inlets that point toward the center of the tank, and are located at the positions $0$, $\pi/2$, $\pi$, and $3\pi/2$. The flow velocity ranges from 0 to 10 cm/s, with an average velocity around 6 cm/s. My questions are the following: • What would be the Reynolds number of the system? In particular, should I take as characteristic length the size of the cubes, or that of the tank? • For such a system, what is the limit Reynolds number for the turbulent regime? • What would be the correct form of the drag force, and do you intuitively think that the orientation of the blocks is negligible from a drag coefficient point of view? - ## 2 Answers I'd say that you have several regimes that are well defined: • The behavior of the fluid as it exits in inlet jets and enters the bulk without interference from the cubes. [Length scale set by the exit aperture?] • Flow of the fluid around isolated cubes when far from the edges of the tank (far being several times the characteristic size of the cube). [Length scale set by the side of the cube.] • Flow of the fluid toward, along and away from the sides of the tank away from the jets and without interference from the cubes. [Length scale set by the boundary behavior?] which is the good news; unfortunately you also have all the cases that mix and match the various length scales: • case with cubes interacting with the jet near the aperture • case with cubes in motion near the walls • case with cubes in close proximity to one another You can probably find existing treatments for all the former cases, but the latter ones are going to be tricky, and you'll note that they feature at least two length scales. Yuck. This must be part of why they say CFD is hard. - Well, the answer is more complicated than I hoped, but that's interesting, thanks. – Greg Mar 2 '11 at 7:32 The characteristic length can be whatever you want it to be. Characteristic lengths are only prescribed for a limited range of "standard" problems. For that matter, what do you select as your velocity? Likewise, the dependence of flow characteristics on Reynolds number depends on how you define your Reynolds number. To my knowledge, unlike, say, flow between parallel plates, pipe flow, or flow over an airfoil, this is not a well-studied problem, so there really is no answer anyone can give to your second question. You'll have to physically do the experiment to see. If you are referring to the drag force on a specific block, I would ask: does the coefficient of drag for an airfoil change with angle of attack? There's your answer. -
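As a back-of-envelope supplement (my own numbers, not from the answers): taking the kinematic viscosity of water at room temperature, $\nu \approx 10^{-6}\ \mathrm{m^2/s}$, and the question's average velocity $v = 0.06\ \mathrm{m/s}$, the two natural choices of characteristic length give $$\mathrm{Re}_{\text{cube}}=\frac{vL_{\text{cube}}}{\nu}=\frac{0.06\times 0.02}{10^{-6}}\approx 1.2\times 10^{3}, \qquad \mathrm{Re}_{\text{tank}}=\frac{0.06\times 0.30}{10^{-6}}\approx 1.8\times 10^{4},$$ an order of magnitude apart, which is exactly why both answers insist that the Reynolds number only means something once you commit to a length scale, and why no single laminar-turbulent threshold can be quoted for this geometry.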
http://mathoverflow.net/questions/76545?sort=oldest
## Is it possible to reliably generate a particular approximation of an ideal knot via a simulated annealing approach? Say I take a cord, tie a loose knot in three-dimensional space, and pull tightly on the ends to generate an approximation of an ideal knot. If the cord has a fixed knot topology and a random initial configuration in space, are there any conditions under which I am guaranteed to generate a particular ideal knot approximation? In other words, if I perform something akin to simulated annealing to approximate the ideal knot, is it possible for me to always arrive at the same final state (whatever that may be) within some small error? Update (Sept. 29th, 2011): Are there any conditions/constraints under which I might be able to reliably achieve a local (or global) minimum in the energy landscape of a knot? - What is an "ideal knot"? – Igor Rivin Sep 27 2011 at 19:41 @Igor Rivin, an ideal knot is a representation of a knot consisting of the shortest possible length of a frictionless cord of some thickness 'R'. One might attempt to approximate this by pulling on the ends of a knotted cord. – UltraBlue06 Sep 27 2011 at 20:00 2 One of the main motivations for looking at some knot energies is the hope that a gradient flow will move a generic realization of a knot to a unique global minimum. torus.math.uiuc.edu/jms/Videos/ke This appears to work often in practice, but AFAIK it has not been proven. I am quite skeptical that "pulling tightly on the ends" will straighten out a complicated presentation of a knot, or even of the unknot, since that seems to be an ineffective method in practice. – Douglas Zare Sep 27 2011 at 21:06 3 I don't have a proof of this but from talking with Sullivan and Kusner, my impression is that the "rope length" functional on knot spaces has many very distinct local minima. So "the ideal knot" generally does not exist. There are just many ropelength minimizers. For example, for the $(p,q)$-torus knot in R^3, the space of all knots isotopic to it (i.e. embeddings of $S^1$ in $\mathbb R^3$ modulo parametrization) has the homotopy-type of a double mapping cylinder $SO_3 / \mathbb Z_p \leftarrow SO_3 \to SO_3 / \mathbb Z_q$. The $SO_3 / \mathbb Z_p$ and $SO_3 / \mathbb Z_q$ appear to be ... – Ryan Budney Sep 27 2011 at 21:44 1 Just stumbled upon this, but Patrick D. Bangert's PhD thesis bit.ly/qmEkCN [PDF] deals with just this problem. He states the minimal word problem for braids, finds an NP-complete algorithm to solve the minimal word problem, then simulates braids as elastic strings and has them relax to heuristically "solve" the minimal word problem. Running his simulation at a non-zero "temperature" would be equivalent to using simulated annealing. – Kelly Davis Sep 29 2011 at 15:04 ## 2 Answers I suspect what you are seeking is not known. One of the experts in this area is Jason Cantarella, who coauthored (with R. Kusner and J. Sullivan) one of the definitive papers on this topic, "On the minimum ropelength of knots and links," Inventiones Mathematicae, Volume 150, Number 2, 257–286, 2002 (Springer link). Jason has developed software that he calls RidgeRunner which implements a rope-length minimization, knot-tightening procedure.
(I cannot speak to the details of his software; in particular, I don't know if it is "akin to simulated annealing.") He maintains a wonderful web page with (Quicktime) movies of his minimizer approaching what you call the "ideal knot" for many (more than 100!) knots and links. Here, for example, is the endpoint of his optimizer for $8_5$ (image omitted). Update. See the just-released paper, "The Shapes of Tight Composite Knots," arXiv:1110.3262 (math.DG), by Jason Cantarella and Al LaPointe and Eric Rawdon, for a description of the RidgeRunner software mentioned above. It "proceeds by constrained gradient descent," and "is designed to stop at local minima of the ropelength function." They reduce the probability of "false local minima" with several strategies. Their dataset now contains almost 1000 knots and links. - 2 Does your exclamation point mean (a) more than 100 (excitement) or (b) more than 100 factorial? – MTS Sep 29 2011 at 21:50 @MTS: Ha! Only on MO could an excitement "bang" be misinterpreted as factorial! :-) – Joseph O'Rourke Sep 29 2011 at 23:15 The only conditions that I know of under which it's absolutely known that gradient flow will converge to a minimum energy state are when the initial configuration is planar. Zheng-Xu He proved this for his renormalized $1/r^2$ potential 'Möbius' energy in The Euler-Lagrange equation and heat flow for the Möbius energy (CPAM 53, 2000). We proved a similar theorem for 'repulsive' energies on planar polygons in An energy-driven approach to linkage unfolding (SOCG 2004). For a nonplanar configuration, you're right to think that there's a general theorem that "simulated annealing has a positive probability of converging to the global minimum" which applies to these problems, but it's ineffective in practice. The configuration space is in practice very high dimensional (3 x number of vertices) and reducing an energy functional such as ropelength generally requires a coordinated global motion of these vertices, so it's quite rare to generate such a move randomly. AFAIK, nobody knows how to estimate the probability of convergence under these circumstances, so there's no hard information on how long to run an annealer. FWIW, RidgeRunner really isn't an annealer: it generates 'coordinated motions' for tightening deterministically using a linear algebra algorithm to deflect the gradient of length for a polygon into a ropelength-decreasing direction using the active distance and curvature constraints. The algorithm is described in Knot Tightening By Constrained Gradient Descent (Experimental Math, 2011). You're welcome to play with RidgeRunner yourself if you'd like to try tightening: the software is available for download. -
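For readers who want to see the acceptance rule this discussion keeps referring to, here is a generic Metropolis-style annealing skeleton for a polygonal knot. This is my own hedged sketch: the energy function is left abstract, and a real ropelength annealer would also have to enforce self-avoidance and thickness constraints, which is the hard part.

````
import math, random

def anneal(vertices, energy, steps=100_000, t0=1.0, cooling=0.9999, step=0.01):
    """Perturb one vertex at a time; accept uphill moves with prob exp(-dE/T)."""
    current = energy(vertices)
    T = t0
    for _ in range(steps):
        i = random.randrange(len(vertices))
        old = vertices[i]
        vertices[i] = tuple(c + random.gauss(0, step) for c in old)
        proposed = energy(vertices)
        if proposed <= current or random.random() < math.exp(-(proposed - current) / T):
            current = proposed       # accept the move
        else:
            vertices[i] = old        # reject: restore the old vertex
        T *= cooling                 # geometric cooling schedule
    return vertices, current
````

As the second answer notes, single-vertex moves like these are exactly the weakness: decreasing ropelength usually requires coordinated motion of many vertices at once, so such a walk mixes very slowly.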
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.917454183101654, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/78873/list
# A model of CH + $\lnot \diamondsuit$ All of the models of CH which I know of also satisfy $\diamondsuit$. What is the easiest way to produce a model of CH wherein $\diamondsuit$ is false?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9199500679969788, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/9854/uniformly-sampling-from-convex-polytopes/9871
## Uniformly Sampling from Convex Polytopes How do you choose a point uniformly from a convex polytope $P \subset [0,1]^n$ defined by some inequalities, $Ax < b$? (Here $A$ is an m-by-n matrix, $x$ is n-by-1 and $b$ is m-by-1.) I imagine that you could start with a uniformly chosen point in the cube and do some process to get a point with $Ax < b$. - ## 5 Answers See the answers to this question — uniform sampling from polytopes forms the basis for the known algorithms for calculating their volumes. The methods from the papers mentioned in those answers mostly take the form of a random walk inside the polytope; they differ in the details of the walk and in the analysis of its mixing time. - Rejection sampling will definitely work. Take a hypercube that you know contains the polytope, sample from the hypercube, and accept only those samples that belong in the polytope. However, if the relative volume of the polytope is small you'll end up rejecting most samples, and the method might get painfully slow. Depending on your needs you might want to find, e.g., the smallest enclosing ball first, so that you can draw your uniform samples from that instead of the hypercube. See Boyd & Vandenberghe's book on Convex Optimization (it's online) for finding smallest enclosing sets. - For practicality it might be better to cover the polytope with boxes. Doing this optimally is hard/ill-posed/etc, but I would imagine it wouldn't be too challenging to fix the number of elements in a cover (start with a uniform "layering" of the hypercube) and minimize the volume of the cover under the constraints of fixed cardinality and no overlap. Then rejection sample from that. – Steve Huntsman Dec 27 2009 at 16:33 That idea won't work if you reject 999,999 out of a million samples. There must be some intrinsic way of doing this. – John Mangual Dec 27 2009 at 16:43 Also, you might set up the box covering by taking a uniform mesh on the hypercube, then discarding boxes that don't intersect the polytope at all. Keep indices for the remaining boxes as well as a supplementary Boolean variable to indicate whether or not the box is entirely contained in the polytope. Sample uniformly on indices, then rejection sample or uniformly sample on the resulting boxes depending on the Boolean variable you've already set. This middle road is probably close to the best you can do in silico. – Steve Huntsman Dec 27 2009 at 20:53 John: if you want an intrinsic mechanism, you can use MCMC techniques to sample from the density asymptotically. For example, it would be fairly easy to implement Gibbs sampling, but it might take a long time for the chain to converge. In general the problem you want to solve is very hard, computationally speaking. To find an efficient algorithm, you are going to make some assumptions about the structure of your polytope, or to use some preprocessing to find out more about it. That's what Steve is suggesting. – Simon Barthelmé Dec 29 2009 at 10:37 - If you use MATLAB, cprnd on the File Exchange solves the problem. - A case with a fast simple method: to sample the "right simplex" $\sum x_i \le 1,\ x_i \ge 0$: 1. sample on the face $\sum x_i = 1$ by taking i.i.d. exponentials scaled to sum 1; 2. scale by $U^{1/\dim}$, where $U$ is uniform on $[0,1]$ and $\dim$ is the dimension.
(I have no idea how to generalize this.) In Python with NumPy, this is

````
import numpy as np

def random_simplex_sum1(N, dim):
    """ N uniform-random points >= 0, sum x_i == 1 """
    X = np.random.exponential(size=(N, dim))
    X /= X.sum(axis=1)[:, np.newaxis]
    return X

def random_simplex_le1(N, dim):
    """ N uniform-random points >= 0, sum x_i <= 1 """
    return random_simplex_sum1(N, dim) \
        * (np.random.uniform(size=N) ** (1.0 / dim))[:, np.newaxis]
````

-
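To complement the rejection-sampling answer above for a general polytope (not just a simplex), here is a minimal hedged sketch; the function name and interface are my own, not from any answer:

````
import numpy as np

def rejection_sample(A, b, n_samples, max_tries=100_000):
    """Uniform samples from {x in [0,1]^n : A x <= b}, by rejection from the
    unit cube. Fine for fat polytopes; painfully slow when the volume is small."""
    n = A.shape[1]
    out = []
    for _ in range(max_tries):
        x = np.random.uniform(size=n)
        if np.all(A @ x <= b):
            out.append(x)
            if len(out) == n_samples:
                break
    return np.array(out)
````

When the acceptance rate is tiny, this hits exactly the failure mode the comments discuss, and an MCMC walk inside the polytope becomes the practical alternative.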
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9123268127441406, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-statistics/182000-mann-whitney-u-test.html
# Thread: 1. ## Mann-Whitney U test While I was analysing my data (regarding the number of chemotherapy treatments received by 2 groups of patients) I noted that the median values of both groups are the same, but applying the Mann-Whitney U test showed that there was a significant difference in the number of chemotherapy treatments received. Is this possible? Thank you Chandramsk 2. I think yes. The test statistic only depends on the ordering of the data, not its actual values. You can construct a sample where the test statistic is significant but the subsample medians are arbitrarily close to each other. (The example below assumes that the data is continuous.) Suppose you had some data $X$ where the subsample medians were very different and the U statistic was significant. You could take a transformation of the data $X' = aX +b, ~~~ a \in (0, \epsilon]$ which would make both subsample medians arbitrarily close to $b$ (and hence, arbitrarily close to each other). The U statistic would be unaffected since the above transformation doesn't change the ordering of any data. 3. ## Re: Mann-Whitney U test Suppose I have a non-normal distribution with a big skew, apply a treatment to it and then want to compare the distributions (same population) pre/post treatment. I think the Mann-Whitney U test should work, right? Is there a better option?
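A quick numerical illustration of the point in reply 2, using SciPy (the sample values here are made up purely for illustration): two groups can share a median and still differ significantly under the U test.

````
from scipy.stats import mannwhitneyu

a = [1, 1, 1, 5, 5, 5, 5]   # median 5
b = [5, 5, 5, 5, 9, 9, 9]   # median 5, but stochastically larger
u, p = mannwhitneyu(a, b, alternative="two-sided")
print(u, p)                 # p comes out small despite the equal medians
````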
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9504318833351135, "perplexity_flag": "middle"}
http://nrich.maths.org/5331/index?nomenu=1
## 'Decoding Transformations' printed from http://nrich.maths.org/ In this question, each of the letters $I$, $R$, $S$ and $T$ represents a different transformation. We can do one transformation followed by another. For example, “$RS$ means do $R$, then $S$”. We can also undo transformations. For example, “$R^{-1}$ means do the inverse (opposite) of $R$”. Here are the effects of some transformations on a shape. Can you describe the transformations $I$, $R$, $S$ and $T$? What single transformation has the same effect as $R S T I R^{-1}S^{-1}T^{-1}I^{-1}$? This problem is the first of three related problems. The follow-up problems are Combining Transformations and Simplifying Transformations.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 0, "mathjax_asciimath": 2, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8847764730453491, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/tagged/lebesgue-integral+limit
# Tagged Questions 0answers 64 views ### Show $f$ is in $L^1(\mu)$ and $\int_X f\, d\mu=\lim_{n\to \infty}\int_X f_n\, d\mu$ Suppose $f$ is in $L^1(\mu)$. Prove that for each $\epsilon > 0$, there exists a $\delta > 0$ so that $\int_E |f|\,\mathrm d\mu < \epsilon$ whenever $\mu(E) <$ ... 2answers 85 views ### Solve limits in Lebesgue integral Evaluate the limits below: (1) $\lim\limits_{n \to \infty} \int_0^n (1+\frac{x}{n})^n e^{-2x}dx$. (2) $\lim\limits_{n \to \infty} \int_0^n (1-\frac{x}{n})^n e^{\frac{x}{2}}dx$. (3) ... 1answer 47 views ### The Lebesgue Theory basic Application, get stuck OK, I am working on a very easy question but I get stuck when trying to justify my answer. I know that, in order to use Lebesgue's Dominated Convergence Theorem, there are two conditions that we ... 0answers 70 views ### $\lim_{n \to \infty} \int^n_{-n}f\,dm=\int f\,dm$ Let $f:\mathbb{R} \to \mathbb{R}$ such that $f$ is integrable over $[-n,n]$ for every $n \in \mathbb{R}$ and assume that $$\lim_{n \to \infty} \int^n_{-n}f\,dm < \infty.$$ Proposition: $f$ is ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9191791415214539, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?t=482183
Physics Forums ## Exact meaning of a local base at zero in a topological vector space... I am confused as to exactly what a local base at zero (l.b.z.) tells us about a topology. The definition given in Rudin is the following: "An l.b.z. is a collection G of open sets containing zero such that if O is any open set containing zero, there is an element of G contained in O". Ok, great. But I have seen some proofs in my functional analysis class that suggest something like the following: Any open set in the topology can be formed by taking unions (possibly uncountable) of *translations* of sets in an l.b.z. Is this true, or am I just missing something? Yes, the two are equivalent! Basically, take an open set $G$ in the topology. If $a$ is in $G$, then $G-a$ contains 0, thus we can find an element $V_a$ of the lbz such that $$V_a\subseteq G-a.$$ Thus $$a+V_a$$ contains $a$ and is contained in $G$. Now, we can write $G$ as $$G=\bigcup_{a\in G}{(a+V_a)}$$ Thus we have written $G$ as a union of translations of elements of the lbz... Quote by micromass Yes, the two are equivalent! Great, thanks! Now that I know that, I'm going to try to work out a proof. But is this discussed in Rudin, or on the web, somewhere in case I get stuck? Sorry, I posted too fast. I was going to include a proof. I've edited my post 1 with the proof... You are already familiar with a neighbourhood base in any topological space. Now, the topology on a t.v.s. (or a topological group for that matter) is translation-invariant. This is because "translation by a fixed g" $T_g:x\mapsto x+g$ is a homeomorphism (which is because addition is by definition continuous, and $T_g$ is obviously invertible). So it suffices to consider the neighbourhood base of any point, in particular 0.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9292677640914917, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/288656/product-rule-of-stochastic-exponents
# Product rule of stochastic exponents We know that for standard exponents, $(e^x)(e^y)=e^{x+y}$. What is the product rule for stochastic exponents? $E_n(U)E_n(V)=E_n(U+V+[U,V])$ where $U$ and $V$ are stochastic sequences, $E_n$ is the stochastic exponent and $[U,V]_n=\sum_{k=1}^n \Delta U_k\, \Delta V_k$. That is the rule, but what's the proof? Thanks :) - What's the definition of $E$ you use? – Ilya Jan 28 at 12:40 ## 1 Answer I assume that by "stochastic exponents", you mean stochastic exponentials, as in Doléans-Dade exponential semimartingales (at least, this definition is consistent with the product rule you mention). The proof of the property $$\mathcal{E}(X)\mathcal{E}(Y) = \mathcal{E}(X+Y+[X,Y])$$ for semimartingales $X$ and $Y$ can be found as the proof of Theorem II.38 of P. Protter's book "Stochastic Integration and Differential Equations", 2nd edition. -
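In the discrete-time setting the identity is easy to check by hand, since $\mathcal{E}_n(U)=\prod_{k\le n}(1+\Delta U_k)$ and $(1+\Delta U_k)(1+\Delta V_k)=1+\Delta U_k+\Delta V_k+\Delta U_k\Delta V_k$, i.e. one plus the increment of $U+V+[U,V]$. A quick numerical sanity check of this (my own sketch, using random increments):

````
import numpy as np

rng = np.random.default_rng(0)
dU = rng.normal(size=50)                 # increments of U
dV = rng.normal(size=50)                 # increments of V

def stoch_exp(d):
    """Discrete stochastic exponential: E_n = prod_{k<=n} (1 + increment_k)."""
    return np.cumprod(1.0 + d)

lhs = stoch_exp(dU) * stoch_exp(dV)
rhs = stoch_exp(dU + dV + dU * dV)       # increments of U + V + [U,V]
print(np.allclose(lhs, rhs))             # True
````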
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9178298711776733, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/140922/karhunen-loeve-expansion-of-poisson-process?answertab=oldest
# Karhunen-Loève expansion of Poisson process Let $X_t,t\geq 0$ be a Poisson process with rate parameter $\lambda$. Compute the Karhunen-Loève expansion of $X$ on the interval $[0, T]$. How about the KL expansion of the centered process $X_t−\lambda t$? The auto-correlation function of the Poisson process is $R(s,t)=\lambda^2st+\lambda \min(s,t)$. By definition, the KL expansion should satisfy $\int^T_0 R(s,t)\phi_n(t)dt=\lambda_n \phi_n(s)$. I'm having trouble figuring out how to solve this integral equation. For the Wiener process, this link and the Wikipedia article on the KL expansion were useful. This is a mirror of this MO question. - ## 1 Answer The obvious approach works: plugging the value of $R$ into the KL integral equation and splitting the integral on $(0,T)$ into a sum of integrals on $(0,s)$ and on $(s,T)$, one gets $$\lambda_n\phi_n(s)=\lambda^2s\int_0^Tt\phi_n(t)\mathrm dt+\lambda\int_0^st\phi_n(t)\mathrm dt+\lambda s\int_s^T\phi_n(t)\mathrm dt.$$ Differentiating this twice yields $$\lambda_n\phi_n''(s)=-\lambda\phi_n(s),$$ from which an expression for the eigenfunctions $\phi_n$ and eigenvalues $\lambda_n$ follows. - Thanks; the general solution of this ODE is $\phi_n(t) = A \sin\left(\sqrt{\lambda/\lambda_n}\,t\right)$ (the cosine term vanishes, since $\phi_n(0)=0$). But to find the value of $A$, I have to plug it back into the main equation, and I cannot solve that one! – Adel Ahmadyan May 5 '12 at 15:33
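If the boundary conditions get messy, a numerical cross-check is easy: discretize the kernel on a grid and take an eigendecomposition of the resulting symmetric matrix. A minimal sketch (my own, with $\lambda = T = 1$; the eigenvectors are unnormalized grid samples of the $\phi_n$):

````
import numpy as np

lam, T, m = 1.0, 1.0, 400
t = (np.arange(m) + 0.5) * T / m              # midpoint grid on (0, T)
dt = T / m

S, U = np.meshgrid(t, t, indexing="ij")
R = lam**2 * S * U + lam * np.minimum(S, U)   # autocorrelation kernel

w, phi = np.linalg.eigh(R * dt)               # symmetric kernel -> eigh
w, phi = w[::-1], phi[:, ::-1]                # sort eigenvalues descending
print(w[:4])                                  # approximate KL eigenvalues
print(phi[:3, 0])                             # leading eigenfunction near t=0
````

The leading eigenfunction's values near $t=0$ coming out small is consistent with the boundary condition $\phi_n(0)=0$ used in the comment.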
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8909817337989807, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/240667/determining-conjugates-of-sqrt-5-sqrt-6-over-mathbbq/240679
# Determining conjugates of ${\sqrt 5 + \sqrt 6 }$ over $\mathbb{Q}$ The easy part is algebraic manipulation: $x - \left( {\sqrt 5 + \sqrt 6 } \right) = 0 \Rightarrow {x^2} = 5 + 6 + 2\sqrt {30} \Rightarrow {x^2} - 11 = 2\sqrt {30} \Rightarrow p\left( x \right) = {x^4} - 22{x^2} + 1 = 0$. Solutions to this equation are ${x_{1,2,3,4}} = \pm \sqrt 5 \pm \sqrt 6$ and these are all candidates for the conjugates. However, not all of them are necessarily conjugates of ${\sqrt 5 + \sqrt 6 }$. The hard part is proving that $p$ is the minimal polynomial of ${\sqrt 5 + \sqrt 6 }$ over $\mathbb{Q}$. Eisenstein criterion doesn't give irreducibility when taking $p\left( {x + a} \right)$. Even though $p$ has no rational zeroes, that still doesn't mean that it is irreducible over $\mathbb{Q}$. One method which I can think of would be to calculate each of 7 divisors of $p$ and show that they are not in $\mathbb{Q}\left[ X \right]$. However, that would be impractical in the next problem which has a polynomial of degree 31. How to see that all the candidates are indeed the conjugates of ${\sqrt 5 + \sqrt 6 }$ ? - ## 3 Answers You can show by a direct calculation that $\mathbb{Q}(\sqrt{5}+\sqrt{6})=\mathbb{Q}(\sqrt{5},\sqrt{6})$. The latter is the splitting field of $(x^2-5)(x^2-6)$ over $\mathbb{Q}$, hence is a Galois extension of $\mathbb{Q}$. You need to show that this field has four automorphisms, which by the fundamental theorem is the same as showing it has degree $4$ as an extension of $\mathbb{Q}$. But $[\mathbb{Q}(\sqrt{5},\sqrt{6}):\mathbb{Q}]=[\mathbb{Q}(\sqrt{5},\sqrt{6}):\mathbb{Q}(\sqrt{5})]\cdot[\mathbb{Q}(\sqrt{5}):\mathbb{Q}]$, and it's not hard to see that both factors are $2$. - The direct method is to show that $\mathbb Q(\sqrt 5+\sqrt 6) \cong \mathbb Q(\sqrt 5)(\sqrt 6)$, then show explicit automorphisms of the second field that sends $\sqrt 5+\sqrt 6$ to its conjugates. - One way to proceed would be to show that the extension field $\mathbb{Q}(\sqrt{5} + \sqrt{6}) / \mathbb{Q}$ has degree $4$. If you can do this, then since you've already found a degree $4$ polynomial for the primitive element $\alpha = \sqrt{5} + \sqrt{6}$, then that polynomial must be the minimum polynomial so the conjugates would be the ones you listed. To prove that the extension has degree $4$, you can try proving that actually $\mathbb{Q}(\sqrt{5} + \sqrt{6}) = \mathbb{Q}(\sqrt{6}, \sqrt{5})$ and this is not hard to do. -
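If one just wants to confirm the computation, a computer algebra system can both produce the minimal polynomial and check irreducibility over $\mathbb{Q}$; a small SymPy sketch (my own check, separate from the field-theoretic arguments below):

````
from sympy import sqrt, symbols, minimal_polynomial, factor

x = symbols('x')
print(minimal_polynomial(sqrt(5) + sqrt(6), x))   # x**4 - 22*x**2 + 1
print(factor(x**4 - 22*x**2 + 1))                 # does not factor over Q
````

Of course this does not replace the degree argument in the answers, which explains *why* the extension has degree $4$.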
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 30, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9720661640167236, "perplexity_flag": "head"}
http://mathoverflow.net/questions/18877/are-there-countable-index-subrings-of-the-reals
## Are there countable index subrings of the reals? 1. Does ${\mathbb R}$ have proper, countable index subrings? By countable I mean finite or countably infinite. By subring I mean any additive subgroup which is closed under multiplication (I don't care if it contains $1$.) By index, I mean index as an additive subgroup. 2. Given some real number $x$, when is it possible to find a countable index subring of ${\mathbb R}$ which does not contain $x$? - 3 Not measurable ones. Every measurable proper subgroup of ${\mathbf R}$ has measure zero, and thus so does a union of countably many of its cosets. – Jonas Meyer Mar 20 2010 at 21:26 1 For Borel (or analytic) subrings you can say even more ... a proper one has Hausdorff dimension zero. – Gerald Edgar Mar 20 2010 at 22:01 ## 3 Answers Perhaps surprisingly, it turns out that such subrings do exist. This was proved in Section 2 of my paper: Simon Thomas, Infinite products of finite simple groups II, J. Group Theory 2 (1999), 401--434. The basic idea of the proof is quite simple. Clearly the ring of $p$-adic integers has countable index in the field of $p$-adic numbers. Now the $p$-adic integers are the valuation ring of the obvious valuation on the field of $p$-adic numbers ... and it turns out to be enough to show that $\mathbb{C}$ has an analogous valuation. This is true because $\mathbb{C}$ is isomorphic to the field of Puiseux series over the algebraic closure of $\mathbb{Q}$, which has an appropriate valuation. - +1: Very nice, Simon! – François G. Dorais♦ Mar 21 2010 at 3:06 +1 This is great! – Harry Gindi Mar 21 2010 at 3:23 Nice answer! This is really interesting (I'm reading your paper now.) – Fabrizio Polo Mar 21 2010 at 9:06 Simon Thomas's approach answers Question 2 too. The answer is that for any nonzero real number $x$ there exists such a subring (possibly without 1) not containing $x$. Proof: Let $K$ be the Puiseux series field `$\overline{\mathbf{Q}}((t^{\mathbf{Q}}))$`, let $A$ be its valuation ring, and let $\mathfrak{m}$ be its maximal ideal. If $x \in \mathbf{R}$ is not algebraic over $\mathbf{Q}$, then choose an identification $\mathbf{C} \simeq K$ sending $x$ to the transcendental element $1/t$, and use the subring $\mathbf{R} \cap A$. If $x \in \mathbf{R}^\times$ is algebraic over $\mathbf{Q}$, then choose an identification $\mathbf{C} \simeq K$ again, and use $\mathbf{R} \cap \mathfrak{m}$ (a subring of $\mathbf{R}$ without $1$). $\square$ Remark: If one insists on using subrings with $1$, then the answer is that such a subring not containing $x$ exists if and only if $x \notin \mathbf{Z}$. Proof: Repeat the argument above, but in the case where $x$ is algebraic (and outside $\mathbf{Z}$), use $\mathbf{R} \cap (\mathbf{Z} + \mathfrak{m})$. - By the way, rings really should have a 1, so that one can take products of any finite sequence of elements, including the empty sequence! – Bjorn Poonen Mar 21 2010 at 5:24 Thanks for the help. I have to award the answer to Simon, but your argument definitely handles question 2 so I gave you a vote. I knew a while ago that if I could miss a given transcendental then I could miss any other, but I had no idea if I could miss even one.
I also didn't know if the algebraic case needed to be treated separately. I could easily get addicted to this website. It's dangerous. – Fabrizio Polo Mar 21 2010 at 9:12 2 You did the right thing by accepting Simon's answer; my answer is really just a small addendum to his, so I would have felt guilty if you had accepted mine! And yes, this site is very dangerous... – Bjorn Poonen Mar 21 2010 at 18:10 I think the answer is no, but I didn't get anywhere. [Edit: I used to think the answer was no, but Simon Thomas convinced me otherwise.] Here is the condensed version of what I posted earlier, which seems to put serious constraints on $S$. Let $R$ be any field and let $S$ be an additive subgroup of $R$ which is closed under multiplication. If $S$ has index less than $|S|$ as an additive subgroup of $R$, then every element of $R$ is of the form $a/b$ for $a, b \in S$. To see this, pick $r \in R$ and consider the multiples $ur$ for $u \in S$. Since $S$ has index less than $|S|$, there must be $u \neq v$ in $S$ with $ur$ and $vr$ in the same coset of $S$, so that $a = ur - vr \in S$; then $r = a/b$ where $b = u - v \in S$. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 61, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9442379474639893, "perplexity_flag": "head"}
http://mathoverflow.net/questions/97138/functions-holomorphic-on-a-region-minus-a-cantor-set
## Functions holomorphic on a region minus a Cantor set Let $X$ and $Y$ be simply connected open regions of $\mathbb{C}$, and let $Z \subset X$ be a Cantor set. Assume we have a homeomorphism $f$ from $X$ to $Y$, which is holomorphic on $X \setminus Z$. Is $f$ necessarily holomorphic on $X$? - ## 6 Answers This belongs to the subject of holomorphic removability. See this Wiki article for more references. In particular, the article implies that any set with Hausdorff dimension smaller than $1$ is holomorphically removable, and if its Hausdorff dimension is greater than $1$, it is not. If it is equal to $1$, you remain puzzled. - Just a note: analytic capacity and those results you mention are for the study of compact sets removable for bounded holomorphic functions – Malik Younsi May 16 2012 at 18:16 @Malik: good point. The OP did not specify, so s/he can comment on whether this is what is being sought. – Igor Rivin May 16 2012 at 18:19 Yes, I believe that in the cases I'm interested in one can say that $f$ is bounded. – uncooltoby May 16 2012 at 18:45 1 I thought the OP's question was about a specific homeomorphism $f:X \to Y$, asking whether the fact of $f$ being holomorphic on $X-Z$ implies that it is holomorphic on all of $X$. Why then should one care that holomorphic removability is inapplicable in the case that the Hausdorff dimension is $>1$? For example, the restriction of any holomorphic map $X \to Y$ to the subset $X-Z$ extends to a holomorphic map $X \to Y$. – Lee Mosher May 17 2012 at 15:40 As I mention below, the question becomes quite different when asking about "conformal" removability; i.e. removability for conformal homeomorphisms. Of course holomorphic removability in the sense here implies conformal removability, but the converse is far from true. For example, you can easily have conformally removable sets of Hausdorff dimension 2 (but zero Lebesgue measure). – Lasse Rempe-Gillen Feb 11 at 13:14 Yes, if $Z$ has Hausdorff 1-dimensional measure $0$. Then for any $\epsilon > 0$ you can cover $Z$ by a finite number of disks the sum of whose circumferences is less than $\epsilon$. The integral of $f$ over the boundary of the union of these disks is bounded by a constant times $\epsilon$. Use this together with Morera's theorem. - Removability with respect to homeomorphisms is different from the removability with respect to bounded functions mentioned in another answer. In particular, it is not necessary to have Hausdorff dimension at most 1. Indeed, any quasicircle is removable with respect to homeomorphisms, as mentioned by Hrant. For much more complicated sets, see Jeremy Kahn's thesis "Holomorphic Removability of Julia sets": He shows that many Julia sets of quadratic polynomials are in fact removable. http://arxiv.org/abs/math/9812164 (As above, we consider the set in question to be compact.) In particular, he discusses the notion of "absolute area zero": A set $K$ has absolute area zero if there is no conformal isomorphism from the complement of $K$ to the complement of some set with positive area. Any such set is removable, and any Cantor set that is well-surrounded has absolute area zero.
On the other hand, as has been noted elsewhere, there are many examples of sets that are not holomorphically removable. The simplest example of a Cantor set would be a Cantor set of positive measure. More interesting examples are provided by Chris Bishop, as cited in Misha's answer. EDIT: You may also wish to look at the paper "Removability theorems for Sobolev functions and quasiconformal maps" by Peter Jones and Stas Smirnov, which contains a number of sufficient conditions for conformal removability: http://www.unige.ch/~smirnov/papers/hr-j.pdf Graczyk and Smirnov use these criteria to prove removability of a large class of Julia sets. - Yes, if the Cantor set has measure zero. This is a consequence of the Measurable Riemann Mapping Theorem which guarantees that the map is quasiconformal, combined with the theorem that if a quasiconformal map is conformal almost everywhere then it is conformal. - 1 @Lee and @uncooltoby: Actually, Measurable Riemann Mapping Theorem does not imply quasiconformality of the extension. What you need is that the extension is absolutely continuous on a.e. line parallel to each coordinate axis (ACL). Equivalently, you need the extension to be in $W^{2,1}$ locally. None of this follows from the measure zero assumption on the Cantor set $Z$: Even though $Z$ has measure zero, a set of horizontal lines of positive measure can still hit $Z$. On the other hand, I do not have counter-examples for measure zero Cantor sets. – Misha Jun 14 at 12:19 Ah, that's a very good point. – Lee Mosher Jun 14 at 13:49 Yes, if $H_1(E)=0$. For every $1< t\leq 2$ there are examples of Cantor sets of Hausdorff dimension $t$ which are non-removable. Of course there are also examples of $t$-dimensional sets which are removable (e.g. quasicircles). Complete characterization of removable sets is an interesting and open problem. - In Theorem 3 of this paper, for every $\alpha>1$, Chris Bishop constructs examples of Cantor sets $E\subset {\mathbb C}$ whose Hausdorff dimension is in the interval $(1, \alpha)$ and homeomorphisms $f: {\mathbb C}\to {\mathbb C}$ which are conformal outside of $E$, but are not conformal on $E$. Furthermore, in his examples, the set $f(E)$ has zero Lebesgue measure. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 46, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9334065914154053, "perplexity_flag": "head"}
http://samjshah.com/tag/calculus/
# Some Random Things I Have Liked Posted on May 10, 2013 by ## The Concept of Signed Areas In calculus, after first introducing the concept of signed areas, I came up with the “backwards problem” which really tested what kids understood. (This was before we did any integration using calculus… I always teach integration of definite integrals first with things they draw and calculate using geometry, and then things they do using the antiderivatives.) I made this last year, so apologies if I posted it last year too. [.docx] Some nice discussions/ideas came up. Two in particular: (1) One student said that for the first problem, any line that goes through (-1.5,-1) would have worked. I was kicking myself for not following that claim up with a good investigation. (2) For all problems, only a couple kids took the easy way out… most didn’t even think of it… Take the total signed area and divide it by the width of the region being integrated… That gives you the height of a horizontal line that would work. (For example, for the third problem, the line $y=\frac{2\pi+4}{7}$ would have worked.) If I taught the average value of a function in my class, I wouldn’t need to do much work. Because they would have already discovered how to find the average value of a function. And what’s nice is that it was the “shortcut”/”lazy” way to answer these questions. So being lazy but clever has tons of perks! ## Motivating that an antiderivative actually gives you a signed area I have shown this to my class for the past couple years. It makes sense to some of them, but I lose some of them along the way. I am thinking if I have them copy the “proof” down, and then explain in their own words (a) what the area function does and (b) what is going on in each step of the “proof,” it might work better. But at least I have an elegant way to explain why the antiderivative has anything to do with the area under a curve. Note: After showing them the area function, I shade in the region between $x=3$ and $x=4.5$ and ask them what the area of that bit is. If they understand the area function, they answer $F(4.5)-F(3)$. If they don’t, they answer “uhhhhhh (drool).” What’s good about this is that I say, in a handwaving way, that is why when we evaluate a definite integral, we evaluate the antiderivative at the top limit of integration, and then subtract off the antiderivative at the bottom limit of integration. Because you’re taking the bigger piece and subtracting off the smaller piece. It’s handwaving, but good enough. ## Polynomial Functions In Precalculus, I’m trying to (but being less consistent) have kids investigate key questions on a topic before we formally delve into it. To let them discover some of the basic ideas on their own, being sort of guided there. This is a packet that I used before we started talking formally about polynomials. It, honestly, isn’t amazing. But it does do a few nice things. [.docx] Here are the benefits: • The first question gets kids to remember/discover end behavior changes fundamentally based on even or odd powers. It also shows them that there is a difference between $x^2$ and $x^4$… the higher the degree, the more the polynomial likes to hang around the x-axis… • The second question just has them list everything, whether it is significant seeming or not. What’s nice is that by the time we’re done with the unit, they will have a really deep understanding of this polynomial.
But having them list what they know to start out with is fun, because we can go back and say “aww, shucks, at the beginning you were such neophytes!” • It teaches kids the idea of a sign analysis without explaining it to them. They sort of figure it out on their own. (Though we do come together as a class to talk through that idea, because that technique is so fundamental to so much.) • They discover the mean value theorem on their own. (Note: You can’t talk through the mean value theorem problem without talking about continuity and the fact that polynomials are continuous everywhere.) ## The Backwards Polynomial Puzzle As you probably know, I really like backwards questions. I did this one after we did … So I was proud that without too much help, many of my kids were really digging into finding the equations, knowing what they know about polynomials. A few years ago, I would have done this by teaching a procedure, albeit one motivated by kids. Now I’m letting them do all the heavy lifting, and I’m just nudging here and there. I know this is nothing special, but this course is new to me, so I’m just a baby at figuring out how to teach this stuff. [.docx] Tagged Calculus, Pre-Calculus # Related Rates, Yet Another Redux Posted on February 12, 2013 by I posted in 2008 how I didn’t actually find related rates all that interesting/important in calculus. The problems that I could find were contrived, and I didn’t quite get the “bigger picture.” In 2011, I posted again about something I found from a conference that used Logger Pro, was pretty interesting, and helped me get at something less formulaic. I still don’t know how I feel about related rates. I’m torn. Part of me wants to totally eliminate them from the curriculum (which means I can also possibly eliminate implicit differentiation, because right now I see one of the main purposes of implicit differentiation is to prime students for related rates). Part of me feels there is something conceptually deeper that I can get at with related rates, and I’m missing it. I still don’t have a good approach, but this year, I am starting with the premise that students need to leave with one essential truth: Oftentimes, as we change one thing, it affects a number of other things. However, the way that the other things are affected can vary greatly. Right now, to me, that’s the heart of related rates. (To be honest, it took some conversation with my co-teacher before we were able to stumble upon this essential understanding.) In order to get at this, we are starting our related rates unit with these two worksheets. A nice bonus is that it gets students to think about the shape of a graph, which is what we’ll be embarking on next. The TL;DR for the idea behind the worksheets: Students study a circle which has its radius increase by 1 cm each second, and see how that changes the area and circumference. Then students study a circle which has its area increase by 10 cm^2 each second, and see how that changes the radius and circumference. The big idea is that even though one thing is changing, that one thing affects a number of different things, and it changes them in different ways. [.docx] [.docx] (A special thanks to Bowman for making the rocket and camera problem dynamic on Geogebra.) It’s not like this is a deep investigation or they come out knowing anything super special. But the main takeaway that I want them to get from it becomes pretty apparent.
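A quick symbolic version of the two worksheet scenarios (my own sketch in SymPy, not part of the worksheets themselves):

````
import sympy as sp

t = sp.symbols('t', positive=True)

# Scenario 1: radius grows 1 cm/s, so r(t) = t.
r = t
print(sp.diff(sp.pi * r**2, t), sp.diff(2 * sp.pi * r, t))  # 2*pi*t, 2*pi

# Scenario 2: area grows 10 cm^2/s, so A(t) = 10*t and r(t) = sqrt(10*t/pi).
r2 = sp.sqrt(10 * t / sp.pi)
print(sp.simplify(sp.diff(r2, t)), sp.simplify(sp.diff(2 * sp.pi * r2, t)))
````

One input rate, several output rates, and they behave differently: in the first scenario the circumference grows at a constant rate while the area's rate keeps growing, and in the second scenario the radius's and circumference's rates both shrink over time.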
And what’s really powerful (for me, as a teacher trying to illustrate this essential understanding) is seeing the graphs of how the various things change. *** I had students finish the first packet one night. Before we started going over it, or talking about it, I started today’s class asking for a volunteer to blow up balloons. (We got a second volunteer to tie the balloons.) While he practiced breathing even breaths, I tied and taped an empty balloon to the whiteboard. Then I asked our esteemed volunteer to use one breath to blow up the first balloon. Taped it up. Again, for two breaths. Taped. Et cetera until we got a total of six balloons taped. Then I asked what things are measurable in the balloons. Bam. List. (We should have listed more. Color. What it’s made of. Thickness of rubber.) Then I asked what we did to the balloon. Added volume. A constant volume (ish) in each balloon. Which of the other things changed as a result? How did they change? This five-minute start to class reinforced the main idea (hopefully). We changed one thing. It changed a bunch of other things. But just because one thing changed in one particular way doesn’t mean that everything changed in that same way. For example, just because the volume increased at a constant rate doesn’t mean the radius changed at a constant rate. *** This is about all I got for now. I’m going to teach the rest of the topic the way I always do. It’s not up to my personal standards, but I still am struggling to get it there. I suppose to do that, I’ll have to see a more nuanced bigger picture with related rates, or find something that approaches what’s happening more visually, dynamically, or conceptually. PS. The more I mull it over, the more I think that Geogebra has to be central to my approach next year… teaching students to make sliders to change one parameter, and having them develop something that dynamically illustrates how a number of other things change. And then analyzing how those things change graphically and algebraically. (A simple example: Have a rectangle where the diagonal changes length… what gets affected? The sides, the angle between the diagonal and the sides of the rectangle, the area, the perimeter, etc. How do each of these things get affected as the diagonal changes?) # What does it mean to be going 58 mph at 2:03pm? Posted on November 1, 2012 by That’s the question I asked myself when I was trying to prepare a particular lesson in calculus. What does it mean to be going 58 mph at 2:03pm? More specifically, what does that 58 mean? You see, here’s the issue I was having… You could talk about saying “well, if you went at that speed for an hour, you’d go 58 miles.” But that’s an if. It answers the question, but it feels like a lame answer, because I only have that information for a moment. That “if” really bothered me. Fundamentally, here’s the question: how can you even talk about a rate of change at a moment, when rate of change implies something is changing? But you have a moment. A snapshot. A photograph. Not enough to talk about rates of change. And that, I realized, is precisely what I needed to make my lesson about. Because calculus is all about describing a rate of change at a moment. This gets to the heart of calculus. I realized I needed to problematize something that students find familiar and understandable and obvious. I wanted to problematize that sentence “What does it mean to be going 58 mph at 2:03pm?” And so that’s what I did. I posed the question in class, and we talked.
To be clear, this is before we talked about average or instantaneous rates of change. This turned out to be just the question to prime them into thinking about these concepts. Then after this discussion, where we didn’t really get a good answer, I gave them this sheet and had them work in their groups on it: I have to say that this sheet generated some awesome discussions. The first question had some kids calculate the average rate of change for the trip while others were saying “you can’t know how fast the car is moving at noon! you just can’t!” I loved it, because most groups identified their own issue: they were assuming that the car was traveling at a constant speed which was not a given. (They also without much guidance from me discovered the mean value theorem which I threw in randomly for part (b) and (c)… which rocked my socks off!) As they went along and did the back side of the sheet, they started recognizing that the average rate of change (something that wasn’t named, but that they were calculating) felt like it would be a more accurate prediction of what’s truly going on in the car when you have a shorter time period. In case this isn’t clear to you because you aren’t working on the sheet: think about if you knew the start time and stop time for a 360 mile trip that started at 2pm and ended at 8pm. Would you have confidence that at 4pm you were traveling around 60 mph? I’d say probably not. You could be stopping for gas or an early dinner, you might not be on a highway, whatever. But you don’t really have a good sense of what’s going on at any given moment between 2pm and 8pm. But if I said that if you had a 1 mile trip that started at 2pm and ended at 2:01pm, you might start to have more confidence that at around 2pm you were going about 60 mph. You wouldn’t be certain, but your gut would tell you that you might feel more confident in that estimate than in the first scenario. And finally if I said that you had a 0.2 mile trip that started at 2pm and ended at 2:00:10pm, you would feel more confident that you were going around 72 mph at 2pm. And here’s the key… Why does your confidence in the prediction you made (using the average rate of change) increase as your time interval decreases? What is the logic behind that intuition? And almost all groups were hitting on the key point… that as your time interval goes down, the car has less time to fluctuate its speed dramatically. In six hours, a car can change up its speed a lot. But in a second, it is less likely to change up its speed a lot. Is it certain that it won’t? Absolutely not. You never have total certainty. But you are more confident in your predictions. Conclusion: You gain more certainty about how fast the car is moving at a particular moment in time as you reduce the time interval you use to estimate it. The more general mathematical conclusion: If you are estimating a rate of change of a function (for the general nice functions we deal with in calculus), if you decrease a time interval enough, the function will look less like a squiggly mess changing around a lot, and more and more like a line. Or another way to think about it: if you zoom into a function at a particular point enough, it will stop looking like a squiggly mess and more and more like a line. Thus your estimation is more accurate, because you are estimating how fast something is going when its graph is almost exactly a line (indicating a constant rate of change) rather than a squiggly mess. I liked the first day of this.
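A numerical version of that conclusion, for anyone who wants to see the numbers settle down (a hedged sketch with a made-up position function, not from the actual worksheet):

````
import math

def s(t):
    """Hypothetical position (miles) at time t (hours), with speed fluctuation."""
    return 58 * t + 0.5 * math.sin(4 * t)

t0 = 2.0
for h in [1.0, 0.1, 0.01, 0.001]:
    avg = (s(t0 + h) - s(t0)) / h      # average rate of change over [t0, t0+h]
    print(h, round(avg, 4))
print("exact:", round(58 + 2 * math.cos(4 * t0), 4))   # s'(t0)
````

The shrinking interval gives the "speed" less room to fluctuate, so the average rate homes in on the instantaneous one.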
The discussions were great, kids seemed to get into it. After that, I explicitly introduced the idea of average rate of change, and had them do some more formulaic work (this sheet, book problems). And then finally, I tried exploiting the reverse of the initial sheet. I gave students an instantaneous rate of change, and then had them make predictions in the future. It went well, but you could tell that the kids were tired of thinking about this. The discussions lagged, even though the kids actually did see the relationships I wanted them to see. My Concluding Thoughts: I came up with this idea of the first sheet the night before I was going to teach it. It wasn’t super well thought out — I was throwing it out there. It was a success. It got kids to think about some major ideas but I didn’t have to teach them these ideas. Heck, it totally reoriented the way I think about average and instantaneous rate of change. I usually have thought of it visually, like [figure omitted]. But now I have a way better sense of the conceptual undergirding to this visual, and more depth/nuance. Anyway, my kids were able to start grappling with these big ideas on their own. However, I dragged out things too long. We spent too long talking about why we have to use a lot of average rates of change of smaller and smaller time intervals to approximate the instantaneous rate of change, instead of just one average rate of change over a super duper small time interval. The reverse sheet (given the instantaneous rate of change) felt tedious for kids, and the discussion felt very similar. It would have been way better to use it (after some tweaking) to introduce linear approximations a little bit later, after a break. There was too much concept work all at once, for too long a period of time. The good news is that after some more work, we finally took the time to tie these ideas all together, which kids said they found super helpful. # Advice from Calculus Students Past, Informing the Calculus Student Present Posted on September 6, 2012 by I’ve done Standards Based Grading in Calculus for two years now. This is the start of my third year. One of the things I have my kids do at the end of each school year (not just in calculus, but in all my classes) is to write a letter to themselves. But in the past. Yes, I tell kids to compose a letter that can be sent back into time, to them, at the beginning of the year. Things they wish they had known at the start of the year that they know now that it is the end of the year. And I let them know whatever they write is up to them, and that I don’t look at this until way into the summer. We seal them up. I usually share these letters with kids the following year. When I do, I ask kids to think about commonalities they noticed in the advice from students, and also, if anything struck them. We have a conversation about that. I definitely emphasize that what works for one person might not work for another. Without further ado, here is the advice that my 2011-2012 calculus kids wrote to their past selves, which I will be sharing with my 2012-2013 calculus kids. To me, the major commonalities are… advice to do their homework even though it’s not graded, not to use reassessments as a crutch because it’s to your benefit to learn things the first time around, and to ask for help from colleagues and Mr. Shah. With that, I’m out like a light. # Wealth Inequality!
Posted on June 7, 2012 by

First off, I want to say that I took this wholesale from the North Carolina School of Science and Math. Thank you NCSSM. They have a conference each year on high school math, and each time I've gone, the speakers I've liked best are the actual teachers at the school. So any good things you might want to say about this, please don't say them to me. This is the product of the hardworking teachers over there. So please, please check out the NCSSM project here. All I will be doing in this post is talking about how I co-opted it for my classroom.

So the year came to a close in my calculus class. And in the last week, I wanted to try something new. And there was a confluence of things that led me to this. I had students teach themselves how to find the area between two curves previously, when I was out sick, but then I didn't do anything with it. I had also just seen an interesting piece on wealth inequality which piqued my interest. And I had heard of the Gini Index and the Lorenz curve before but had never pursued them seriously. So here we are, the perfect time to go whole hog. And when doing my massive internet search, I came across NCSSM's awesome activity and realized it was better than anything I could devise on my own. I really loved the scaffolding of the packet.

To start out class, I laid out the objective. I showed some photos from Occupy Wall Street. We read the protesters' posters aloud. And we focused on one of them: "This is not the world our parents wanted for us, nor the one we want for our kids." I focused on that, because it implied that there was a difference in the world from the previous generation. The protester, and others, have been saying that the rich are getting richer while the rest of us are not. And my question to the class is: do you think this is true?

We talked about it generally, and I followed it up with a conversation about how we might decide if the distribution of wealth were different now than it was before. Students shared their thoughts in pairs, and they came up with some good ideas. Many pairs talked about making a histogram (wealth vs. number of people with that wealth). Others talked about comparing the top 5% with the bottom 5%. We shared our ideas as a class. I liked making them think about how one might decide this, because the answer is: there are many ways, but they all are going to involve math. We also talked about how we could compare one wealth distribution to another — and then we realized that it became tricky, fast.

I then had them make conjectures on the actual distribution of wealth in the US. And then I showed them the true answer. The true distribution shocked them. The best part of the discussion was around what kids picked for "what they would like it to be." We got to talk about capitalism and socialism and oligarchies. I made it really clear that I wasn't here to make a case for one type of economic system or another. (Though some students had some strong opinions of their own.)

This initial prelude set up the remaining 2 days kids spent working on this. It gave them our overarching question ("Is income truly becoming more and more unequally distributed in the past 40 years? Or is it propaganda used by Occupy Wall Street protesters and sensational journalists?") And off they went.

I basically made the most minor revisions to the NCSSM document and gave it to my class… Each day, I had a goal that students had to reach, and if they didn't they were asked to finish it at home.
(At most, they only had 5 minutes of work each night. It was the last week of classes, and I wanted it to be more relaxed.) We talked at the start of each class, and I had them work in pairs. We had mini breaks/discussions to talk about big ideas. One of these included the trapezoidal rule. When introducing Riemann Sums a month or two prior, we only did them as left- and right-handed rectangles. But we saw how bad those approximations would be in this case where we only had 5 divisions… which necessitated the use of the trapezoidal rule. I didn't teach it to my kids, but they could do it. And some found a quicker formula to find the area, because they got sick of adding up all the areas of the trapezoids. Huzzah! One student made a calculator program to calculate the Gini Index because it became tedious to do the calculations.

We ended the packet by just going through the US Gini Indexes for the last 40 years. We didn't do the part asking for an investigation of other countries.

Results

We did this informally. I threw it together, I framed it in the context of Occupy Wall Street, and we went off. I didn't collect formal feedback from my students on this (it was the last week), but I had a number of students individually let me know how much they liked it. A couple told me it was their favorite thing all year — and they loved that this had applications. One told me they spoke with an economist last summer and they were talking about economics and calculus, and the economist was talking about the Lorenz curve — but the student (at the time) didn't understand it. I love that we could clear that up! Also, I had two teachers observe my class the first day we started it, and I had them participate, and they said they enjoyed thinking about the questions and working on the packet.

Using This in the Future

I love the idea of using this in the future. I hope to do so next year, earlier in the year. I think I need to make the packet a little more conceptually deep, and ask some probing questions as we go along. One type of probing question might be to ask students to figure out what a Lorenz curve looks like for the Gini Index to be 0 and what a Lorenz curve looks like for the Gini Index to be 1 (the packet just tells them that). Or to explain why the Lorenz curve cannot go above the line y=x. In other words, why it can't look like:

I also think it could easily be extended into a good poster project. One obvious idea is having students pick two countries, do a little research on them and come up with a hypothesis for which has more income inequality and justify it without mathematics. Then they would calculate the Gini Index for each. Finally they would make a poster showcasing their hypothesis and their findings. Additionally, I could have each of them (after our in-class work) read the section in the book on the Trapezoidal Rule, and make part of their poster explain this rule and how it works for any general function divided into N equally spaced trapezoids. (Since I don't formally teach it, nor do I think it needs to be formally taught.) Alternatively, I could have students (especially since we analyzed a program which calculated Riemann Sums) see if they could come up with a program that would calculate the Gini Index.

As a personal note for next year: Oh yeah, I have to remember to make a distinction between wealth inequality and income inequality. I kept conflating the two, but they are very different and I need to make sure I get that across.
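Since a student ended up writing a calculator program for exactly this, here is a rough Python version of the same computation. The Lorenz-curve points below are invented for illustration (they are not the packet's data); the area comes from the trapezoidal rule, just as we did by hand:

```python
# Gini index from a few Lorenz-curve points via the trapezoidal rule.
# Each point is (cumulative fraction of people, cumulative fraction of income);
# these sample values are made up for illustration.
points = [(0.0, 0.00), (0.2, 0.05), (0.4, 0.14), (0.6, 0.28), (0.8, 0.50), (1.0, 1.00)]

def trapezoid_area(pts):
    """Area under the piecewise-linear curve through pts (the trapezoidal rule)."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        area += (x1 - x0) * (y0 + y1) / 2  # width times average height
    return area

B = trapezoid_area(points)   # area under the Lorenz curve
gini = 1 - 2 * B             # area between y = x and the curve, divided by 1/2
print(f"Gini index is about {gini:.3f}")  # 0 = perfect equality, 1 = total inequality
```

(The formula gini = 1 - 2B is just the area between y = x and the Lorenz curve, divided by the area under y = x, which is 1/2.)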
# Algebra Bootcamp in Calculus

Posted on June 1, 2012 by

So it was the Old Math Dog who pointed out that I never wrote a post explaining how I deal with the issue of kids not knowing basic algebra in calculus. I started this practice two years ago (when I also started standards based grading) and I have seen a remarkable difference in how my classes go from my life pre-bootcamps to my life post-bootcamps…

An issue in any calculus course — and I don't care if you're talking about non-AP Calculus or AP Calculus — is the students' algebra skills. They might see $\frac{1}{4}x+\pi x -4=0$ and have no idea how to solve that. Or they might not know how to find $\tan(\pi/6)$. Or they might cancel out the $-1$s in $\frac{x^2-1}{x-1}$ to get $\frac{x^2}{x}$. (All three are worked out below for reference.) It depends on where they are coming from, but I can pretty much guarantee you that every calculus teacher says the same thing to their classes on the first day: Calculus is easy. Algebra is hard.

In my first three years of teaching calculus, I started the way all the books started, and all my calculus teacher friends started: with a precalculus review. Then we went into limits. The problem with that is that we might review some basic trigonometry, and then we wouldn't see it again for months. And by then, they had forgotten it. And who could blame them. The precalculus review unit at the beginning of the course wasn't working.

As I transitioned into Standards Based Grading, I looked at everything I taught really closely, and I honed in on the particular skills/concepts I was going to be testing. And since I'd taught calculus for a number of years prior, I knew exactly where the algebra sticking points were. Thus was born The Algebra Bootcamp.

Before our first unit on limits, I carefully analyzed what things I needed students to know to understand limits to the depth I required. I then looked at all the skills and thought of all the algebraic things, and all the old concepts, they would need in order to understand limits. And from that, I crafted an algebra bootcamp, and I made SBG skills out of just those limited skills. For example, here was our first bootcamp (which, admittedly, was longer than most of the others, because we were settling in and I was gauging where the kids were at):

and I did the same for other units… just the targeted prior knowledge that they tended to not know or struggle with…

Notice how they tend to be very concrete and specific? Like "rationalize the numerator" (because I knew we were going to be doing that when using the formal definition of the derivative) or "expand $(x+h)^n$ using the binomial theorem." Very specific things that they should know that they are going to be using in the following unit. It's kind of funny because it is a hodgepodge of little (and often unconnected) things, and they have no idea why we're doing a lot of what we're doing (why are we rationalizing the numerator? why are we doing the binomial theorem?) and I don't tell them. I say "it's our bootcamp… once training is over you'll see why these tools are useful."

It is called "bootcamp" because I am not reteaching it from scratch. I'm reviewing it, and I go through things quickly. I only do a few of them in the first quarter and maybe the start of the second quarter. By that point, we've done what we needed to do, and they die off.

The reason that this has been so effective for me is because students aren't having to relearn old topics/algebraic skills while concurrently learning the ideas of calculus.
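To be concrete, here is how those three opening examples actually shake out, the kind of two-line review a bootcamp day consists of:

$$\tfrac{1}{4}x+\pi x-4=0 \;\Rightarrow\; x\left(\tfrac{1}{4}+\pi\right)=4 \;\Rightarrow\; x=\frac{4}{\tfrac{1}{4}+\pi}=\frac{16}{1+4\pi}$$

$$\tan(\pi/6)=\frac{\sin(\pi/6)}{\cos(\pi/6)}=\frac{1/2}{\sqrt{3}/2}=\frac{1}{\sqrt{3}}$$

$$\frac{x^2-1}{x-1}=\frac{(x-1)(x+1)}{x-1}=x+1 \quad (x\neq 1), \text{ not } \frac{x^2}{x}$$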
We review these very specific things beforehand so that when we approach the calculus topics, the focus is not on the algebraic manipulation or remembering how to find the trig values of special angles or what a piecewise function is… but on the larger picture… the calculus. Remember: calculus is easy, it's the algebra which is hard. So we took care of the algebra beforehand, so we can see how easy calculus is. My kids in the past two years have made so many fewer mistakes, and we've been able to really delve into the concepts more, because I'm no longer fielding questions like "could you review how to do X?"

Doing this has also forced me to think about what the purpose of calculus class is. The more I teach it, the more I take the algebraic stuff out and the more I put the conceptual stuff in. For example, I don't use $\cot(x)$, $\sec(x)$, and $\csc(x)$ in my course anymore [1], because I wasn't trying to test them on their knowledge of trigonometry. Doing these bootcamps coupled with standards based grading has forced me to keep my eye on what I really care about: students deeply understanding the fundamental concepts of calculus. And I think you can do that without knowing how to integrate $\sec(x)\tan(x)$ just fine. [2]

[1] With the exception of $\sec^2(x)$ for the derivative of $\tan(x)$.

[2] I teach a non-AP calculus, so I have this luxury. But it's nice. Each year I strip more and more stuff off the course and add in more and more depth. And I am glad that I understand depth to mean something other than "more complicated algebra in the same old calculus problems."

# A calculus optimization poster project

Posted on May 31, 2012 by

I covered optimization very differently this year, as I started documenting here. Besides their assessments asking them to solve optimization problems both algebraically and on their calculators (and explaining how they did both), they did a poster project. Here are some of the finished products:

And here was the assignment…

I never do projects, so this was new to me. But my kids really took to it in a way I really enjoyed. I had most of them pair up and find how "volume optimized" a can is. In other words, they took photos of cans, they decided how much metal was used to make the can (the surface area… we ignored thickness), and we asked if we could recast the can to hold more volume. That was our overarching question…

We started this the week before spring break. I think students had three days in class to work on it, and then it was due after spring break (many just had some gluing to do). I provided the posterboard and colored paper. They provided the rest.

An Example Close Up

Student Thoughts

I asked students to talk about the project in their third quarter reflections. Here are all the quotes from the reflections, where I asked them to talk about the quarter, and about the can project in particular, and give advice for changes I should make next year on it:

* I am particularly proud of the project that ___ and I worked on together. We worked really hard on it and stayed after school and although it was sort of confusing at first, once we got the hang of it I began to really understand optimization… I generally prefer projects because it allows me to be more creative and think more deeply than tests so I actually did enjoy the can project. I thought that having to do the same thing for five cans got a bit repetitive so maybe if you were to do it again have the students do some different kinds of shapes or types of problems.
* The can project I was really proud of. ___ and I worked for hours on it and I think the end result was really good. Our poster was well made and looked good… I really liked the can project. I think we could have gone over the project more before starting because the goals were a little unclear.
* This may seem insignificant, but one of the most memorable things [from the quarter] for me was the way that this mountain of math for the cans project simplified into this beautiful little thing (h=2r) after doing all this calculus. It was quite cool when I saw that… [As for making changes for the project next year] Honestly, I'd ditch the poster element. It added nothing to my understanding, and ended up being more of a burden than anything… The calculus was certainly worthwhile, but that was only like a quarter of the work. The rest was repeatedly plugging the numbers into a program I made (I tried writing a python script for the first time) and writing them down. So basically, make us do more complicated (and more in general) calculus rather than a wee-bit of calculus and a lot of "filler" kinda stuff.
* I did like the can project, but I was sometimes confused about the exact requirements. It was also difficult to finish everything in class, but it worked out when we had the extension until the Monday we got back [from Spring Break].
* The most memorable event from this quarter must have been the "Can Can" project. It gave the class and I time to apply our calculus knowledge to real world concepts… I thoroughly enjoyed the can project because I felt like I understood it entirely from day 1. The amount of work when done between a pair was not tedious at all as well.
* As for the can project, I did enjoy working on it but found it to be a bit repetitive and tedious. I also think had we more time to complete it I would have had more fun with it. I did feel I understood exactly what we were doing. I think if you were to do it next year you should allow more time so students can be more creative with their project.
* The can project was definitely worthwhile. The only thing I disliked about the project was that we used the same shape every time. I think we could have optimized different objects to make it more interesting, just because the process became kind of repetitive. I think you should still do it next year if you would like but you could choose to alter it a little bit.
* I really liked the can project. For me, the can project was able to show directly the connection between what we were learning in Calculus and the real world which is something that really interests me. I felt like I understood what was being asked of us, and I think that it would be a good addition to next year's Calculus curriculum as well.
* In general, optimization was my favorite/most memorable part of the quarter. It's probably the only math I've ever done that requires logical, real world thinking at every step (for example, who cares about the optimization of the graph when it's less than x=0, because you can't have negative distance). In the past, I've felt that a lot of math does correlate closely to things in the real world, but this is the first time where it's so clear how everything relates. That said, I felt like the can project went extremely well, considering this is the first time it was done in this class.
I felt like I totally understood everything that was going on, and I enjoyed taking measurements, doing calculations, and seeing how much the lima bean companies were ripping us off (hint: they're not! It's the tuna companies that are evil). The only change I would suggest is allowing one or two more days of time to finish it. Although we got all our measurements and calculations done, the most difficult and lengthiest part of the project proved to be printing everything out, cutting it, and creating the poster.
* Volume optimization, more than any other topic, really stood out for me this quarter. When we first started doing it, I was confused and didn't entirely understand what to do. I think I was a bit taken aback by translating words/pictures into mathematical equations, but once I worked at it and practiced a bunch I became better at making that translation. I thought that the can project was very interesting, and it helped me make the translation better, as well as illuminating an important real-world connection. I was interested to see which companies used their material properly! I did feel, though, that 5 cans was more than was needed — it was basically the same thing every time, so fewer cans could have been enough to still get the point across.
* I really actually liked the can project and got pretty into it. I liked it because it felt like we were working independently on applying what we learn in class to the real world. I think it should be done again next year.
* I liked the can project a lot. It was cool figuring out how much volume a can could hold if we changed the dimensions of it. At first I did not understand what to do after I found the things I needed to know (height, radius, etc.) — if there was a group where both partners did not know how to figure out the equations needed, then the project would be difficult for them. Maybe having a quick intro/hint class discussing the project will help. I think you should do it again.
* I thought that the can project was very effective because it took what we were learning and applied it to real life. I thought it was very good in allowing us to see how optimization works in reality. I definitely think it should be done again in later years.
* Even though I like the idea of the project, my experience with it was not a good one. It certainly illustrates the idea of optimization very well and it's always nice to see a practical application of things we learn. But due to the circumstances of my partnership with [...] it felt very tedious. I don't think there is much you could do to change it if you are going to keep it, so I would recommend devoting more class time to this project.
* I liked the can project, however it was a little hard to do while also focusing on the problem set. It was also hard to focus on both of those in the week leading up to spring break, so if possible I would recommend splitting them up and doing at least one of them in weeks other than the one before the break. I did enjoy the project, though with the problems above I probably did not enjoy it as much as I could have. I would say to do it again next year because (as math classes don't always directly relate to the real world) it was cool to apply what we have learned to something we may experience once we leave school.
* I actually really enjoyed the can project. It was a nice break from regular busy work and I definitely got a good handle on the concept it was trying to teach. I would highly recommend doing it again next year.
* I enjoyed the Can Project, making our poster, and working with my group members to find the optimized volumes. I definitely think you should do it again next year.
* I thought the can project was good. I liked working with people to create something fun and pretty, and I liked the splitting up of labor rather than doing it on our own. I would say next year maybe give people a bit more time for the project — I felt very rushed doing it. Of course we ended up finishing, but kind of just barely, and so maybe a bit more time would help.
* The most memorable thing from this quarter is the can project. In the beginning, I had difficulty understanding optimization but after doing the project it made a lot more sense. Applying the concepts to real life made them much more understandable. At first I had difficulty understanding the purpose of this project, however it proved to be beneficial to me.

Thoughts for Next Year

I got a lot of good feedback from the students, and I am glad that they are comfortable enough to share their thoughts as frankly as they did. Overall I think this thing, which I whipped up in a couple hours the day or two before I decided to do it, worked out as a good thing to do before spring break. It was low key, kids were working independently (with their partners), it allowed for some mindless work and some very mindful work, and kids seemed to learn from each other. I also got the sense from their responses that they really had their understanding of what is truly going on with optimization problems solidify.

I clearly have two big changes to make next year. First, I need to give more time. I think the three class days that they had were appropriate to get the math done and the poster started, but I think that after this class time, I should give students a week to work on it at their leisure outside of class, while we forged forward with the material. That seemed to be one of the biggest problems — me thinking students could do everything in three days. Second, I think I need to give a bit more choice and make things a bit more scaffolded. For some, doing 5 cans was tedious. For others, it felt appropriate. One way to do this would be to require 3 cans, and then offer some options of other things to take their knowledge further. One question (which I almost did) I could ask them is to measure the volume of a can, and ask them if they could create a can with the same volume but smaller surface area (so it would be cheaper to produce). Or, as a student suggested, I could assign them different shapes and ask them to volume optimize those (boxes, spheres, cones, etc.).

Finally, an observation about my students' reflections. I am surprised at how many of them seem to crave or find happiness in the "real world application" activity. I just don't find "real world" stuff that interesting, compared to the mathematical ideas themselves. And most of our real world applications/problems feel forced or fake, or too simplistic compared to what really happens. So I tend to eschew these sorts of things. But these comments remind me that even though I eschew them, my kids (for some reason) like them. It helps them to find a purpose for what we're doing, and apparently they need that because I'm not able to totally convince them of the inherent beauty and interestingness of what we're doing. (Something I work on every year.)
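(An aside for anyone who wants to see the "beautiful little thing (h=2r)" that one student mentioned: fix the surface area $S$ of a closed can and maximize the volume. From $S=2\pi r^2+2\pi rh$ we get $h=\frac{S-2\pi r^2}{2\pi r}$, so $V(r)=\pi r^2 h=\frac{Sr}{2}-\pi r^3$. Then $V'(r)=\frac{S}{2}-3\pi r^2=0$ gives $S=6\pi r^2$, and substituting back, $h=\frac{6\pi r^2-2\pi r^2}{2\pi r}=2r$. Since $V''(r)=-6\pi r<0$, this really is the maximum: the optimal can is exactly as tall as it is wide.)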
# Optimization: An Introductory Activity & Project

Posted on March 15, 2012 by

I switched things around with optimization in calculus this year, and I realized if I had the time, I would spend a month on it. [1] I wonder if this shouldn't be a crux of the class. Not the stupid "maximization and minimization" problems but finding some real good ones — in economics, physics, chemistry, ordinary situations. There have got to be tons of non-crappy ones!

Anyway, I wanted to share with you two things. First, how I introduced the idea of optimization to my kids. Instead of going for the algebra/calculus approach, I wanted them to toy with the idea of maxima and minima, so I had them spend 35-40 minutes working on this in class: [doc]

I thought it was pretty cool to see my kids engaged. I rarely do things like this, but I did it (I was being videotaped during this lesson… and I had never done it before… and I had the idea to create it the night before…). It was fun! And although I cut the debrief the next day short (ugh, why?), I enjoyed seeing kids engaged in problem solving through various strategies. And there was a healthy level of competition. (The winners for the 1st and 2nd tasks got a package of jelly beans, but they were so gross I threw them out! One student gave them to his rabbit who likes jelly beans, and even the rabbit didn't like them!)

But when it came down to it, it drove home the idea that optimization is something trial and error is good for; sometimes we do it intuitively, sometimes our intuition is terrible and sometimes it is good, and sometimes we get an answer but we don't know how to prove there isn't a better answer (e.g. in problem #3). Some kids liked that this felt more "real world" than this world of algebra and graphing that we've been meandering in.

Second, I have allotted a few days for students to work on this project during class (it's the week before Spring Break and kids are overburdened, so I didn't want to have them do something which involved a lot of at-home time). They've been working on it this week, and I've heard some good conversations thus far. (They're doing this in pairs, and I have one group of three.) The fundamental question is: with a given surface area, what are the dimensions of a cylinder with maximal volume?

Now I don't quite know how their posters will turn out yet, or whether students will have truly gotten a lot of "mathematical" knowledge out of it. But each day, I've had a couple kids say things that indicate that this isn't a terrible project. (I don't do projects, so that's why I'm very conscientious about it.) A few said something equivalent to "Wow, the companies could be giving me x% more creamed corn!" or how they like doing artsy-crafty things. At the very least, I can pretty much be assured that students — if I ask them if there is any question that calculus can answer at the grocery store — will be able to say yes.

Next year I will probably add the reverse component (for a given volume of liquid you want to contain, how can we package it in a cylinder to minimize cost… what about a rectangular prism… what about a cube… what about a sphere… etc.?).

[1] The one thing I found in this book my friend gave me (on science and calculus) was an experiment where you shoot a laser at some height at some angle into an aquarium, so that it hits a penny at the bottom (remember the laser beam will "change" angles as it hits the water) to minimize the time it takes for the photon to travel from the laser to the penny.
I almost did it, but deciding to do it was too last minute.

# Two crazy good Do Nows

Posted on February 29, 2012 by

Recently, I've been trying to be super duper conscientious of every part of my lesson. For example, I wrote out comprehensive solutions to some calculus homework, paired my kids up, handed each pair a single solution set, and had them discuss their own work/the places they got stuck/the solutions. I actually had made enough copies for each person, but I very intentionally gave each pair a single solution set. It got kids talking. (Afterwards, I told them I actually had copies for each of them.) That's what I'm talking about — the craft of teaching. I don't always think this deeply about my actions, but when I do, the classes always go so much better.

In that vein, of super thoughtful intentional stuffs, I wanted to share two crazy good "do nows" from last week. Not because they're deep, but because they were so thought-out.

For one calculus class, I needed my kids to remember how to solve $5\ln(x)+1=0$ (that equation was going to pop up later in the lesson and they were going to have to know how to solve it). I also know my kids are terrified of logs, but they actually do know how to solve them. I threw the slide below up, I gave them 2 minutes, and by the end, all my kids knew how to solve it. I didn't say a word to them. Most didn't say a word to anyone else.

How did I get them to remember how to solve that in 120 seconds, without any talking, when they are terrified of logarithms and haven't seen them in a looong while? I can't quite articulate it, but I'm more proud of this single slide than a lot of other things I've made as a teacher. (Which is pretty much everything.) Not deep, I know. It's not teaching logs or getting at the underlying concept, I know. But for what I intended to do, recall prior knowledge, this was utter perfection. The flow from each problem to the next… it's subtle. To me, anyway, it was a thing of perfection and beauty.

The second slide is below, and I threw it up before we started talking about absolute maximums/minimums in calculus. As you can imagine, we had some good conversations. We talked about (again) whether 0.9999999… is equal to 1 or not (it is). We talked about a property of the real numbers: that between any two numbers you can always find another number (dense!). I even mentioned the idea of nonstandard analysis and hyperreal numbers.

So I know it isn't anything "special" but I was proud of these and wanted to share.

# Infection Points: The Shape of a Graph

Posted on February 23, 2012 by

Everyone here knows that I think Bowman Dickson is the bee's knees, the cat's pajamas, ovaltine! Recently he posted about how he introduces inflection points in his calculus class… and just a couple days later, I was about to introduce how we use calculus to find out what a function looks like. Usually, I introduce this in a really unengaging lecture format. But he inspired me to … copy him. And so I did, extending some of his work, and I have had an amazing few days in calculus. So I thought I'd share it with you.

The Main Point of this Post: By creating the need for a word to talk about inflection points on graphs, we actually saw the math arise naturally. And through interrogating inflection points, we were able to articulate a general understanding of concavity. In other words… the activity we did motivated the need for more general mathematical concepts.

First, definitely read Bowman's post.
All I did was formalize it, and extend it in a few ways, by making a worksheet. I put my kids in pairs and I had them work on it (.docx):

What naturally happens when students generate their graphs is they get a logistic function. (Which has a beautiful inflection point! But they don't know the word… they just see the graph.) So here we are. The students have a graph, and they've been asked to explain their graph for (a) the layperson and (b) the mathematician. Most get some of it done with their partners, and then they take it home to finish individually.

The next day, at the start of class, I assign students to work in groups of 3 (with different people than their partners the previous day). They are asked to take a giant whiteboard and:

(Now I want to give credit where credit is due. I have really been struggling with using the giant whiteboards well, and having students present their work effectively and efficiently. My dear friend Susanna, when I told her about this activity, suggested the groups, the underlining of the mathy words, etc.)

This worked splendidly. (click to enlarge)

And they had such great observations. Some groups picked up on the change from the function increasing in one way to increasing in a different way. Others talked about where the rate of change (of infected over time) was greatest. Others talked about how the function was "exponential" for the first third, seemingly linear for the middle third, and "something else" for the last third. Those gave rise to good short discussions, and we came up with the language for inflection points (which I call INFECTION POINTS!!! GET IT!?!) and concave up/down.

After they had a sense of what those words meant, I had students work in partners on the following (.docx):

The point was to get students comfortable with the ideas before we delve into the heavy mathematical lifting. It was powerful. Especially the last page, which got students thinking about patterns, exceptions, and ways to generalize. Our big conclusions:

And with that, I'm too exhausted to type more. But that's the general sense of what went on in an attempt to teach how to use calculus to analyze the shape of a function.
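If you want to play with the infection graph yourself, here is a tiny Python sketch; the logistic constants are arbitrary stand-ins, not the worksheet's. It estimates the second derivative numerically and finds where it changes sign, i.e. the inflection point, which is also where the infection spreads fastest:

```python
import math

def f(t):
    """A logistic 'number infected after t days' curve; constants are arbitrary."""
    return 400 / (1 + math.exp(-0.8 * (t - 6)))

h = 1e-3
def f2(t):
    """Central-difference estimate of the second derivative f''(t)."""
    return (f(t + h) - 2 * f(t) + f(t - h)) / h**2

# Scan for the sign change in f'': concave up before, concave down after.
ts = [i * 0.01 for i in range(1, 1200)]
for a, b in zip(ts, ts[1:]):
    if f2(a) > 0 and f2(b) <= 0:
        print(f"inflection point near t = {b:.2f}")  # prints a value near t = 6
        break
```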
http://math.stackexchange.com/questions/97428/understanding-tensor-product-surfaces?answertab=active
# Understanding tensor product surfaces

Recently I bought a book about curves and surfaces (Curves and Surfaces for Computer Graphics by David Salomon), and I'm having trouble understanding the tensor product surfaces. I think I understand the idea behind the basic curves:

• Linear interpolation
• Lagrange polynomials
• Hermite curves
• Bézier curves
• B-splines

And the basic surfaces:

• Ruled surfaces (two curves on opposite sides from u=0 to u=1, and in the w-direction each point is a linear combination of these two curves)
• Bilinear surfaces (like above, but now the two mentioned curves are linear as well)
• Translational surfaces (one curve in the u-direction and another in w, having the same point $\mathbf{P}_{00}$. Just translate them to get the surface)
• Coons patches (the four boundary curves are given, and the middle points are obtained by adding the linear interpolation in both the u- and w-direction, and then subtracting the bilinear surface made from the corners)

Can I use some of the surfaces mentioned above to get a grasp of the tensor product surfaces? In the book it says, you start with two parametric curves: $$\begin{align} \mathbf{Q}(u) &= \sum_{i=1}^n f_i(u) \mathbf{Q}_i\\ \mathbf{R}(w) &= \sum_{j=1}^m g_j(w) \mathbf{R}_j \end{align}$$ And then you can write the function $$\mathbf{P}(u,w) = \sum_{i=1}^n \sum_{j=1}^m f_i(u) g_j(w) \mathbf{P}_{ij}$$ I find it difficult to interpret this double summation (luckily the writer included matrix notation), but more importantly, where do these points $\mathbf{P}_{ij}$ come from? Like, how does a biquadratic surface work? And a bicubic surface? By the way, I'm talking about the simplest case, the interpolating polynomial surfaces -- as soon as I understand these, the others (e.g. approximating Bézier patches) won't be that difficult I guess. I hope some of you can provide a basic explanation, or perhaps referral to other readable material (mind you, I'm a mechanical engineer, so the more explanation and examples, the better). Thanks!
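Not an answer, but as a reading aid: here is the double summation spelled out in code for the simplest case, a bilinear patch ($n=m=2$ with the linear interpolation basis $f_1(t)=1-t$, $f_2(t)=t$); the four control points are arbitrary sample values. For a biquadratic or bicubic surface you would swap in quadratic or cubic basis functions and a 3x3 or 4x4 grid of $\mathbf{P}_{ij}$:

```python
import numpy as np

# P[i][j] = P_ij, each a 3D point; arbitrary sample corners for a bilinear patch.
P = np.array([[[0., 0., 0.], [0., 1., 0.]],
              [[1., 0., 0.], [1., 1., 2.]]])

def basis(t):
    """Linear interpolation basis functions: f_1(t) = 1 - t, f_2(t) = t."""
    return np.array([1 - t, t])

def surface(u, w):
    F, G = basis(u), basis(w)
    # P(u,w) = sum_i sum_j f_i(u) g_j(w) P_ij: blend the rows of P by F,
    # then blend the results by G -- the matrix form F(u)^T P G(w).
    return np.einsum('i,j,ijk->k', F, G, P)

print(surface(0.5, 0.5))  # the point at the middle of the patch
```

Read this way, the double sum is just "interpolate a curve in the $u$-direction through each row of control points, then interpolate across those curves in the $w$-direction."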
http://mathoverflow.net/questions/17396/can-a-group-be-a-finite-union-of-left-cosets-of-infinite-index-subgroups
## Can a group be a finite union of (left) cosets of infinite-index subgroups?

To be more precise (but less snappy): is there an example of a group G with finitely many infinite-index subgroups H_1, ..., H_n and elements k_1, ..., k_n such that G is the union of the left cosets k_1 H_1, ..., k_n H_n?

And what if we relax the requirement that these all be left cosets, and ask: can G be the union of finitely many such cosets, some being left cosets, others being right cosets?

If G is amenable then this can't happen, since any coset of an infinite-index subgroup must have measure 0. So this immediately rules out any abelian group G. I've tried playing around with the only non-amenable groups that I'm comfortable with, the free groups on two or more generators. A few months ago I thought I found a simple counterexample in the free group on $\aleph_0$ generators, but now I've lost my notes and am beginning to doubt I ever had such an example.

(This question was asked to me by a friend who's interested in some kind of application to model theory, but I think it's interesting as a stand-alone puzzle.)

## 1 Answer

No. This follows from a beautiful theorem of B.H. Neumann: Let $G$ be a group. If $\{x_iH_i\}_{i=1}^n$ is a covering of $G$ by cosets of proper subgroups, then $n \geq \min_{i} [G:H_i]$. Explicitly, this is Lemma 4.1 in http://www.math.uga.edu/~pete/Neumann54.pdf

As Neumann remarks, the identity $gH = (g H g^{-1}) g$ shows that it is no loss of generality to restrict to coverings by left cosets.

That was quick! Thanks, actually I read about this a long time ago and forgot that it applied to any group G... – John Goodrick Mar 7 2010 at 17:37

You're quite welcome. I have had a side interest in covering problems of various sorts for a couple of years now, and Neumann's result is one of the nicest in this area. – Pete L. Clark Mar 7 2010 at 17:42
http://physics.stackexchange.com/questions/43790/where-is-noncommutativity-in-the-state-effect-formalism-of-quantum-mechanics?answertab=active
# Where is noncommutativity in the state-effect formalism of quantum mechanics?

In quantum information theory, one can adopt the basic formalism where every system is given by an operator algebra, state preparation procedures correspond to linear functionals on that algebra (macroscopic systems preparing quantum systems), yes-no measurements correspond to projections in that algebra (quantum system affecting a macroscopic measurement device), and finally evaluating a state at a given projection is interpreted as the relative frequency of obtaining "yes" for the given projection. More general observables are given by projection-valued measures on a set of measurement outcomes, or equivalently, self-adjoint operators. The need for an operator algebra (in the notes above, for example) is apparently that the spectral projections of a self-adjoint operator must be found in the observable algebra.

The trouble I have here is that it is not clear where, in the above formalism, one is to multiply linear operators representing observables. If an observable corresponds to a measurement, hence giving a "reading" on a macroscopic device (which is stipulated in the definition of measurement), how is it possible to perform a second measurement once a first has been made? (I'm thinking of a detector here... like in Stern-Gerlach or double-slit.)

Does the need for noncommutativity come from an observed statistical uncertainty relation? Does one prepare a system in a specific way, say one of the beams of a Stern-Gerlach apparatus, and then make a measurement of two different quantities on the single prepared beam, and then try to observe a statistical uncertainty relation on the measured quantities? (i.e. does one look for an uncertainty relation in the standard deviations of the resulting measurements?) Is this how we'd determine that the observables assigned to the two measurements taken do not commute?

Note: "composed" Stern-Gerlach apparati and composition of quantum operations are not what I'm interested in. I want to know about noncommutativity of observables (which are viewed as measurements).

EDIT: It should be noted that a positive answer to the above question essentially protects the uncertainty principle from weak measurement objections... Thanks for your help in advance!

Further Edit: Please tell me if the following is flawed. In classical mechanics, every observable is given by a real-valued function on the phase space. Each such function has a Fourier decomposition. In the case of a classical radiating atom under Maxwell's equations, it is predicted that the Fourier transform of the algebra of functions (observables) yields (and is isomorphic to) the convolution algebra of an additive abelian subgroup of the real numbers. However, experimentally, the Rydberg-Ritz combination formula holds. Heisenberg drew a parallel between the convolution algebra of the group and that of the "Rydberg-Ritz rules," which yields a matrix algebra. My question originally asked, essentially, how one is to get a noncommutative algebra by looking only at measurement results. This seems to be precisely what Heisenberg did. Regarding the observables of this algebra (hermitian elements): at first glance it seems strange that Heisenberg assigns a noncommutative algebra to rules derived from spacing of frequencies of spectral lines, when Quantum Mechanics tells us to use measurement outcomes to label the eigenspaces of a hermitian operator in an operator algebra.
I was under the impression that the spectral projections of the Hermitian operator should generate the observable algebra. This sometimes holds in the commutative (classical) case by coincidence… since all the operators commute, a hermitian operator can determine a common eigenbasis for all observables, therefore the spectral projections generate the algebra. What an observable seems to do in the general (perhaps noncommutative matrix algebra) case is select an orthonormal basis with respect to which the matrices of the algebra are represented. If the observable is not invertible (e.g. a projection) this requires the additional choice of a basis for the kernel. Of course, everywhere I'm restricting my attention to finite dimensions.

The latter role for the Hermitian operator is not incompatible with the view that "measurement is final." We never need to compose two observables, but can effectively look at every operator in the algebra representing the system in terms of this basis. The noncommutativity came before this fact. (von Neumann) measurement, as orthogonal projection into eigenspaces of the given observable, seems to make more sense from this viewpoint as well. Would a genuine physicist tell me if I'm off the mark here?

## 2 Answers

Let us formulate the question in mathematical terms. We have a system in a general state $\rho$, and two observables $A$ and $B$, which do not commute with each other ($AB\not= BA$). To prove this non-commutativity, you just have to measure the observable $AB$ (which means measure first $B$ and then $A$, and take as the result the product of the two single measurements) several times, and estimate the value $\langle AB\rangle_{\rho}$, then do the same for the observable $BA$, estimating $\langle BA\rangle_{\rho}$. If $\langle AB\rangle_{\rho}\not=\langle BA\rangle_{\rho}$ for one $\rho$, then $A$ and $B$ cannot commute. If you get an equality for all possible initial states $\rho$, it means that the two observables commute with each other. Basically what I have written comes out of a possible mathematical definition of commutativity of two observables: $[A,B]=0$ if and only if $$\|AB-BA\|_\infty\equiv\sup_{\rho} \left|\text{Tr}[\rho(AB-BA)]\right|=0.$$

The trouble I'm having is, measuring A and then B seems the same as measuring B and then A… since measurement is "final" in what I'm considering. Any interaction with a macroscopic piece of equipment is too violent to allow for subsequent measurement. The only way to discuss measuring A and then B is to prepare state $\rho$ and then insert a measuring apparatus to measure A and then remove that apparatus and then measure B. I don't see how this is different from measuring B and then A… if I can measure "in tandem" then one of the measurements is actually a preparation. – Jon Bannon Nov 11 '12 at 0:12

Let's assume that your measurement device is not disturbing our system. In this case measuring $A$ and then $B$ is not the same as measuring first $B$ and then $A$. Mathematically, because the two observables do not commute. Physically, because if you run the experiment, you will effectively get different results in some cases. If your problem is that the experimental device is disturbing the system, in many cases you can design a scheme to implement a measurement without disturbing it. These kinds of measurements are called Quantum Nondemolition Measurements. – Bob Nov 11 '12 at 0:35

In that case, we certainly agree. – Jon Bannon Nov 12 '12 at 0:36

The paper that you link uses ordinary quantum mechanics (e.g.
check sections 2.1.1–2.1.4). The state is given by the state operator $\rho$ and averages are obtained by the usual trace $\langle A \rangle = \mathrm{Tr}(A \rho)$. The algebra of operators can be found in many (all?) textbooks on quantum mechanics. The non-commutativity is present in that the algebra includes a non-commutative product. For some operators $AB \neq BA$. For instance, this is true for the P and X operators, as shown in ordinary textbooks on quantum mechanics.

This is true. The trouble is, though, that if I strictly view observables as corresponding to measurement procedures, I can never conduct one measurement and then a subsequent one… because the first "measurement" is actually a preparation. Physically, there is no room for composing observables. I can't, for example, compose two Stern-Gerlach apparati, since that would amount to composing two states… (density operators) not two observables. – Jon Bannon Nov 9 '12 at 17:57

@JonBannon: products of operators come in when computing conditional probabilities – Christoph Nov 9 '12 at 18:14

@Christoph: This is helpful. Can you flesh this out a bit more? – Jon Bannon Nov 9 '12 at 18:18

@JonBannon: Why not? If at time $t_1$ you perform a measurement you have to use the state $\rho(t_1)$ in the trace. After measurement this state evolves to $\rho(t_2)$ and if you perform a second measurement at $t_2$ you must use this latter state in the trace. – juanrga Nov 9 '12 at 20:37

@juanrga: I'm assuming that there is no "after the measurement"… that the measurement is such a violent interaction (like with a detector) that the system cannot evolve further. The type of interaction that would allow for further evolution would be considered a preparation. – Jon Bannon Nov 9 '12 at 22:16
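A small numerical illustration of the procedure in the first answer, with everything chosen arbitrarily for the example (numpy; $A=\sigma_x$, $B=\sigma_z$, and a pure state $\rho$): the two orderings give different expectation values, so the observables cannot commute. (Since $AB$ is not Hermitian, these "expectation values" can be complex, which is fine for the comparison.)

```python
import numpy as np

A = np.array([[0, 1], [1, 0]], dtype=complex)    # sigma_x
B = np.array([[1, 0], [0, -1]], dtype=complex)   # sigma_z
psi = np.array([1, 1j]) / np.sqrt(2)             # an arbitrary pure state
rho = np.outer(psi, psi.conj())                  # its density matrix

exp_AB = np.trace(rho @ A @ B)   # <AB>_rho
exp_BA = np.trace(rho @ B @ A)   # <BA>_rho
print(exp_AB, exp_BA)            # here -1j vs +1j: unequal, so [A, B] != 0
```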
http://www.physicsforums.com/showthread.php?p=3961676
Physics Forums

## Is potential energy real?

Potential energy has always bothered me. Is it just an accounting trick to describe that energy is always conserved (E = K + U)? Because if so, there are probably other ways we can describe this.

There is another thread here somewhere on the same topic you should look for. But anyway: Yes, PE is as real as KE.

Quote by russ_watters: There is another thread here somewhere on the same topic you should look for. But anyway: Yes, PE is as real as KE.

So far from reading the thread, the general consensus seems to point to it being fictitious. But I'll carry the conversation on in the other thread.

Hmm... must be the wrong thread then.

Quote by russ_watters: Hmm... must be the wrong thread then.

This is it: http://www.physicsforums.com/showthr...al+energy+real

Quote by Nano-Passion: Potential energy has always bothered me. Is it just an accounting trick to describe that energy is always conserved (E = K + U)? Because if so, there are probably other ways we can describe this.

In some sense, you're right. Potential energy loses its meaning in Relativity. In Relativity, the interaction between the particles is carried through a field. But the field itself becomes a physical system with its own (innumerably infinitely many) degrees of freedom. Then, the potential energy is the energy carried by the field due to its disturbance by the presence of the particles within it. But this way of looking at things is so hard to imagine that, especially when the speeds of the particles are much smaller than the speed of light in vacuum, it is still beneficial to introduce a potential energy.

One relativistic consequence of the difference between the two ways of looking at things is that, classically, the orbit of an electron around a proton is inherently unstable. Namely, as the electron revolves, it is accelerated. Accelerated charges emit electromagnetic waves. Thus, some of the energy of the proton-electron system gets radiated away in the form of electromagnetic waves (which are disturbances of the electromagnetic field). A similar thing should occur in a gravitationally bound system, although the energy emitted through gravitational waves is very much lower. For what I mean, see the Darwin Lagrangian.

Quote by Dickfore: For what I mean, see the Darwin Lagrangian.

That's funny, I've stumbled onto that page earlier today. The math formalism is a bit over my head but the concept is a bit more familiar. One thing that made me really want to question potential energy is that I derived a formula that is almost exactly analogous to it. I'm a bit hesitant to share it at this point though.

Whether potential energy... or any form of energy... is real, is debatable. What really is real are changes in energy... ΔK, ΔU, ΔE, etc. (definition of reality notwithstanding).

What is the potential energy of a charged particle moving in an electromagnetic field?

Quote by Dickfore: What is the potential energy of a charged particle moving in an electromagnetic field?

Hmm... In an electric field, between two point charges, it's $$k \frac{q_1 q_2}{r}$$ In a uniform magnetic field, with the particle's velocity perpendicular to it: F = qvB. For potential energy, we take the integral of that with respect to ? In a uniform field, velocity & B will not change. So this approach is limited.
Note, I've only completed Calc-based Physics II.

Quote by PhanthomJay: Whether potential energy... or any form of energy... is real, is debatable. What really is real are changes in energy... ΔK, ΔU, ΔE, etc. (definition of reality notwithstanding).

Fair enough.
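(A side note on Dickfore's question above, which is the standard textbook point being gestured at: the magnetic part of the Lorentz force does no work, since $\mathbf{F}=q\mathbf{v}\times\mathbf{B}$ gives $\mathbf{F}\cdot\mathbf{v}=0$, so it has no ordinary potential energy at all. What a charged particle in an electromagnetic field does admit is a velocity-dependent generalized potential, $$U = q\phi - q\,\mathbf{v}\cdot\mathbf{A},$$ where $\phi$ and $\mathbf{A}$ are the scalar and vector potentials. That is one way of seeing the thread's point that potential energy, as a standalone quantity, gets slippery once fields enter.)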
http://math.stackexchange.com/questions/65721/uses-of-lebesgues-covering-lemma
# Uses of Lebesgue's covering lemma

Consider Lebesgue's covering lemma in the following form:

Let $(X,d)$ be a compact metric space and let $\{U_i\}_{i\in I}$ be an open cover of $X$. Then there exists $\delta>0$ such that each subset $Y$ of $X$ of diameter less than or equal to $\delta$ lies within some $U_i$.

What are possibly the important and striking uses of this lemma named after a famous mathematician? I have seen only one use and that was in the derivation of the fundamental group of the circle, using $\mathbb R$ as the universal cover. However I can't imagine that this is the only one, especially because in the more general setting of covering spaces, it is possible to do without this lemma. Is this lemma more fundamentally important, befitting its name, and if so, what are some uses to convince myself?

I recall $X$ being a compact metric space in the lemma. – Asaf Karagila Sep 19 '11 at 6:25

Oh yes, sorry, fixed. Thanks. – Lit Sep 19 '11 at 6:29

## 2 Answers

If I remember correctly, the basic application of this in my topology class was to prove that continuous maps from compact metric spaces are uniformly continuous. However, the lemma is really important in algebraic topology. It is used almost everywhere you need to cut up your domain (the interval for a path or the square for a homotopy) into sufficiently small parts. From the top of my head, we used it in the proof of Seifert-van Kampen's theorem, various results on covering spaces, probably the excision theorem for singular homology, etc.

This lemma is the key ingredient in the proof that, for metrizable spaces, topological compactness is the same as sequential compactness. Cf. Munkres, Topology, chapter 3, theorem 28.2.

It's not needed at all for that proof. A sequentially compact space is easily shown to be countably compact. If it's metric, it's totally bounded, hence separable, hence Lindelöf, and a countably compact Lindelöf space is compact. Total boundedness is easy. If for some $\epsilon>0$ $X$ has no finite $\epsilon$-net, recursively construct an infinite sequence whose points are pairwise at least $\epsilon$ apart; clearly it has no convergent subsequence. – Brian M. Scott Sep 19 '11 at 22:45

@BrianM.Scott: In the proof I'm thinking of, the Lebesgue lemma is used to show that (sequentially compact) => (compact): indeed, first one shows that a sequentially compact space is totally bounded, then uses this to find a finite number of small enough balls that cover the whole space. By the Lebesgue lemma each ball is contained in one set of the cover, enabling us to choose a finite subcover. – Giuseppe Negro Sep 20 '11 at 13:21

Yours is a different approach that I didn't know. I will think about it, thank you! – Giuseppe Negro Sep 20 '11 at 13:23
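To make the first answer's basic application concrete, here is the short proof it has in mind. Let $f:(X,d)\to(Y,\rho)$ be continuous with $X$ a compact metric space, and fix $\varepsilon>0$. The sets $U_x=f^{-1}\big(B_\rho(f(x),\varepsilon/2)\big)$, for $x\in X$, form an open cover of $X$; let $\delta>0$ be a Lebesgue number for it. If $d(a,b)<\delta$, then $\{a,b\}$ has diameter less than $\delta$, so $a$ and $b$ lie in a common $U_x$, and $\rho(f(a),f(b))\le\rho(f(a),f(x))+\rho(f(x),f(b))<\varepsilon$. Hence $f$ is uniformly continuous.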
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9273366332054138, "perplexity_flag": "head"}
http://mathhelpforum.com/statistics/5388-very-very-urgent-plz-help.html
# Thread: 1. ## Very Very Urgent!! Plz Help a) how many times must a die be thrown to be sure that the same number occurs twice? b) How many times must two dice be thrown to be sure that the same total occurs at least six times? c) How many times must n dice be thrown to be sure that the same total score occurs at least p times. 2. Originally Posted by xXxSANJIxXx a) how many times must a die be thrown to be sure that the same number occurs twice? There are six different possible outcomes, so it's quite possible that none of the first six throws are the same, but the seventh throw has to be the same as one of the others. b) How many times must two dice be thrown to be sure that the same total occurs at least six times? Two dice can give any total between 2 and 12, so there are 11 possibilities. To get the same total six times, you have to realize that each of those eleven totals can happen five times before one happens six. Therefore you have to roll the dice 11x5+1 = 56 times to be sure of the answer. c) How many times must n dice be thrown to be sure that the same total score occurs at least p times. Try this with the information above and tell me if you can't get it. 3. Originally Posted by xXxSANJIxXx a) how many times must a die be thrown to be sure that the same number occurs twice? If you throw a die and it lands on a 1, what is the probability it lands on the 1 the next throw? Simple: 1/6. What about on the second time? Consider the probability of it not landing on 1. Then the probability is $(5/6)^2$. Thus, the probability that it would is, $1-(5/6)^2$ For three it is $1-(5/6)^3$ This pattern continues. You need an $n$ such that, $1-(5/6)^n\geq 1/2$ for that is the meaning of "likely". Basic computation shows that number is $n=4$ 4. Originally Posted by ThePerfectHacker $1-(5/6)^n\geq 1/2$ for that is the meaning of "likely". Basic computation shows that number is $n=4$ "likely" is different than "sure" 5. Originally Posted by Quick "likely" is different than "sure" In that case it is not possible to be sure. I assumed he meant likely, as with many probability problems. 6. Originally Posted by ThePerfectHacker In that case it is not possible to be sure. I assumed he meant likely, as with many probability problems. It's possible to be sure, I showed it in my previous post. 7. I misunderstood the question. I assumed it meant that I should find when you are likely to get the same number again as on your first roll. --- Quick, you might not be aware, but you used the "Pigeonhole Principle". Next time you want to sound smart and rely on it say "According to the pigeonhole principle...". And if you really want to sound smart say "According to Dirichlet's Pigeonhole Principle...." 8. i can't get the last one 9. Originally Posted by ThePerfectHacker If you throw a die and it lands on a 1, what is the probability it lands on the 1 the next throw? Simple: 1/6. Quick's solution is right. In a simple system where it is only the probability of occurrence that we need worry about, that the probability of a 1 occurring is 1/6 does not guarantee that you will ever see a 1, or if you have seen one that you will ever see one again. (you will with probability 1, but that is not the same thing as it will definitely happen - the details of why are too technical for this thread). RonL 10. Originally Posted by Quick There are six different possible outcomes, so it's quite possible that none of the first six throws are the same, but the seventh throw has to be the same as one of the others. 
(b) Two dice can give any total between 2 and 12, so there are 11 possibilities. To get the same total six times, you have to realize that each of those eleven totals can happen five times before one happens six. Therefore you have to roll the dice 11x5+1 = 56 times to be sure of the answer. Look at part (b) and (c) together. The only difference is that 6 is replaced with p. So look at Quick's answer to (b) and where he used 6, use p: with n dice the totals run from n to 6n, giving 5n+1 possibilities, so (5n+1)(p-1)+1 throws are enough to be sure (see the sketch below).
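A quick sanity check of the pigeonhole counts discussed above; this is a sketch in Python, and the helper name `throws_needed` is mine, not from the thread:

```python
def throws_needed(n_dice, p_repeats):
    """Worst-case number of throws of n_dice dice that forces some total
    to occur at least p_repeats times, by the pigeonhole principle.
    Totals range from n_dice to 6*n_dice, giving 5*n_dice + 1 'boxes'."""
    totals = 5 * n_dice + 1
    return totals * (p_repeats - 1) + 1

print(throws_needed(1, 2))  # part (a): 6*1 + 1 = 7 throws of one die
print(throws_needed(2, 6))  # part (b): 11*5 + 1 = 56 throws of two dice
```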
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.951556384563446, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/254814/why-must-we-distinguish-between-rational-and-irrational-numbers/254836
# Why must we distinguish between rational and irrational numbers? The difference between rational and irrational numbers is always stated as: rational numbers can be written as the ratio of two integers, and irrational numbers can't. However, why do mathematicians make a distinction between these two types of numbers? Why are integers special anyway, other than being historically significant? Is there any property that sets rational or irrational numbers apart, other than the way they are written in our number system? - 11 Why do we make distinctions between $7$ and $12$ in the first place, except for historical significance? – Hagen von Eitzen Dec 9 '12 at 20:11 2 You have a point. In United States law, it was eventually confirmed that "separate but equal" was not really equal at all. – Will Jagy Dec 9 '12 at 22:00 Did you ever wonder how to charcterise the real numbers among the complex numbers (without using real/imaginary parts, complex conjugation and such, which notions need real numbers to be defined in the first place)? – Marc van Leeuwen Dec 9 '12 at 23:03 "Rational numbers can be written as the ratio of two integers, and irrational numbers can't." I'm pretty sure that's not quite how it's defined; you're assuming all numbers are real. – Joe Z. Dec 10 '12 at 23:51 4 Well, if you don't want the Pythagoreans to drown you, it is important not to make this distinction... – Asaf Karagila Dec 12 '12 at 20:26 ## 7 Answers Here's one example of where the difference between rational numbers and irrational numbers matters. Consider a circle of circumference $1$ (in any units you choose), and suppose we have an ant (of infinitesimal size, of course) on the circle that moves forward by $f$ instantaneously once per second. Then the ant will return to its starting point if and only if $f$ is a rational number. Maybe that was a little contrived. How about this instead? Consider an infinite square lattice with a chosen point $O$. Choose another point $P$ and draw the line segment $O P$. Pick an angle $\theta$ and draw a line $L$ starting from $O$ so that the angle between $L$ and $O P$ is $\theta$. Then, the line $L$ passes through a lattice point other than $O$ if and only if $\tan \theta$ is rational. In general the difference between rational and irrational becomes most apparent when you have some kind of periodicity in space or time, as in the examples above. - 1 If "moves forward by $f$" means "travels distance $f$ along the circle", then I think your claim is false since the circumference of the circle is irrational. In this case, my feeling is $f$ need be a rational multiple of $\pi$. – Austin Mohr Dec 10 '12 at 2:23 10 @AustinMohr: My first reaction was the same as yours, but no, the answer specifies "a circle of circumference 1", so the circumference is not irrational. – ruakh Dec 10 '12 at 4:29 3 @ruakh Thanks for the clarification. I guess it's time I learned to read... – Austin Mohr Dec 10 '12 at 4:44 One thing is in how you construct them. Starting from the natural numbers (and $0$) you construct the integers by saying that $\mathbb{Z}$ is the smallest set that contains the naturals and is a group under addition. Similarly, the rationals $\mathbb{Q}$ is the smallest set containing $\mathbb{Z}$ that forms a group under multiplication (when $0$ is taken out). The reals $\mathbb{R}$ can then be constructed by defining it to be the smallest set containing $\mathbb{Q}$ in which every bounded set has a least upper bound. - 2 Thanks. 
A follow-up: How are the natural numbers constructed, or do we only start from them arbitrarily? – mage Dec 9 '12 at 20:19 7 Start with nothing (a.k.a. $0$), and keep adding one... – Zhen Lin Dec 9 '12 at 20:19 4 @ZhenLin: That will eventually construct each individual natural number (if you live long enough), but not the set of natural numbers. That is why one needs an axiom of infinity. – Marc van Leeuwen Dec 9 '12 at 22:56 Another example where they differ. Take any polynomial $a_nx^n + a_{n-1}x^{n-1} + \ldots + a_1x + a_0$ with integer coefficients. You can easily find all rational roots of that polynomial: Any rational root $\frac{p}{q}$ (with $p$, $q$ relatively prime integers) must satisfy: $p$ divides $a_0$ and $q$ divides $a_n$. So there are finitely many possibilities which you can check by hand. It's not easy in general to find irrational roots. - – eacousineau Dec 10 '12 at 2:21 1 The most important properties that polynomials (in one variable over a field) share with integers are: the division algorithm, being a principal ideal domain, and being a unique factorisation domain. You can learn a lot about it in e.g. Hungerford's Algebra. – rafaelm Dec 10 '12 at 2:37 From an algebraic perspective, if you believe in the "naturalness" of $\mathbb{R}$ or $\mathbb{C}$, then $\mathbb{Q}$ sits naturally inside them as the minimal subfield of characteristic zero. Topologically it is also worth noting that $\mathbb{Q}$ is dense in $\mathbb{R}$. These are just two properties that illustrate that $\mathbb{Q}$ is really a natural and interesting set. - 1 $\mathbb{Q}$ sits inside of every field of characteristic zero (isomorphically). That makes it very important from an algebraic perspective. – asmeurer Dec 10 '12 at 1:34 Why are integers special anyway, other than being historically significant? From a technical perspective, it is good to know that we can perform exact computations on rationals. On irrational numbers, you have to approximate unless you restrict yourself to a suitable subset of irrational numbers, like an extension such as $\mathbb Q[\sqrt2]$ or the algebraic numbers $\bar{\mathbb Q}$. But even when performing computations with such fields, many operations are internally formulated on rational numbers, which in turn are formulated on integers. So in a sense, anything you can express with rational numbers is something which you can compute in a straightforward way (although still using rationals based on arbitrary-length integers, as opposed to floating point numbers) without losing exactness. Anything involving reals has to fail there: you cannot even enter a single irrational real number unless you do so using a formula describing how to compute it from rational numbers. - The Dirichlet function, which holds different values for rationals and irrationals, is an example of a function defined for each real x, but continuous nowhere. This wouldn't have been true were the set of rationals not dense. This function is a straightforward example of a nonintegrable function. Moreover, the fact that the rationals are countable helps in proving the Lindelöf covering theorem. http://en.wikipedia.org/wiki/Lindel%C3%B6f's_lemma Also, the construction of the field of fractions of an integral domain is motivated from the construction of rationals from the integers. - I think irrational numbers are for specifying an amount of something accurately (or far more specifically if not accurately) down to an infinitely small scale, an amount that can never be reached by taking the ratio of any 2 whole numbers (integers). 
Examples: amount of liquid, exact volume of a container, exact length of a rope around a drum, density of moisture in the air. (Though, when computing these quantities, we get rational numbers. But that can't be easily known.) Rational numbers occur when counting whole/countable objects like apples, oranges, houses, trees, ballpens, etc., where getting the count of an object as a whole (ignoring sizes) is more important than measuring volume, the space occupied, or weight. Example: "how many coins do you have right now?" asks for a rational number, while "how much profit did you earn in your bank last year?" asks for a quantity that has a whole-number count plus an amount less than a whole number to specify exactness. Non-integer rational numbers still involve whole numbers. Example: I have double your number of apples, or your apples are just 50% of my apples. We know that 50% comes from 50/100 or 1/2. If we know this, 50% or 0.5 is obviously a rational number. They are accurate if expressed as fractions, compared to floating point notation, which involves rounding off when writing them down to save space. I think rational is more about finite measurement while irrational is more about infinite measurement (infinitely small approximations). I'm just answering based on my opinion and have no complete info about these two sets of numbers. But as an informal explanation, that is my answer. I hope it helps. - This is not so true. Saying that there are exactly 4 of something is no less exact than saying that the ratio between the circumference and diameter is $\pi$. – mixedmath♦ Dec 10 '12 at 7:56 @mixedmath, have you received the email I sent to the six mods for whom I could find email addresses? If not, could you please send me an email from some account you use? Note, it is just, as requested, the original text and IP address for one of the cheaters in the "sticks and stones" idiocy. – Will Jagy Dec 10 '12 at 19:19 @Will: I have - but if in the future you need to contact me, you can send me email at mixedmath [at] gmail [dot] com. – mixedmath♦ Dec 10 '12 at 20:04 @mixedmath, good, sent anyway so my browser remembers your address. – Will Jagy Dec 10 '12 at 20:25
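The rational-root test described in an earlier answer is easy to mechanize. Here is a sketch in Python (the function names are mine, purely for illustration):

```python
from fractions import Fraction

def divisors(n):
    n = abs(n)
    return [d for d in range(1, n + 1) if n % d == 0]

def rational_roots(coeffs):
    """All rational roots of a_n x^n + ... + a_0, given integer coefficients
    [a_n, ..., a_0]. Candidates p/q must have p | a_0 and q | a_n."""
    a0, an = coeffs[-1], coeffs[0]
    if a0 == 0:  # x = 0 is a root; factor out x and recurse
        return sorted(set([Fraction(0)] + rational_roots(coeffs[:-1])))
    candidates = {s * Fraction(p, q)
                  for p in divisors(a0) for q in divisors(an) for s in (1, -1)}

    def value(x):  # Horner evaluation in exact arithmetic
        acc = Fraction(0)
        for c in coeffs:
            acc = acc * x + c
        return acc

    return sorted(r for r in candidates if value(r) == 0)

print(rational_roots([2, -3, -3, 2]))  # 2x^3 - 3x^2 - 3x + 2 has roots -1, 1/2, 2
```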
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 47, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9414231777191162, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/41697/what-is-the-spin-rotation-operator-for-spin-1-2
# What is the spin rotation operator for spin > 1/2? For spin $\frac{1}{2}$, the spin rotation operator $R_\alpha(\textbf{n})=\exp(-i\frac{\alpha}{2}\vec{\sigma}\cdot\textbf{n})$ has a simple form: $$R_\alpha(\textbf{n})=\cos\biggl(\frac{\alpha}{2}\biggr)-i\vec{\sigma}\cdot\textbf{n}\sin\biggl(\frac{\alpha}{2}\biggr)$$ What about spin > $\frac{1}{2}$? - 1 I'm pretty sure the general form for the exponential would be too complicated. The first observation is that the spectrum of $-i\alpha\vec\sigma\cdot\bf n$ in a system with spin $s$ is $(-i\alpha(-s),-i\alpha(-s+1),\ldots,-i\alpha(s-1),-i\alpha s)$ and thus trigonometric functions of arguments $\frac12\alpha$ through $s\alpha$ for half-integer spins, or $0$ (i.e., a constant term) through $s\alpha$ for integer spins, would be present. – Vašek Potoček Oct 28 '12 at 22:32 The reason the exponential has such a simple form in the case $s=\frac12$ is that $\frac\alpha2$ is the only frequency allowed by the analysis I posted above. It helps here that the powers of $\vec\sigma\cdot\bf n$ are always either the identity or the original matrix. In higher dimensions, the powers $\{S_x^k\}$ sweep out a $(2s+1)$-dimensional space in a non-periodic fashion. Consider $S_z = \mathord{\rm diag}(-s,-s+1,\ldots,s-1,s)$. – Vašek Potoček Oct 28 '12 at 22:41 ## 1 Answer The same, except that the $\sigma_k$ are now not Pauli matrices but the generators of a su(2) representation of the desired spin. For example, the $3\times 3$ matrices $$\sigma_\ell:=(2\epsilon_{jk\ell})_{j,k=1:3}$$ define the spin 1 representation on 3-vectors. [Maybe the factor 2 should take a different value.] The corresponding explicit formula comes from the Rodrigues formula $$e^{X(a)}=1+\frac{\sin|a|}{|a|}X(a)+\frac{1-\cos|a|}{|a|^2}X(a)^2,$$ where $X(a)$ is the matrix mapping a vector $b$ to $X(a)b=a \times b$. For higher spin, the corresponding formula will depend on how you write the representation. Numerically, one would just diagonalize the matrix in the exponent; then computing the exponential is trivial. I don't know whether for general spin there is any advantage in having an explicit formula. - Well, the exponential form is the same, but the exponential won't be computed in the same easy formula, using just one $\cos$ and one $\sin$ of the half angle. That was justified by $-i\frac\alpha2\vec\sigma\cdot\bf n$ having just two pure imaginary eigenvalues of opposite sign. The generators in higher-dimensional representations will have an accordingly higher number of eigenvalues, e.g., an additional $0$ in the case you used for an example. – Vašek Potoček Oct 25 '12 at 22:30 Of course I am asking about the analogous formula for the expansion of the exponential in terms of cosines and sines, not about the spin matrix! – Tarek Oct 26 '12 at 9:16 One cannot guess from your question what you want unless you write it down clearly. Maybe you wish to update your question. – Arnold Neumaier Oct 27 '12 at 7:59
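Following the answer's remark that numerically one can simply diagonalize (or directly exponentiate) the matrix in the exponent, here is a sketch for arbitrary spin using NumPy/SciPy; the construction of the spin matrices is the standard ladder-operator one, and all names here are mine:

```python
import numpy as np
from scipy.linalg import expm

def spin_matrices(s):
    """Standard spin-s operators (hbar = 1) in the |s, m> basis, m = s, ..., -s."""
    m = np.arange(s, -s - 1, -1)
    Sz = np.diag(m)
    # raising operator: <m+1| S+ |m> = sqrt(s(s+1) - m(m+1))
    Sp = np.diag(np.sqrt(s * (s + 1) - m[1:] * (m[1:] + 1)), k=1)
    Sx = (Sp + Sp.T) / 2
    Sy = (Sp - Sp.T) / 2j
    return Sx, Sy, Sz

def rotation(s, alpha, n):
    """R = exp(-i * alpha * n.S) for spin s about the unit vector n."""
    n = np.asarray(n, dtype=float)
    n /= np.linalg.norm(n)
    Sx, Sy, Sz = spin_matrices(s)
    return expm(-1j * alpha * (n[0] * Sx + n[1] * Sy + n[2] * Sz))

R = rotation(1, 2 * np.pi, [0, 0, 1])
print(np.allclose(R, np.eye(3)))  # True: for integer spin a 2*pi rotation is the identity
```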
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8807221055030823, "perplexity_flag": "head"}
http://mathhelpforum.com/trigonometry/2150-trig-theory-question.html
# Thread: 1. ## Trig Theory Question In order to have an inverse, a normal function must be one-to-one. What must happen to the trigonometric functions in order for them to have inverses? Thanks a lot 2. Originally Posted by aussiekid90 In order to have an inverse, a normal function must be one-to-one. What must happen to the trigonometric functions in order for them to have inverses? Thanks a lot Actually, the proper terminology is a bijective function, but I will not go into that. (If the function is continuous and one-to-one then it has an inverse.) Yes, dreadfully true it is that the trigonometric functions have no inverses because they are not one-to-one. However, what we can do is restrict the domain. We define $y=\sin^{-1}(x)$ (not the reciprocal but rather the inverse) by $\sin(y)=x,\ -\frac{\pi}{2}\leq y\leq \frac{\pi}{2}$; notice that this is the inverse on this interval. Checking the requirements on the interval $[-\pi/2,\pi/2]$ we see that $\sin(\sin^{-1}(x))=\sin^{-1}(\sin(x))=x$. Similar things are done to the other 5 trigonometric functions; however, some domains are restricted differently than others. For example, the domain of $\cos(x)$ is restricted to $0\leq x\leq\pi$, which then becomes the range of $\cos^{-1}(x)$.
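This restriction is exactly what numerical libraries implement: `asin` only ever returns values in $[-\pi/2,\pi/2]$, so $\sin^{-1}(\sin(x))=x$ can fail outside that interval. A small illustration (standard library only):

```python
import math

x = 0.5                        # inside [-pi/2, pi/2]
print(math.asin(math.sin(x)))  # 0.5, recovered exactly

x = 2.5                        # outside the restricted interval
print(math.asin(math.sin(x)))  # 0.6415..., which is pi - 2.5, not 2.5
```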
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9447113275527954, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-statistics/156548-time-series.html
# Thread: 1. ## Time series I have a book on time series and it says in part: An example of a time series would be: $Y_t = 0.8Y_{t -1} + \epsilon_t,$ $t = 1, 2 ...$ Then they say, this series can also be expressed as: $Y_t = \displaystyle\sum_{i=1}^t{0.8^{t - i}} \epsilon_i$ How do they do it, and why do they develop the series? Thank you very much. Greetings. Dogod 2. Hello, Dogod11! I'm not familiar with Time Series. I've had to come up with my own theories. An example of a time series would be: $Y_t \:=\: 0.8Y_{t -1} + \epsilon_t, \;\;t = 1, 2 \hdots$ Then they say, this series can also be expressed as: . . $Y_t \:=\: \displaystyle\sum_{i=1}^t{0.8^{t - i}} \epsilon_i$ How do they do it, and why do they develop the series? It only makes sense if $Y_0 = 0$ . . . the initial quantity is zero. Then we have: . . $\begin{array}{cccccccccc} Y_0 &=& 0 \\ Y_1 &=& 0.8(0) + \epsilon_1 &=& \epsilon_1\\ Y_2 &=& 0.8(\epsilon_1) + \epsilon_2 &=& 0.8\epsilon_1 + \epsilon_2 \\ Y_3 &=& 0.8(0.8\epsilon_1 + \epsilon_2) + \epsilon_3 &=& 0.8^2\epsilon_1 + 0.8\epsilon_2 + \epsilon_3 \\ Y_4 &=& 0.8(0.8^2\epsilon_1 + 0.8\epsilon_2 + \epsilon_3) + \epsilon_4 &=& 0.8^3\epsilon_1 + 0.8^2\epsilon_2 + 0.8\epsilon_3 + \epsilon_4\end{array}$ And we see the pattern: . . $Y_t \;=\;0.8^{t-1}\epsilon_1 + 0.8^{t-2}\epsilon_2 + 0.8^{t-3}\epsilon_3 + \hdots + 0.8^2\epsilon_{t-2} + 0.8\epsilon_{t-1} + \epsilon_t$ which can be written: . $\displaystyle \sum^t_{i=1} 0.8^{t-i}\epsilon_i$ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ Why did they derive this formula? So we can find, say, the 20th term . . without cranking out the first 19 terms. 3. Thank you very much
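A quick numerical check of the closed form, assuming $Y_0 = 0$ as in the reply above (a sketch; the variable names are mine):

```python
import random

random.seed(0)
eps = [random.gauss(0, 1) for _ in range(21)]  # eps[1..20] used; eps[0] ignored

# recursion: Y_t = 0.8 * Y_{t-1} + eps_t, with Y_0 = 0
Y = 0.0
for t in range(1, 21):
    Y = 0.8 * Y + eps[t]

# closed form: Y_20 = sum over i = 1..20 of 0.8**(20 - i) * eps_i
closed = sum(0.8 ** (20 - i) * eps[i] for i in range(1, 21))
print(abs(Y - closed) < 1e-12)  # True
```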
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9045224189758301, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/72329/list
## Return to Answer 3 added 296 characters in body Here is a trial proof for the question over $Q_p$. Write $J[f(x)^k]$ for the general Jordan form of an irreducible $f$, being $k$ identical blocks joined by 1's in general (minimal polynomial of a block is $f^k$). Let $A$ be finite order over $Q_p$, so $A\sim\oplus J[f(x)]$ where the $f$ have $f|\Phi_m$ (cyclotomic polynomials) and finiteness implies the $f$ are irreducible (not powers). Note $\bar f$ determines $m$ up to $p$-powers, writing $m=up^v$ with $(u,p)=1$. Further note, if $\phi(p^v)\neq 1$ then $p\neq 2$. Writing $\bar\Phi_u=\prod \bar g$, one has $\bar\Phi_{up^v}=\prod\bar g^{\phi(p^v)}$, and what is more, the Jordan block corresponding to $\bar g^{\phi(p^v)}$ does not split; in other words this is the minimal polynomial of a block. This follows since the reduction (mod $p$) of the companion matrix of $f$ is as large as possible, namely itself a companion matrix (ones above the diagonal) over the field $F_p$, and so has its minimal and characteristic polynomials equal to $\bar f=\bar g^{\phi(p^v)}$. So, every reduction to $\bar f$ from the $A\sim\oplus J[f]$ decomposition has $\bar f(x)=\bar g(x)^{\phi(p^v)}$ for some irreducible $\bar g|\bar\Phi_u$, that lifts to $g|\Phi_u$. What is more, $\bar A\sim\oplus J[\bar g(x)^{\phi(p^v)}]$, and this determines the general Jordan form of $A$ uniquely as something like $A\sim\oplus J[\Phi_{pu}^{g-part}(x^{p^{v-1}})]$. The general Jordan form classifies the conjugacy type over a field, such as $Q_p$. Note that $\Phi_3\Phi_6$ and $\Phi_6^2$ give 4x4 matrices with order 6, failing for $p=2$. 2 added 93 characters in body Here is a proof for the question over $Q_p$. Write $J[f(x)^k]$ for the general Jordan form of an irreducible $f$, being $k$ identical blocks joined by 1's in general (minimal polynomial of a block is $f^k$). Let $A$ be finite order over $Q_p$, so $A\sim\oplus J[f(x)]$ where the $f$ have $f|\Phi_m$ (cyclotomic polynomials) and finiteness implies the $f$ are irreducible (not powers). Note $\bar f$ determines $m$ up to $p$-powers, writing $m=up^v$. Further note, $\bar\Phi_u\rightarrow\oplus J[\bar g(x)]$ gives $\bar\Phi_{up^v}\rightarrow\oplus J[\bar g(x)^{\phi(p^v)}]$ where $\phi(p^v)\neq 1$ as $p\neq 2$. That is, the minimal polynomial of a block of $\bar\Phi_{up^v}$ corresponding to powers of $\bar g$ is as large as possible, namely $\bar g^{\phi(p^v)}$. So, every reduction to $\bar f$ from the $A\sim\oplus J[f]$ decomposition has $\bar f(x)=\bar g(x)^{\phi(p^v)}$ for some irreducible $\bar g|\bar\Phi_u$, that lifts to $g|\Phi_u$. From this, $\bar A\sim\oplus J[\bar g(x)^{\phi(p^v)}]$ determines the general Jordan form of $A$ uniquely as $A\sim\oplus J[g(x^{\phi(p^v)})]$. This classifies it over a field, like $Q_p$. Note that, $\Phi_3\Phi_6$ and $\Phi_6^2$ give 4x4 matrices with order 6, failing for $p=2$. 1 Here is a proof for the question over $Q_p$. Write $J[f(x)^k]$ for the general Jordan form of an irreducible $f$, being $k$ identical blocks joined by 1's in general (minimal polynomial of a block is $f^k$). Let $A$ be finite order over $Q_p$, so $A\sim\oplus J[f(x)]$ where the $f$ have $f|\Phi_m$ (cyclotomic polynomials) and finiteness implies the $f$ are irreducible (not powers). 
Note $\bar f$ determines $m$ up to $p$-powers, writing $m=up^v$. Further note, $\bar\Phi_u\rightarrow\oplus J[\bar g(x)]$ gives $\bar\Phi_{up^v}\rightarrow\oplus J[\bar g(x)^{\phi(p^v)}]$ where $\phi(p^v)\neq 1$ as $p\neq 2$. That is, the minimal polynomial of a block of $\bar\Phi_{up^v}$ corresponding to powers of $\bar g$ is as large as possible, namely $\bar g^{\phi(p^v)}$. So, every reduction to $\bar f$ from the $A\sim\oplus J[f]$ decomposition has $\bar f(x)=\bar g(x)^{\phi(p^v)}$ for some irreducible $\bar g|\bar\Phi_u$, that lifts to $g|\Phi_u$. From this, $\bar A\sim\oplus J[\bar g(x)^{\phi(p^v)}]$ determines the general Jordan form of $A$ uniquely as $A\sim\oplus J[g(x^{\phi(p^v)})]$. This classifies it over a field, like $Q_p$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 105, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9299706816673279, "perplexity_flag": "head"}
http://mathhelpforum.com/geometry/1679-circle-proof.html
# Thread: 1. ## Circle proof No idea how to get this one out, i know it is probably simple, just can't see it...grrr Attached Thumbnails 2. Originally Posted by jacs No idea how to get this one out, i know it is probably simple, just can't see it...grrr There is something wrong here, or in my interpretation of this problem. $\angle CGF$ is right, and $\angle CAF$ cannot be right as $AF$ is a chord and not a tangent to the circle with centre $O$. But opposite angles in a cyclic quad add to 2 right angles! RonL 3. Looks like the question has been structured incorrectly then, since it is a scan straight from the actual paper. I will go in and smack my teacher (lol....only joking, will point it out very politely and then glare since i spent so much time trying to work it out....grrrr) jacs 4. Originally Posted by jacs Looks like the question has been structured incorrectly then, since it is a scan straight from the actual paper. I will go in and smack my teacher (lol....only joking, will point it out very politely and then glare since i spent so much time trying to work it out....grrrr) jacs You should check what I wrote, to be sure you agree with its argument. We can all make mistakes, and so benefit from others checking our work. RonL 5. Originally Posted by CaptainBlack You should check what I wrote, to be sure you agree with its argument. We can all make mistakes, and so benefit from others checking our work. RonL Speak for yourself, I never make mistakes 6. Originally Posted by ThePerfectHacker Speak for yourself, I never make mistakes lol..... heee heee yes i went over it a number of times and your reasoning looks good to me. I have tried to prove that question over and over and just couldn't get it to work, at least i know why now. thanks
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.956312894821167, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/43964?sort=votes
## Folner sequences of amenable groups of exponential growth Let $G$ be an amenable group of exponential growth and let $S$ be a finite symmetric generating set. For each $k$, let $B_{k}$ be the closed ball of radius $k$ about the identity element in the corresponding Cayley graph of $G$ and let $b_{k} = |B_{k}|$. If $\lim b_{k+1}/b_{k}$ exists, then $\lim b_{k+1}/b_{k} = \lim b_{k}^{1/k} > 1$ and this easily implies that no subsequence of the $B_{k}$ forms a Folner sequence for $G$. But is this also true for those amenable groups of exponential growth for which $\lim b_{k+1}/b_{k}$ does not exist? - @Simon: I do not know any group for which the limit $\lim b_{k+1}/b_{k}$ does not exist. – Mark Sapir Oct 28 2010 at 15:21 @Mark: see mathoverflow.net/questions/36126/… – Andreas Thom Oct 28 2010 at 15:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8546192049980164, "perplexity_flag": "head"}
http://mathhelpforum.com/algebra/96781-radicals.html
# Thread: 1. ## Radicals I have no clue how to do these, and the book I have does not help at all. $(\sqrt{3} + \sqrt{5})^2$ On this one i got stuck after $8+\sqrt{15}+\sqrt{15}$ Multiply. $(\sqrt{14}+\sqrt{5})(\sqrt{14}+\sqrt{5})$ Solve. $\sqrt[3] {x-7}=2$ Solve $\sqrt{2x+1}=2x-5$ 2. Originally Posted by alexl13 I have no clue how to do these, and the book I have does not help at all. $(\sqrt{3} + \sqrt{5})^2$ On this one i got stuck after $8+\sqrt{15}+\sqrt{15}$ e^(i*pi): you need only combine those radicals like you would with rational numbers and factorise Multiply. $(\sqrt{14}+\sqrt{5})(\sqrt{14}+\sqrt{5})$ Solve. $\sqrt[3] {x-7}=2$ Solve $\sqrt{2x+1}=2x-5$ 1. Use FOIL as normal: $(\sqrt{3} + \sqrt{5})^2 = 3 + 2\sqrt{3}\sqrt{5} + 5 = 8 + 2\sqrt{15} = 2(4+\sqrt{15})$ 2. It's the same as above but with different numbers 3. Cube to remove the radical: $x-7 = 2^3 = 8$ - easy to see that x=15 4. Square to remove the radical, use FOIL on the RHS and solve the resulting quadratic 3. Originally Posted by alexl13 I have no clue how to do these, and the book I have does not help at all. $(\sqrt{3} + \sqrt{5})^2$ On this one i got stuck after $8+\sqrt{15}+\sqrt{15} \textcolor{red}{= 8 + 2\sqrt{15}}$ ... that's all. Multiply. $(\sqrt{14}+\sqrt{5})(\sqrt{14}+\sqrt{5}) \textcolor{red}{= 19 + 2\sqrt{90} = 19 + 3\sqrt{10}}$ Solve. $\sqrt[3] {x-7}=2$ cube both sides ... $\textcolor{red}{x-7 = 8}$ $\textcolor{red}{x = 15}$ Solve $\sqrt{2x+1}=2x-5$ square both sides, then solve the resulting quadratic ... don't forget to check your results in the original equation. go ahead, try it. . 4. Originally Posted by alexl13 I have no clue how to do these, and the book I have does not help at all. $(\sqrt{3} + \sqrt{5})^2$ On this one i got stuck after $8+\sqrt{15}+\sqrt{15}$ Multiply. $(\sqrt{14}+\sqrt{5})(\sqrt{14}+\sqrt{5})$ Solve. $\sqrt[3] {x-7}=2$ Solve $\sqrt{2x+1}=2x-5$ FOIL $(\sqrt{3}+\sqrt{5})^2=(\sqrt{3})^2+\overbrace{\sqrt{3}\sqrt{5}+\sqrt{3}\sqrt{5}}^{\text{like terms}}+(\sqrt{5})^2$ $=3+2\sqrt{3}\sqrt{5}+5=3+2\sqrt{15}+5=8+2\sqrt{15}$ 5. Originally Posted by skeeter $(\sqrt{14}+\sqrt{5})(\sqrt{14}+\sqrt{5}) \textcolor{red}{= 19 + 2\sqrt{90} = 19 + 3\sqrt{10}}$ Where did you get 90 from in Q2? I got $19+2\sqrt{70}$ with $\sqrt{70} = \sqrt{14}\sqrt{5}$ 6. Originally Posted by e^(i*pi) 4. Square to remove the radical, use FOIL on the RHS and solve the resulting quadratic I got $2x+1=4x^2-20x+25$ but what confuses me is the exponent. Would I make it a square root so that it would be 4x? 7. Originally Posted by e^(i*pi) Where did you get 90 from in Q2? I got $19+2\sqrt{70}$ with $\sqrt{70} = \sqrt{14}\sqrt{5}$ my mistake ... i was thinking 5 times 18 for some reason. 8. Originally Posted by alexl13 I got $2x+1=4x^2-20x+25$ but what confuses me is the exponent. Would I make it a square root so that it would be 4x? $(\sqrt{2x+1})^2=(2x-5)^2$ $2x+1=4x^2-20x+25$ $4x^2-22x+24=0$ $2x^2-11x+12=0$ $x=\frac{-(-11)\pm\sqrt{(-11)^2-4(2)(12)}}{2(2)}$ 9. Originally Posted by alexl13 I got $2x+1=4x^2-20x+25$ but what confuses me is the exponent. Would I make it a square root so that it would be 4x? No, you can solve the quadratic using factorising, completing the square or the quadratic formula $(\sqrt{2x+1})^2=(2x-5)^2$ $2x+1 = 4x^2 - 20x + 25$ $4x^2 - 22x + 24 = 0$ $x = \frac{22 \pm \sqrt{(-22)^2 - 4(4)(24)}}{2(4)}$ As $b^2-4ac > 0$ there will be two distinct real solutions; remember to check both in the original equation, since squaring can introduce extraneous roots.
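The "check your results" warning matters because squaring both sides can introduce extraneous roots. A sketch checking the two candidates of the last problem (plain Python; the corrected quadratic $4x^2-22x+24=0$ has roots $4$ and $3/2$):

```python
import math

for x in (4.0, 1.5):  # roots of 4x^2 - 22x + 24 = 0
    lhs = math.sqrt(2 * x + 1)
    rhs = 2 * x - 5
    print(x, lhs == rhs)  # 4.0 True; 1.5 False since sqrt(4) = 2 but 2x - 5 = -2
```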
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 44, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.938970685005188, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/suvat-equations+homework
# Tagged Questions 1answer 45 views ### Projectile's angle in midflight For a missile travelling from (0,0) at angle $\theta$ (to the horizontal) and initial velocity $u$, the y (vertical) position at time t is given by $s_{y} = u\sin (\theta) t - 0.5gt^{2}$ and the x ... 2answers 90 views ### Doubt in Kinematics I know that this isn't the place for such basic questions, but I didn't find the answer to this anywhere else. It's pretty simple: some particle moves in a straight line under constant acceleration from ... 2answers 141 views ### Equation of motion for average acceleration I am trying to solve the following: A man of mass 83 kg jumps down to a concrete patio from a window ledge only 0.48 m above the ground. He neglects to bend his knees on landing, so that his ... 0answers 54 views ### At an acceleration of 2ft/s^2, how fast could I reach 9.8MPH? [closed] Question title is self explanatory. If I'm accelerating at $2\,ft/s^{2}$, how long does it take to reach $9.8\,MPH$? 0answers 54 views ### Accelerated motions of a car and a truck [closed] A car and a truck start from rest at the same instant with the car initially at some distance behind the truck. The truck has constant acceleration of 3.4 m/s^2. The car overtakes the truck within the ... 2answers 165 views ### Calculating vertical velocity component of a particle with mass, given the hit point of parabolic motion Consider the following situation: I have a particle with a given mass that at a given instant of time (let's say $t_{0}$) is placed at the system origin. The particle has a constant velocity ... 1answer 300 views ### How do I find the initial velocity in this problem? An X-ray tube gives electrons constant acceleration over a distance of $20\text{ cm}$. If their final speed is $2.0\times 10^7\text{ m/s}$, what is the electrons' acceleration? I know this ... 1answer 404 views ### Calculate acceleration and time given initial speed, final speed, and travelling distance? [closed] A motorcycle is known to accelerate from rest to 190km/h in 402m. Considering the rate of acceleration is constant, how should I go about calculating the acceleration rate and the time it took the ... 2answers 88 views ### Proof of $T=\sqrt{2y/a}$ in a uniformly accelerating object [closed] Suppose that there is an object that does a y-axis-only free fall to the ground. The initial distance from the ground is defined as $H$. How does one prove that the time the object takes to reach the ground ... 1answer 126 views ### Proving $t=(1+\sqrt{1+2hg/v^2 } ) (v/g)$ for a thrown ball If we throw a ball from the height $h$ above the earth, with initial velocity $v'$, how to prove that the time it takes the ball to reach the earth is given by: ... 0answers 74 views ### Jumping on a landing pad [closed] I'm trying to make a character jump on a landing pad that stays above him. Here is the formula I've used (everything is pretty much self-explainable, maybe except character_MaxForce, which is the total ... 0answers 200 views ### Projectile Motion I know the angle at which a projectile is launched, how far it needs to go, and also the maximum height. How can I find the initial velocity needed (disregarding air resistance)? Currently, I am ... 2answers 822 views ### How do you calculate angle of projection? At what angle should the projectile be thrown with initial velocity v in order to reach distance d? Disregard the air resistance; only gravitation acts. So far I got the equations for horizontal and ... 
0answers 109 views ### Confusing elevator Question, Please help by giving answer and detailed procedure [closed] The given data is: An elevator starts from rest and is uniformly accelerating at 2 m/s^2. After 1 second, a person throws up a ball with initial velocity 4 m/s. The hands of the thrower remain in ... 1answer 2k views ### Finding deceleration and velocity using distance and time A car is moving down a street with no brakes or gas. The car is slowing due to wind resistance and the effect of friction. The road is flat and straight. The only data I have are timings taken at 100m ... 3answers 361 views ### A freefalling body problem, only partial distance and time known Well, I've been trying to figure out a problem which I imposed on myself, so no literal values included. Unfortunately, my brain is not cooperating. The problem states: What is the height from ... 1answer 369 views ### The acceleration of a particle moving only on a horizontal plane is given by a= 3ti +4tj [closed] The acceleration of a particle moving only on a horizontal plane is given by a= 3ti +4tj, where a is in meters per second-squared and t is in seconds. At t = 0s, the position vector r= (20.0 m)i + ... 2answers 74 views ### Acceleration: Value Disparity? If we consider a ball moving at an acceleration of $5ms^{-2}$, over a time of 4 seconds, the distance covered by the ball in the first second is $5m$. In the 2nd second will $5 + 5 = 10m$. In the ... 2answers 775 views ### What do I need to do to find the stopping time of a decelerating car? [closed] The question is: A car can be stopped from initial velocity 84 km/h to rest in 55 meters. Assuming constant acceleration, find the stopping time. Sorry for my ignorance, but I need to review ... 2answers 177 views ### Why wouldn't this system of equations determine where two balls meet? A ball is thrown vertically upwards at $5\text{ m/s}$ from a roof top of $100\text{ m}$. The ball B is thrown down from the same point $2\text{ s}$ later at $20\text{ m/s}$. Where and when will ... 1answer 122 views ### I think I disprove this with kinematics, but energy says it is right! Here is my kinematics argument. For now I am only going to look at ball 2 and ball 3. Make note of the following data. $|v_0| = 10m/s$, $y_0 = 10m$, $\theta_2^0 = 30^0$, $\theta_3^0 = -45^0$, g = ...
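For the motorcycle question in the list above (rest to 190 km/h in 402 m), the constant-acceleration equations $v^2 = u^2 + 2as$ and $v = u + at$ give the answer directly; a sketch:

```python
u = 0.0            # initial speed, m/s
v = 190 / 3.6      # final speed: 190 km/h converted to m/s
s = 402.0          # distance, m

a = (v**2 - u**2) / (2 * s)  # from v^2 = u^2 + 2as
t = (v - u) / a              # from v = u + at
print(f"a = {a:.2f} m/s^2, t = {t:.2f} s")  # about 3.46 m/s^2 and 15.2 s
```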
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9374561309814453, "perplexity_flag": "middle"}
http://cdsmith.wordpress.com/2009/09/14/on-inverses-of-haskell-functions/?like=1&_wpnonce=f652223489
software, programming languages, and other ideas September 14, 2009 / cdsmith Recall that given a function $f : A \to B$, a function $g : B \to A$ is called a left inverse of f in case $g \circ f = \mathrm{id}_A$. In other words, we want g to undo whatever f did to its parameter. A function has a left inverse if and only if it is injective, a.k.a. one-to-one. Question: How can we obtain left inverses of Haskell functions automatically? In other words, I'd like to have a function `f :: a -> b` and obtain, without explicitly writing it, another function

```haskell
-- the desired specification (not valid Haskell as written):
g :: b -> a
g . f = (id :: a -> a)
```

I might first set out to accomplish the gold standard: to do this automatically, for an arbitrary domain. Alas, I wouldn't get very far. To succeed in that task would imply that the proposition $\forall P,Q : (P \Rightarrow Q) \Rightarrow (Q \Rightarrow P)$ is constructively valid. But it isn't even classically valid, so that's hopeless. Of course, if the domain of f is enumerable, I can in principle simply try all inputs to f in parallel until I find one that works… this is not terribly satisfying, however; it's completely impractical for most purposes. So what if I restrict the domain of f further? In particular, I will require that f operate polymorphically on values of some type class, and then restrict the type class to ensure that I can get inverses easily. By requiring that f be polymorphic, I can ensure that only "good" (i.e., easily invertible) things happen in the implementation of f. (By the way, we need rank-2 types for this.)

```haskell
class Fooable a where
  foo  :: Int -> a -> a
  foo' :: Int -> a -> a   -- an inverse for (foo n)
  foo' n = foo (-n)
```

I now proceed to implement an automatic inverter for functions defined from and to Fooable types.

```haskell
{-# LANGUAGE RankNTypes #-}

-- a wrapped-up inverse, built step by step as "foo" operations are applied
newtype FooInversion = FooInversion { unInversion :: Fooable a => a -> a }

instance Fooable FooInversion where
  foo n (FooInversion inv) = FooInversion (inv . foo' n)

invertFooable :: Fooable a => (forall b. Fooable b => b -> b) -> a -> a
invertFooable f = unInversion (f (FooInversion id))
```

Finished! I did something a little tricky there, I suppose — I defined functions from Fooable things to Fooable things to actually be Fooable things themselves. Not too unusual a trick to play in abstract algebra, really.

```haskell
-- Declare a Fooable instance for testing
instance Fooable Int where
  foo = (+)

-- Define a QuickCheck property to ensure it's working
prop :: Int -> Int -> Bool
prop a b = invertFooable (foo b) a == a - b
```

Unfortunately, while this worked out, it's fairly fragile. It turns out that Fooable isn't really a terribly useful type class. The only fully polymorphic functions that can be defined from Fooable things to other Fooable things are actually finite iterations of one operation. I can add other operations, sure, but the thing I'm missing is any kind of choice. Since I have no way to inspect the value I've got as input to my function, I can't make good use of if, case, pattern matching, or anything else of that form. While it might be interesting to characterize precisely what type signatures can exist in the Fooable type class without making inversion impossible, I'm going in a different direction. Suppose that Fooable contained functions to inspect properties of the value you're working with. Then we can't play the trick above quite as cleanly as one might hope.

```haskell
class Barrable a where
  look :: a -> Int
  bar  :: Int -> a -> a
  bar' :: Int -> a -> a   -- an inverse for (bar n)
  bar' n = bar (-n)
```

Now if we tried to proceed as before, how should we define look in the inversion instance? 
Really, we can only get an inverse for specific sequences of operations that we end up in, when computing in the forward direction. To implement that, we need to know the input (for the forward function) in advance, and compute with it.

```haskell
{-# LANGUAGE RankNTypes #-}

data BarInversion a = BarInversion
  { unBarInversion :: Barrable b => b -> b  -- the inverse built up so far
  , barInvertible  :: a                     -- the current forward value
  }

instance Barrable a => Barrable (BarInversion a) where
  look (BarInversion f x) = look x
  bar n (BarInversion f x) = BarInversion (f . bar'') (bar n x)
    where bar'' y = let y' = bar' n y
                    in if look y' == look x then y' else undefined

invertBarrable :: (Barrable a, Barrable b)
               => a -> (forall c. Barrable c => c -> c) -> b -> b
invertBarrable x f = unBarInversion (f (BarInversion id x))
```

This is similar to what was done above, except for two changes: 1. In addition to an inverse function, BarInversion carries around a current value at which the inverse is defined. 2. The inverse function checks to ensure that all the properties of the current value that are observable via the type class are the same, since the original function might have relied on those properties in deciding what to do. If they don't match, it gives up. The intention is that our look function doesn't really give away the house; i.e., it provides partial, not complete, information about the value. So for our test case, we'll only give partial information:

```haskell
instance Barrable Int where
  look n = n `div` 5
  bar = (+)
```

Now:

```haskell
let f = invertBarrable 5 (bar 2)
f 7  == 5
f 8  == 6
f 9  == 7
f 10 == 8
f 11 == 9
f 12 == *** undefined ***
```

In other words, as long as the input is observably the same as the point at which we performed the forward calculation, the inverse is available. If not, then the result is undefined. We've got a partial inverse, at least. It's worth noting that even if the inverse were only defined at the one point we started with, the inverse function could be valuable. Note that the type of invertBarrable only requires that x be of some Barrable type, and that the inverse operate on some Barrable type. They need not actually be the same type! This is quite useful if, for example, you want to trace some other computation through the calculation of the inverse. I think I'll write another blog post on the practice of tracing one calculation through another one using type classes… but, some other time.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 4, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.896089494228363, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/21473/linear-algebra-and-regular-orbits/21477
## Linear algebra and regular orbits If $A$ is an $n\times n$ matrix over a field, and $A^k = I$, with $k$ the least positive integer such that this occurs, then must there be some vector $v$ such that $\{ v, Av, A^2v, \ldots, A^{k-1}v \} = \{ A^iv \}$ has $k$ distinct elements in it? In other words: Must every matrix of finite multiplicative order have a regular orbit? If $A$ has prime power order, $k = p^m$, then $A^{p^{m-1}}-I$ is nonzero, so its kernel is proper, and everything outside of that kernel is a vector in a regular orbit. Over a finite field of size $q$, the index of a proper subspace is at least $q$, so we can even just choose (on average) $q$ random vectors to find one in a regular orbit. Over an infinite field, the same idea roughly says any random vector should work, as long as one can make some sort of "uniformly" distributed choice. If $A$ has order a product of two prime powers, then I am assured this is true by (a special case of) an exercise in Isaacs's Finite Group Theory. I cannot imagine an argument that does not work for arbitrary orders $k$, but I also cannot find a convincing proof even for the product of two prime powers. The sum of vectors in regular orbits of the $p$-parts of $A$ need not itself be in a regular orbit of $A$. Every matrix (over a finite field) I've tried has a regular orbit. Assuming this is easy, how does one handle the case where $A$ is an automorphism of a finite group $G$, and the order of $A$ is a product of two prime powers? In other words: Prove every automorphism of order $p^aq^b$ of a finite group has a regular orbit. Assuming the first question's answer is "yes", then what goes wrong for arbitrary orders? Isaacs's book gives an example where the general automorphism can fail to have a regular orbit, but it is impossible to compare this until I have at least some idea of why the two-prime case does work. A related version of this question is: regular orbits are quite important in permutation and (finite) matrix groups and are a standard technique in several important (solved and unsolved) problems in modular representation theory. Is there sort of a gentle introduction that puts these techniques in context? For any individual paper it is clear that what they say works, but my picture of this area is incredibly disjointed and I suspect that is not true for everyone. For instance Khukhro has an excellent book on automorphisms of p-groups with few fixed points, and many finite group theory texts have chapters on fixed-point-free automorphisms and the consequences for the group structure of the group being acted upon. However, I haven't found any "textbook" exposition of regular orbits yet. - "Prove every automorphism of order $p^aq^b$ of a finite group has a regular orbit." Isn't this also covered in Isaacs's FGT, exercise 3A.8? – Steve D Apr 19 2010 at 16:27 ## 2 Answers For your first question, I presume you also wish to insist that $k$ be the least integer such that $A^k=I$. The matrix $A$ is then similar over your field to a direct sum $B_1,\ldots,B_m$ of companion matrices of (over your field $F$) factors of $X^k-1$, say $f_1,\ldots,f_m$. Then $F^n$ decomposes as a direct sum of subspaces where $A$ acts cyclically with generator $v_i$ annihilated by $f_i(A)$. Let $v=v_1+\cdots +v_m$. Then for a polynomial $g$, $g(A)v=0$ if and only if $g(A)v_i=0$ for all $i$ if and only if $f_i\mid g$ for all $i$. Hence $F\mid g$ where $F=f_1\cdots f_m$. 
But then $F(A)u=0$ for all $u$ (so that $F$ is the minimum polynomial of $A$). If $A^l=I$ where $l < k$ then $F\mid(X^l-1)$ and then $A^l-I=0$, contrary to hypothesis. So yes, $A$ has a regular orbit. - Thanks, I think this takes care of the first problem, though I am a little worried at how close it sounds to my original idea that has counterexamples. In my language your $B_i$ are basically my $p$-parts of $A$, and your $v_i$ are representatives of special regular orbits of the $B_i$. Your sums are "direct", and so everything should work. I tried using kernels of $A^{p^i}-1$, and somehow this didn't work in an example, but I think it must be equivalent to what you've said. – Jack Schmidt Apr 15 2010 at 17:30 I should add that this is a special case of the fact that if the matrix $A$ has minimum polynomial $g$ then there is a vector $v$ such that $f(A)v=0$ iff $g\mid f$. This fact drops straight out of the theory of the rational canonical form of a matrix. – Robin Chapman Apr 15 2010 at 20:26 Thanks to Marty Isaacs for reminding me the exercise was supposed to be easy and for giving a reference to my larger question. I'll post it here, since I think some people are following it. If the order of $A$ is $p^aq^b$ then the subgroup generated by $A$ has two minimal subgroups, generated by $P=A^{p^{a-1}q^b}$ and $Q=A^{p^aq^{b-1}}$. If the orbit of $g$ under $A$ is not regular, then the stabilizer is a non-identity subgroup of $\langle A\rangle$, so it contains either $P$ or $Q$. Hence either $P$ or $Q$ centralizes $g$. Hence $g$ centralizes either $P$ or $Q$ (I am always amazed at the power of noticing "centralize" is symmetric), so $g$ is in the union $C_G(P) \cup C_G(Q)$. Both of these subgroups are proper subgroups of $G$, since $A$ acts faithfully on $G$ itself. However, $G$ is not the union of two proper subgroups, so there is some $g$ in $G \setminus ( C_G(P) \cup C_G(Q) )$, and such a $g$ represents a regular orbit. An investigation of which groups $G$ must have regular orbits is in: Horoševskiĭ, M. V. "Automorphisms of finite groups." Mat. Sb. (N.S.) 93(135) (1974), 576–587, 630. (Math. USSR Sbornik 22 (1974) 4, 584–594) MR347979 DOI: 10.1070/SM1974v022n04ABEH001707 and in particular proves that every automorphism of a nilpotent group or a semisimple (that is, Fitting-free) group has a regular orbit. The paper has exercise 3A.8 as a remark after corollary 1 (page 592 in the English translation), and corollary 3.3 as theorem 2. Its lemma 4 fixes my difficulties with dealing with the orbits prime by prime (don't look at the $p$-parts of $A$ where the obvious statement has obvious counterexamples, look at non-faithful orbits instead). I could not generalize Robin Chapman's argument to finite abelian groups, since one no longer has that finite $\mathbb{Z}/n\mathbb{Z}[t]$-modules are direct sums of cyclic modules (for instance, $\mathbb{Z}[t]/(4,t^2-1)$ has the ideal $(2,t+1)/(4,t^2-1)$ of type $C_2 \times C_4$ with $t=A$ acting as the matrix $[1,2;0,1]$; this module is non-cyclic and indecomposable). Of course $t$ has a regular orbit, but I could not simply choose a "generator". Marty Isaacs has shown me how to use Horoševskiĭ's argument to reduce to the case where $G$ is indecomposable, where presumably it is easier than I think. In the other direction, keeping $G$ elementary abelian but letting $A$ be an entire group of automorphisms, one is still quite interested in whether there is a regular orbit. 
I found this article helpful for getting an idea of how this works:

Fleischmann, Peter. "Finite groups with regular orbits on vector spaces." J. Algebra 103 (1986), no. 1, 211–215. MR860700 DOI: 10.1016/0021-8693(86)90180-8.

In particular, nilpotent groups tend to have regular orbits except when $p=2$ is involved (either in $A$ or $G$), and the specific problems with $p=2$ are addressed. Its methods for abelian groups $A$ give an alternative view of Robin Chapman's answer (basically, the paper shows that you can reduce to the algebraically closed/absolutely irreducible case, and then the $B_i$ are all $1\times1$ and the matrix $A$ is diagonal).

- Note that the same arguments given by Isaacs show that if 4 doesn't divide $|G|$, then we can let $|A|$ be divisible by three primes. – Steve D Apr 20 2010 at 16:15
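The questioner's observation that every matrix he tried over a finite field has a regular orbit is easy to test by brute force. A minimal Python sketch of such a check (not part of the original thread; the example matrix is an arbitrary choice of mine):

```python
import itertools
import numpy as np

def regular_orbit_vector(A, p):
    """Search GF(p)^n for a vector v whose orbit v, Av, ..., A^(k-1)v
    has k distinct elements, where k is the multiplicative order of A."""
    n = A.shape[0]
    # Compute k, the least positive integer with A^k = I over GF(p).
    I = np.eye(n, dtype=int)
    M, k = A % p, 1
    while not np.array_equal(M, I):
        M = (M @ A) % p
        k += 1
    for v in itertools.product(range(p), repeat=n):
        w, orbit = np.array(v), set()
        for _ in range(k):
            orbit.add(tuple(w))
            w = (A @ w) % p
        if len(orbit) == k:
            return k, v  # regular orbit found
    return k, None

# Example: over GF(7), 3 has order 6 and 2 has order 3, so this A has order 6.
A = np.array([[3, 0], [0, 2]])
print(regular_orbit_vector(A, 7))  # (6, (1, 0)): v = (1, 0) has a regular orbit
```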
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 36, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9386822581291199, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/66981-nasty-looking-but-actually-easy.html
# Thread:

1. ## nasty looking but actually easy

I ran across this integral that looks horrendous, but is actually quite simple. I am sure a lot of you will see it right off, but it is kind of fun.

$\int (\sin x)^{x}\left(\ln(\sin x)+x\cot x\right)\,dx$

Just thought I would share.

2. Originally Posted by galactus
I ran across this integral that looks horrendous, but is actually quite simple. I am sure a lot of you will see it right off, but it is kind of fun. $\int (\sin x)^{x}\left(\ln(\sin x)+x\cot x\right)\,dx$ Just thought I would share.

by inspection: $\int (\sin x)^{x}\left[\ln(\sin x)+x\cot x\right]dx = (\sin x)^x + C$

i just had the urge of checking what the derivative of $(\sin x)^x$ was. because of that log sitting there

3. Yes, of course, I knew you would see it. I just thought it was cool. I am sure we could say that about a lot of them. Note that it includes $\ln(\sin x)$ and $x\cot x$: two famous non-elementary integrals that have been addressed here on MHF.

4. Originally Posted by galactus
Yes, of course, I knew you would see it. I just thought it was cool. I am sure we could say that about a lot of them. Note that it includes $\ln(\sin x)$ and $x\cot x$: two famous non-elementary integrals that have been addressed here on MHF.

indeed. i probably only saw it because you said it was really easy though. because you said that, i looked for ways to find the integral without doing anything difficult. looking to reverse some derivative seemed obvious after that

5. Yes, one would've studied first $(\sin x)^x=e^{x\ln(\sin x)},$ and the derivative seems clear from there.

6. Same concept with $\int\frac{\ln(x)}{(1+\ln(x))^2}~dx$
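Both antiderivatives are easy to confirm by differentiating. A minimal check with Python's sympy (my own addition, not part of the thread):

```python
import sympy as sp

x = sp.symbols('x')

# Post 2's observation: d/dx (sin x)^x reproduces the integrand.
F = sp.sin(x)**x
integrand = sp.sin(x)**x * (sp.log(sp.sin(x)) + x*sp.cot(x))
print(sp.simplify(sp.diff(F, x) - integrand))  # 0

# Post 6's integral yields to the same reverse-derivative trick:
# d/dx [x/(1 + ln x)] = ln(x)/(1 + ln x)^2.
G = x / (1 + sp.log(x))
print(sp.simplify(sp.diff(G, x) - sp.log(x)/(1 + sp.log(x))**2))  # 0
```

So the integral in the last post is $\frac{x}{1+\ln x}+C$ (on a domain where $1+\ln x\neq0$).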
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9888750314712524, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/29991/list
# Fourier transforms of compactly supported functions

One manifestation of the uncertainty principle is the fact that a compactly supported function $f$ cannot have a Fourier transform which vanishes on an open set. As stated, this phenomenon applies when $f$ lives on the integers, or on Euclidean space, but it is false for, say, the p-adic rationals, on which the characteristic function of the p-adic integers provides a counterexample.

One normally proves this phenomenon on Euclidean space through complex analysis, although it does follow from the real-variable argument in this post of Tao on Hardy's Uncertainty Principle. For $f$ on the integers, it's even easier, because only the zero polynomial has infinitely many roots (and this appears to me to be more or less the kind of input which goes into the other proofs as well).

What I want to know is whether there is a proof of this phenomenon -- namely, the refusal of a compactly supported function's Fourier transform to vanish on an open set -- which directly relates to the connectedness of the frequency space, if that is what is behind it.

Here is a failed effort in this direction... You know that $f = f \chi$ for any function $\chi$ which is $1$ on the support of $f$. Then $\hat{f}$ should remain unchanged when convolved with the Fourier transform of $\chi$, but since $\hat{f}$ lives on the reals you would like to think that this convolution would partially fill up some open set connected to the boundary of the original support of $\hat{f}$ and thus enlarge the support of $\hat{f}$. This argument goes through as long as you don't get an absurd cancellation; but while $\hat{f}$ may be assumed positive, one has less freedom to renormalize $\chi$. Does anyone know if there is a version of this argument which actually goes through? As stated it's no different than saying "$\hat{f} \ast \hat{\chi} = 0$ whenever $\chi$ lives away from the support of $f$; how weird..."

It's possible my intuition here is all wrong and it's really the regularity of $\hat{f}$ which is completely behind the phenomenon, but I am at a loss for other examples of connected, locally compact abelian groups (are there any?) and I don't know how connectedness behaves with respect to Pontrjagin duality.
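A concrete Euclidean data point (my own illustration, not part of the question): the indicator function of $[-\tfrac12,\tfrac12]$ has Fourier transform $\hat f(\xi)=\frac{\sin(\pi\xi)}{\pi\xi}$, whose zero set is exactly the nonzero integers: isolated points, never an open set. A small numerical sketch, which of course illustrates rather than proves the phenomenon:

```python
import numpy as np

# fhat(xi) = sin(pi xi)/(pi xi) is the FT of the indicator of [-1/2, 1/2].
xi = np.linspace(-10, 10, 20001)   # grid step 0.001; hits the integers exactly
fhat = np.sinc(xi)                 # numpy's sinc(x) = sin(pi x)/(pi x)
near_zero = np.abs(fhat) < 1e-3
# The near-zeros cluster only around the nonzero integers:
print(np.unique(np.round(xi[near_zero])))
```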
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 36, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9527626037597656, "perplexity_flag": "head"}
http://www.abstractmath.org/Word%20Press/?tag=algebra
# Gyre&Gimbleposts about math, language and other things that may appear in the wabe ## Explaining math 2013/03/26 — SixWingedSeraph To manipulate the demos in this post, you must have Wolfram CDF Player installed on your computer. It is available free from the Wolfram website. The source for the demos is the Mathematica notebook SolvEq.nb. This post explains some basic distinctions that need to be made about the process of writing and explaining math.  Everyone who teaches math knows subconsciously what is happening here; I am trying to raise your consciousness.  For simplicity, I have chosen a technique used in elementary algebra, but much of what I suggest also applies to more abstract college level math. ## An algebra problem Solve the equation "$ax=b$" ($a\neq0$). Understanding the statement of this problem requires a lot of Secret Knowledge (the language of ninth grade algebra) that most people don't have. • The expression "$ax$" means that $a$ and $x$ are numbers and $ax$ is their product. It is not the word "ax". You have to know that writing two symbols next to each other means multiply them, except when it doesn't mean multiply them as in "$\sin\,x$". • The whole expression "$ax=b$" ostensibly says that the number $ax$ is the same number as $b$.  In fact, it means more than that. The phrase "solve the equation" tells you that in fact you are supposed to find the value of $x$ that makes $ax$ the same number as $b$. • How do you know that "solve the equation" doesn't mean find the value of $a$ that makes $ax$ the same number as $b$? Answer: The word "solve" triggers a convention that $x$, $y$ and $z$ are numbers you are trying to find and $a$, $b$, $c$ stand for numbers that you are allowed to plug in to the equation. • The conventions of symbolic math require that you give a solution for any nonzero value of $a$ and any value of $b$.  You specifically are not allowed to pick $a=1$ and $b=33$ and find the value just for those numbers.  (Some college calculus students do this with problems involving literal coefficients.) • The little thingy "$(a\neq0)$" must be read as a constraint on $a$.  It does not mean that $a\neq0$ is a fact that you ought to know. ( I've seen college math students make this mistake, admittedly in more complex situations). Nor does it mean that you can't solve the problem if $a=0$ (you can if $b$ is also zero!). So understanding what this problem asks, as given, requires (fairly sophisticated in some cases) pattern recognition both to understand the symbolic language it uses, and also to understand the special conventions of the mathematical English that it uses. ### Explicit descriptions This problem could be reworded so that it gives an explicit description of the problem, not requiring pattern recognition.  (Warning: "Not requiring pattern recognition" is a fuzzy concept.)  Something like this: You have two numbers $a$ and $b$.  Find a number $c$ for which if you multiply $a$ by $c$ you get $b$. This version is not completely explicit.  It still requires understanding the idea of referring to a number by a letter, and it still requires pattern recognition to catch on that the two occurrences of each letter means that their meanings have to match. Also, I know from experience that some American first year college students have trouble with the syntax of the sentence ("for which…", "if…"). The following version is more explicit, but it cheats by creating an ad hoc way to distinguish the numbers. Alice and Bob each give you a number.  
How do you find a number with the property that Alice's number times your number is equal to Bob's number?

If the problem had a couple more variables it would be so difficult to understand in an explicit form that most people would have to draw a picture of the relationships between them. That is why algebraic notation was invented.

### Visual descriptions

Algebra is a difficult foreign language. Showing the problem visually makes it easier to understand for most people. Our brain's visual processing unit is the most powerful tool the brain has to understand things. There are various ways to do this.

Visualization can help someone understand algebraic notation better. You can state the problem by producing examples such as

• $\boxed{3}\times\boxed{\text{??}}=\boxed{6}$
• $\boxed{5}\times\boxed{\text{??}}=\boxed{2}$
• $\boxed{42}\times\boxed{\text{??}}=\boxed{24}$

where the reader has to know the multiplication symbol and, one hopes, will recognize "$\boxed{\text{??}}$" as "What's the value?". But the reader does not have to understand what it means to use letters for numbers, or that "$x$ means you are supposed to discover what it is". This way of writing an algebra problem is used in some software aimed at K-12 students. Some of them use a blank box instead of "$\boxed{\text{??}}$".

Such software often shows the algorithm for solving the problem visually, using algebraic notation like this: [interactive demo]

I have put in some buttons to show numbers as well as $a$ and $b$. If you have access to Mathematica instead of just to CDF player, you can load SolvEq.nb and put in any numbers you want, but CDF's don't allow input data.

You can also illustrate the algorithm using the tree notation for algebra I used in Monads for high school I (and other posts). The demo below shows how to depict the value-preserving transformation given by the algorithm. (In this case the value is the truth value, since the root operation is equals.)

This demo is not as visually satisfactory as the one illustrating the use of the associative law in Monads for high school I. For one thing, I had to cheat by reversing the placement of $a$ and $x$. Note that I put labels for the numerator and denominator legs, a practice I have been using in demos for a while for noncommutative operations. I await a new inspiration for a better presentation of this and other equation-solving algorithms.

Another advantage of using pictures is that you can often avoid having to code things as letters which then have to be remembered. In Monads for high school I, I used drawings of the four functions from a two-element set to itself instead of assigning them letters. Even mnemonic letters such as $s$ for "switch" and $\text{id}$ for the identity element carry a burden that the picture dispenses with.

## Naming mathematical objects

2013/03/09 — SixWingedSeraph

### Commonword names confuse

Many technical words and phrases in math are ordinary English words ("commonwords") that are assigned a different and precisely defined mathematical meaning.

• Group This sounds to the "layman" as if it ought to mean the same thing as "set". You get no clue from the name that it involves a binary operation with certain properties.
• Formula In some texts on logic, a formula is a precisely defined expression that becomes a true-or-false sentence (in the semantics) when all its variables are instantiated. So $(\forall x)(x>0)$ is a formula.
The word "formula" in ordinary English makes you think of things like "$\textrm{H}_2\textrm{O}$", which has no semantics that makes it true or false — it is a symbolic expression for a name. • Simple group This has a technical meaning: a group with no nontrivial normal subgroup.  The Monster Group is "simple".  Yes, the technical meaning is motivated by the usual concept of "simple", but to say the Monster Group is simple causes cognitive dissonance. Beginning students come with the (generally subconscious) expectation that they will pick up clues about the meanings of words from connotations they are already familiar with, plus things the teacher says using those words.  They think in terms of refining an understanding they already have.  This is more or less what happens in most non-math classes.  They need to be taught what definition means to a mathematician. ### Names that don't confuse but may intimidate Other technical names in math don't cause the problems that commonwords cause. Named after somebody The phrase "Hausdorff space" leads a math student to understand that it has a technical meaning.  They may not even know it is named after a person, but it screams "geek word" and "you don't know what it means".  That is a signal that you can find out what it means.  You don't assume you know its meaning. New made-up words  Words such as "affine", "gerbe"  and "logarithm" are made up of words from other languages and don't have an ordinary English meaning.  Acronyms such as "QED", "RSA" and "FOIL" don't occur often.  I don't know of any math objects other than "RSA algorithm" that have an acronymic name.  (No doubt I will think of one the minute I click the Publish button.)  Whole-cloth words such as "googol" are also rare.  All these sorts of words would be good to name new things since they do not fool the readers into thinking they know what the words mean. Both types of words avoid fooling the student into thinking they know what the words mean, but some students are intimidated by the use of words they haven't seen before.  They seem to come to class ready to be snowed.  A minority of my students over my 35 years of teaching were like that, but that attitude was a real problem for them. ## Audience You can write for several different audiences. Math fans (non-mathematicians who are interested in math and read books about it occasionally) Renaming technical concepts, I wrote about several books aimed at explaining some fairly deep math to interested people who are not mathematicians.  They renamed some things. Math newbies  (math majors and other students who want to understand some aspect of mathematics).  These are the people abstractmath.org is aimed at. For such an audience you generally don't want to rename mathematical objects. In fact, you need to give them a glossary to explain the words and phrases used by people in the subject area. Postsecondary math students These people, especially the math majors, have many tasks: • Gain an intuitive understanding of the subject matter. • Understand in practice the logical role of definitions. • Learn how to come up with proofs. • Understand the ins and outs of mathematical English, particularly the presence of ordinary English words with technical definitions. • Understand and master the appropriate parts of the symbolic language of math — not just what the symbols mean but how to tell a statement from a symbolic name. 
It is appropriate for books for math fans and math newbies to try to give an understanding of concepts without necessarily proving theorems. That is the aim of much of my work, which has more of an emphasis on newbies than on fans. But math majors need as well the traditional emphasis on theorem and proof and clear correct explanations. Lately, books such as Visual Group Theory have addressed beginning math majors, trying for much more effective ways to help the students develop good intuition, as well as getting into proofs and rigor. Visual Group Theory uses standard terminology. You can contrast it with Symmetry and the Monster and The Mystery of the Prime Numbers (read the excellent reviews on Amazon), which are clearly aimed at math fans and use nonstandard terminology.

## Terminology for algebraic structures

I have been thinking about the section of Abstracting Algebra on binary operations. Notice this terminology (the table, an image in the original post, is not preserved here):

The "standard names" are those in Wikipedia. They give little clue to the meaning, but at least most of them, except "magma" and "group", sound technical, cluing the reader in to the fact that they'd better learn the definition. I came up with the names in the right column in an attempt to make some sense out of them. The design is somewhat like the names of some chemical compounds. This would be appropriate for a text aimed at math fans, but for them you probably wouldn't want to get into such an exhaustive list.

I wrote various pieces meant to be part of Abstracting Algebra using the terminology on the right, but thought better of it. I realized that I have been vacillating between thinking of AbAl as for math fans and thinking of it as for newbies. I guess I am plunking for newbies. I will call groups groups, but for the other structures I will use the phrases in the middle column. Since the book is for newbies I will include a table like the one above. I also expect to use tree notation as I did in Visible Algebra II, and other graphical devices and interactive diagrams.

### Magmas

In the sixties magmas were called groupoids or monoids, both of which now mean something else. I was really irritated when the word "magma" started showing up all over Wikipedia. It was the name given by Bourbaki, but it is a bad name because it means something else that is irrelevant. A magma is just any binary operation. Why not just call it that?

Well, I will tell you why, based on my experience in Ancient Times (the sixties and seventies) in math. (I started as an assistant professor at Western Reserve University in 1965.) In those days people made a distinction between a binary operation and a "set with a binary operation on it". Nowadays, the concept of function carries with it an implied domain and codomain. So a binary operation is a function $m:S\times S\to S$. Thinking of a binary operation this way was just beginning to appear in the common mathematical culture in the late 60's, and at least one person remarked to me: "I really like this new idea of thinking of 'plus' and 'times' as functions." I was startled and thought (but did not say), "Well of course it is a function". But then, in the late sixties I was being indoctrinated/perverted into category theory by the likes of John Isbell and Peter Hilton, both of whom were briefly at Case Western Reserve University. (Also Paul Dedecker, who gave me a glimpse of Grothendieck's ideas.)
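Staying with the point that a binary operation is a function $m:S\times S\to S$: in code, too, it is natural to let the underlying set travel with the operation rather than specifying it separately as in "$(S,\times)$". A minimal Python sketch (mine, not from the post; the example operation is an arbitrary choice):

```python
from dataclasses import dataclass
from typing import Callable, FrozenSet

@dataclass
class Magma:
    """A set together with a binary operation on it; the carrier is part
    of the data because the operation is a function S x S -> S."""
    carrier: FrozenSet[int]
    op: Callable[[int, int], int]

    def is_closed(self) -> bool:
        # "Closure" is just the statement that op's codomain is the carrier.
        return all(self.op(a, b) in self.carrier
                   for a in self.carrier for b in self.carrier)

# Example: addition mod 5 on {0, ..., 4}.
Z5 = Magma(frozenset(range(5)), lambda a, b: (a + b) % 5)
print(Z5.is_closed())  # True
```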
Now, the idea that a binary operation is a function comes with the fact that it has a domain and a codomain, and specifically that the domain is the Cartesian square of the codomain. People who didn't think that a binary operation was a function had to introduce the idea of the universe (universal algebraists) or the underlying set (category theorists): you had to specify it separately and introduce terminology such as $(S,\times)$ to denote the structure. Wikipedia still does it mostly this way, and I am not about to start a revolution to get it to change its ways.

### Groups

In the olden days, people thought of groups in this way:

• A group is a set $G$ with a binary operation denoted by juxtaposition that is closed on $G$, meaning that if $a$ and $b$ are any elements of $G$, then $ab$ is in $G$.
• The operation is associative, meaning that if $a,\ b,\ c\in G$, then $(ab)c=a(bc)$.
• The operation has a unity element, meaning an element $e$ for which for any element $a\in G$, $ae=ea=a$.
• For each element $a\in G$, there is an element $b$ for which $ab=ba=e$.

This is a better way to describe a group:

• A group consists of a nullary operation e, a unary operation inv, and a binary operation denoted by juxtaposition, all with the same codomain $G$. (A nullary operation is a map from a singleton set to a set and a unary operation is a map from a set to itself.)
• The value of e is denoted by $e$ and the value of inv$(a)$ is denoted by $a^{-1}$.
• These operations are subject to the following equations, true for all $a,\ b,\ c\in G$:
  • $ae=ea=a$.
  • $aa^{-1}=a^{-1}a=e$.
  • $(ab)c=a(bc)$.

This definition makes it clear that a group is a structure consisting of a set and three operations whose axioms are all equations. It was formulated by people in universal algebra, but you still see the older form in texts. The old form is not wrong, it is merely inelegant. With the old form, you have to prove that the unity and inverses are unique before you can introduce notation for them. More important, making it clear that groups satisfy equational logic gets you a lot of theorems for free: you can construct products on the cartesian powers of the underlying set, quotients by congruence relations, and other things. (Of course, in AbAl those theorems will be stated later than when groups are defined, because the book is for newbies and you want lots of examples before theorems.)

## References

1. Three kinds of mathematical thinkers (G&G post)
2. Technical meanings clash with everyday meanings (G&G post)
3. Commonword names for technical concepts (G&G post)
4. Renaming technical concepts (G&G post)
5. Explaining higher math to beginners (G&G post)
6. Visual Algebra II (G&G post)
7. Monads for high school II: Lists (G&G post)
8. The mystery of the prime numbers: a review (G&G post)
9. Hersh, R. (1997a), "Math lingo vs. plain English: Double entendre". American Mathematical Monthly, volume 104, pages 48–51.
10. Names (in abmath)
11. Cognitive dissonance (in abmath)

## Monads for high school I

2013/02/12 — SixWingedSeraph

### Notes for viewing

To manipulate the demos in this post, you must have Wolfram CDF Player installed on your computer. It is available free from the Wolfram website. The source for the demos is at associative.nb.

# Monads in Abstracting Algebra

I've been working on first drafts (topic posts) of several sections of my proposed book Abstracting algebra (AbAl), concentrating on the ideas leading up to monads. This is going slowly because I want the book to be full of illustrations and interactive demos.
I am writing the demos in Mathematica simultaneously with writing the text, and designing demos is very s l o w work. It occurred to me that I should write an outline of the path leading up to monads, using some of the demos I have already produced. This is the first of probably two posts about the thread.

• AbAl is intended to give people with a solid high school math background a mental picture of, or way of thinking about, the various levels of abstraction of high school algebra.
• This outline is not a "Topic post" as described in the AbAl page. In particular, it is not aimed at high school students! It is a guided tour of my current thoughts about a particular thread through the book.

## Associativity

AbAl will have sections introducing functions and binary operations using pictures and demos (not outlined in this thread). The section on binary operations will introduce infix, prefix and postfix notation but will use trees (illustrated below) as the main display method. Then it will introduce associativity, using demos such as this one:

Using this computingscienceish tree notation makes it much easier to visualize what is happening (see Visible Algebra II), compared to, for example, \[(ab)(cd)=a(b(cd))=a((bc)d)=((ab)c)d=(a(bc))d\] In this equation, the abstract structure is hidden. You have to visualize doing the operation starting with the innermost parentheses and moving out. With the trees you can see the computation going up the tree.

I will give examples of associative functions that are not commutative using $2\times2$ matrices and endofunctions on finite sets such as the one below, which gives all the functions from a two-element set to itself.

• Note that each function is shown by a diagram, not by an arbitrary name such as "id" or "sw", which would add a burden to the memory for an example that occurs in one place in the book. (See structural notation in the Handbook.)
• The section on composition of functions will also look in some depth at permutations of a three-element set, anticipating a section on groups.

By introducing a mechanism for transforming trees of associative binary operations, you can demonstrate (as in the demo below) that any associative binary operation is defined on any list of two or more elements of its domain. For example, applying addition to three numbers $a$, $b$ and $c$ is uniquely defined. This sort of demo gives an understanding of why you get that unique definition, but it is not a proof, which requires formal induction. AbAl is not concerned with showing the reader how to prove math statements.

In this section I will also introduce the oneidentity concept: the value of an associative binary operation on a single element $a$ is $a$. Thus applying addition or multiplication to $a$ gives $a$. (The reason for this is that you want an associative binary operation to be a unique quotient of the free associative binary operation. That will come up after we talk about some examples of monads.) The oneidentity property also implies that for an associative binary operation with identity element, applying the operation to the empty list gives the identity element. Now we can say:

An associative binary operation with identity element is uniquely defined on any finite list of elements of its domain.

Thus, in prefix notation, $+(2,3)=5$, $+(2,3,5)=10$, $+(2)=2$ and $+()=0$. Similarly $\times(2)=2$ and $\times()=1$.
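The prefix-notation examples just given are exactly what a fold over lists computes. A minimal Python sketch of the idea (my own illustration, not from the post):

```python
from functools import reduce

def extend(op, identity):
    """Extend an associative binary operation with identity to all
    finite lists: the value on [a] is a, and on [] is the identity."""
    return lambda xs: reduce(op, xs, identity)

plus = extend(lambda a, b: a + b, 0)
times = extend(lambda a, b: a * b, 1)
print(plus([2, 3]), plus([2, 3, 5]), plus([2]), plus([]))  # 5 10 2 0
print(times([2]), times([]))                               # 2 1
```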
This fact suggests that the natural definition of addition, multiplication, and other associative binary operations is as functions from lists of elements of the domain to elements of the domain. This fits with our early intuition of addition from grade school, not to mention from Excel: addition is something you do to lists. That feeling (for me) is not so strong for multiplication; for many common business applications you generally multiply two things, like price and number sold. That's because multiplication is usually for things of different data types, but you usually add things of the same data type (not apples and oranges?).

That raises the question: Does every function taking lists to elements come from an associative binary operation? I will give an example that says no. But the next thing is to introduce joining lists (concatenation), where we discover that joining lists is an associative binary operation. So it is really an operation on lists of lists. This will turn out to give us a systematic way to define all associative binary operations by one mechanism, because it is an example of a monad. That is for the second installment of this outline.

## Abstracting algebra

2012/12/21 — SixWingedSeraph

This post has been turned into a page on WordPress, accessible in the upper right corner of the screen. The page will be referred to by all topic posts for Abstracting Algebra.

## Discussions about algebra

2012/11/01 — SixWingedSeraph

I want to call your attention to the following items about ways to avoid algebra and why you should and shouldn't.

• Don't kill math, by Evan Miller
• Inventing on principle, video by Bret Victor

## Explaining "higher" math to beginners

2012/09/21 — SixWingedSeraph

## Explaining math

I am in the process of writing an explanation of monads for people with not much math background. In that article, I began to explain my ideas about exposition for readers at that level, and after I had written several paragraphs I decided I needed a separate article about exposition. This is that article. It is mostly about language.

## Who is it written for?

### Interested laypeople

There are many recent books explaining some aspect of math for people who are not happy with high school algebra; some of them are listed in the references. They must be smart readers who know how to concentrate, but for whom algebra and logic and definition-theorem-proof do not communicate. They could be called interested laypeople, but that is a lousy name and I would appreciate suggestions for a better name.

### Math newbies

My post on monads is aimed at people who have some math, and who are interested in "understanding" some aspect of "higher math"; not understanding in the sense of being able to prove things about monads, but merely how to think about them. I will call them math newbies. Of course, I am including math majors, but I want to make it available to other people who are willing to tackle mathematical explanations and who are interested in knowing more about advanced stuff.

These "other people" may include people (students and practitioners) in other science & technology areas as well as liberal-artsy people. There are such people, I have met them. I recall one theologian who asked me what was the big deal about ruler-and-compass construction and who seemed to feel enlightened when I told him that those constructions preserve exactly the ideal nature of geometric objects.
(I later found out he was a famous theologian I had never heard of, just like Ngô Bảo Châu is a famous mathematician nonmathematicians have never heard of.) ## Algebra and other foreign languages If you are aiming at interested laypeople you absolutely must avoid algebra.  It is a foreign language that simply does not communicate to most of the educated people in the world.  Learning a foreign language is difficult. So how do you avoid algebra?  Well, you have to be clever and insightful.  The book by Matthew Watkins (below) has absolutely wonderful tricks for doing that, and I think anyone interested in math exposition ought to read it.  He uses metaphors, pictures and saying the same thing in different words. When you finish reading his book, you won't know how to prove statements related to the prime number theorem (unless you already knew how) but you have a good chance of understanding the statement of some theorem in that subject. See my review of the book for more details. If your article is for math newbies, you don't have to avoid algebra completely.  But remember they are newbies and not as fluent as you are — they do things analogous to "Throw Mama from the train a kiss" and "I can haz cheeseburger?".  But if you are trying to give them some way of thinking about a concept, you need many other things (metaphors, illustrative applications, diagrams…)  You don't need the definition-theorem-proof style too common in "exposition".  (You do need that for math majors who want to become professional mathematicians.) ### Unfamiliar notation In writing expositions for interested laypeople or math newbies, you should not introduce an unfamiliar notation system (which is like a minilanguage).  I expect to write the monad article without commutative diagrams.  Now, commutative diagrams are a wonderful invention, the best way of writing about categories, and they are widely used by other than category theorists.  But to explain monads to a newbie by introducing and then using commutative diagrams is like incorporating a short grammar of Spanish which you will then use in an explanation of Sancho Panza's relationship with Don Quixote. The abstractmath article on and, or and not does not use any of the several symbolic notations for logic that are in use.  The explanations simply use "and", "or" and "not".  I did introduce the notation, but didn't use it in the explanations.  When I rewrite the article I expect to put the notation at the end of the article instead of in the middle.  I expect to rewrite the other articles on mathematical reasoning to follow that practice, too. ## Technical terminology This is about the technical terminology used in math.  Technical terminology belongs to the math dialect (or register) of English, which is not a foreign language in the same sense as algebra.  One big problem is changing the meaning of ordinary English words to a technical meaning.  This requires a definition, and definitions are not something most people take seriously until they have been thoroughly brainwashed into using mathematical methodology.  Math majors have to be brainwashed in this way, but if you are writing for laypeople or newbies you cannot use the technology of formal definition. ### Groups, simple groups "You say the Monster Group is SIMPLE???  You must be a GENIUS!"  So Mark Ronan in his book (below) referred to simple groups as atoms.  Marcus du Sautoy calls them building blocks.  
The mathematical meaning of "simple group" is not a transparent consequence of the meanings of "simple" and "group". Du Sautoy usually writes "group of symmetries" instead of just "group", which gives you an image of what he is talking about without having to go into the abstract definition of group. So in that usage, "group" just means "collection", which is what some students continue to think well after you give the definition. A better, but ugly, name for "group" might be "symmetroid". It sounds technical, but that might be an advantage, not a disadvantage. "Group" obviously means any collection, as I've known since childhood. "Symmetroid" I've never heard of so maybe I'd better find out what it means. In beginning abstract math courses my students fervently (but subconsciously) believe that they can figure out what a word means by what it means already, never mind the "definition" which causes their eyes to glaze over. You have to be really persuasive to change their minds. ### Prime factorization Matthew Watkins referred to the prime factorization of an integer as a cluster. I am not sure why Watkins doesn't like "prime factorization", which usually refers to an expression such as  $p^{n_1}_1p^{n_2}_2\ldots p^{n_k}_k$.  This (as he says) has a spurious ordering that makes you have to worry about what the uniqueness of factorization means. The prime factorization is really a multiset of primes, where the order does not matter. Watkins illustrates a cluster of primes as a bunch of pingpong balls stuck together with glue, so the prime factorization of 90 would be four smushed together balls marked 2, 3, 3 and 5. Below is another way of illustrating the prime factorization of 90. Yes, the random movement programming could be improved, but Mathematica seduces you into infinite playing around and I want to finish this post. (Actually, I am beginning to think I like smushed pingpong balls better. Even better would be a smushed pingpong picture that I could click on and look at it from different angles.) ### Metaphors, pictures, graphs, animation Any exposition of math should use metaphors, pictures and graphs, especially manipulable pictures (like the one above) and graphs.  This applies to expositions for math majors as well as laypeople and newbies.  Calculus and other texts nowadays have begun doing this, more with pictures than with metaphors. I was turned on to these ideas as far back as 1967 (date not certain) when I found an early version of David Mumford's "Red Book", which I think evolved into the book The Red Book of Varieties and Schemes.  The early version was a revelation to me both about schemes and about exposition. I have lost the early book and only looked at the published version briefly when it appeared (1999).  I remember (not necessarily correctly) that he illustrated the spectrum as a graph whose coordinates were primes, and generic points were smudges.  Writing this post has motivated me to go to the University of Minnesota math library and look at the published version again. ## References #### Expositions for educated non-mathematicians • Mark Ronan, Symmetry and the monster • Marcus du Sautoy, Symmetry: a journey into the patterns of nature • Marcus du Sautoy,The Music of the Primes • Matthew Watkins, The Mystery of the Prime Numbers: Secrets of Creation v. 1 ### Notes on Viewing • This post uses MathJax. If you see mathematical formulas with dollar signs around them, or badly formatted formulas, try refreshing the screen. 
Sometimes you have to do it two or three times.
• To manipulate the demos in this post, you must have Wolfram CDF Player installed on your computer. It is available free from the Wolfram website. The code for the demos is in the Mathematica notebook algebra2.nb.

## Visible algebra II

2012/09/10 — SixWingedSeraph

Notes on viewing:

• This post uses MathJax. If you see mathematical formulas with dollar signs around them, or badly formatted formulas, try refreshing the screen. Sometimes you have to do it two or three times.
• To manipulate the demos in this post, you must have Wolfram CDF Player installed on your computer. It is available free from the Wolfram website. The code for the demos is in the Mathematica notebook algebra2.nb.

## More about visible algebra

I have written about visible algebra in previous posts (see References). My ideas about the interface are constantly changing. Some new ideas are described here.

In the first place I want to make it clear that what I am showing in these posts is a simulation of a possible visual algebra system. I have not constructed any part of the system; these posts only show something about what the interface will look like. My practice in the last few years is to throw out ideas, not construct completed documents or programs. (I am not saying how long I will continue to do this.) All these posts, Mathematica programs and abstractmath.org are available to reuse under a Creative Commons license.

## Commutative and associative operations

Times and Plus are commutative and associative operations. They are usually defined as binary operations. A binary operation $*$ is said to be commutative if for all $x$ and $y$ in the underlying set of the operation, $x*y=y*x$, and it is associative if for all $x$, $y$ and $z$ in the underlying set of the operation, $(x*y)*z=x*(y*z)$.

It is far better to define a commutative and associative operation $*$ on some underlying set $S$ as an operation on any multiset of elements of $S$. A multiset is like a set in that elements can be rearranged in any way, but it is not like a set in that elements can be repeated, and a different number of repetitions of an element makes a different multiset. So for any particular multiset, the number of repetitions of each element is fixed. Thus $\{a,a,b,b,c\} = \{c,b,a,b,a\}$ but $\{a,a,b,b,c\}\neq\{c,b,a,b,c\}$. This means that the function (operation) Plus, for example, is defined on any multiset of numbers, and \[\mathbf{Plus}\{a,a,b,b,c\}=\mathbf{Plus} \{c,b,a,b,a\}\] but $\mathbf{Plus}\{a,a,b,b,c\}$ might not be equal to $\mathbf{Plus} \{c,b,a,b,c\}$.

This way of defining (any) associative and commutative operation comes from the theory of monads. An operation defined on all the multisets drawn from a particular set is necessarily commutative and associative if it satisfies some basic monad identities, the main one being that it commutes with union of multisets (which is defined in the way you would expect, and if this irritates you, read the Wikipedia article on multisets). You don't have to impose any conditions specifically referring to commutativity or associativity. I expect to write further about monads in a later post.

The input process for a visible algebra system should allow the full strength of this fact. You can attach as many inputs as you want to Times or Plus, and you can move them around.
For example, you can click on any input and move it to a different place in the following demo.

Other input notations might be suitable for different purposes. The example below shows how the inputs can be placed randomly in two dimensions (but preserving multiplicity). I experimented with making it show the variables slowly moving around inside the circle the way the fish do in that screensaver (which mesmerizes small children, by the way — never mind what it does to me), but I haven't yet made it work.

A visible algebra system might well allow directly-entered tables to be added up (or multiplied), like the one below. Spreadsheets have such an operation. In particular, the spreadsheet operation does not insist that you apply it only as a binary operation to columns with two entries. By far the most natural way to define addition of numbers is as an operation on multisets of numbers.

## Other operations

Operations that are associative but not commutative, such as matrix multiplication, can be defined the monad way as operations on finite lists (or tuples or vectors) of numbers. The operation is automatically associative if you require it to preserve concatenation of lists and some other monad requirements.

Some binary operations are neither commutative nor associative. Two such operations on numbers are Subtract and Power. Such operations are truly binary operations; there is no obvious way to apply them to other structures. They are only binary because the two inputs have different roles. This suggests that the inputs be given names, as in the examples below.

Later, I will write more about simplifying trees, solving the max area problem for rectangles surmounted by semicircles, and other things concerning this system of doing algebra.

## Visible algebra I supplement

2012/09/08 — SixWingedSeraph

Note: This post uses MathJax. If you see mathematical formulas with dollar signs around them, or badly formatted formulas, try refreshing the screen. Sometimes you have to do it two or three times. To manipulate the demo in this post, you must have Wolfram CDF Player installed on your computer. It is available free from the Wolfram website.

## Active calculation of area

In my previous post Visible algebra I, I constructed a computation tree for calculating the area of a window consisting of a rectangle surmounted by a semicircle. The visual algebra system described there constructs a computation by selecting operations and attaching them to a tree, which can then be used to calculate the area of the window. I promised to produce a live computation tree later; it is below. The Mathematica code for this tree is in the notebook algebra1.nb.

Press the buttons from left to right to simulate the computation that would take place in a genuine algebra system. Note that if you skip button 2 you get the effect of parallel computation (the only place in the calculation that can be parallelized).

In Visible algebra I the tree was put together step by step by reasoning out how you would calculate the area of the window: (1) the area is the sum of the areas of the semidisk and the rectangle, (2) the rectangle is width times height, (3) the semidisk has half the area of a disk of radius half the width of the rectangle, and so on. So the resulting tree is a transparent construction that lets you see the reasoning that created it.

The resulting tree could obviously be simplified. But if you were designing a few such windows, why should you simplify it?
You certainly don't need to simplify it to speed up the computation. On the other hand, if you are going on to solve the problem of finding the maximum area you can get if the perimeter is fixed, you will have to do some algebraic manipulation, and so you do want a simplified expression.

Later, I will write more about simplifying trees, solving the max area problem, and other things concerning this system of doing algebra.

#### Remark

What I am showing in these posts is a simulation of a possible visible algebra system. I have not constructed any part of the system; these posts only show something about what the interface will look like. My practice in the last few years is to throw out ideas, not construct completed documents or programs. (I am not saying how long I will continue to do this.) All these posts, Mathematica programs and abstractmath.org are available to reuse under a Creative Commons license.

## Semantics of algebra I

2012/08/22 — SixWingedSeraph

Note: This post uses MathJax. If you see mathematical formulas with dollar signs around them, or badly formatted formulas, try refreshing the screen. Sometimes you have to do it two or three times.

In the post Algebra is a difficult foreign language I listed some of the difficulties of the syntax of the symbolic language of math (which includes high school algebra and precalculus). The semantics causes difficulties as well. Again I will list some examples without any attempt at completeness.

## The status of the symbolic language as a language

There is a sharp distinction between the symbolic language of math and mathematical English, which I have written about in The languages of math and in the Handbook of mathematical discourse. Other authors do not make this sharp distinction (see the list of references at the end of this post). The symbolic language occurs embedded in mathematical English, and the embedding has its own semantics, which may cause great difficulty for students.

The symbolic language of math can be described as a natural formal language. Pieces of it were invented by mathematicians and others over the course of the last several hundred years. Individual pieces (notation such as "$3x+1=2y$") can be given a strictly formal syntax, but the whole system is ambiguous, inconsistent, and context-sensitive. When you get to the research level, it has many dialects: research mathematicians in one field may not be able to read research articles in a very different field.

## Examples

I think the examples below will make these claims plausible. This should be the subject of deep research.

### Superscripts and functions

• A superscript, as in $5^2$ or $x^3$, has a pretty standard meaning denoting a power, at least until you get to higher level stuff such as tensors.
• A function can be denoted by a letter, symbol, or string, and the notation $f(x)$ refers to the value of the function at input $x$. For functions defined on numbers, it is common in precalculus and higher to write $f^2(x)$ to denote $(f(x))^2=f(x)\,f(x)$. Since the values of certain multiletter functions are commonly written without the parentheses (for example, $\sin\,x$), one writes $\sin^2x$ to mean $(\sin\,x)^2$. The notation $f^n$ is also widely used to mean the $n$th iterate of $f$ (if it exists), so $f^3(x)=f(f(f(x)))$ and so on.
This leads naturally to writing $f^{-1}(x)$ for the inverse function of $f$; this is common notation whether the function $f$ is bijective or not (if it is not, $f^{-1}$ is set-valued). Thus $\sin^{-1}x$ means $\arcsin\,x$.

It is notorious that words in mathematical English have different meanings in different texts. This is an example in the symbolic language (and not just at the research level) of a systematic construction that can give expressions that have ambiguous meanings. This phenomenon is an example of why I say the symbolic language of math is a natural formal language: I have described a natural extension of notation used with multiplication of values that has been extended to being used for the binary operation of composition. And that leads to students thinking that $\sin^{-1}x$ means $\frac{1}{\sin\,x}$.

History can overtake notation, too: mathematicians probably took to writing $\sin\,x$ instead of $\sin(x)$ because it saves writing. That was not very misleading in the old days, when mathematical variables were always single symbols. But students see multiletter variable names all the time these days (in programming languages, Excel and elsewhere), so of course some of them think $\sin\,x$ means $\sin$ times $x$. People who do this are not idiots.

### Juxtaposition

Juxtaposition of two symbols means many different things.

• If $m$ and $n$ are numbers, $mn$ denotes the product of the two numbers.
  • Multiplication is commutative, so $mn$ and $nm$ denote the same number, but they correspond to different calculations.
• If $M$ and $N$ are matrices, $MN$ denotes the matrix product of the two matrices.
  • This is a binary operation but it is not the same operation denoted by juxtaposition of numbers. (In fact it involves both addition and multiplication of numbers.)
  • Now $MN$ may not be the same matrix as $NM$.
• If $A$ and $B$ are points in a geometric drawing, $AB$ denotes the line segment from $A$ to $B$.
  • This is a function of two variables denoting points whose value is a line segment.
  • It is not what is usually called a binary operation, although as an opinionated category theorist I would call it a multisorted binary operation.
  • It is commutative, but it doesn't make sense to ask if it is associative.

This phenomenon is called overloaded notation.

• In order to understand the meaning of the juxtaposition of symbols, you have to know the type of the variables.
• The surrounding text may tell you specifically that the variables denote matrices or whatever. So this is an instance of context-sensitive semantics.
• Students tend to expect that they know what any formula means in isolation from the text. It may make them very sad to discover that this doesn't work — once they believe it, which can take quite a while.
• In many cases the problem is alleviated by the use of convention.
  • Matrices are usually denoted by capital letters, numbers by lower case letters.
  • But points in geometry are usually denoted by capital letters too. So you have to know that referring to a geometric diagram is significant to understanding the notation. This is an indirect form of context-sensitivity. Did any teacher ever point this out to students? Does it appear anywhere in print?

The earlier example of $\sin^{-1}x$ is a case which is not context-sensitive. Knowing the types of the variables won't help. Of course, if the author explains which meaning is meant, that explanation is within the context of the book!
That is not a lot of help for grasshoppers like me that look back and forth at different parts of a math book instead of reading it straight through.

### Equations

Consider the expressions

1. $x^2-5x+4=0$
2. $x^2+y^2=1$
3. $x^2+2x+1=(x+1)^2$

They are assertions that two expressions have the same value. A strictly logical view of an equation containing variables is that it puts a constraint on the variables. It is true of some numbers (or pairs of numbers) and false of others. That is the defining property of an equation.

Equation 1 requires that $x=1$ or $x=4$. Equation 2 imposes a constraint which is satisfied by uncountably many pairs of real numbers, and is also not true of uncountably many pairs. But equation 3 puts no constraint on the variable. It is true of every number $x$.

A strictly logical view of symbolic notation does math a disservice. Here, the notion that an equation is by definition a symbolic statement that has a truth set and a falsity set may be correct, but it is not the important thing about any particular equation. When we read and do math we have many different metaphors and images about a concept. The definition of a kind of object is often in terms of things that may not be the most important things to know about it. (One of the most important facts about groups is that they are an abstraction of symmetries, which the axioms don't mention at all.)

Equation 1 is something that would make most people set out to discover the truth set. Equation 2 calls out for drawing its graph. Equation 3 being an identity means that it is useful in algebraic reasoning. The images they call up are different, and what you do with them is different. The images and metaphors that cluster around a concept are an important part of the semantics of the symbolic language.

I expect to post separately about the semantics of variables and about the semantics of symbolic language embedded in mathematical English.

## References

• Algebra is a difficult foreign language (previous post)
• Handbook of mathematical discourse, by Charles Wells. (Also available online.)
• The language of mathematics (Wikipedia article)
• Mathematical Discourse: Language, Symbolism and Visual Images, by Kay O'Halloran.
• Variables in Mathematics Education, by Susanna S. Epp.
• Variables: Syntax, Semantics and Situations, by John T. Baldwin.
• Pages from abstractmath.org
  • The languages of math
  • The symbolic language of math (currently the links don't work in this file but I will correct it Real Soon Now)
  • Variables and substitution
  • More about the languages of math
  • Other symbols

## Algebra is a difficult foreign language

2012/08/15 — SixWingedSeraph

Note: This post uses MathJax. If you see mathematical formulas with dollar signs around them, or badly formatted formulas, try refreshing the screen. Sometimes you have to do it two or three times.

## Algebra

In a previous post, I said that the symbolic language of mathematics is difficult to learn and that we don't teach it well. (The symbolic language includes as a subset the notation used in high school algebra, precalculus, and calculus.) I gave some examples in that post, but now I want to go into more detail. This discussion is an incomplete sketch of some aspects of the syntax of the symbolic language. I will write one or more posts about the semantics later.

### The languages of math

First, let's distinguish between mathematical English and the symbolic language of math.

• Mathematical English is a special register or jargon of English.
It has not only its special vocabulary, like any jargon, but also uses ordinary English words such as "If…then", "definition" and "let" in special ways.
• The symbolic language of math is a distinct, special-purpose written language which is not a dialect of the English language and can in fact be read by mathematicians with little knowledge of English.
  • It has its own symbols and rules that are quite different from spoken languages.
  • Simple expressions can be pronounced, but complicated expressions may only be pointed to or referred to.
• A mathematical article or book is typically written using mathematical English interspersed with expressions in the symbolic language of math.

### Symbolic expressions

A symbolic noun (logicians call it a term) is an expression in the symbolic language that names a number or other mathematical object, and may carry other information as well.
• "3" is a noun denoting the number 3.
• "$\text{Sym}_3$" is a noun denoting the symmetric group of degree 3.
• "$2+1$" is a noun denoting the number 3.  But it contains more information than that: it describes a way of calculating 3 as a sum.
• "$\sin^2\frac{\pi}{4}$" is a noun denoting the number $\frac{1}{2}$, and it also describes a computation that yields the number $\frac{1}{2}$.  If you understand the symbolic language and know that $\sin$ is a numerical function, you can recognize "$\sin^2\frac{\pi}{4}$" as a symbolic noun representing a number even if you don't know how to calculate it.
• "$2+1$" and "$\sin^2\frac{\pi}{4}$" are said to be encapsulated computations.
  • The word "encapsulated" refers to the fact that to understand what the expressions mean, you must think of the computation not as a process but as an object.
  • Note that a computer program is also an object, not a process.
• "$a+1$" and "$\sin^2\frac{\pi x}{4}$" are encapsulated computations containing variables that represent numbers. In these cases you can calculate the value of these computations if you give values to the variables.

A symbolic statement is a symbolic expression that represents a statement that is either true or false or free, meaning that it contains variables and is true or false depending on the values assigned to the variables.
• $\pi\gt0$ is a symbolic assertion that is true.
• $\pi\lt0$ is a symbolic assertion that is false.  The fact that it is false does not stop it from being a symbolic assertion.
• $x^2-5x+4\gt0$ is an assertion that is true for $x=5$ and false for $x=1$.
• $x^2-5x+4=0$ is an assertion that is true for $x=1$ and $x=4$ and false for all other numbers $x$.
• $x^2+2x+1=(x+1)^2$ is an assertion that is true for all numbers $x$.

### Properties of the symbolic language

The constituents of a symbolic expression are symbols for numbers, variables and other mathematical objects. In a particular expression, the symbols are arranged according to conventions that must be understood by the reader. These conventions form the syntax or grammar of symbolic expressions.

The symbolic language has been invented piecemeal by mathematicians over the past several centuries. It is thus a natural language, and like all natural languages it has irregularities and often results in ambiguous expressions. It is therefore difficult to learn and requires much practice to learn to use it well. Students learn the grammar in school and are often expected to understand it by osmosis instead of by being taught specifically.  However, it is not as difficult to learn well as a foreign language is.
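The free assertions and the identity listed earlier in this post can be checked mechanically; here is a small illustration (my addition, using the sympy library, which the posts do not mention):

```python
import sympy as sp

x = sp.symbols('x')

# Free assertions: true or false depending on the value of x.
print(sp.solve(sp.Eq(x**2 - 5*x + 4, 0)))       # [1, 4]
print((x**2 - 5*x + 4 > 0).subs(x, 5))          # True
print((x**2 - 5*x + 4 > 0).subs(x, 1))          # False

# An identity: true for every x, so the difference simplifies to 0.
print(sp.simplify((x + 1)**2 - (x**2 + 2*x + 1)))  # 0
```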
In the basic symbolic language, expressions are written as strings of symbols.
• The symbolic language gives (sometimes ambiguous) meaning to symbols placed above or below the line of symbols, so the strings are in some sense more than one-dimensional but less than two-dimensional.
• Integral notation, limit notation, and others are two-dimensional enough to have two or three levels of symbols.
• Matrices are fully two-dimensional symbols, and so are commutative diagrams.
• I will not consider graphs (in both senses) and geometric drawings in this post because I am not sure what I want to write about them.

## Syntax of the language

One of the basic methods of the symbolic language is the use of constructors.  These can usually be analyzed as functions or operators, but I am thinking of "constructor" as a linguistic device for producing an expression denoting a mathematical object or assertion. Ordinary languages have constructors, too; for example "-ness" makes a noun out of an adjective ("good" to "goodness") and "and" forms a grouping ("men and women").

### Special symbols

The language uses special symbols both as names of specific objects and as constructors.
• The digits "0", "1", "2" are named by special symbols.  So are some other objects: "$\emptyset$", "$\infty$".
• Certain verbs are represented by special symbols: "$=$", "$\lt$", "$\in$", "$\subseteq$".
• Some constructors are infixes: "$2+3$" denotes the sum of 2 and 3 and "$2-3$" denotes the difference between them.
• Others are placed before, after, above or even below the name of an object.  Examples: $a'$, which can mean the derivative of $a$ or the name of another variable; $n!$ denotes $n$ factorial; $a^\star$ is the dual of $a$ in some contexts; $\vec{v}$ constructs a vector whose name is "$v$".
• Letters from other alphabets may be used as names of objects, either defined in the context of a particular article, or with more nearly global meaning such as "$\pi$" (but "$\pi$" can denote a projection, too).

This is a lot of stuff for students to learn. Each symbol has its own rules of use (where you put it, which sort of expression you may use it with, etc.).  And the meaning is often determined by context. For example $\pi x$ usually means $\pi$ multiplied by $x$, but in some books it can mean the function $\pi$ evaluated at $x$. (But this is a remark about semantics — more in another post.)

### "Systematic" notation

• The form "$f(x)$" is systematically used to denote the value of a function $f$ at the input $x$.  But this usage has variations that confuse beginning students:
  • "$\sin\,x$" is more common than "$\sin(x)$".
  • When the function has just been named as a letter, "$f(x)$" is more common than "$fx$", but many authors do use the latter.
• Raising a symbol after another symbol commonly denotes exponentiation: "$x^2$" denotes $x$ times $x$.  But it is used in a different meaning in the case of tensors (and elsewhere).
• Lowering a symbol after another symbol, as in "$x_i$", may denote an item in a sequence.  But "$f_x$" is more likely to denote a partial derivative.
• The integral notation is quite complicated.  The expression \[\int_a^b f(x)\,dx\] has three parameters, $a$, $b$ and $f$, and a bound variable $x$ that specifies the variable used in the formula for $f$.  Students gradually learn the significance of these facts as they work with integrals.

### Variables

Variables have deep problems concerned with their meaning (semantics).
But substitution for variables causes syntactic problems that students have difficulty with as well.
• Substituting $4$ for $x$ in the expression $3+x$ results in $3+4$.
• Substituting $4$ for $x$ in the expression $3x$ results in $12$, not $34$.
• Substituting "$y+z$" for $x$ in the expression $3x$ results in $3(y+z)$, not $3y+z$.  Some of my calculus students, in performing this substitution, would write $3\,\,y+z$, using a space to separate.  The rules don't allow that, but I think it is a perfectly natural mistake.

### Using expressions and writing about them

• If I write "If $x$ is an odd integer, then $3+x$ is odd", then I am using $3+x$ in a sentence. It is a noun denoting an unspecified number which can be constructed in a specified way.
• When I mention substituting $4$ for $x$ in "$3+x$", I am talking about the expression $3+x$.  I am not writing about a number, I am writing about a string of symbols.  This distinction causes students major difficulties, and teachers hardly ever talk about it.
• In the section on variables, I wrote "the expression $3+x$", which shows more explicitly that I am talking about it as an expression.
  • Note that quotes in novels don't mean you are talking about the expression inside the quotes; they mean you are describing the act of a person saying something.
• It is very common to write something like, "If I substitute $4$ for $x$ in $3x$ I get $3 \times 4=12$".  This is called a parenthetic assertion, and it is literally nonsense (it says I get an equation).
• If I pronounce the sentence "We know that $x\gt0$", I pronounce "$x\gt0$" as "$x$ is greater than zero".  If I pronounce the sentence "For any $x\gt0$ there is $y\gt0$ for which $x\gt y$", then I pronounce the expression "$x\gt0$" as "$x$ greater than zero".  This is an example of context-sensitive pronunciation.
• There is a lot more about parenthetic assertions and context-sensitive pronunciation in More about the languages of math.

## Conclusion

I have described some aspects of the syntax of the symbolic language of math. Learning that syntax is difficult and requires a lot of practice. Students who manage to learn the syntax and semantics can go on to learn further math, but students who don't are forever blocked from many rewarding careers. I heard someone say at the MathFest in Madison that about 25% of all high school students never really understand algebra.  I have only taught college students, but some students (maybe 5%) who get into freshman calculus in college are weak enough in algebra that they cannot continue.

I am not proposing that all aspects of the syntax (or semantics) be taught explicitly.  A lot must be learned by doing algebra, where students pick up the syntax subconsciously just as they pick up lots of other behavior-information in and out of school. But teachers should explicitly understand the structure of algebra at least in some basic way so that they can be aware of the source of many of the students' problems.

It is likely that the widespread use of computers will allow some parts of the symbolic language of math to be replaced by other methods such as using Excel or some visual manipulation of operations as suggested in my post Mathematical and linguistic ability.  It is also likely that the symbolic language will gradually be improved to get rid of ambiguities and irregularities.  But a deliberate top-down effort to simplify notation will not succeed. Such things rarely succeed.

## References

• Communicating in the language of mathematics, in IAE-Pedia.
• Handbook of mathematical discourse, by Charles Wells.  (Also available online).
• The language of mathematics, by Warren Esty.
• Mathematical discourse: Language, Symbolism and Visual Images, by Kay O'Halloran.
• Mathematical and linguistic ability (previous post)
• Pages from abstractmath.org
http://physics.stackexchange.com/questions/23198/a-question-regarding-particle-trajectories-in-the-symplectic-manifold-formalism
# A question regarding particle trajectories in the symplectic manifold formalism

How to solve a free particle on a 2-sphere using the symplectic manifold formalism of classical mechanics? Is there a way to get the Coriolis effect directly, without going into Newtonian mechanics? And is there a good textbook dealing with classical mechanics problems using symplectic manifolds?
-
1 Concerning textbooks, see e.g. this and this question. – Qmechanic♦ Apr 4 '12 at 11:13

## 1 Answer

In this special case, the symplectic manifold is the cotangent bundle of the sphere. A cotangent bundle is always a symplectic manifold which can be interpreted as the phase space for a particle wandering around the manifold. The position of the particle is the location on the original manifold, while the momentum of the particle is (a scalar multiple of) the cotangent vector. This defines the phase space for particle motion.

You might wonder why a cotangent vector and not a tangent vector. The reason is that it is the momentum dot the velocity which is an invariant (the change in phase with time in quantum mechanics, the increment of action in time classically). If this is not intuitive, no worries, just keep reading; it is made mathematically apparent below.

To define a symplectic manifold you need a symplectic form, which is an object that takes the gradient of a scalar Hamiltonian function into a phase-space vector (a vector in the tangent bundle of the phase space, the tangent bundle of the cotangent bundle) which tells you how things move around in the phase space in response to the Hamiltonian. In this case, the symplectic form takes a gradient of a scalar energy function on the cotangent bundle (not on the sphere) into a vector field on the cotangent bundle (not on the sphere). This is completely intuitive--- the Hamiltonian is a function of the position and momenta both.

Let the sphere point be coordinatized by $s^i$, and the cotangent vector by $p_i$. Given a Hamiltonian function $H(s,p)$, Hamilton's equations are
$${ds^i\over dt} = {\partial H\over \partial p_i}$$
$${Dp_i\over Dt} = -{\partial H \over \partial s^i}$$
This is a covariant equation--- the derivative of the position is a vector, and the covariant derivative of the covector is a covector, so the left and right hand sides are consistent tensorially. This can be seen from the consistent pattern of upper and lower indices. From this, you get a mathematical justification of why the cotangent bundle--- the derivative with respect to $p$ must give a vector, in other words, something that dots with a covector to make a scalar. A gradient with respect to a covector dots with a covector to make a scalar (this is just a mathematician's way of saying the indices are in the right place). The reason there is a covariant derivative on $p$ and not $s$ is because the cotangent bundle is flat in the cotangent directions (these are a vector space). The cotangent bundle is only non-flat along the sphere.

If $H={1\over 2m} p_i p_j \delta^{ij}$, the result is the geodesic equation:
$${D\over Dt} {ds^i\over dt} = 0$$
along the sphere with velocity ${p^i\over m}$, and you can see this is a great circle with constant speed by symmetry, or formally by solving the equation after choosing the particle to initially lie on the equator, with its velocity all in the $\phi$ direction in standard spherical coordinates (or any way you prefer). This problem is a bit too trivial.
If you want a case where the cotangent bundle motion is interesting, you should choose a potential function on the sphere, or consider the motion in the group manifold SO(3), whose solution is the spinning top. -
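To see the geodesic motion emerge concretely, here is a minimal numerical sketch (my addition, not part of the answer). It assumes the standard chart $(\theta,\phi)$ on the unit sphere, where the free Hamiltonian reads $H = \frac{1}{2m}\left(p_\theta^2 + p_\phi^2/\sin^2\theta\right)$, and integrates Hamilton's equations with RK4:

```python
import numpy as np

# Free particle on the unit 2-sphere in spherical coordinates (theta, phi):
# H = (p_theta^2 + p_phi^2 / sin^2(theta)) / (2m)
m = 1.0

def rhs(state):
    theta, phi, p_theta, p_phi = state
    s, c = np.sin(theta), np.cos(theta)
    dtheta = p_theta / m                      # dq/dt =  dH/dp
    dphi = p_phi / (m * s**2)
    dp_theta = p_phi**2 * c / (m * s**3)      # dp/dt = -dH/dq
    dp_phi = 0.0                              # phi is cyclic: p_phi is conserved
    return np.array([dtheta, dphi, dp_theta, dp_phi])

def rk4_step(state, h):
    k1 = rhs(state)
    k2 = rhs(state + 0.5 * h * k1)
    k3 = rhs(state + 0.5 * h * k2)
    k4 = rhs(state + h * k3)
    return state + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Start on the equator with all momentum in the phi direction.
state = np.array([np.pi / 2, 0.0, 0.0, 1.0])
for _ in range(1000):
    state = rk4_step(state, 0.01)

print(state[0])  # stays at pi/2: the orbit is the equatorial great circle
```

Starting on the equator with purely azimuthal momentum, $\theta$ stays at $\pi/2$, which is exactly the great-circle motion the symmetry argument predicts.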
http://stats.stackexchange.com/questions/35105/manifold-regularization-using-laplacian-graph-in-svm
# Manifold regularization using the graph Laplacian in SVM

I'm trying to implement Manifold Regularization in Support Vector Machines (SVMs) in Matlab. I'm following the instructions in the paper by Belkin et al. (2006), which contains the equation:
$f^{*} = \operatorname{argmin}_{f \in H_k}\sum_{i=1}^{l}V(x_i,y_i,f)+\gamma_{A}\| f \|_{A}^{2}+\gamma_{I}\| f \|_{I}^{2}$
where $V$ is some loss function, $\gamma_A$ is the weight of the norm of the function in the RKHS (or ambient norm), which enforces a smoothness condition on the possible solutions, and $\gamma_I$ is the weight of the norm of the function in the low dimensional manifold (or intrinsic norm), which enforces smoothness along the sampled manifold $M$. The ambient regularizer makes the problem well-posed, and its presence can be really helpful from a practical point of view when the manifold assumption holds to a lesser degree.

It has been shown in Belkin et al. (2006) that $f^*$ admits an expansion in terms of the $n$ points of $S$,
$f^*(x)=\sum_{i=1}^{n}\alpha_i^*k(x_i,x)$
The decision function that discriminates between class +1 and -1 is $y(x)=\operatorname{sign}(f^*(x))$.

The problem here is that I'm trying to train an SVM using LIBSVM in MATLAB, but I don't want to modify the original code, so I have found the precomputed version of LIBSVM which, instead of taking input data and output groups as parameters, takes the computed kernel matrix and the output groups and trains the SVM model. I'm trying to feed it the regularized kernel matrix (Gram matrix) and let it do the rest. I tried to find the formula which regularizes the kernel and came to this: defining $I$ as the identity matrix with the same dimension as the kernel matrix $K$,
$G=(2\gamma_A I + 2\gamma_I L K)^{-1}$
$Gram = KG$
in which $L$ is the graph Laplacian matrix, $K$ is the kernel matrix and $I$ is the identity matrix. And $Gram$ is computed by matrix multiplication of the two matrices $K$ and $G$.

Is there anyone who can help me figure out how this is computed?
-
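Not an answer from the thread, but a sketch of how the pieces could be assembled in code. I am reading the question's original fraction over $I$ as MATLAB-style division by the identity, i.e. a matrix inverse, so $G=(2\gamma_A I + 2\gamma_I L K)^{-1}$ and $Gram = KG$. The RBF kernel, the k-NN graph construction, and all parameter values below are illustrative assumptions, not prescribed by the paper:

```python
import numpy as np
from scipy.spatial.distance import cdist

def rbf_kernel(X, sigma=1.0):
    # K[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2))
    D2 = cdist(X, X, 'sqeuclidean')
    return np.exp(-D2 / (2 * sigma**2))

def graph_laplacian(K, k=5):
    # Unnormalized Laplacian L = D - W of a symmetrized k-NN graph,
    # using the kernel values as edge weights.
    n = K.shape[0]
    W = np.zeros_like(K)
    for i in range(n):
        idx = np.argsort(-K[i])[1:k + 1]   # k nearest neighbours (skip self)
        W[i, idx] = K[i, idx]
    W = np.maximum(W, W.T)                 # symmetrize
    return np.diag(W.sum(axis=1)) - W

X = np.random.rand(50, 3)                  # toy data
K = rbf_kernel(X)
L = graph_laplacian(K)
gammaA, gammaI = 1.0, 0.1

n = K.shape[0]
G = np.linalg.inv(2 * gammaA * np.eye(n) + 2 * gammaI * L @ K)
Gram = K @ G                               # feed this to a precomputed-kernel SVM
```

The resulting `Gram` matrix is what would be passed to LIBSVM's precomputed-kernel mode; whether this exactly matches the optimization in Belkin et al. (2006) should be checked against their dual derivation.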
http://physics.stackexchange.com/questions/18823/how-do-we-visualise-antenna-reception-of-individua-radiowave-photons-building-up/18825
# How do we visualise antenna reception of individual radiowave photons building up to a resonant AC current on the antenna?

I am a chemical/biological scientist by trade and wish to understand how quantum EM phenomena translate to our more recognizable classical world. In particular I want to get a mechanistic picture of what is going on when a tuned antenna is interacting with a photon of the desired frequency. I believe an individual electron on the antenna (many electrons) accepts a photon, but how does the eventual process of a measurable AC current build up on the dipole (or 1/4 wavelength for example) to be fed with no reactance onto the transmission line? Is there a good text that discusses quantum EM with classical antenna theory? "When photon meets antenna" is a great meeting ground for a quantum/classical bridge. Unfortunately I do not have a serious maths background but will try anything suggested. I have read and listened to many of Feynman's popular quantum discussions, which only increases my thirst for a better understanding of how quantum EM translates to our more visible world.
-

## 6 Answers

Here is an experimentalist's view of the question:
1) One photon hits the antenna and raises a molecular electron band to a higher energy level, and it will fall back to its lower one, with the characteristic electromagnetic transition time of the order of $10^{-16}$ sec, giving the energy to the antenna grid of molecules. One photon will just disappear.
2) A stream of photons that carry a signal means: a) that there is enough amplitude, b) there is coherence between photons: photons carry spin and thus polarization, and in order to carry a signal the phases between all photons must be fixed and be coherent in time and space. Coherent means that there are fixed phases in the whole bunch. When such a bunch of photons hits an antenna the coherence will be transferred to the individual photon absorptions and de-excitations by conservation of spin, building up a corresponding electromagnetic wave on the molecular Fermi conduction level which can be detected further as a signal.
3) It is simpler for such problems to use the classical EM picture.
-
Thank you Anna. – user6869 Dec 28 '11 at 7:50
1. "The antenna grid molecules" I presume means the outer atoms on the antenna. 2. "there is enough amplitude" I presume means a sufficient number of coherent photons to get to a detectable current. 3. I did not know that classical EM explains the first steps in photon electron interaction. I am trying to feel my way from photon interaction towards measurable current as described by classical EM. 4. I will need to read more about Fermi conduction and try to picture how measurable AC voltage and current builds. Have you any suggestions? Thanks again Anna. – user6869 Dec 28 '11 at 8:46
As to point 3, what Anna may have meant is that there is really no reason to use quantum theory in practice because the EM waves that interact with antennas are large enough that quantum effects are completely negligible. – David Zaslavsky♦ Dec 28 '11 at 8:57
@DavidZaslavsky right David. Large enough in number of photons to act as an aggregate whose limiting behavior is the classical EM solutions. – anna v Dec 28 '11 at 12:17
@user6869 By grid I meant the crystal lattice, except an antenna is not an organized crystal but does have a structure. This is to contrast with energy levels on atoms and molecules which have much higher frequencies than the incoming RF. The collective modes of the solid state pick up the energy.
Yes, amplitude means enough photons to carry the signal. Classical EM is the macroscopic statistical manifestation of the many photon state at the quantum level. – anna v Dec 28 '11 at 12:29

Understanding antenna physics in terms of photons is not trivial, because the quantum statistics of the photons means that they do not interact as separate distinguishable incoherent particles, at least not when they form a classical electromagnetic wave. They flow together, bunching up into a coherent slowly varying field state which carries them from their source to their destination in a way more analogous to a fluid flow.

To make an imperfect analogy, suppose you have a bucket of liquid helium and you punch a little hole at the bottom. You can model the phenomenon by He atoms randomly knocking about and finding the hole and escaping, but this model will fail to predict the flow rate or the emptying time, except in the wrong limiting case of an extremely dilute gas of atoms. The flow in the liquid He is determined by a profile of the classical Schrodinger field, which sets up a gradient for the mass flow along streamlines that escape through the hole.

The process with photons is only very roughly analogous, because the He atoms repel each other strongly, making an interacting quantum fluid, while the photons are non-interacting, making a Bose-Einstein condensate. But the Bose statistics are the same.

When you have an antenna interacting with a classical EM field, the motion of the charges sets up a Poynting flow which directs the field energy into the antenna, when you superpose the reradiated field from the antenna with the incoming field from the far source. This superposition acts as a guide for the photons, sucking them into the antenna. The classical field picture applies, and the photon picture is in the large number fully coherent zero temperature limit, where it reproduces the field picture.

### Photon picture reproduces classical fields

The photon is never a nonrelativistic particle, because it is massless. The propagation of a photon is then never strictly forward in time, and there is no productive identification between a photon wavefunction and a classical electromagnetic field. But in a space-time picture with classical current and charge sources, there is an identification of the probability amplitude of finding the 4 dimensionally propagating photon at a certain point with the vector potential set up by the sources. This identification is four dimensional, meaning the photon can zig-zag in time, and the amplitude is only for quantum propagation along the world-line of the photon, which is not directly observable, since we only see superpositions of all incoming proper times. This is the Schwinger-Feynman picture of relativistic particles, which applies to all quantum field theories.
The Lagrangian is
$$S=\int {1\over 4} F^2 + J\cdot A$$
and the path integral in Feynman gauge gives the vacuum persistence amplitude (the quantum partition function) in the presence of $J$:
$$Z[J] = \int e^{iS} \propto e^{\int J^{\mu}(x) G_{\mu\nu}(x-y) J^{\nu}(y) dx dy}$$
where $G(x-y)$ is the photon propagator in Feynman gauge, which, in x-space, is
$$G_{\mu\nu}(x) = {1\over 2\pi^2} {ig_{\mu\nu}\over x^2}$$
up to an $i\epsilon$ prescription along the light-cone which resolves the singularity of photon propagation along $x^2=0$ (this formula is often written with the delta-function singularities separated out, leaving a principal value for the part which is $1/x^2$, but I don't like this convention too much because both parts come from the same expression, which is just the 4d solution to Laplace's equation).

The $Z[J]$ functional tells you what all the particle propagation properties are, since it describes how a $J$ source produces an $A$ particle (photon) and then reabsorbs the photon at a different location. The actual photon propagator can only be seen to be a particle propagation in the relativistic picture in a full 4-dimensional form. In Euclidean space, ignoring the $g_{\mu\nu}$ polarization factor (which is somewhat nontrivial, because the time component has the wrong sign, but irrelevant for the discussion here, which is about the propagation),
$$G(k) = {1\over k^2+i\epsilon} = \int_0^\infty d\tau e^{-\tau (k^2+i\epsilon)}$$
This is Schwinger's proper time representation of the Feynman propagator, central to the modern point of view. The function $G(k)$ has an immediate probability interpretation as a probabilistic superposition over all intermediate proper times of a spreading Gaussian (a shrinking Gaussian in k-space which is equal to 1 at the origin is a spreading Gaussian in x-space with a unit integral, a spreading probability distribution). This spreading Gaussian probability process is a random walk of a point particle, and it equivalently describes the Euclidean propagator in a point-particle picture.

The analytic continuation to real time can be gotten by analytically continuing $G(x)$, which is standard, and also by analytically continuing $\tau$, which is less commonly presented (but still in the literature, usually in introductory string theory texts as a warm up for the string). The result of continuing in $\tau$ produces a $\tau$ quantum propagation which makes a freely four-dimensionally propagating point-particle with quantum amplitudes to get from one point to another which, when summed over all intermediate times, reproduces the Feynman propagator of the free field. This is best understood as abstractly as possible, from the equivalence of stochastic processes in imaginary time to quantum amplitudes, and this connection is quickly reviewed here: Correct application of Laplacian Operator (the question is involved and intimidating looking, but irrelevant for this discussion; I am just using the relation of quantum mechanics in real time to stochastic evolution in imaginary time, which is the general principle explained there).

This back-and-forth between particle picture and field picture is well known since Schwinger's era, but it is not often presented nowadays, perhaps because the picture is so acausal, involving sums over four-dimensional paths for particles which zig-zag in time.
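To make the "spreading Gaussian" statement concrete, here is the standard computation behind it (my addition, in Euclidean signature with $d$ dimensions): Fourier transforming the proper-time integrand gives the heat kernel,
$$\int \frac{d^dk}{(2\pi)^d}\, e^{ik\cdot x}\, e^{-\tau k^2} = \frac{1}{(4\pi\tau)^{d/2}}\, e^{-x^2/4\tau},$$
a normalized Gaussian whose width grows like $\sqrt{\tau}$, exactly the transition density of a random walk. Integrating over $\tau\in(0,\infty)$ then reproduces the Euclidean $G(x)$, which is the superposition over intermediate proper times described above.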
### Particle view of antenna

In the case of an antenna, the classical solution $A(J)$ in the Feynman gauge gives an alternate expression for the path integral:
$$Z[J] = e^{i\int A[J]\cdot J}$$
In other words, the entire photon partition function is determined by knowing the classical field in response to the source $J$. This determines both the amplitude for photons to go from source to source (during their 4-d acausal propagation), and all the correlation functions of the field (by infinitesimally varying $J$ at different points). Since everything is determined by the classical field, you might as well solve the classical equations to find the behavior of the field in response to $J$. This is because the photon field is free. The manipulations here, although formally trivial, are the content of the equivalence between the modern photon and the classical field.

### Antenna emission/absorption

Now consider an actual antenna responding to a far away source. In the classical picture, in order to know that energy is flowing into the antenna and not out, you need to know that the current distribution is produced in response to the field (in a causal field picture). The energy flowing out of or into the antenna is determined by the interaction Lagrangian, once you have dynamics for the degrees of freedom of the antenna:
$$L_i = \int J(x) A(x)$$
The interaction Lagrangian is the covariant generalization of $\int \rho(x) \phi(x)$ for the electrostatic source terms. It cannot be written in terms of E,B fields; only the vector potential is a local Lagrangian variable. The interaction Lagrangian can be computed from the classical field produced by the source, and it also has a direct interpretation as photon absorption/emission, from the Schwinger proper time formulation of the Feynman propagator. So the photon picture and the classical picture are equivalent for these types of problems.

The coincidence of classical absorption/emission and photon absorption/emission can be extended to single photons interacting with atoms, which leads some people to speculate that photons are not necessary. This is only true if you integrate out the photon field consistently, giving a nonlocal action to matter. If you keep a local action, the photons are still required to represent intermediate field states. The coincidence of classical and quantum behavior is a special mathematical property restricted to Gaussian path integrals, discovered by Feynman, who uses the semiclassical approach to derive the rules of QED in his 1950s book "Quantum electrodynamics". This does not imply that photons are not physical, since you could integrate out electrons the same way.
-
You cannot understand how a radio antenna works by counting the number of photons that strike the copper wire. This number is many orders of magnitude too small to account for the actual power absorbed by an antenna. An antenna would not work if it depended on physically intercepting photons. I explain all this in my blog post "The Crystal Radio". In fact it is much more useful to analyze atoms in terms of antenna theory than to analyze antennas in terms of atomic theory. I elaborate on this question in my follow-up blog article, "How Atoms are Tiny Antennas".
-
Any antenna (and a receiving system) is at some temperature $T$ that determines its intrinsic noise of generated signals. One more photon may be too few to get a distinguishable signal in such conditions. You need a certain coherent flux of photons exceeding the system threshold to be noticeable for sure.
-
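A quick order-of-magnitude check of this point (my numbers; a 100 MHz signal and a room-temperature receiver are assumed, not taken from the answer):

```python
# Energy of one radio photon versus the thermal noise scale kT.
h = 6.626e-34   # Planck constant, J s
k = 1.381e-23   # Boltzmann constant, J/K

f = 100e6       # 100 MHz radio signal
T = 300.0       # room temperature, K

E_photon = h * f
E_thermal = k * T
print(E_photon)              # ~6.6e-26 J
print(E_thermal)             # ~4.1e-21 J
print(E_thermal / E_photon)  # ~6e4: one photon is far below the noise floor
```

A single 100 MHz photon thus carries nearly five orders of magnitude less energy than the thermal scale $kT$, which is why a detectable signal needs a large coherent flux of photons.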
Let us put an antenna in a box. Then we send a photon into the box. Now the antenna in the box has absorbed a photon, and has not absorbed a photon, and has absorbed a photon and emitted a photon. When a conscious observer observes the antenna, the antenna collapses into one of the three states.

An alternative answer: The wave function of an antenna that has not absorbed a photon evolves smoothly towards a wave function of an antenna that has absorbed a photon, when the wave function of a photon passes the antenna. The amplitude of the photon wave function decreases at the point where it meets the antenna. When observed, these wave functions jump abruptly into some state.

Let us consider an antenna made of an ideal conductor and a continuous EM-wave. Classically this antenna Thomson scatters the EM-wave. Quantum mechanically radio photons are absorbed into the antenna and emitted after a time proportional to the frequency.
-
You need to be a little cautious about taking ideas like "the photon" too literally. In physics all our theories are models, that is they are approximations to the real world (whatever "real" means!). Treating light as a stream of photons is a model that works well in some circumstances, like the photo-electric effect, but isn't a useful description in other circumstances. Treating light as a wave is another model and it too works well in some circumstances, like the double slit experiment, but it isn't a good description all the time either. Anyhow, the point of all this is that you wouldn't try and explain the double slit experiment by considering single photons and you wouldn't try and explain radio wave reception by an aerial by considering single photons. You could do, and Anna's description is about the best you can do if you insist on a photon description, but generally you make your life as a physicist a lot easier if you choose the model most appropriate to the system you're looking at.
-
-1: "The photon" is as real as a brick. It is not a model of anything, it's just the way things are. – Ron Maimon Dec 29 '11 at 18:50
I'm afraid I completely disagree. The photon is a concept invented by scientists as a partial description of the electromagnetic field. Of course even QED is only a partial description of the electromagnetic field, and likewise the Standard Model and (probably) String Theory. How real any of these are is debatable, though the question is probably best left to the philosophers. – John Rennie Dec 30 '11 at 13:04
You can see photons scintillate a screen, you can hear them click a photomultiplier, they are as real as an electron. This debate is well over, and there is no modeling going on. The fact that QED is a partial description is no more apropos than the fact that Newton's laws only partially describe a brick. Photons are as real as bricks. You see them (literally) and touch them. – Ron Maimon Dec 30 '11 at 13:14
If you set up a double slit experiment with a CCD as the screen then treating light as photons is an excellent description of what happens when they interact with the CCD. However it's a poor description of what happens when a single photon passes through the slits. I suspect we're not really arguing since I'm not denying that photons can be an excellent description of what happens. I’m just saying that it isn’t a useful way to treat some cases, e.g. radio waves interacting with an antenna.
– John Rennie Dec 31 '11 at 10:07
Ok, I might have been touchy on this, because there is another user who denies photons, and claims that electromagnetic effects can be described semiclassically (unquantized EM field interacting with quantum atoms). I think the reason people get the idea that photons don't work in the field limit is because the photons are never nonrelativistic. But if you replace photon with "metal atoms" and EM field with "Bose Einstein condensate", the map between the two is the same, regarding double slit interference. Would you say that rubidium atoms are just a model, or are they real? – Ron Maimon Dec 31 '11 at 11:28
http://mathoverflow.net/questions/22686/quotient-of-a-category-by-a-group-action
## Quotient of a category by a group action

Let a group $G$ act on a small category $C$. If $G$ acts freely on objects, there is a sensible construction of the quotient $C/G$ (this is briefly spelt out here)
-
2 The appropriate 2-categorical construction is to replace $C$ by $C \times EG$ where $EG$ is the codiscrete category ($\mathrm{Hom}(x, y) = *$ for all $x$ and $y$) with object set $G$ and $G$ acting by left translation, and forming the quotient $(C \times EG)/G$ as described there. But, I guess your question is about the 1-categorical colimit? – Reid Barton Apr 27 2010 at 6:21
Indeed. Do you have a reference for the 2-categorical construction, anyways? – Mariano Suárez-Alvarez Apr 27 2010 at 6:31

## 2 Answers

From the view point of representations of finite dimensional algebras, there is a construction called the "orbit category" or "skew category" construction. The construction appears, for example, in a paper by Cibils and Marcos, a paper by Keller, and a paper by Asashiba. If you browse these papers, you will notice that their construction is a version of the Grothendieck construction. Here we regard an action of a group $G$ on a category $C$ as a functor $$G \longrightarrow Cats.$$ So it's related to Reid's comment.
-
Thomason's thesis says that for any diagram $F:D\to Cat$, the Grothendieck wreath product of $D$ by $F$ (or $F$ by $D$?) is a model for the homotopy colimit, in the sense that its geometric realization is the homotopy colimit of the geometric realization of the diagram. So I guess that's what's going on here (hence the relationship with Reid's comment). – Dan Ramras Apr 27 2010 at 18:31
Yes. That’s exactly what I wanted to say in the last sentence. – Dai Tamaki Apr 27 2010 at 19:42

Here is a particular special case that seems to be useful to me: Let $F: D \to G$-Sets be a diagram of $G$-sets, indexed by some small category $D$. Then ignoring the $G$-actions, one can form the Grothendieck wreath product $D\wr F$. Objects are pairs $(d, x)$ with $x\in F(d)$, and morphisms $(d,x)\to (d', x')$ are arrows $a:d\to d'$ such that $F(a)(x) = x'$. This category inherits an action of $G$ from the actions on the sets in the diagram; $g\cdot (d,x) = (d, gx)$ (on morphisms, $g$ looks like the identity: $g\cdot (a:(d,x)\to(d',x')) = a:(d,gx)\to(d',gx')$), which makes sense since $F(a)$ is $G$-equivariant.

There is another diagram $F/G : D\to$ Sets, which takes $d\in D$ to $F(d)/G$, and there's a natural functor $D\wr F \to D\wr (F/G)$. I claim that this functor satisfies the universal property of the colimit, i.e. $(D\wr F)/G = D\wr (F/G)$.

This special case has the interesting property that the nerve of $D\wr (F/G)$ is precisely $N_\cdot (D\wr F)/G$. I don't think that will hold for arbitrary group actions on categories.
-
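As a quick sanity check of the identity $(D\wr F)/G = D\wr (F/G)$ (my example, not from the thread): take $D$ to be the one-object, one-morphism category and $F(\bullet)=G$ with the left translation action. Then $D\wr F$ is the discrete category on the set $G$ (the only arrow $a$ is the identity, and $F(a)(x)=x'$ forces $x=x'$), so the free $G$-action gives $(D\wr F)/G$ a single object; on the other side, $F/G$ sends $\bullet$ to the one-point set $G/G$, so $D\wr(F/G)$ is also the terminal category, and the two sides agree.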
http://mathoverflow.net/questions/86800/heegaard-splitting-of-covering-hyperbolic-manifold
## Heegaard splitting of covering hyperbolic manifold

I am curious about how the Heegaard genus changes after a finite covering. Has anyone constructed a closed hyperbolic 3-manifold $N$ such that the Heegaard genus of a finite covering of $N$ is less than the Heegaard genus of $N$? Thank you!
Note: Heegaard genus of a 3-manifold means the minimal genus of all Heegaard splittings.
-

## 2 Answers

There are examples like this. Check out section 4.5 of Shalen's paper "Hyperbolic volume, Heegaard genus and ranks of groups." It's here: http://arxiv.org/abs/0904.0191
He gives a reference for a genus 3 example by Alan Reid and a sketch of a technique for producing examples by Hyam Rubinstein. Shalen also conjectures that the genus can drop by at most 1 in a finite cover of a closed hyperbolic 3-manifold.
-
I think there's a variation on Hyam's construction, where you can take a non-orientable manifold with a non-orientable Heegaard splitting, whose 2-fold orientation cover has an orientable splitting of smaller genus than a Heegaard splitting downstairs. – Agol Jan 27 2012 at 21:46
thanks for both of your answers – yanqing Jan 28 2012 at 0:44

Hyam Rubinstein and I have results about the behavior of the Heegaard genus under double covers for non-Haken manifolds, see http://arxiv.org/abs/math/0607145. Essentially, we show that the Heegaard genera of the two manifolds bound each other linearly. (The statement is a little more complicated for branched covers.)
-
http://physics.stackexchange.com/questions/10283/is-there-a-size-cutoff-to-quantum-behaviour/10290
# Is there a "Size" Cutoff to Quantum Behaviour?

We all know that subatomic particles exhibit quantum behavior. I was wondering if there's a cutoff in size where we stop exhibiting such behavior. From what I have read, it seems to me that we still see quantum effects up to the nanometer level.
-

## 3 Answers

The classic experiment demonstrating quantum effects, the 2 slit experiment, has been performed with successively larger and larger particles as the technology available to do it advanced. Originally, it was performed with electrons, which are just as much matter as any other matter, but are extremely small. The largest particle it has been demonstrated with is Buckminsterfullerene, which contains 60 carbon atoms. For size comparison:
$$m_e = 5.485 \times 10^{-4}\,u$$
$$m_{\text{buckyball}} = 720.642\,u$$
There is a good reason that the experiment gets more difficult with increasing mass, and to be sure, the buckyball experiment was quite an accomplishment. To the basics of quantum mechanics:
$$\Delta x\, \Delta p \ge \frac{\hbar}{2}$$
Alternatively, the de Broglie wavelength is:
$$\lambda = \frac{h}{p} = \frac{h}{m v}$$
I believe that in order to obtain the same wavelength with a larger mass you have to decrease the velocity. The reason this could be problematic for such experiments is that it is hard to successfully create the conditions needed with a large and slower moving particle, such as needing a better vacuum.
-
is there a minimum size of wavelength needed to observe such behavior? Why do we have to keep the wavelength the same? – Tuva May 23 '11 at 18:32
@Hydra Good question, and something I left out as my answer got longer. For the double slit experiment, take $d$ as the separation between slits, $L$ as the distance to the target sheet. Due to some arguments I won't make here (but could), the number of peaks you can see will be in the neighborhood of $\frac{d}{\lambda}$. So some minimum wavelength will be needed to have resolvable peaks, which is a tradeoff with the # of particles you shoot, which is a tradeoff with the purity of the vacuum/system you have. Increasing $d$ also conflicts with those limitations. It's about signal-to-noise. – AlanSE May 23 '11 at 18:47
very interesting. Where can I find how one calculates the number of peaks seen? By resolvable peaks, do you mean having enough "particles" to create such peaks? – Tuva May 23 '11 at 18:58
@Hydra The # of peaks is nearly $\frac{d}{\lambda}$. The number of particles has to do with how long you perform the experiment, but given that you may need a given number of particles/peak (in order to see it at all), you ideally want just a handful of clearly visible peaks. Reading the Wikipedia for the double slit experiment might be the most helpful, although it doesn't address the practical size limit very well. – AlanSE May 23 '11 at 19:13

There is no known cutoff in "size" where systems stop exhibiting "quantum behavior". I put those words in quotes, because "size" can mean different things to different behavior, and you weren't explicit about what you mean by "quantum behavior". Folks have seen double-slit interference with molecules made up of ~60 atoms (I think recent experiments may have increased this number).
If you were to try to do the same thing with a baseball, as far as anyone knows there's no fundamental reason why it wouldn't work, but the experiment is far beyond our technical abilities (it would take a length of time greater than the age of the universe to do the experiment, nobody knows how to isolate the baseball from the environment for that length of time, etc. etc.).

Superconducting rings have been built with diameters of a few centimeters (probably bigger) and still show flux quantization, which is a quantum-mechanical effect due to the electron wavefunction. Those rings are definitely bigger than a few nanometers, and I'd consider it "quantum behavior".

Folks have shown quantum-mechanical entanglement between two light beams that were separated by kilometers (in EPR tests). A kilometer sounds like a big "size" to me, but maybe that's not what you meant?

Take your pick as to what's the biggest "size" here, but as far as anyone knows, there's no size cutoff to quantum mechanics. However, depending on the kind of experiment (2-slit interferometer, etc.) it definitely becomes more and more technically difficult to perform the experiment as the system is scaled up.
-
+1 Hi @Anonymous Coward. Point taken on "size" comment. By "size", what I'm trying to understand is if there's a kind of Reynolds-like number (I realize that it's dimensionless) between what's Newtonian and Quantum. Apologies if this is too vague. – Tuva May 23 '11 at 18:43
Why exactly is the baseball example not technologically feasible? – Tuva May 23 '11 at 18:49
Re: the Reynolds-like cutoff: no, there isn't a known one, as described in the answer. – Anonymous Coward May 23 '11 at 22:06
1 Re: the baseball example, if you want to make a double-slit for a baseball, the slits have to be a few cm wide (otherwise the baseball can't get through). To see interference fringes from slits separated by a few cm, you'll need the de Broglie wavelength to be on the order of a few cm. To get that, the baseball has to be moving incredibly slowly. So slowly that it'll take longer than the age of the universe to go through the slits. Waiting that long isn't technologically feasible. I've glossed over some math, and there are other issues as well, but I'll skip those for now. – Anonymous Coward May 23 '11 at 22:07
Thanks Anonymous Coward! That's helpful. – Tuva May 23 '11 at 22:19

There are definitely circumstances where we see quantum behaviors at rather large scales like, for example, in superconductors. In BCS theory Cooper pairs are described by a macroscopic wave-function which, to my knowledge, is valid for the bulk superconductor no matter how large it is.
-
@C Earnets: Thanks. I'll take a look at BCS Theory. – Tuva May 23 '11 at 18:02
1 – anna v May 23 '11 at 18:32
The BCS field is a classical field in the classical field limit, so it is not really quantum in the appropriate interpretation, it's just a different classical limit than usual for electrons. – Ron Maimon Aug 2 '12 at 3:19
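The mass scaling behind these answers can be made concrete with a short calculation (my sketch; the speeds are illustrative assumptions, e.g. a few hundred m/s is a typical molecular-beam value):

```python
# de Broglie wavelength: lambda = h / (m v)
h = 6.626e-34          # Planck constant, J s
u = 1.661e-27          # atomic mass unit, kg

def de_broglie(m_kg, v):
    return h / (m_kg * v)

m_e = 5.485e-4 * u     # electron
m_bucky = 720.642 * u  # C60 buckyball
m_ball = 0.145         # baseball, kg

print(de_broglie(m_e, 1e6))      # electron at 10^6 m/s: ~7e-10 m
print(de_broglie(m_bucky, 200))  # C60 at 200 m/s:       ~3e-12 m
print(de_broglie(m_ball, 1.0))   # baseball at 1 m/s:    ~5e-33 m
```

Even at a walking pace, the baseball's wavelength is some 30 orders of magnitude below a centimeter, which is why the slit experiment would require absurdly low speeds and correspondingly absurd waiting times.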
http://math.stackexchange.com/questions/108835/would-you-show-how-many-different-orders-in-this-case-is-possible
# Would you show how many different orders in this case is possible?

You want to arrange your disks so that the cd's of the same artist come in a row, and you have 20 cd's from five different artists, four from each. How many different orders are possible in this case?

My solution
Firstly the cd's of each artist can be arranged $4 \cdot 4!=96$ different ways. In addition each artist can be arranged $5!=120$ different ways. Thus by the principle of product in combinatorics the total number of different orders is $96 \cdot 120 = 11520$.

I would like to ask if this is the right result, and whether you would show your own solution.
-

## 1 Answer

Since $4$ cd's can be arranged in $4!$ different ways and $5$ artists can be arranged in $5!$ different ways, I think that the solution is given by:
$5! \cdot (4! \cdot 4! \cdot 4! \cdot 4! \cdot 4! ) = 5! \cdot (4!)^5$ different ways.
-
In my calculations I got $5! \cdot 5 \cdot 4! = 12600$. Are you sure that it is 14400? No I got the same result. It is $120 \cdot 120= 14400$ – alvoutila Feb 13 '12 at 11:09
@alvoutila, $120 \cdot 5 \cdot 24$ – pedja Feb 13 '12 at 11:12
@AndréNicolas, Of course... thanks.. – pedja Feb 13 '12 at 17:12
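The accepted formula can be checked by brute force on a smaller instance (my sketch, not from the thread): with 3 artists and 2 CDs each, the same reasoning predicts $3!\cdot(2!)^3 = 48$ orders.

```python
from itertools import permutations
from math import factorial

artists = {'A': ['a1', 'a2'], 'B': ['b1', 'b2'], 'C': ['c1', 'c2']}
cds = [c for block in artists.values() for c in block]

def blocks_ok(order):
    # Each artist's CDs must occupy consecutive positions.
    for block in artists.values():
        positions = sorted(order.index(c) for c in block)
        if positions[-1] - positions[0] != len(block) - 1:
            return False
    return True

count = sum(blocks_ok(p) for p in permutations(cds))
print(count, factorial(3) * factorial(2)**3)  # both print 48
```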
http://mathoverflow.net/questions/58622?sort=newest
## Fano 3-fold of degree 4

Let $X$ be the intersection of two quadrics in $P^5$. It is well known that the intermediate Jacobian $J(X)$ is isomorphic to $J(C)$ for a genus 2 curve $C$, related to the pencil of quadrics whose base locus is $X$. It then seemed natural to me to ask the following question:
Is there an explicit construction where $X$ is obtained as a smooth blow-up of $P^3$, or of a smooth quadric, or of a $P^2$ bundle over $P^1$, along a curve isomorphic to $C$?
-

## 2 Answers

The projection from a line $L_0$ is a birational isomorphism of $X$ onto $P^3$. It decomposes as the blow-up of the line $L_0$ followed by the contraction of a surface swept by lines intersecting $L_0$ onto a curve of genus 2 in $P^3$.
-
Thank you, this is the construction I was looking for. Can the normal sheaf of the line $L_0 \subset X$ be both $\mathcal{O}\oplus \mathcal{O}$ and $\mathcal{O}(1)\oplus \mathcal{O}(-1)$? – IMeasy Mar 18 2011 at 11:47
This is probably known, but just to draw a line at the end of this post: I think that, at least generically, the normal bundle is $\mathcal{O}\oplus \mathcal{O}$ and it is sent onto the quadric surface $Q$ in $P^3$ in which $X$ is a divisor of type $(2,3)$. – IMeasy Mar 19 2011 at 12:07

The answer is 'no'. By the Lefschetz hyperplane theorem, the second Betti number $b_2$ of $X$ is 1, so in particular the Picard group of $X$ is isomorphic to $\mathbb{Z}$. Since blow-ups and $\mathbb{P}^k$-bundles have Picard number $\ge 2$, it follows that no such description exists.
-
Yes, you're of course right. Anyway my question may still stand in the following terms. There could be a birational model of $X$ (say a blow-up) that dominates both $X$ and one of the varieties I mentioned in my question. – IMeasy Mar 16 2011 at 10:26
http://mathoverflow.net/questions/35420/how-to-compute-the-ring-of-invariants-of-so-3k-acting-on-a-polynomial-ring
## How to compute the ring of invariants of SO_3(k) acting on a polynomial ring

Let $k$ be a field and let $A$ be the polynomial ring over $k$ in $3n$ variables: $A = k[X_{ij} \vert i=1,2,3 \quad j=1,2,\cdots,n]$. ${\rm SO}_3(k)$ acts on $A$ in the following way: Given $g \in {\rm SO}_3(k)$, we define: $$g(X_{ij})=g_{ik}X_{kj}$$ with respect to the summation convention.
Can the ring of invariants of this action be expressed in terms of generators and relations? I get the feeling that this is a standard exercise in invariant theory, but am not sure where to look.
-

## 2 Answers

This is addressed by classical invariant theory, but the answer is more complicated than for general linear or orthogonal groups (in particular, not all minimal generators are quadratic). Let $k$ be a field of characteristic 0. The group $G=SO_m$ acts on $m\times n$ matrices by left multiplication and this induces a $G$-action on $A=k[X_{ij}].$ Let us view the variables as the entries of the $m\times n$ generic matrix over $k.$ Then the algebra of invariants $A^G$ is generated by:
1. Scalar products of the columns of the matrix $X.$
2. Order $m$ minors of the matrix $X.$
This is the First Fundamental Theorem (FFT) of classical invariant theory for $SO_m.$ In fact, the elements of the first type generate $O_m$-invariants and the elements of the second type generate $SL_m$-invariants ($SO_m=O_m\cap SL_m$). Moreover, all relations between these generators are also known (the Second Fundamental Theorem, SFT) and there is a good description of a standard monomial basis of $A^G.$ If I am not mistaken, the last part is due to Lakshmibai and coauthors. A comprehensive modern reference is Lakshmibai and Raghavan, Standard monomial theory. Invariant theoretic approach. Encyclopaedia of Mathematical Sciences, vol 137 (Invariant Theory and Algebraic Transformation Groups VIII), Springer.
-
The theorem remains true unless $\text{char}\, k=2.$ But already for orthogonal groups in char 2, there is a drastic difference (not all invariants are generated by the quadratic ones). – Victor Protsak Aug 13 2010 at 2:42
Thank you. The result I need is the one given in H. Weyl, The Classical Groups II.17 - that the relations between determinants and inner products are generated by relations of two types: J2, J3 (which I believe imply J1). It is not clear to me what the required hypotheses on the ground field $k$ are, either in Weyl or Lakshmibai (L&R). Also, it is not obvious that the relations (i-iii) in Prop 12.3.1.1 of L&R are the same as Weyl's J2, J3. For the rest of my paper I only need that char($k$) is not 2, so it would be ideal if as you suggest the above result held for all fields of char not 2. – tkf Feb 14 2011 at 20:14

H. Weyl, The classical groups, chapter V
-
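A small computational illustration of the FFT statement in the case $m=n=2$ (my sketch, not from the thread): the column scalar products and the order-2 minor of a generic $2\times 2$ matrix are unchanged under a generic rotation.

```python
import sympy as sp

t = sp.symbols('t')                          # rotation angle
g = sp.Matrix([[sp.cos(t), -sp.sin(t)],
               [sp.sin(t),  sp.cos(t)]])     # generic element of SO_2

X = sp.Matrix(2, 2, sp.symbols('x11 x12 x21 x22'))
Y = g * X                                    # the action on the generic matrix

# Scalar products of columns are invariant:
for i in range(2):
    for j in range(2):
        diff = (Y.col(i).T * Y.col(j))[0] - (X.col(i).T * X.col(j))[0]
        assert sp.simplify(diff) == 0

# The order-2 minor (the determinant) is invariant since det(g) = 1:
assert sp.simplify(Y.det() - X.det()) == 0
print("invariance verified")
```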
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8972077965736389, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/204632/the-concept-of-infinity/204635
# The concept of infinity

This evening I had a discussion with a friend of mine about a mathematical riddle and the concept of 'infinite'.

The riddle: Imagine a hotel with an infinite number of rooms, all of which are occupied. At that moment a bus arrives with an infinite number of people who all want a room in that hotel. Possible or not, and how?

Possible answer (according to my friend): The owner of the hotel lets everyone who is currently sitting in a room move one room further along. In that way the first room becomes available.

To be honest I am not really knowledgeable at math, but something tells me that this just shifts the problem. Thanks to this 'solution' there is always somebody without a room, or not? After all, if everyone moves one room further, one never finds an empty room: the person after him also has to move, and the person after him as well, etc. I think there can never be more people in that hotel, as all (infinitely many) rooms were already filled with an infinite number of people. (Infinite + 1 is impossible, right?)

Is it because of my limited understanding of the term 'infinity' that I am missing something, or is something wrong with the riddle?

Sorry for the English, I hope it's clear.

## 4 Answers

> After all, if everyone moves one room further, one never finds an empty room.

The first room is clearly empty: there's nobody moving into it. The only way a person could fail to move into the next room is if they were in the last room. But then there would only be finitely many rooms, so clearly this particular hotel doesn't suffer from this problem. The main point of this example is to vividly demonstrate one of the ways in which infinite collections differ from finite ones.

> (Infinite + 1 is impossible, right?)

In this case, we're adding ordinal numbers. And you can add them. The ordinal number describing the hotel rooms is called $\omega$. It is essentially just the sequence of natural numbers. When you add two ordinal numbers, you essentially just place one after the other. So if we draw a picture of 1:

*

and a picture of $\omega$:

* ....

then to get $1 + \omega$, we place 1 first, then $\omega$ next:

* ....

Looks the same, doesn't it? That's what happens in the hotel. And that's because $1 + \omega = \omega$. Incidentally, if we add the other way, $\omega + 1$, we get an ordinal number that's bigger than $\omega$. One way to draw it is

* .... | *

The pipe (|) is a decoration to indicate that the .... really refers to an infinite sequence of asterisks (*), and they are all located to the left of the pipe. This is so it's not confused with something like

* ... *

in which the ... would usually be interpreted as a finite number of asterisks that we were too lazy to write out. One particularly important thing to note about $\omega + 1$ is that the last asterisk doesn't have an immediate predecessor. That's another unusual feature that infinite ordinal numbers (except for $\omega$) have: they can have elements with infinitely many things before them, but none of those things immediately before.

Great! This really makes sense. :-) – Sebass van Boxel Sep 30 '12 at 12:19

You might be interested in reading this.

> Thanks to this 'solution' there is always somebody without a room, or not?

Who? The person in the 1st room now has room 2. The person in the 1284th room now has room 1285.
In fact, there's nobody in the hotel that doesn't have a room: it is true, in some sense, that "infinity + 1 = infinity" (and I think Hurkyl explained this very well via ordinals). If that doesn't convince you, think about it this way: what if you (our hypothetical hotel owner) built a new room (called 0) to accommodate a new guest, but then, a few days later, decided that a room called 0 was silly, and so simply renumbered all the rooms, renumbering 0 as 1, 1 as 2, and so on. What has changed? Nothing: the (numerical) structure of your hotel is the same, everyone has a room, and your new guest has been accommodated.

So that's a nice easy way to get one extra guest in, as long as you're prepared to mildly inconvenience infinitely many people to do so. And if we can get one extra guest in, we can get as many as we like in: if a bus carrying 50 people turns up, just get everyone to move to (their own room + 50), or equivalently, add an extra 50 rooms to the front. No problem there.

But if a bus carrying infinitely many people turns up, you can't shift everyone to (their own room + $\infty$), as you will quickly realise when the non-mathematician in room 17 complains that he couldn't work out which room to move to (even when he used his calculator!). Adding infinity to a number doesn't make sense. All of the rooms in your hotel have been given nice, solid, honest, finite room numbers; the "(17+$\infty$)th" room (whatever that means) doesn't exist. (There are infinitely many finite numbers, but you shouldn't confuse this with 'infinite numbers', whatever such things might mean.)

So what can you do? Remember, there is a stipulation that you can't insert rooms at the end (because there is no end, right?), but we've already seen that you can insert rooms at the beginning (because there is a beginning, namely room 1). In fact, we can insert rooms anywhere we like, provided that place is easy to point to: e.g. you can tell everyone in rooms 6, 7, 8, ... to move to rooms 7, 8, 9, ... and leave a gap in room 6. (Equivalently: insert a room $5\frac{1}{2}$ between rooms 5 and 6, and then renumber the rooms.)

Now, we obviously can't just plonk infinitely many rooms at the start, because that has the problem I mentioned before: everyone has to end up in a nice (finite) room.

[A question you might ask at this point is: why can't you just add rooms 0, -1, -2, -3, ... and stick your busful of guests in those? Well, you can, practically, of course. But then it becomes rather harder to renumber the rooms so that they are numbered 1, 2, 3, ... again. Remember that we're not actually allowed to invent new rooms; I introduced this idea as a tool to show you that moving infinitely many people across finitely many rooms wasn't going to change your hotel structurally, or leave anyone without a room, or anything like that. You can't do anything that changes the numerical structure of your hotel.]

So you might decide to insert an unoccupied room between every two occupied rooms. That is, you might decide that you want rooms 1, 3, 5, 7, 9, ... to be free, and rooms 2, 4, 6, 8, 10, ... to be occupied. Ah, that's easy: tell everyone to move to the room number that's double their own. Now the man in room 17 is happy again.

What if two infinite buses turn up? Well, you could just do the same thing again, twice. That would annoy a lot of your guests, but you probably wouldn't care, not least because you'd be making a lot of money out of this. Likewise three or four infinite buses.
But what if infinitely many buses turn up, all at the same time? You can't ask your guests to move to double their own room number infinitely many times; the man in room 17 is unhappy again, because he doesn't know which room he will end up in after infinitely many moves ("what is $17\times 2^\infty$??", he complains). You're trying to shunt him infinitely far away, and he doesn't like this. But there's no need to panic. Calmly announce that you want all your guests to move to double their own room number (once only!), and watch them stare in amazement:

Let me label the buses a, b, c, d, ... (let's pretend there are infinitely many letters!), and let me label their passengers a1, a2, a3, ..., b1, b2, b3, ..., and so on. Now simply admit them to your empty rooms (1, 3, 5, 7, ...) in the following order:

a1, b1, a2, c1, b2, a3, d1, c2, b3, a4, ...

Do you see the pattern? Do you see why everyone (even if they're the millionth passenger on the billionth bus) is going to have a finite room number?

If the rooms are numbered 1, 2, 3, 4, ... and all are occupied, and everyone shifts one room further up, then only one room becomes vacant. But if everyone moves to the room that bears twice the number of the room he's in already, then infinitely many rooms become vacant. The latter way works if the number of new customers is what is called "countably infinite". "Countably infinite" means they are numbered 1, 2, 3, 4, ..., so that for each member of this numbered collection of customers, you will reach that customer after counting through only finitely many terms of this sequence.
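An editorial note making the diagonal ordering in the long answer above explicit (the formula is ours, not from the answer): passenger $k$ of bus $b$ is admitted in position $$p(b,k)=\frac{(b+k-1)(b+k-2)}{2}+k,$$ and therefore goes to the odd-numbered room $2\,p(b,k)-1$. Checking against the listed order: $(b,k)=(1,1)\mapsto$ room 1, $(2,1)\mapsto$ room 3, $(1,2)\mapsto$ room 5, $(3,1)\mapsto$ room 7, matching a1, b1, a2, c1, ...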
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9716558456420898, "perplexity_flag": "middle"}
http://www.abstractmath.org/Word%20Press/?tag=mathematica
# Gyre&Gimbleposts about math, language and other things that may appear in the wabe ## Writing math for the web 2013/05/02 — SixWingedSeraph ## Abstractmath I built my website abstractmath.org during the years 2002 through 2006. After that I made sporadic changes, but medical operations and then teaching courses as an adjunct for a couple of years kept me from making much progress until 2010. This post is an explanation of the tools I used for abstractmath, what went right and what went wrong, and my plans for redoing the website. ## Methodology My previous experience in publishing math was entirely with TeX. When I began work on abstractmath, I wanted to produce html files, primarily because they refloated the text when the window width changed. I was thinking of small screens and people wanting to look at several windows at once. In those days, there was no method of starting with a LaTeX input file and producing an html file that preserved all the math and all the formatting. I have over the years spent many hours trying out various systems that claimed to do it and not found one that did not require major massaging to get the look I wanted. Most of them can cannot implement all LaTeX commands, or even most of the LaTeX formatting commands. (I have not looked at any of these since 2011.) In contrast, systems such as PDFTeX turn even very complicated (in formatting and in math) LaTeX files into nearly perfect PDF files. Unfortunately, PDF files are a major impediment to having several windows open at once. ### Word and MathType My solution was to write abstractmath articles using Microsoft Word with MathType, which provides a plugin for Word. The MathType interface was a very useful expansion of the Equation Editor in Word, and it produced little .gif files that were automatically inserted into the text. MathType also provided a command to create an html file. This file was produced with the usual “_files” folder that contained all the illustrations I had included as well as all the .gif files that MathType created. The html file contained code that put each .gif file in the right place in the typeset text. That combination worked well. Using Word allowed me tight control over formatting and allowed floating textboxes, which I used freely. They very nicely moved around when you changed the width of the window. I had used textboxes in my book A Handbook of Mathematical Discourse for apt quotations, additional comments, and (very clever if I say so myself) page indexes. The Handbook is available in several ways: • Amazon. The citations are not included. • The Handbook in paper form. A pdf file showing the book as it appears on paper (all the illos, textboxes and page indexes, no hyperlinks), plus all the citations. (This paragraph was modified on 2013-05-02). • A version with hyperlinks, This includes the citations but omits the boxes and the illustrations, and it has hyperlinks to the citations. The page indexes are replaced by internal hyperlinks. • The citations. That book was written in TeX with much massaging using AWK commands. Boxes are much easier to do in Word than they are in TeX, and the html files produced by MathType preserved them quite well. The abmath article on definitions shows boxes used both for side comments and for quotations. There were some problems with using MathType and Word together. In particular, a longish article would have dozens or hundreds of .gif files, which greatly slowed down uploading via ftp. 
I now have WebDrive (thanks to CWRU) and that may make it quicker.

## Rot sets in

Without my doing anything at all, the articles on abstractmath began deteriorating. This had several main causes.

• Html was revised over time. Currently it is HTML 5.
• Browsers changed the way they rendered the html. And they had always differed among themselves in some situations.
• Microsoft Word changed the way it generated html.

Two of the more discouraging instances of rot were:

• Many instances of math formulas are now out of line with the surrounding text. This happened without my doing anything. It varies by browser and by when I last revised the article.
• Some textboxes deteriorated. In particular, textboxes generated by newer versions of Word were sometimes nearly illegible. Part of the reason for this is that Word started saving them as images.

## Failed Forays

The main consequence of all this was that I was afraid of trying to revise articles (or complete them) because it would make them harder to read or ugly. So I set out to find new ways to produce abmath articles. This has taken a couple of years, while abmath is a big mess sprawling there on its website. A mostly legible big mess, and most of the links work, but frustrating to its appearance-sensitive author.

### Automatically convert to a new system

My first efforts were to find another system with the property that I could convert my present Word files or html files to the new system without much hand massaging.

I tried converting the Word files to LaTeX input. This was made easier (I thought) because MathType now provided a means for turning all the MathType itty bitty .gif files into LaTeX expressions. I wrote Word macros to convert much of the formatting (italics, bold, subheads, purple prose, and so on) into LaTeX formatting — although I did have to go through the Word text, select each specially formatted piece, and apply the correct macro. But I had other problems.

• Converting the MathType image files to LaTeX caused problems because it messed up the spaces before and after the formulas.
• I worked with great sweat and tears to write a macro to extract the addresses from the links — and failed. If I had persevered I probably would have learned how to do it, and learned a lot of Word macro programming in the process.

The automatic conversion process appeared to require more and more massaging. I made some attempts at automatically converting the html files that Word generates (instead of the doc files), but they are an enormous mess. They insert a huge amount of code (especially spans) into the text, making it next to impossible to read the code or find anything.

It was beginning to look like I would have to go to an entirely new system and rewrite all the articles from scratch. This was attractive in one respect: in writing this blog my style has changed and I was seeing lots of things I would say or do differently. I have also changed my mind about the importance of some things, and abmath now has stubs and incomplete articles that ought to be eliminated with references to Wikipedia.

## Go for rewriting

Meanwhile, I was having trouble with Gyre&Gimble. The WordPress editor works pretty well, but two new products came along:

• MathJax was introduced, providing a much better way to use TeX to insert formulas. (Note: MathType recently implemented the use of MathJax in its html output.)
• Mathematica CDF files, which are interactive diagrams that can be inserted directly into html. (My post Improved Clouds has examples.)
Both MathJax and CDF Player require entering links directly into the html code the WordPress editor produces. The WordPress editor trashed the html code I had entered every time I switched back and forth between "visual" (wysiwyg) and html. I switched to CKEditor, which preserved the html but has a lot of random behavior. I learned to understand some of the behavior but finally gave up. I started writing my blogs in html using the Coffee Cup HTML Editor — that is how I am writing this. Then I paste it into the WordPress editor.

My current plan is to start revising each abmath article in this way:

• Write html code for the special formatting I want, mostly the code that produces the header, but also purple prose and other things. Once done, I can use this code for all the abmath articles with little massaging.
• Start with the Word doc file for an article and use MathType to toggle all the MathType-generated gif files into TeX.
• Generate the html file in a way that preserves the TeX code with dollar signs. (There are two ways to do this and I have not made up my mind which to use.)
• Start revising!

I have already begun doing this. My intention is to revise each abstractmath article, post it, and announce the posting on Gyre&Gimble or on Google+. If an article is heavily revised I expect to post it (or parts of it) on Gyre&Gimble. Some of these things will be ready soon.

## Last minute notes

• I used WinEdt, a text editor, to write the Handbook of Mathematical Discourse. It is a powerful editor, with an extensive macro language that in particular allows rearranging the menus and adding new code to call other applications. It is especially designed for TeX, so it is not as convenient as it stands for html. However, its macro language would allow me to convert it to a system that will do most of what Coffee Cup can do. I might do this because Coffee Cup has no macro language and (as far as I can tell) no way to revise or add to menus.
• It is early days yet, but I am thinking of including pieces of Abstracting Algebra into abstractmath.org.

## Abstracting algebra

2012/12/21 — SixWingedSeraph

This post has been turned into a page on WordPress, accessible in the upper right corner of the screen. The page will be referred to by all topic posts for Abstracting Algebra.

## Apportionment 1

2012/12/01 — SixWingedSeraph

Notes on viewing.

## Election systems

This post begins a new thread in Gyre&Gimble: Election systems. In 1968 I created a game called Parlement and ran a few games by mail before being absorbed by family and career. In it you ran a political party in a parliament in the style of many European countries: the parliament forms a government, votes on bills & budgets, then an election is held where each party runs on its record.

My interest in election systems has continued sporadically since then. Since the nineties I have been programming in Mathematica and have made stabs at implementing various systems for achieving proportionality. Now I expect to devote several posts to Mathematica demos of election and apportionment systems.

## Proportional representation

An elected body is chosen by a list method of proportional representation in this way:

• The election is by districts, each electing several members. In most cases the number of seats each district has in the body is fixed ahead of the election.
• Each voter votes for a party list.
• The number of representatives a party will have, as a proportion of the total number of seats allotted to the district, is chosen to be close to the proportion of the total vote that the party list receives.
• The method for choosing the number of seats for the party can be any one of many methods proposed in the past 200 or so years. Only a few methods are actually used in practice.
• Once the number of members for the party is determined, that many persons on the list are chosen according to some method. There are quite a few different methods used for this. I will not write about this aspect at all.

This post looks at the method of equal proportions. This method may also be used to apportion the legislature of countries such as the USA that have states or provinces, so that each state has a suitably proportionate number of seats. For most of the history of the USA, that method has been the method of equal proportions, but in the early days other methods were used.

My impression is that the equal proportions method is the most common method used in legislatures elected by the list system, and is also the most common method used for apportioning legislative seats among states or provinces. There is much information about these things scattered over many articles in Wikipedia, and a close study may prove me wrong about this.

Note: The summary above is oversimplified and leaves out many details. The references list more details than most people would ever want to know.

## The equal proportions method

Several sites listed in the references describe the equal proportions method in detail. The method for calculation used in the demo (there are others) works this way:

• Create a list $V$ of weights indexed by the party number.
• For proportional representation for parties, each party starts with $0$ seats and $V_p$ is initially the number of votes party $p$ has for the district.
• For state representation, each state starts with $1$ seat and the initial $V_s$ is the number of votes state $s$ receives divided by $\sqrt{2}$.
• Suppose $S$ seats are to be assigned.
• Assign the first one to the party $m$ for which $V_m$ is the maximum of the list $V$. (In the case of states, this is the first seat after the initial one.) Then set $V_m:=\frac{\text{votes of }m}{\sqrt{2}}$ ($V_m:=\frac{\text{votes of }m}{\sqrt{6}}$ in the case of states, since the state now holds $2$ seats).
• At each later step, if $n$ is the party for which $V_n$ is the max of the list $V$ in its current state, assign the next seat to party $n$ and set $V_n:=\frac{\text{votes of }n}{\sqrt{u(u+1)}}$, where $u$ is the number of seats party $n$ holds after the new one is assigned. (The divisor is the geometric mean of $u$ and $u+1$.)
• Stop when all $S$ seats have been assigned.
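The allocation loop is easy to express in Mathematica. The sketch below is mine, not the code behind the demos in this post (that code is in the notebook Equal Proportions.nb); it implements the party version, in which every party starts with zero seats:

epSeats[votes_List, s_Integer] :=
  Module[{seats = ConstantArray[0, Length[votes]], w, m},
    w = N[votes];  (* initial weights are the raw vote counts *)
    Do[
      m = First[Ordering[w, -1]];  (* position of the largest weight *)
      seats[[m]]++;
      (* a party holding u seats gets weight votes/Sqrt[u(u+1)] *)
      w[[m]] = N[votes[[m]]/Sqrt[seats[[m]] (seats[[m]] + 1)]],
      {s}];
    seats]

For example, epSeats[{53000, 24000, 23000}, 10] returns {5, 3, 2}.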
## Interactive demos of the method of equal proportions

To manipulate these demos, click on "Enable Dynamics". At present the demo has a bug that makes the table pink (Mathematica's way of giving an error message). Move any slider and the pink will disappear forever. The demos are also available on my website: PartySimple.cdf and PartyComparison.cdf. If you download them onto a machine with CDF Player installed and run them, the pink table does not happen. I can't imagine what could cause an error like that only when run embedded in html.

Both demos have the same controls.

• Five sliders labeled n1 through n5 control the number of votes the five parties get in the election.
• The bottom slider controls the number of seats that district gets to elect.
• By moving a slider you control the information it represents.
• By clicking on the plus sign to the right of the slider, you toggle a list of controls below it. The party vote sliders begin with the controls invisible and the seats slider begins with them visible.

### How the EP method works

The demo below assumes that five parties, numbered 1 through 5, are running to get seats in the elected body. You may change the votes (column Votes) received by any party, and also the total number of representatives to be chosen (Seats).

#### Data

• Total votes is the total number of votes received by all five parties.
• Seats is the total number of representatives assigned to the elected body. It can be changed using the Seats slider.
• Quota is Total votes divided by Seats. Some systems use slight variations on this quota.
• The Name of the party is given just as a number. In a suitably fancy demo you might give each party a real name.
• Votes is the number of votes the party receives in the election, controlled by the relevant slider.
• SeatsAsgd is the number of seats the equal proportions method assigns to that party.
• Weight: This number is $\frac{\text{Votes}}{\sqrt{\text{SeatsAsgd}\times(\text{SeatsAsgd}+1)}}$

#### Playing with the demo

• Start with any settings for the sliders, and press the $+$ button underneath the Seats slider. This allots one more seat to the elected body.
• You will see that the party with the largest weight gets the new seat. That is how the method works: the algorithm starts with no seats and adds one at a time until the correct number of seats is reached.
• The weight is a function of the number of seats allotted to the party.
• Try changing the Votes for a single party, letting the total number of seats remain the same. Which parts of the data get changed by doing this? Do you understand why?
• Try changing the votes for all the parties, so that one has most of the votes and the others have only a few votes apiece. Start with Seats at zero and step the seats up one at a time. Notice what happens in column SeatsAsgd and what happens to Weight.

### How close to an accurate apportionment does it get?

• Quota is Total votes divided by Seats. Note how it changes when you move any slider.
• The Name of the party is given just as a number. In a suitably fancy demo you might give each party a real name.
• Votes is the number of votes the party receives in the election, controlled by the relevant slider.
• SeatsAsgd is the number of seats the equal proportions method assigns to that party, given the total number of Seats allotted.
• SeatsIdeal is the party's Votes divided by Quota. Note that this is generally close to SeatsAsgd, which is usually but not always either the floor or the ceiling of SeatsIdeal. Try to find a setting where that is not true (hint: give one party most of the votes).
• VotesPerSeat is Votes divided by SeatsAsgd. Compare it to Quota. They are usually close, but can be quite far off if only a few Seats are assigned.
• Deviation is VotesPerSeat divided by Quota. This is a relative measure of how far away from exactness the representations of the parties are.

### Playing with the demo

• Move Seats down to 2 or 3. Notice that the deviations are quite bad, even $40\%$ off sometimes. Move Seats up and see that the deviations get much better. Can you understand why that happens?
• Note that usually SeatsAsgd is either the integer just below SeatsIdeal (the floor) or the integer just above it (the ceiling). This is reasonable and is called "keeping quota".
• Make one or two parties large and the others small, then move the Seats slider around. You can find examples where SeatsAsgd busts through the ceiling, "breaking quota". Sometimes SeatsAsgd is several units bigger than the ceiling.
• Note that if you step Seats up one at a time, the only thing that ever happens is that one party's seats go up one unit. Some other common systems occasionally cause some party's seats to go down when the total number of seats is incremented (the "Alabama paradox"). That obviously never happens with EP.

## About demos

These demos were designed for people to learn about a concept by experimenting with them. Such a demo should be fairly simple, with only choices and displays relevant to what you are trying to show.

You can also build elaborate CDFs. RiemannSums.nb contains a command PlotRiemann which provides many options for showing different kinds of Riemann sums, and you could design a single demo with many buttons, sliders and other gadgets that allow for all sorts of possibilities (but I have not designed such a monster).

I do expect to eventually design a command that does for voting systems something like what PlotRiemann does for Riemann sums, but the way to do that is to create one feature or option at a time. I will be doing that, and the results will be other election-system demos that I will post here from time to time. (Promises, promises.)

## References

• Basic Geometry of Voting, by Donald G. Saari. This is the book to go to for the math that explains why EP works (and also many other methods). There really is geometry behind the methods, as he illustrates.
• Chaotic Elections! A Mathematician Looks at Voting, by Donald G. Saari
• Congressional Apportionment (Census Bureau)
• Congressional apportionment (GPO)
• Congressional apportionment (Wikipedia)
• Congressional apportionment methods (Wolfram CDF demo)
• Four voting methods (Java applet)
• Greatest remainder method (Wikipedia)
• Equal proportions method of apportionment (Wikipedia). Contains a brief description of the EP method.
• Equal proportions method of apportionment (MAA)
• The Mathematics of Voting and Elections, by Jonathan K. Hodge and Richard E. Klima
• Parlement (Google discussion group, now inactive, with links to the rules of the game)
• PartySimple.cdf (demo)
• PartyComparison.cdf (demo)

Notes on Viewing

This post uses MathJax. If you see mathematical expressions with dollar signs around them, or badly formatted formulas, try refreshing the screen. Sometimes you have to do it two or three times. To manipulate the demo in this post, you must have Wolfram CDF Player installed on your computer. It is available free from the Wolfram website. The code for the demo is in the Mathematica notebook Equal Proportions.nb.

## Representations of mathematical objects

2012/11/02 — SixWingedSeraph

This is a long post. Notes on viewing.

## About this post

A mathematical object, or a type of math object, is represented in practice in a great variety of ways, including some that mathematicians rarely think of as "representations". In this post you will find examples and comments about many different types of representations as well as references to the literature. I am not aware that anyone has considered all these different ideas of representation in one place before.
Reading through this post should raise your consciousness about what is going on when you do math.

This is also an experiment in exposition. The examples are discussed in a style similar to the way a Mathematica command is discussed in the Documentation Center, using mostly nonhierarchical bulleted lists. I find it easy to discover what I want to know when it is written in that way. (What is hard is discovering the name of a command that will do what I want.)

## Types of representations

### Using language

• Language can be used to define a type of object.
• A definition is intended to be precise enough to determine all the properties that objects of that type all have. (Pay attention to the two uses of the word "all" in that sentence; they are both significant, in very different ways.)
• Language can be used to describe an object, exhibiting properties without determining all properties.
• It can also provide metaphors, making use of one of the basic tools of our brain to understand the world.
• The language used is most commonly mathematical English, a special dialect of English.
• The symbolic language of mathematics (distinct from mathematical English) is used widely in calculations. Phrases from the symbolic language are often embedded in a statement in math English. The symbolic language includes among others algebraic notation and logical notation.
• The language may also be a formal language, a language that is mathematically defined and is thus itself a mathematical object. Logic texts generally present the first order predicate calculus as a formal language.
• Neither mathematical English nor the symbolic language is a formal language. Both allow irregularities and ambiguities.

### Mathematical objects

The representation itself may be a mathematical object, such as:

• A linear representation of a group. Not only are the groups mathematical objects, so is the representation.
• An embedding of a manifold into Euclidean space.
• A definition, given in a formal language of the first order predicate calculus, of the property of commutativity of binary operations. (Thus a property can be represented as a math object.)

### Visual representations

A math object can be represented visually using a physical object such as a picture, graph (in several senses), or diagram.

• The visual processing of our brain is our major source of knowledge of the world and takes about a fifth of the brain's processing power. We can learn many things using our vision that would take much longer to learn using verbal descriptions. (Proofs are a different matter.)
• When you look at a graph (for example) your brain creates a mental representation of the graph (see below).

### Mental representations

If you are a mathematician, a math object such as "$42$", "the real numbers" or "continuity" has a mental representation in your brain.

• In the math ed literature, such a representation is called a "mental image", "concept image", "procept", or "schema". (The word "image" in these names is not thought of as necessarily visual.)
• The procept or schema describes all the things that come to mind when you think about a particular math object: the definition, important theorems, visual images, important examples, and various metaphors that help you understand it.
• The visual images occurring in a mental schema for an object may themselves be mental representations of physical objects. The examples and theorems may be mental representations of ideas you learned from language or pictures, and so on.
The relationships between different kinds of representations get quite convoluted.

### Metaphors

Conceptual metaphors are a particular kind of mental representation of an object which involve mentally associating some aspects of the object with some aspects of something else — a physical object, an image, an action or another abstract object.

• A conceptual metaphor may give you new insight into the object.
• It may also mislead you, because you think of properties of the other object that the math object doesn't have.
• A graph of a function is a conceptual metaphor.
• When you say that a point on a graph "rises as it goes from left to right", your metaphor is an action.
• When you say that the cosets of a normal subgroup of a group "get along" with the group multiplication, your metaphor identifies a property they have with an aspect of human behavior.

## Properties of representations

A representation of a math object may or may not

• determine it completely
• exhibit some of its properties
• suggest easy proofs of some theorems
• provide a useful way of thinking about it
• mislead you about the object's properties

## Examples of representations

This list shows many of the possibilities of representation. In each case I discuss the example in terms of the two bulleted lists above. Some of the examples are reused from my previous publications.

### Functions

Example (F1): "Let $f(x)$ be the function defined by $f(x)=x^3-x$."

• This is an expression in mathematical English that a fluent reader of mathematical English will recognize gives a definition of a specific function.
• (F1) is therefore a representation of that function.
• The word "representation" is not usually used in this way in math. My intention is that it should be recognized as the same kind of object as many other representations.
• The expression contains the formula $x^3-x$. This is an encapsulated computation in the symbolic language of math. It allows someone who knows basic algebra and calculus to perform calculations that find the roots, extrema and inflection points of the function $f$. (A Mathematica sketch of these calculations follows the Diatribe below.)
• The word "let" suggests to the fluent reader of mathematical English that (F1) is a definition which is probably going to hold for the next chunk of text, but probably not for the whole article or book.
• Statements in mathematical English are generally subject to conventions. In a calculus text, (F1) would automatically mean that the function had the real numbers as domain and codomain.
• The last two remarks show that a beginner has to learn to read mathematical English.
• Another convention is discussed in the following diatribe.

#### Diatribe

You would expect $f(x)$ by itself to mean the value of $f$ at $x$, but in (F1) the $x$ has the property of a bound variable. In mathematical English, "let" binds variables. However, after the definition, in the text the "$x$" in the expression "$f(x)$" will be free, but the $f$ will be bound to the specific meaning. It is reasonable to say that the term "$f(x)$" represents the expression "$x^3-x$" and that $f$ is the (temporary) name of the function. Nevertheless, it is very common to say "the function $f(x)$" to mean $f$.

A fluent reader of mathematical English knows all this, but probably no one has ever said it explicitly to them. Mathematical English and the symbolic language should be taught explicitly, including their peculiarities such as "the function $f(x)$". (You may want to deprecate this usage when you teach it, but students deserve to understand its meaning.)
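To illustrate the "encapsulated computation" point in (F1), here is a minimal Mathematica sketch (mine, not from the original post) of the calculations the formula supports:

f[x_] := x^3 - x
Solve[f[x] == 0, x]    (* roots: x = -1, 0, 1 *)
Solve[f'[x] == 0, x]   (* critical points: x = -1/Sqrt[3], 1/Sqrt[3] *)
Solve[f''[x] == 0, x]  (* inflection point: x = 0 *)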
### The positive integers

You have a mental representation of the positive integers $1,2,3,\ldots$. In this discussion I will assume that "you" know a certain amount of math. Non-mathematicians may have very different mental representations of the integers.

• You have a concept of "an integer" in some operational way as an abstract object.
• "Abstract object" needs a post of its own. Meanwhile see Mathematical Objects (abstractmath) and the Wikipedia articles on Mathematical objects and Abstract objects.
• You have a connection in your brain between the concept of integer and the concept of listing things in order, numbering them by $1,2,3,\ldots$.
• You have a connection in your brain between the concept of an integer and the concept of counting a finite number of objects. But then you need zero!
• You understand how to represent an integer using the decimal representation, and perhaps representations to other bases as well.
• Your mental image has the integer "$42$" connected to but not the same as the decimal representation "42". This is not true of many students.
• The decimal rep has a picture of the string "42" associated to it, and of course the picture of the string may come up when you think of the integer $42$ as well (it does for me; it is an icon for the number $42$).
• You have a concept of the set of integers.
• Students need to be told that by convention "the set of integers" means the set of all integers. This particularly applies to students whose native language does not have articles, but American students have trouble with this, too.
• Your concept of "the set of integers" may have the icon "$\mathbb{N}$" associated with it. If you are a mathematician, the icon and the concept of the set of integers are associated with each other but not identified with each other.
• For me, at least, the concept "set of integers" is mentally connected to each integer by the "element of" relation. (See the third bullet below.)
• You have a mental representation of the fact that the set of integers is infinite.
• This does not mean that your brain contains an infinite number of objects, but that you have a representation of infinity as a concept, it is brain-connected to the concept of the set of integers, and also perhaps to a proof of the fact that $\mathbb{N}$ is infinite.
• In particular, the idea that the set of integers is mentally connected to each integer does not mean that the whole infinite number of integers is attached in your brain to the concept of the set of integers. Rather, the idea is a predicate in your brain. When it is connected to "$42$", it says "yes". To "$\pi$" it says "no".
• Philosophers worry about the concept of completed infinity. It exists as a concept in your brain that interacts as a meme with concepts in other mathematicians' brains. In that way, and in that way only (as far as I am concerned), it is a physical object, in particular an object that exists in scattered physical form in a social network.

### Graph of a function

This is a graph of the function $y=x^3-x$ (a command reproducing it is sketched after the list below):

• The graph is a physical object, either on a screen or on paper.
• It is processed by your visual system, the most powerful sensory management system in your brain.
• It also represents the graph in the mathematical sense (set of ordered pairs) of the function $y=x^3-x$.
• Both the mathematical graph and the physical graph are represented by modules in your brain, which associates the two of them with each other by a conceptual metaphor.
• The graph shows some properties of the function: inflection point, going off to infinity in a specific way, and so on.
• These properties are made apparent (if you are knowledgeable) by means of the powerful pattern recognition system in your brain. You see them much more quickly than you can discover them by calculation.
• These properties are not proved by the graph. Nevertheless, the graph communicates information: for example, it suggests that you can prove that there is an inflection point near $(0,0)$.
• The graph does not determine or define the function: it is inaccurate and it does not (cannot) show all of the graph.
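A single Mathematica command produces a graph like the one described (the plot range here is my choice, not taken from the original figure):

Plot[x^3 - x, {x, -2, 2}, AxesLabel -> {"x", "y"}]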
### Continuity

Example (C1): The $\epsilon$-$\delta$ definition of the continuity of a function $f:\mathbb{R}\to\mathbb{R}$ may be given in the symbolic language of math. A function $f$ is **continuous** at a number $c$ if \[\forall\epsilon\,(\epsilon\gt0\implies\exists\delta\,(\delta\gt0\wedge\forall x\,(|x-c|\lt\delta\implies|f(x)-f(c)|\lt\epsilon)))\]

• To understand (C1), you must be familiar with the notation of first order logic. For most students, getting the notation right is quite a bit of work.
• You must also understand the concepts, rules and semantics of first order logic.
• Even if you are familiar with all that, continuity is still a difficult concept to understand.
• This statement does show that the concept is logically complicated. I don't see how it gives any other intuition about the concept.

Example (C2): The definition of continuity can also be represented in mathematical English like this: A function $f$ is **continuous** at a number $c$ if for any $\epsilon\gt0$ there is a $\delta\gt0$ such that for any $x$, if $|x-c|\lt\delta$, then $|f(x)-f(c)|\lt\epsilon$.

• This definition doesn't give any more intuition than (C1) does.
• It is easier to read than (C1) for most math students, but it still requires intimate familiarity with the quirks of math English.
• The fact that "continuous" is in boldface signals that this is a definition. This is a convention.
• The phrase "for any $\epsilon\gt0$" contains an unmarked parenthetic insertion that makes it grammatically incoherent. It could be translated as: "for any $\epsilon$ that is greater than $0$". Most math majors eventually understand such things subconsciously. This usage is very common.
• Unless it is explicitly pointed out, most students won't notice that if you change the phrase "there is a $\delta$ such that for any $x$" to "for any $x$ there is a $\delta$", the result means something quite different. Cauchy never caught onto this.
• In both (C1) and (C2), the "if" in the phrase "A function $f$ is continuous at a number $c$ if…" means "if and only if" because it is in a definition. Students rarely see this pointed out explicitly.

Example (C3): The definition of continuity can be given in a formally defined first order logical theory.

• The theory would have to contain function symbols and axioms expressing the algebra of real numbers as an ordered field.
• I don't know that such a definition has ever been given, but there are various semi-automated and automated theorem-proving systems (which I know little about) that might be able to state such a definition. I would appreciate information about this.
• Such a definition would make the property of continuity a mathematical object.
• An automated theorem-proving system might be able to prove that $x^3-x$ is continuous, but I wonder if the resulting proof would aid your intuition much.
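Mathematica can at least state (C1) for a specific function as a first-order sentence over the reals, and Resolve will try to decide it by quantifier elimination. This sketch is mine, illustrating the point of (C3) rather than describing any of the systems mentioned above, and Resolve may take a very long time on such inputs:

(* continuity of x^3 - x at c = 1, stated as a quantified sentence;
   note f(1) = 0, so the conclusion compares against 0 *)
Resolve[
  ForAll[e, e > 0,
    Exists[d, d > 0,
      ForAll[x, Abs[x - 1] < d, Abs[(x^3 - x) - 0] < e]]],
  Reals]

Since sentences about real polynomials belong to the decidable theory of real closed fields, the statement is decidable in principle, which is exactly the point of (C3).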
Example (C4): A function from one topological space to another is continuous if the inverse image of every open set in the codomain is an open set in the domain.

• This definition is stated in mathematical English.
• In definitions (C1)–(C3), the primitive data are real numbers and the statement uses properties of an ordered field.
• In (C4), the data are real numbers and the arithmetic operations of a topological field, along with the open sets of the field. The ordering is not mentioned.
• This shows that a definition need not mention some important aspects of the structure.
• One marvelous example of this is that a partition of a set and an equivalence relation on a set are based on essentially disjoint sets of data, but they define exactly the same type of structure.

Example (C5): "The graph of a continuous function can be drawn without picking up the chalk."

• This is a metaphor that associates an action with the graph.
• It is incorrect: the graphs of some continuous functions cannot be drawn. For example, the function $x\mapsto x^2\sin(1/x)$ (extended by the value $0$ at $x=0$) is continuous on the interval $[-1,1]$ but cannot be drawn at $x=0$.
• Generally speaking, if the function can be drawn then it can be drawn without picking up the chalk, so the metaphor provides a useful insight, and it provides an entry into consciousness-raising examples like the one in the preceding bullet.

## References

1. 1.000… and .999… (post)
2. Concept Image and Concept Definition in Mathematics with Particular Reference to Limits and Continuity, David Tall and Shlomo Vinner, 1981
3. Conceptual blending (post)
4. Conceptual blending (Wikipedia)
5. Conceptual metaphors (Wikipedia)
6. Convention (abstractmath)
7. Definitions (abstractmath)
8. Embodied cognition (Wikipedia)
9. Handbook of Mathematical Discourse (see articles on conceptual blend, mental representation, representation, metaphor, parenthetic assertion)
10. Images and Metaphors (abstractmath)
11. The Interplay of Text, Symbols and Graphics in Math Education, Lin Hammill
12. Math and the modules of the mind (post)
13. Mathematical Discourse: Language, Symbolism and Visual Images, K. L. O'Halloran
14. Mathematical objects (abmath)
15. Mathematical objects (Wikipedia)
16. Mathematical objects are "out there?" (post)
17. Metaphors in computing science (post)
18. Procept (Wikipedia)
19. Representations 2 (post)
20. Representations and models (abstractmath)
21. Representations II: dry bones (post)
22. Representation theorems (Wikipedia). Concrete representations of abstractly defined objects.
23. Representation theory (Wikipedia). Linear representations of algebraic structures.
24. Semiotics, Symbols and Mathematical Visualization, Norma Presmeg, 2006
25. The Transition to Formal Thinking in Mathematics, David Tall, 2010
26. Theory in mathematical logic (Wikipedia)
27. What is the object of the encapsulation of a process? Tall et al., 2000
28. Where Mathematics Comes From, by George Lakoff and Rafael Núñez, Basic Books, 2000
29. Where mathematics comes from (Wikipedia). This is a review of the preceding book. It is a permanent link to the version of 04:23, 25 October 2012. The review is opinionated, partly wrong, not well written and does not fit the requirements of a Wikipedia entry. I recommend it anyway; it is well worth reading. It contains links to three other reviews.

### Notes on Viewing

This post uses MathJax. If you see mathematical expressions with dollar signs around them, or badly formatted formulas, try refreshing the screen.
Sometimes you have to do it two or three times.

## Representing and thinking about sets

2012/10/12 — SixWingedSeraph

Notes on viewing.

## Representations of sets

Sets are represented in the math literature in several different ways, some of which are mentioned here, along with some other possibilities. Introducing a variety of representations of any type of math object is desirable, because students tend to assume that the representation is the object.

### Curly bracket notation

The standard representation for a finite set is of the form "$\{1,3,5,6\}$". This particular example represents the unique set containing the integers $1$, $3$, $5$ and $6$ and nothing else. This means precisely that the statement "$n$ is an element of $S$" is true if $n=1$, $n=3$, $n=5$ or $n=6$, and it is false if $n$ represents any other mathematical object.

In the way the notation is usually used, "$\{1,3,5,6\}$", "$\{3,1,5,6\}$", "$\{1,5,3,6\}$", "$\{1,6,3,5,1\}$" and "$\{6,6,3,5,1,5\}$" all represent the same set. Textbooks sometimes say "order and repetition don't matter". But that is a statement about this particular representation style for sets. It is not a statement about sets.

It would be nice to come up with a representation for sets that doesn't involve an ordering. Traditional algebraic notation is essentially one-dimensional and so automatically imposes an ordering (see Algebra is a difficult foreign language).

### Let the elements move

In Visible Algebra II, I experimented with the idea of putting the elements at random inside a circle and letting them visibly move around like goldfish in a bowl. (That experiment was actually for multisets, but it applies to sets, too.) This is certainly a representation that does not impose an ordering, but it is also distracting. Our visual system is attracted to movement (but not as much as a cat's visual system).

### Enforce natural ordering

One possibility would be to extend the machinery in a visible algebra system to allow you to make a box you could drag elements into. This box would order the elements in some canonical order (numerical order for numbers, alphabetical order for strings of letters or words), with the property that if you inserted an element in the wrong place it would rearrange itself, and if you tried to insert an element more than once the representation would not change. What you would then have is a unique representation of the set. (Mathematica's Union command produces exactly this representative; see the sketch below.)

An example is the device below. (If you have Mathematica, not just CDF Player, you can type in numbers as you wish instead of having to use the buttons.)

This does not allow a representation of a heterogeneous set such as $\{3,\mathbb{R},\emptyset,\left(\begin{array}{cc}1&2\\0&1\end{array}\right)\}$. So what? You can't represent every function by a graph, either.
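A minimal sketch of the canonical-representative idea (mine, not the code of the CDF device): Union sorts its input and deletes duplicates, so two curly-bracket expressions denote the same set exactly when Union maps them to the same list.

Union[{6, 6, 3, 5, 1, 5}]                      (* -> {1, 3, 5, 6} *)
Union[{1, 6, 3, 5, 1}] == Union[{1, 3, 5, 6}]  (* -> True: same set *)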
### Hanger notation

The tree notation used in my visual algebra posts could be used for sets as well, as illustrated below. The system allows you to drag the elements listed into different positions, including all around the set node. If you had a node for lists, that would not be possible. This representation has the pedagogical advantage of showing that a set is not its elements:

• A set is distinct from its elements.
• A set is completely determined by what the elements are.

### Pattern recognition

Infinite sets are sometimes represented in the curly bracket notation using a pattern that defines the set. For example, the set of even integers could be represented by $\{0,2,4,6,\ldots\}$. Such a representation is necessarily a convention, since any beginning pattern can in fact represent an infinite number of different infinite sets. Personally, I would write, "Consider the even integers $\{0,2,4,6,\ldots\}$", but I would not write, "Consider the set $\{0,2,4,6,\ldots\}$".

By the way, if you are writing for newbies, you should say, "Consider the set of even integers $\{0,2,4,6,\ldots\}$". The sentence "Consider the even integers $\{0,2,4,6,\ldots\}$" is unambiguous because by convention a list of numbers in curly brackets defines a set. But newbies need lots of redundancy.

### Representation by a sentence

Setbuilder notation is exemplified by $\{x|x>0\}$, which denotes the positive reals, given a convention or explicit statement that $x$ represents a real number. This allows the representation of some infinite sets without depending on a possibly ambiguous pattern. A Visible Algebra system needs to allow this, too. That could be (necessarily incompletely) done in this way:

• You type a sentence into a Setbuilder box that defines the set.
• You then attach a box to the Setbuilder box containing a possible element.
• The system then answers Yes, No, or Can't Tell.

The Can't Tell answer is a necessary requirement, because the general question of whether an element is in a set defined by a first order sentence is undecidable. Perhaps the system could add some choices:

• Try for a second.
• Try for an hour.
• Try for a year.
• Try for the age of the universe.

Even so, I'll bet a system using Mathematica could answer many questions like this for sentences referring to a specific polynomial, using the Solve or NSolve command. For example, the answer to the question "Is $3\in\{n|n\lt0 \text{ and } n^2=9\}$?" (where $n$ ranges over the integers) would be "No", and the answer to "Is $\{n|n\lt0 \text{ and } n^2=9\}$ empty?" would also be "No". [Corrected 2012.10.24]
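A sketch of how such a membership check might be wired up (my illustration; memberQ is a hypothetical helper, not part of any existing system):

(* decide "is x in {n | pred}" by substituting and simplifying;
   give up after t seconds, since the general problem is undecidable *)
memberQ[pred_, x_, t_: 1] :=
  TimeConstrained[Simplify[pred /. n -> x], t, "Can't tell"]

memberQ[n < 0 && n^2 == 9, 3]           (* -> False *)
Reduce[n < 0 && n^2 == 9, n, Integers]  (* -> n == -3, so the set is not empty *)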
### References

1. Explaining "higher" math to beginners (previous post)
2. Algebra is a difficult foreign language (previous post)
3. Visible Algebra II (previous post)
4. Sets: Notation (abstractmath article)
5. Setbuilder notation (Wikipedia)

### Notes on Viewing

• This post uses MathJax. If you see mathematical expressions with dollar signs around them, or badly formatted formulas, try refreshing the screen. Sometimes you have to do it two or three times.
• To manipulate the demos in this post, you must have Wolfram CDF Player installed on your computer. It is available free from the Wolfram website. The code for the demos is in the Mathematica notebook Representing sets.nb.

## Visible algebra II

2012/09/10 — SixWingedSeraph

Notes on viewing:

• This post uses MathJax. If you see mathematical formulas with dollar signs around them, or badly formatted formulas, try refreshing the screen. Sometimes you have to do it two or three times.
• To manipulate the demos in this post, you must have Wolfram CDF Player installed on your computer. It is available free from the Wolfram website. The code for the demos is in the Mathematica notebook algebra2.nb.

## More about visible algebra

I have written about visible algebra in previous posts (see References). My ideas about the interface are constantly changing. Some new ideas are described here.

In the first place I want to make it clear that what I am showing in these posts is a simulation of a possible visual algebra system. I have not constructed any part of the system; these posts only show something about what the interface will look like. My practice in the last few years is to throw out ideas, not construct completed documents or programs. (I am not saying how long I will continue to do this.) All these posts, Mathematica programs and abstractmath.org are available to reuse under a Creative Commons license.

## Commutative and associative operations

Times and Plus are commutative and associative operations. They are usually defined as binary operations. A binary operation $*$ is said to be commutative if for all $x$ and $y$ in the underlying set of the operation, $x*y=y*x$, and it is associative if for all $x$, $y$ and $z$ in the underlying set of the operation, $(x*y)*z=x*(y*z)$.

It is far better to define a commutative and associative operation $*$ on some underlying set $S$ as an operation on any multiset of elements of $S$. A multiset is like a set in that its elements can be rearranged in any way, but it is not like a set in that elements can be repeated, and a different number of repetitions of an element makes a different multiset. So for any particular multiset, the number of repetitions of each element is fixed. Thus $\{a,a,b,b,c\} = \{c,b,a,b,a\}$ but $\{a,a,b,b,c\}\neq\{c,b,a,b,c\}$.

This means that the function (operation) Plus, for example, is defined on any multiset of numbers, and \[\mathbf{Plus}\{a,a,b,b,c\}=\mathbf{Plus} \{c,b,a,b,a\}\] but $\mathbf{Plus}\{a,a,b,b,c\}$ might not be equal to $\mathbf{Plus} \{c,b,a,b,c\}$.

This way of defining (any) associative and commutative operation comes from the theory of monads. An operation defined on all the multisets drawn from a particular set is necessarily commutative and associative if it satisfies some basic monad identities, the main one being that it commutes with union of multisets (which is defined in the way you would expect; if this irritates you, read the Wikipedia article on multisets). You don't have to impose any conditions specifically referring to commutativity or associativity. I expect to write further about monads in a later post.
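Mathematica itself treats Plus and Times in just this way: both carry the attributes Orderless and Flat, so a sum applied to a multiset of arguments is insensitive to order and grouping. A small illustration (mine):

Attributes[Plus]  (* includes Flat and Orderless *)
Plus @@ {a, a, b, b, c}                             (* -> 2 a + 2 b + c *)
Plus @@ {c, b, a, b, a} == Plus @@ {a, a, b, b, c}  (* -> True *)
Plus @@ {c, b, a, b, c}                             (* -> a + 2 b + 2 c, a different multiset *)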
The operation is automatically associative if you require it to preserve concatenation of lists and to satisfy some other monad requirements.

Some binary operations are neither commutative nor associative.  Two such operations on numbers are Subtract and Power.  Such operations are truly binary operations; there is no obvious way to apply them to other structures.  They are only binary because the two inputs have different roles.  This suggests that the inputs be given names, as in the examples below.

Later, I will write more about simplifying trees, solving the max area problem for rectangles surmounted by semicircles, and other things concerning this system of doing algebra.

## Generating a Collatz tree

2012/08/19 — SixWingedSeraph

I have written a short Mathematica program that generates the function tree of the Collatz function.  The code is in the document collatz.nb on the abmath website.

### Examples

Here are some examples.  The first one is generated by the integers between 1 and 26.  (27 is to be avoided because it makes a shoot that is 111 nodes high.)  The primes from 1 to 26 generate the same tree.

This one is generated by the odd numbers from 1 to 26:

This is generated by the even numbers from 1 to 50:

### Remark

This program is not of great import, but it was fun doing it and I learned more Mathematica. In particular, I learned that you cannot assign to a parameter in a function definition. For example, I had to write

```
fv[gl_List, n_Integer] :=
  (nn := n; ggl := {nn};
    (Sow[1];
      While[! MemberQ[ggl, cf[nn]],
       (Sow[nn]; Sow[cf[nn]]); nn = cf[nn]]) // Reap)
```

instead of

```
fv[gl_List, n_Integer] :=
  (Sow[1];
    While[! MemberQ[gl, cf[n]],
     (Sow[n]; Sow[cf[n]]); n = cf[n]]) // Reap
```

(where n := cf[n] wouldn't have worked either).

## Mathematical and linguistic ability

2012/08/04 — SixWingedSeraph

## Some personal history

When I was young, I was your typical nerdy geek.  (Never mind what I am now that I am old.) In high school, I was fascinated by languages, primarily by their structure.  I would have wanted to become a linguist if I had known there was such a thing.  I was good at grasping the structure of a language and read grammars for fun. I was only pretty good at picking up vocabulary. I studied four different languages in high school and college and Turkish when I was in the military.  I know a lot about their structure but am not fluent in any of them (possibly including English).

After college, I decided to go to math grad school.  This was soon after Sputnik, and jobs for PhDs were temporarily easy to get.

I always found algebra easy.  When I had to learn other symbolic languages, for example set theory, first order logic, and early programming languages, I found them easy too.  I had enough geometric insight that I did well in all my math courses, but my real strength was in learning languages.

When I got a job at (what is now) Case Western Reserve University, I began learning category theory and a bit of cohomology of groups. I wrote a paper about group automorphisms that got into Transactions of the AMS.  (Full disclosure: I am bragging.)  The way Saunders Mac Lane did cohomology, he used "$+$" as a noncommutative operation.  No problem with that, I did lots of calculations in his notation.
In reading category theory I learned how to reason using commutative diagrams.  That is radically different from other math — it isn't strings of symbols — but I caught on. I read Beck's thesis in detail.  Beck wrote functions on the right (unlike Mac Lane), which I adapted to with no problem.  In fact my automorphisms paper and many others in those days were written with functions on the right.

Later on in my career, I learned to program in Forth reasonably well. It is a reverse Polish language. Then (by virtue of summer grants in the 1990s) I learned to use Mathematica, which I now use a lot:  I am an "experienced" user but not an "expert".

## Learning foreign languages in studying math

I taught mostly engineering students during my 35 years at CWRU (especially computer engineering). When I used a text (including my own discrete math class notes) some students pleaded with me not to use $P\wedge Q$ and $P \vee Q$ but to let them use $PQ$ and $P+Q$ like they did in their CS courses.  Likewise $1$ and $0$ instead of T and F.  Many of them simply could not switch easily between different codes.  Similar problems occurred in classes in first order logic.

In the early days of calculators, when most of them were reverse Polish, some students never mastered their use.

These days, a common complaint about Mathematica is that it is a difficult language to learn; at the MAA meeting in Madison (where I am as I write this) Wolfram didn't even staff a booth.  Apparently too many of the professors can't handle Mathematica.

I gave up writing papers with functions on the right because several professional mathematicians complained that they found them too hard to read.  I guess not all professional mathematicians can switch code easily, either.

There are many great mathematicians whose main strength is geometric understanding, not linguistic understanding.  Nevertheless, to become a mathematician you have to have enough linguistic ability to learn…

## Algebra

The big elephant in the room is ordinary symbolic algebra as it is used in high school algebra and precalculus.  This of course causes difficulty among first year calculus students, too, but college profs are spared the problem that high school teachers have with a large percentage of the students never really grasping how algebra works.  We don't see those students in STEM courses.

It is surely the case that algebra is a difficult and unintuitive foreign language.  I have carried on about this in my stuff about the languages of math on my abstractmath site.

Some students already in college don't really understand expressions such as $x^2$.  You still get some who sporadically think it means $2x$.  (They don't always think that, but it happens when they are off guard.)  Lots of them don't understand the difference between $x^2$ and $2^x$.

In complicated situations, students don't grasp the difference between an expression such as $x^2+2x+1$ and a statement like $x^2+2x+1=0$.  Not to mention the way $x^2+2x+1=0$ and $x^2+2x+1=(x+1)^2$ are different kinds of statements even though the difference is not indicated in the syntax.

There are many irregularities and ambiguities (just like any natural language — the symbolic language of math is a natural language!): consider $\sin xy$, $\sin x + y$, $\sin x/y$.  (Don't squawk to me about order of operators.  That's as bad as aus, außer, bei, mit, zu.  German can't help it, but mathematical notation could.)

One monstrous ambiguity is $(x,y)$, which could be an ordered pair, the GCD, or an open interval.
I found an example of two of those in the same sentence in the Handbook of Mathematical Discourse, and today in a lecture I saw someone use it with two meanings about three inches apart on a transparency.

Anyway, the symbolic language of math is difficult and we don't teach it well.

## Structuring calculations

There are other ways to structure calculations that are much more transparent.  Most of them use two or three dimensions.

• Spreadsheets: It is easy to approximate the zeros of a function using a spreadsheet, changing the input till you get a value near zero. Why can't middle school students be taught that?
• Bret Victor has made suggestions for easy ways to calculate things.
• In my post Visible Algebra I suggest a two-dimensional approach to putting together calculations.  (There are several more posts coming about that idea.)
• Mathematica interactive demos could maybe be provided in a way that would allow them to be joined together to make a complicated calculation. (Modules such as an inverse image constructor.)  I have not tried to do this.

A lot of these alternatives work better because they make full use of two dimensions.  Toolkits could be made for elementary school students (there are some already but I am not familiar with them).

It is impractical to expect that every high school student master basic algebraic notation.  It is difficult and we don't know how to teach it to everyone. With the right toolkits, we could enable everyone, not just students, to put together usable calculations on their computers and experiment with them.  This includes working out the effect of different payment periods on loans, how much paint you need for a room, and many other things.

STEM students will still have to learn algebraic notation as we use it now.  It should be taught as a foreign language, with explicit instruction in its syntax (sentences and terms, scope of an operator, and so on), its ambiguities, and its peculiarities.

## Making visible the abstraction in algebraic notation

2012/05/23 — SixWingedSeraph

To manipulate the demos below, you must have Wolfram CDF Player installed on your computer. It is available free from the Wolfram website.

### Algebraic notation

Algebraic notation contains a hidden abstract structure coded by apparently arbitrary conventions that many college calculus students don't understand completely. This very simple example shows one of the ways in which calc students may be confused:

1. $x+2y$
2. $(x+2)y$

Students often mean to express formula 2 when they write something like \[x\!\!+\!\!2\,\,\,\,y\] (with a space).  This is a perfectly natural way to write it. But it is against the rules, I presume because in handwriting it is not clear when you mean a space and when you don't.

Formula 1 can also be written as $x+(2y)$, and if it were usually written that way students (I predict) would be less confused.   Always writing it this way would exacerbate the clutter of parentheses but would allow a simple rule: Evaluate every expression inside parentheses first, starting with the innermost.

### Using trees for algebra

Writing algebraic expressions as a tree (as in computing science)

• makes it obvious what gets evaluated first
• uses no parentheses at all.

An example of using the tree of an expression to do calculations is available in Expressions.nb (requires Mathematica) and Expressions.cdf (requires CDF player only) on my Mathematica website.  I could imagine using tree expressions instead of standard notation as the normal way of doing things; a bare-bones sketch of evaluating such trees is given below.
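Here is a sketch of the idea in Python (mine, not the Mathematica notebooks the post refers to): each node of the tree is either a leaf or an operator with children, so the shape of the tree alone fixes the order of evaluation, and no parentheses are ever needed.

```
# A tiny expression-tree evaluator: a node is either a number, a
# variable name, or a tuple (operator, child, child).
import operator

OPS = {'+': operator.add, '*': operator.mul, '^': operator.pow}

def evaluate(node, env):
    if isinstance(node, str):           # a variable leaf, e.g. 'x'
        return env[node]
    if isinstance(node, (int, float)):  # a constant leaf
        return node
    op, left, right = node              # an operator node
    return OPS[op](evaluate(left, env), evaluate(right, env))

# x + 2y  versus  (x + 2)y: two different trees, no ambiguity.
x_plus_2y   = ('+', 'x', ('*', 2, 'y'))
x_plus_2__y = ('*', ('+', 'x', 2), 'y')

print(evaluate(x_plus_2y,   {'x': 1, 'y': 3}))  # 7
print(evaluate(x_plus_2__y, {'x': 1, 'y': 3}))  # 9
```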
That would require working on iPads or some such, and would take a big investment in software to make it intuitive and easy to use.  No, I am not going to embark on such an adventure, but I think it ought to be attempted.  (Bret Victor has many ideas like this.)

### Transforming algebraic notation into trees

The two manipulable diagrams below show the algebraic notation being transformed into tree form.  I expect that this will make the abstract structure more concrete for many students and I encourage others to show it to their students.  Note that the tree form makes everything explicit.  The code for these diagrams is in Handmade Exp Tree.nb

After I return from a ten-day trip I will explore the possibility of making the expression-to-tree transformer turn the expression into an evaluable tree as in Expressions.nb and Expressions.cdf.  In the (I hope not too distant) future students should have access to many transformers that morph expressions from one form into another.  Such transformers are much more politically correct than Optimus Prime.

Offloading chunking and Computable algebraic expressions in tree form are earlier posts related to this post.

## Metaphors in computing science I

2012/05/15 — SixWingedSeraph

Michael Barr recently told me of a transcription of a talk by Edsger Dijkstra dissing the use of metaphors in teaching programming and advocating that every program be written together with a proof that it works.  This led me to think about the metaphors used in computing science, and that is what this post is about.  It is not a direct answer to what Dijkstra said.

We understand almost anything by using metaphors.  This is a broader sense of metaphor than that thing in English class where you had to say "my love is a red red rose" instead of "my love is like a red red rose".  Here I am talking about conceptual metaphors (see references at the end of the post).

### Metaphor: A program is a set of instructions

You can think of a program as a list of instructions that you can read and, if it is not very complicated, understand how to carry them out.  This metaphor comes from your experience with directions on how to do something (like directions from Google Maps or for assembling a toy).   In the case of a program, you can visualize doing what the program says to do and coming out with the expected output. This is one of the fundamental metaphors for programs. Such a program may be informal text or it may be written in a computer language.

#### Example

A description of how to calculate $n!$ in English could be:  "Multiply the integers $1$ through $n$".  In Mathematica, you could define the factorial function this way:

```
fac[n_] := Apply[Times, Table[i, {i, 1, n}]]
```

This more or less directly copies the English definition, which could have been reworded as "Apply the Times function to the integers from $1$ to $n$ inclusive."  Mathematica programmers customarily use the abbreviation "@@" for Apply because it is more convenient:

```
fac[n_] := Times @@ Table[i, {i, 1, n}]
```

As far as I know, C does not have list operations built in.  This simple program gives you the factorial function evaluated at $n$:

```
j = 1;
for (i = 2; i <= n; i++)
    j = j * i;
return j;
```

This does the calculation in a different way: it goes through the numbers $1, 2,\ldots,n$ and multiplies the result-so-far by the new number.  If you are old enough to remember Pascal or Basic, you will see that there you could use a DO loop to accomplish the same thing.
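The same contrast shows up in any language; here is a Python version (my example, not the post's).  The first definition is essentially the instruction "multiply the numbers from 1 to n" with no order of evaluation specified; the second spells out the C-style ordered sequence of steps:

```
import math

# "Multiply the integers 1 through n" -- the instruction itself,
# with no order of evaluation specified.
def fac_declarative(n):
    return math.prod(range(1, n + 1))

# The C-style instruction list: multiply a running result by each
# number in turn, explicitly in order from small to large.
def fac_imperative(n):
    j = 1
    for i in range(2, n + 1):
        j = j * i
    return j

assert fac_declarative(6) == fac_imperative(6) == 720
```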
#### What this metaphor makes you think of

Every metaphor suggests both correct and incorrect ideas about the concept.

• If you think of a list of instructions, you typically think that you should carry out the instructions in order.  (If they are Ikea instructions, your experience may have taught you that you must carry out the instructions in order.)
• In fact, you don't have to "multiply the numbers from $1$ to $n$" in order at all: you could break the list of numbers into several lists and give each one to a different person to do, and they would give their answers to you and you would multiply them together.
• The instructions for calculating the factorial can be translated directly into Mathematica instructions, which do not specify an order.   When $n$ is large enough, Mathematica would in fact do something like the process of giving it to several different people (well, processors) to speed things up.
• I had hoped that Wolfram Alpha would answer "720" if I wrote "multiply the numbers from $1$ to $6$" in its box, but it didn't work.  If it had worked, the instruction in English would not be translated at all. (Note added 7 July 2012:  Wolfram has repaired this.)
• The example program for C that I gave above explicitly multiplies the numbers together in order from little to big.  That is the way it is usually taught in class.  In fact, you could program a package for lists using pointers (a process taught in class!) and then use your package to write a C program that looks like the  "multiply the numbers from $1$ to $n$" approach.  I don't know much about C; a reader could probably tell me other better ways to do it.

So notice what happened:

• You can translate "multiply the numbers from $1$ to $n$" directly into Mathematica.
• For C, you have to write a program that implements multiplying the numbers from $1$ to $n$.

Implementation in this sense doesn't seem to come up when we think about instruction sets for putting furniture together.  It is sort of like: Build a robot to insert & tighten all the screws.

Thus the concept of program in computing science comes with the idea of translating the program instruction set into another instruction set.

• The translation provided above for Mathematica resembles translating the instruction set into another language.
• The two translations I suggested for C (the program and the definition of a list package to be used in the translation) are not like translating from English to another language.  They involve a conceptual reconstruction of the set of instructions.

Similarly, a compiler translates a program in a computer language into machine code, which involves automated conceptual reconstruction on a vast scale.

#### Other metaphors

• C or Mathematica as being like a natural language in some ways
• Compiling (or interpreting) as translation

Computing science has used other VIM's (Very Important Metaphors) that I need to write about later:

• Semantics (metaphor: meaning)
• Program as text – this allows you to treat the program as a mathematical object
• Program as machine, with states and actions like automata and Turing machines
• Specification of a program.  You can regard  "the product of the numbers from $1$ to $n$" as a specification.  Notice that saying "the product" instead of "multiply" changes the metaphor from "instruction" to "specification".  A tiny illustration of the specification metaphor follows.
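The illustration, in Python and entirely my own: treat "the product of the numbers from 1 to n" as an executable check, and test any candidate implementation against it.

```
import math

def spec_factorial(n):
    # The specification: "the product of the numbers from 1 to n".
    return math.prod(range(1, n + 1))

def candidate(n):
    # Some implementation whose correctness we want to check.
    out = 1
    for i in range(2, n + 1):
        out *= i
    return out

# The specification says nothing about *how* to compute,
# only *what* the result must be.
for n in range(10):
    assert candidate(n) == spec_factorial(n)
```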
#### References Conceptual metaphors (Wikipedia) Images and Metaphors (article in abstractmath) Images and Metaphors for Sets (article in abstractmath) Images and Metaphors for Functions (incomplete article in abstractmath)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 168, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.91718989610672, "perplexity_flag": "middle"}
http://pixelstoomany.wordpress.com/category/shadows/exponential-shadow-maps/
# Pixels, Too Many..

## A conceptually simple(r) way to derive exponential shadow maps + sample code

June 12, 2008 — Marco Salvi

A few months ago, while working on an improved version of exponential shadow maps, I stumbled on a new way to derive the ESM equations which looks simpler and more intuitive than previous attempts. There is no need to invoke Markov's inequality, higher order moments or convolutions. In fact all we have to do is to write the basic percentage closer filtering formula for $n$ equally weighted occluders $o_i$ and a receiver $r$

$\displaystyle\frac{1}{n}\sum_{i=1}^{n}H(o_i-r)$

The role of the step function $H(x)$ is to perform a depth test on all occluders; the depth test results are then averaged together to obtain a filtered occlusion term. There are many ways to write $H(x)$, and a limit of exponential functions guarantees a fast convergence:

$\displaystyle H(o_i-r) = \lim_{k \to +\infty} \frac{e^{ko_i}}{e^{ko_i}+e^{kr}}$

We can rewrite the original PCF equation as:

$\begin{array}{ccc} \displaystyle\frac{1}{n}\sum_{i=1}^{n}H(o_i-r)&=&\displaystyle\frac{1}{n}\sum_{i=1}^{n}\lim_{k \to +\infty} \frac{e^{ko_i}}{e^{ko_i}+e^{kr}} \\ &=&\displaystyle\lim_{k \to +\infty}\frac{1}{ne^{kr}}\sum_{i=1}^{n}\frac{e^{ko_i}}{e^{k(o_i - r)}+1} \end{array}$

If we make the hypothesis that our shadow receiver is planar within the filtering window, we are also implicitly assuming that the receiver is the most distant occluder (otherwise it might occlude itself, which can't happen given our initial hypothesis); thus we have $r > o_i$. Armed with this new assumption we observe that the term $e^{k(o_i - r)}$ quickly converges to zero for all occluders:

$\begin{array}{ccc} \displaystyle\lim_{k \to +\infty}\frac{1}{ne^{kr}}\sum_{i=1}^{n}\frac{e^{ko_i}}{e^{k(o_i - r)}+1} &\approx&\displaystyle\lim_{k \to +\infty}\frac{1}{ne^{kr}}\sum_{i=1}^{n}e^{ko_i} \\ &\equiv&\displaystyle\lim_{k \to +\infty}\frac{E[e^{ko}]}{e^{kr}} \\ \end{array}$

As we already know, $k$ controls the sharpness of our step function approximation and can be used to fake soft shadows. Ultimately we can drop the limit and we obtain the ESM occlusion term formula:

$\displaystyle \frac{E[e^{ko}]}{e^{kr}}$

Exponential shadow maps can be seen as a very good approximation of a PCF filter when all the occluders are located in front of our receiver (no receiver self-shadowing within the filtering window). There's not much else to add, except that this new derivation clearly shows the limits of this technique and that any future improvements will necessarily be based on a relaxed version of the planar receiver hypothesis.

For unknown reasons some old and buggy ESM test code was distributed with ShaderX6. You can grab the FxComposer 2.0 sample code that was originally meant to be released with the book here.
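To make the final formula concrete, here is a minimal NumPy sketch of an ESM lookup (my illustration, not the ShaderX6 or FxComposer sample code): $e^{ko}$ is what gets stored and linearly filtered in the shadow map, and the receiver contributes the $e^{-kr}$ factor.

```
import numpy as np

def esm_occlusion_term(occluder_depths, receiver_depth, k=80.0):
    """E[exp(k*o)] * exp(-k*r); a value of ~1 means fully lit here.

    occluder_depths stands in for the depths covered by the filter
    window; in a real renderer the blurred shadow map would already
    hold the filtered value E[exp(k*o)].
    """
    warped = np.exp(k * occluder_depths)   # what the shadow map stores
    filtered = warped.mean()               # any linear filter works
    term = filtered * np.exp(-k * receiver_depth)
    # The ratio can exceed 1 when the planar-receiver assumption
    # (r >= all occluders in the window) is violated, so clamp it.
    return np.clip(term, 0.0, 1.0)

print(esm_occlusion_term(np.full(16, 0.5), 0.5))  # receiver unshadowed: ~1.0
print(esm_occlusion_term(np.full(16, 0.3), 0.5))  # occluders in front: ~0.0
```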
## Another brief update: GDC08 and ShaderX6

February 24, 2008 — Marco Salvi

How do you feel right before giving the first lecture of your life? The answer is: not well! I felt so tense that I forgot half of what I wanted to say, and even though I was probably speaking a bit too fast I believe my lecture at GDC went quite well. The room was packed with people, despite the fact that it was the last talk of the day. At the end of the presentation I was asked a fair number of pertinent questions, and I was kind of surprised to realize that some didn't believe that such extremely simple approaches could possibly work!

For all those that couldn't be there, Wolfgang Engel (organizer of the Core Techniques & Algorithms in Shader Programming day at GDC) has collected all the presentations in one handy page. You can also download my presentation about some new and exotic shadow mapping filtering schemes directly from this link. Comments were added to the most obscure slides, and as usual feel free to ask questions, make comments, point out errors, etc. on this blog.

This talk also represented the first occasion to publicly introduce Exponential Shadow Maps. The main idea has been going around for a while now (and it seems some highly anticipated game will soon ship with it..) but it wasn't possible to fully explain the algorithm and why it works (or doesn't work, in some cases) due to the limited amount of time available. On the other hand ShaderX6 is now widely available, and in it you will find a long and detailed article about ESM (and some sample code), among many other interesting contributions.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 13, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9475181698799133, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/140591/infinite-or-unknown/140602
# Infinite or unknown?

If you have $0$ clients on Monday, and $5$ clients on Tuesday, how many times has the number of clients grown from Monday to Tuesday?

$A$ - Infinite times
$B$ - Unknown
$C$ - Undefined

I'd go with "c - undefined". – mrf May 3 '12 at 21:14

## 2 Answers

To ask "how many times has the number of clients grown from Monday to Tuesday" is the same as asking for the value of: $$\dfrac{\mathrm{Clients}(\mathrm{Tuesday})}{\mathrm{Clients}(\mathrm{Monday})}=\frac50$$ In the context of the real numbers, or natural numbers if you prefer, this is undefined. We cannot divide by the actual number zero.

Note that it is common to say "infinity" because $\lim\limits_{n\to0^+}\frac5n=\infty$. However this limit simply tells us that the ratio grows beyond any fixed number as the denominator shrinks; it is not an actual number or ratio per se.

@Marvis: Of course! :-) – Asaf Karagila May 3 '12 at 21:23

This would evaluate to "infinity" for any nonzero number of clients on Tuesday. Surely, a rise to 5 clients should deserve to be denoted by a larger increase than, say, 2 clients. Since I don't see a way to express this, maybe "b - unknown" is the best answer here? – doppelfish May 3 '12 at 21:23

@doppelfish: No, this is undefined because it is not a defined operation. Defined operations are exactly operations which are not dependent on the choice of representatives ($\frac12n=\frac24n$, for example). Evaluating a division by zero is simply undefined in this context. In a broader context it might be possible and maybe even reasonable to define something like that, however this is not the sort of context that one would ask about here. Usually when you reach that point, you can do these things yourself just fine. – Asaf Karagila May 3 '12 at 21:26

Go with "c - Undefined". Translating the word problem to a precise statement, I find it to mean the following: Let $x = 0$ and $y=5$. If $y=kx$, then what is $k$? Solving this equation would involve division by zero, so the answer is undefined, i.e. there is no such $k$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9502816796302795, "perplexity_flag": "middle"}
http://mathhelpforum.com/geometry/51234-solved-find-area-rectangle-abcd.html
# Thread:

1. ## [SOLVED] Find the area of rectangle ABCD

Hi, this is on the SAT OG Book page 747 #16

In rectangle ABCD, point E is the midpoint of line segment BC. If the area of quadrilateral ABED is $\frac{2}{3}$, what is the area of the rectangle ABCD?

a) $\frac{1}{2}$
b) $\frac{3}{4}$
c) $\frac{8}{9}$
d) 1
e) $\frac{8}{3}$

2. Originally Posted by fabxx
Hi, this is on the SAT OG Book page 747 #16. In rectangle ABCD, point E is the midpoint of line segment BC. If the area of quadrilateral ABED is $\frac{2}{3}$, what is the area of the rectangle ABCD?

Note that the area of rectangle ABCD is given by AD*AB. Now, ABED is a trapezium, thus its area is given by half the sum of the two parallel sides times the distance between them. That is, (1/2)(BE + AD)*AB, and this is 2/3, so

(1/2)(BE + AD)*AB = 2/3
=> (BE + AD)*AB = 4/3

but BE = (1/2)AD (this should be pretty obvious from a diagram if you drew or were given one), so that

=> (3/2)AD*AB = 4/3
=> AD*AB = 8/9

3. Hello, fabxx! Did you make a sketch?

In rectangle ABCD, point E is the midpoint of line segment BC. If the area of $ABED$ is $\frac{2}{3}$, what is the area of $ABCD$? $(a)\;\frac{1}{2} \qquad(b)\;\frac{3}{4}\qquad (c)\;\frac{8}{9} \qquad (d)\;1 \qquad (e) \;\frac{8}{3}$

Code:
```
D *---------------* C
  |   *           |
  |       *       |
  |           *   |
F * - - - - - - - * E
  |               |
  |               |
  |               |
A *---------------* B
```

Draw median EF.

$\text{Rect }DCEF \:=\:\frac{1}{2}(\text{Rect }ABCD)$

$\Delta DCE \:=\:\frac{1}{2}(\text{Rect }DCEF) \;=\;\frac{1}{4}(\text{Rect }ABCD)$

. . Hence: . $\text{Quad }ABED \:=\:\frac{3}{4}(\text{Rect }ABCD)$

We are told that: . $\text{Quad }ABED \:=\:\frac{2}{3}$

We have: . $\frac{3}{4}(\text{Rect }ABCD) \;=\;\frac{2}{3} \quad\Rightarrow\quad \boxed{\text{Rect }ABCD \:=\:\frac{8}{9}}$ . . . answer (c)

4. Originally Posted by Jhevon
Note that the area of rectangle ABCD is given by AD*AB. Now, ABED is a trapezium, thus its area is given by half the sum of the two parallel sides times the distance between them. That is, (1/2)(BE + AD)*AB, and this is 2/3, so (1/2)(BE + AD)*AB = 2/3 => (BE + AD)*AB = 4/3, but BE = (1/2)AD (this should be pretty obvious from a diagram if you drew or were given one), so that => (3/2)AD*AB = 4/3 => AD*AB = 8/9

No figure is given for this. If I were to draw one myself, would it look like the attachment below?

5. Originally Posted by fabxx
No figure is given for this. If I were to draw one myself, would it look like the attachment below?

Yes, it should look like that. Note that it is the same as the diagram Soroban drew, except it is turned on its side. I imagined it as Soroban did. It will all work out the same. If your diagram was like the given one or Soroban's, you will get the right answer.
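A quick coordinate check of answer (c), not from the thread: place the rectangle at $A=(0,0)$, $B=(w,0)$, $C=(w,h)$, $D=(0,h)$ with $E=(w,h/2)$ the midpoint of BC, and compute both areas with the shoelace formula.

```
def shoelace(pts):
    # Signed polygon area via the shoelace formula.
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2

w, h = 1.0, 8.0 / 9.0          # any w, h with w*h = 8/9 works
A, B, C, D = (0, 0), (w, 0), (w, h), (0, h)
E = (w, h / 2)                 # midpoint of BC

print(shoelace([A, B, E, D]))  # 2/3, the given area of ABED
print(shoelace([A, B, C, D]))  # 8/9, answer (c)
```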
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 19, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9533146619796753, "perplexity_flag": "middle"}
http://mathematica.stackexchange.com/questions/tagged/factorization+function-construction
# Tagged Questions

### Generating pairs of additive and multiplicative factors for integers

Given an integer $n$, I want two lists: a) the set of pairs of divisors $a,b$ giving the factorizations of $n$ into exactly two factors $n=a\cdot b$, b) the set of pairs $a,b$ of two summands $n=a+b$. The code I came up ...
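The question (truncated here) asks for Mathematica; purely as an illustration of the two lists being asked for, here is the same idea sketched in Python (function names are mine):

```
def factor_pairs(n):
    """All pairs (a, b) with a <= b and a*b == n."""
    return [(a, n // a) for a in range(1, int(n**0.5) + 1) if n % a == 0]

def summand_pairs(n):
    """All pairs (a, b) with 1 <= a <= b and a + b == n."""
    return [(a, n - a) for a in range(1, n // 2 + 1)]

print(factor_pairs(12))   # [(1, 12), (2, 6), (3, 4)]
print(summand_pairs(6))   # [(1, 5), (2, 4), (3, 3)]
```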
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8295324444770813, "perplexity_flag": "middle"}
http://www.theresearchkitchen.com/archives/date/2012/06
# Density Estimation of High-Frequency Financial Data

Posted on June 12, 2012

Frequently we will want to estimate the empirical probability density function of real-world data and compare it to the theoretical density from one or more probability distributions. The following example shows the empirical and theoretical normal density for EUR/USD high-frequency tick data $$X$$ (which has been transformed using log-returns and normalized via $$\frac{X_i-\mu_X}{\sigma_X}$$). The theoretical normal density is plotted over the range $$\left(\lfloor\mathrm{min}(X)\rfloor,\lceil\mathrm{max}(X)\rceil\right)$$. The results are in the figure below. The discontinuities and asymmetry of the discrete tick data, as well as the sharp kurtosis and heavy tails (a corresponding interval of $$\approx \left[-8,+7\right]$$ standard deviations away from the mean) are apparent from the plot.

Empirical and Theoretical Tick Density

We also show the theoretical and empirical density for the EUR/USD exchange rate log returns over different timescales. We can see from these plots that the distribution of the log returns seems to be asymptotically converging to normality. This is a typical empirical property of financial data.

Density Estimate Across Varying Timescales

The following R source generates empirical and theoretical density plots across different timescales. The data is loaded from files that are sampled at different intervals. I can't supply the data unfortunately, but you should get the idea.

```
# Function that reads Reuters CSV tick data and converts Reuters dates
# Assumes format is Date,Tick
readRTD <- function(filename) {
  tickData <- read.csv(file=filename, header=TRUE, col.names=c("Date","Tick"))
  tickData$Date <- as.POSIXct(strptime(tickData$Date, format="%d/%m/%Y %H:%M:%S"))
  tickData
}

# Boilerplate function for Reuters FX tick data transformation and density plot
plot.reutersFXDensity <- function() {
  filenames <- c("data/eur_usd_tick_26_10_2007.csv",
                 "data/eur_usd_1min_26_10_2007.csv",
                 "data/eur_usd_5min_26_10_2007.csv",
                 "data/eur_usd_hourly_26_10_2007.csv",
                 "data/eur_usd_daily_26_10_2007.csv")
  labels <- c("Tick", "1 Minute", "5 Minutes", "Hourly", "Daily")

  # Save the old graphics parameters so they can be restored at the end
  op <- par(mfrow=c(length(filenames), 2), mar=c(0,0,2,0), cex.main=2)
  tickData <- c()
  i <- 1
  for (filename in filenames) {
    tickData[[i]] <- readRTD(filename)
    # Transform: `$Y = \nabla\log(X_i)$`
    logtick <- diff(log(tickData[[i]]$Tick))
    # Normalize: `$\frac{(Y-\mu_Y)}{\sigma_Y}$`
    logtick <- (logtick-mean(logtick))/sd(logtick)
    # Theoretical density range: `$\left[\lfloor\mathrm{min}(Y)\rfloor,\lceil\mathrm{max}(Y)\rceil\right]$`
    x <- seq(floor(min(logtick)), ceiling(max(logtick)), .01)
    plot(density(logtick), xlab="", ylab="", axes=FALSE, main=labels[i])
    lines(x, dnorm(x), lty=2)
    #legend("topleft", legend=c("Empirical","Theoretical"), lty=c(1,2))
    plot(density(logtick), log="y", xlab="", ylab="", axes=FALSE, main="Log Scale")
    lines(x, dnorm(x), lty=2)
    i <- i + 1
  }
  par(op)
}
```

# Binomial Pricing Trees in R

Posted on June 11, 2012

Binomial Tree Simulation

The binomial model is a discrete grid generation method from $$t=0$$ to $$T$$. At each point in time ($$t+\Delta t$$) we can move up with probability $$p$$ and down with probability $$(1-p)$$. As the probabilities of an up and a down movement remain constant throughout the generation process, we end up with a recombining binary tree, or binary lattice.
Whereas a balanced binomial tree with height $$h$$ has $$2^{h+1}-1$$ nodes, a recombining binomial lattice of height $$h$$ has only $$\sum_{i=1}^{h+1}i = \frac{(h+1)(h+2)}{2}$$ nodes. The algorithm to generate a binomial lattice of $$M$$ steps (i.e. of height $$M$$) given a starting value $$S_0$$, an up movement $$u$$, and down movement $$d$$, is:

```
FOR i=1 to M
  FOR j=0 to i
    STATE S(j,i) = S(0)*u^j*d^(i-j)
  ENDFOR
ENDFOR
```

We can write this function in R and generate a graph of the lattice. A simple lattice generation function is below:

```
# Generate a binomial lattice
# for a given up, down, start value and number of steps
genlattice <- function(X0=100, u=1.1, d=.75, N=5) {
  X <- c()
  X[1] <- X0
  count <- 2
  for (i in 1:N) {
    for (j in 0:i) {
      X[count] <- X0 * u^j * d^(i-j)
      count <- count + 1
    }
  }
  return(X)
}
```

We can generate a sample lattice of 5 steps using symmetric up-and-down values:

```
> genlattice(N=5, u=1.1, d=.9)
 [1] 100.000  90.000 110.000  81.000  99.000 121.000  72.900  89.100 108.900 133.100  65.610
[12]  80.190  98.010 119.790 146.410  59.049  72.171  88.209 107.811 131.769 161.051
```

In this case, the output is a vector of alternate up and down state values. We can nicely graph a binomial lattice given a tool like graphviz, and we can easily create an R function to generate a graph specification that we can feed into graphviz:

```
dotlattice <- function(S, labels=FALSE) {
  shape <- ifelse(labels == TRUE, "plaintext", "point")

  cat("digraph G {", "\n", sep="")
  cat("node[shape=",shape,", samehead, sametail];","\n", sep="")
  cat("rankdir=LR;","\n")
  cat("edge[arrowhead=none];","\n")

  # Create a dot node for each element in the lattice
  for (i in 1:length(S)) {
    cat("node", i, "[label=\"", S[i], "\"];", "\n", sep="")
  }

  # The number of levels in a binomial lattice of length N
  # is `$\frac{\sqrt{8N+1}-1}{2}$`
  L <- ((sqrt(8*length(S)+1)-1)/2 - 1)

  k <- 1
  for (i in 1:L) {
    j <- i
    while(j>0) {
      cat("node",k,"->","node",(k+i),";\n",sep="")
      cat("node",k,"->","node",(k+i+1),";\n",sep="")
      k <- k + 1
      j <- j - 1
    }
  }
  cat("}", sep="")
}
```

This will simply output a dot script to the screen. We can capture this script and save it to a file by invoking:

```
> x <- capture.output(dotlattice(genlattice(N=8, u=1.1, d=0.9)))
> cat(x, file="/tmp/lattice1.dot")
```

We can then invoke dot from the command-line on the generated file:

```
$ dot -Tpng -o lattice.png -v lattice1.dot
```

The resulting graph looks like the following:

Binomial Lattice (no labels)

If we want to add labels to the lattice vertices, we can add the labels attribute:

```
> x <- capture.output(dotlattice(genlattice(N=8, u=1.1, d=0.9), labels=TRUE))
> cat(x, file="/tmp/lattice1.dot")
```

Binomial Lattice (labels)
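The post stops at generating and drawing the lattice. A natural next step, sketched here in Python rather than R and not taken from the post, is to price a European option by backward induction over the same lattice, using the standard risk-neutral up-probability p = (exp(r*dt) - d)/(u - d):

```
from math import exp

def price_european_call(S0=100.0, K=100.0, r=0.05, dt=1.0, u=1.1, d=0.9, N=5):
    """CRR-style backward induction on a recombining binomial lattice."""
    p = (exp(r * dt) - d) / (u - d)        # risk-neutral up probability
    disc = exp(-r * dt)
    # Terminal payoffs; node j = number of up moves out of N steps.
    values = [max(S0 * u**j * d**(N - j) - K, 0.0) for j in range(N + 1)]
    # Roll back one level at a time.
    for _ in range(N):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]

print(price_european_call())
```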
# Statistical Arbitrage II : Simple FX Arbitrage Models

Posted on June 10, 2012

In the context of the foreign exchange markets, there are several simple no-arbitrage conditions, which, if violated outside of the boundary conditions imposed by transaction costs, should provide the arbitrageur with a theoretical profit when market conditions converge to theoretical normality. Detection of arbitrage conditions in the FX markets requires access to high-frequency tick data, as arbitrage opportunities are usually short-lived. Various market inefficiency conditions exist in the FX markets. Apart from the basic strategies outlined in the following sections, other transient opportunities may exist, if the trader or trading system can detect and act on them quickly enough.

Round-Trip Arbitrage

Possibly the most well-known no-arbitrage boundary condition in foreign exchange is the covered interest parity condition, which is expressed as:

$(1+r_d) = \frac{1}{S_t}(1+r_f)F_t$

This specifies that it should not be possible to earn a positive return by borrowing domestic assets at $$r_d$$ for lending abroad at $$r_f$$ whilst covering the exchange rate risk through a forward contract $$F_t$$ of equal maturity. Accounting for transaction costs, we have the following no-arbitrage relationships:

$(1+r_d^a) \geq \frac{1}{S^a}(1+r_f^b)F^b$

$(1+r_f^b) \geq S^b(1+r_d^b)\frac{1}{F^a}$

For example, the first condition states that the cost of borrowing domestic currency at the ask rate ($$1+r_d^a$$) should be at least equal to the return from converting said currency into foreign currency ($$\frac{1}{S^a}$$) at the prevailing spot rate $$S^a$$ (assuming that the spot quote $$S^a$$ represents the cost of a unit of domestic currency in terms of foreign currency), investing it at $$1+r_f^b$$, and finally converting it back into domestic currency via a forward trade at the ask rate ($$F^a$$). If this condition is violated, then we can perform round-trip arbitrage by converting, investing, and re-converting at the end of the investment term. Persistent violations of this condition are the basis for the roaring carry trade, in particular between currencies such as the Japanese Yen and higher yielding currencies such as the New Zealand dollar and the Euro.

Triangular Arbitrage

A reduced form of FX market efficiency is that of triangular arbitrage, which is the geometric relationship between three currency pairs. Triangular arbitrage is defined in two forms, forward arbitrage and reverse arbitrage. These relationships are defined below.

$\left(\frac{C_1}{C_2}\right)_{ask} \left(\frac{C_2}{C_3}\right)_{ask} = \left(\frac{C_1}{C_3}\right)_{bid}$

$\left(\frac{C_1}{C_2}\right)_{bid} \left(\frac{C_2}{C_3}\right)_{bid} = \left(\frac{C_1}{C_3}\right)_{ask}$

With two-way high-frequency prices, we can simultaneously calculate the presence of forward and reverse arbitrage. A contrived example follows: suppose we have the following two-way tradeable prices: $$\left(\frac{USD}{JPY}\right) = 90/110$$, $$\left(\frac{GBP}{USD}\right) = 1.5/1.8$$, and $$\left(\frac{JPY}{GBP}\right) = 110/120$$. By the principle of triangular arbitrage, the theoretical two-way spot rate for JPY/GBP should be $$135/198$$ (that is, $$1.5 \times 90$$ bid and $$1.8 \times 110$$ ask). Hence, we can see that JPY is overvalued relative to GBP: at market, a pound costs only 120 yen against a theoretical bid of 135. We can take advantage of this inequality as follows, assuming our theoretical equity is 1 USD:

• Pay 1 USD and receive 90 JPY;
• Sell 90 JPY and receive $$\left(\frac{90}{120}\right) = 0.75$$ GBP;
• Pay 0.75 GBP, and receive $$0.75 \times 1.5$$ USD = 1.125 USD.

We can see that reverse triangular arbitrage can detect a selling opportunity (i.e. the bid currency is overvalued), whilst forward triangular arbitrage can detect a buying opportunity (the ask currency is undervalued). The list of candidate currencies could be extended, and the arbitrage condition could be elegantly represented by a data structure called a directed graph. This would involve creating an adjacency matrix $$R$$, in which an element $$R_{i,j}$$ contains a measure representing the cost of transferring between currency $$i$$ and currency $$j$$.
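Following the directed-graph suggestion, here is a toy detector (mine, not the post's; the rates are hypothetical one-way mid rates matching the contrived example above). With only three currencies, brute force over the triangles is enough; with many currencies one would put -log(rate) on each edge and run a negative-cycle detector such as Bellman-Ford, since a profitable round trip is exactly a negative cycle.

```
from itertools import permutations

# Hypothetical one-way rates: rates[(a, b)] = units of b per unit of a.
rates = {('USD', 'JPY'): 90.0, ('JPY', 'GBP'): 1 / 120.0, ('GBP', 'USD'): 1.5,
         ('JPY', 'USD'): 1 / 110.0, ('GBP', 'JPY'): 110.0, ('USD', 'GBP'): 1 / 1.8}

def triangles(rates, margin=1.0):
    """Yield 3-cycles whose round-trip product exceeds `margin`."""
    ccys = {c for pair in rates for c in pair}
    for a, b, c in permutations(ccys, 3):
        try:
            gross = rates[(a, b)] * rates[(b, c)] * rates[(c, a)]
        except KeyError:
            continue
        if gross > margin:   # in practice margin > 1, to cover costs
            yield (a, b, c, gross)

# Each profitable cycle appears once per rotation of its three legs.
for cycle in triangles(rates):
    print(cycle)   # e.g. ('USD', 'JPY', 'GBP', 1.125): a 12.5% round trip
```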
Estimating Position Risk

When executing an arbitrage trade, there are some elements of risk. An arbitrage trade will normally involve multiple legs which must be executed simultaneously and at the specified price in order for the trade to be successful. As most arbitrage trades capitalize on small mispricings in asset values, and rely on large trading volumes to achieve significant profit, even a minor movement in the execution price can be disastrous. Hence, a trading algorithm should allow for both trading costs and slippage, normally by adding a margin to the profitability ratio. The main risk in holding an FX position is related to price slippage, and hence, the variance of the currency that we are holding.

# Quasi-Random Number Generation in R

Posted on June 6, 2012

Random number generation is a core topic in numerical computer science. There are many efficient algorithms for generating random (strictly speaking, pseudo-random) variates from different probability distributions. The figure below shows a sampling of 1000 two-dimensional random variates from the standard Gaussian and Cauchy distributions, respectively. The size of the extreme deviations of the Cauchy distribution is apparent from the graph.

However, sometimes we need to produce numbers that are more evenly distributed (quasi-random numbers). For example, in a Monte Carlo integration exercise, we can get faster convergence with a lower error bound using so-called low-discrepancy sequences, here generated using the GSL library. In the figure below, we show two-dimensional normal and Sobol (a low-discrepancy generator) variates.

Normal, Cauchy, and Sobol 2-d variates

To generate the graph above, I used the GSL library for R, as shown below:

```
library(gsl)
q <- qrng_alloc(type="sobol", 2)
rs <- qrng_get(q, 1000)
par(mfrow=c(3,1))
plot(rnorm(1000), rnorm(1000), pch=20, main="~N(0,1)", ylab="", xlab="")
plot(rs, pch=20, main="Sobol", ylab="", xlab="")
plot(rcauchy(1000), rcauchy(1000), pch=20, main="~C(0,1)", ylab="", xlab="")
```

The property of low-discrepancy generators is even more apparent if we view the random variates in a higher dimension; for example, the figure below shows the variates as a 3-dimensional cube. Note how the clustering around the centre of the cube is much more pronounced for the Gaussian cube.

3D Random Variates

To plot the figure above, I used the GSL and Lattice libraries:

```
library(gsl)
library(lattice)

q <- qrng_alloc(type="sobol", 3)
npoints <- 200
rs <- qrng_get(q, npoints)

ltheme <- canonical.theme(color = FALSE)
ltheme$strip.background$col <- "transparent"
lattice.options(default.theme = ltheme)
trellis.par.set(layout.heights =
  list(top.padding = -20, main.key.padding = 1, key.axis.padding = 0,
       axis.xlab.padding = 0, xlab.key.padding = 0, key.sub.padding = 0,
       bottom.padding = -20))

# Plot the normal variates in a 3-dim cube
p1 <- cloud(rnorm(npoints) ~ rnorm(npoints) + rnorm(npoints),
            xlab="x", ylab="y", zlab="z", pch=20, main="~N(0,1)")
p2 <- cloud(rs[,1] ~ rs[,2] + rs[,3],
            xlab="x", ylab="y", zlab="z", pch=20, main="Sobol")
print(p1, split=c(1,1,2,1), more=TRUE)
print(p2, split=c(2,1,2,1))
```
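As a quick illustration of the convergence claim (my code, not the post's R examples): estimate a smooth integral with pseudo-random points and with a hand-rolled Halton low-discrepancy sequence, and compare the errors.

```
import random

def halton(i, base):
    """i-th element of the van der Corput sequence in the given base."""
    f, x = 1.0, 0.0
    while i > 0:
        f /= base
        x += f * (i % base)
        i //= base
    return x

# Integrate f(x, y) = x*y over the unit square; the exact value is 1/4.
N = 4096
qmc = sum(halton(i, 2) * halton(i, 3) for i in range(1, N + 1)) / N
mc  = sum(random.random() * random.random() for _ in range(N)) / N
print(abs(qmc - 0.25), abs(mc - 0.25))  # the QMC error is typically much smaller
```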
# High-Frequency Statistical Arbitrage

Posted on June 6, 2012

Computational statistical arbitrage systems are now de rigueur, especially for high-frequency, liquid markets (such as FX). Statistical arbitrage can be defined as an extension of riskless arbitrage, and is quantified more precisely as an attempt to exploit small and consistent regularities in asset price dynamics through use of a suitable framework for statistical modelling.

Statistical arbitrage has been defined formally (e.g. by Jarrow) as a zero initial cost, self-financing strategy with cumulative discounted value $$v(t)$$ such that:

• $$v(0) = 0$$,
• $$\lim_{t\to\infty} E^P[v(t)] > 0$$,
• $$\lim_{t\to\infty} P(v(t) < 0) = 0$$,
• $$\lim_{t\to\infty} \frac{Var^P[v(t)]}{t}=0 \mbox{ if } P(v(t)<0) > 0 \mbox{ , } \forall{t} < \infty$$

These conditions can be described as follows: (1) the position has a zero initial cost (it is a self-financing trading strategy), (2) the expected discounted profit is positive in the limit, (3) the probability of a loss converges to zero, and (4) a time-averaged variance measure converges to zero if the probability of a loss does not become zero in finite time. The fourth condition separates a standard arbitrage from a statistical arbitrage opportunity.

We can represent a statistical arbitrage condition as

$$\left| \phi(X_t - SA(X_t))\right| < \mbox{TransactionCost}$$

where $$\phi()$$ is the payoff (profit) function, $$X$$ is an arbitrary asset (or weighted basket of assets) and $$SA(X)$$ is a synthetic asset constructed to replicate the payoff of $$X$$. Some popular statistical arbitrage techniques are described below.

Index Arbitrage

Index arbitrage is a strategy undertaken when the traded value of an index (for example, the index futures price) moves sufficiently far away from the weighted components of the index (see Hull for details). For example, for an equity index, the no-arbitrage condition could be expressed as:

$\left| F_t - \sum_{i} w_i S_t^i e^{(r-q_i)(T-t)}\right| < \mbox{Cost}$

where $$q_i$$ is the dividend rate for stock $$i$$, and $$F_t$$ is the index futures price at time $$t$$. The deviation between the futures price and the weighted index basket is called the basis. Index arbitrage was one of the earliest applications of program trading. An alternative form of index arbitrage was a system in which sufficient deviations between the forecasted variance of the relationship (estimated by regression) between index pairs and the implied volatilities (estimated from index option prices) on the indices were classed as an arbitrage opportunity. There are many variations on this theme in operation based on the VIX market today.

Pairs Trading

Statistical pairs trading is based on the notion of relative pricing – securities with similar characteristics should be priced roughly equally. Typically, a long-short position in two assets is created such that the portfolio is uncorrelated to market returns (i.e. it has a negligible beta). The basis in this case is the spread between the two assets. Depending on whether the trader expects the spread to contract or expand, the trade action is called shorting the spread or buying the spread. Such trades are also called convergence trades. A popular and powerful statistical technique used in pairs trading is cointegration, which is the identification of a linear combination of multiple non-stationary data series to form a stationary (and hence predictable) series.
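A bare-bones version of the spread signal, my sketch rather than the post's: regress one series on the other to get a hedge ratio, then trade when the z-score of the residual spread is stretched. A real system would first test the spread for stationarity (for example, an ADF test on the residuals).

```
import numpy as np

def spread_zscore(x, y):
    """Hedge ratio by least squares, then z-score of the spread y - b*x."""
    b = np.polyfit(x, y, 1)[0]                      # slope = hedge ratio
    spread = y - b * x
    return (spread - spread.mean()) / spread.std()

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=1000))                # a random walk
y = 0.8 * x + rng.normal(scale=2.0, size=1000)      # cointegrated with x

z = spread_zscore(x, y)
signal = np.where(z > 2, -1, np.where(z < -2, +1, 0))  # short/long the spread
print((signal != 0).mean())                         # fraction of bars with a position
```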
Trading Algorithms

In recent years, computer algorithms have become the decision-making machines behind many trading strategies. The ability to deal with large numbers of inputs, utilise long variable histories, and quickly evaluate quantitative conditions to produce a trading signal have made algorithmic trading systems the natural evolutionary step in high-frequency financial applications. Originally the main focus of algorithmic trading systems was on neutral-impact market strategies (e.g. Volume Weighted Average Price and Time Weighted Average Price trading); however, their scope has widened considerably, and much of the work previously performed by manual systematic traders can now be done by "black box" algorithms.

Trading algorithms are no different from human traders in that they need an unambiguous measure of performance – i.e. risk versus return. The ubiquitous Sharpe ratio ($$\frac{\mu_r - \mu_f}{\sigma}$$) is a popular measure, although other measures are also used. A measure of trading performance that is commonly used is that of total return, which is defined as

$R_T \equiv \sum_{j=1}^{n}r_j$

over a number of transactions $$n$$, with a return per transaction $$r_j$$. The annualized total return is defined as $$R_A = R_T \frac{d_A}{d_T}$$, where $$d_A$$ is the number of trading days in a year, and $$d_T$$ is the number of days in the trading period specified by $$R_T$$. The maximum drawdown over a certain time period is defined as $$D_T \equiv \max(R_{t_a}-R_{t_b}|t_0 \leq t_a \leq t_b \leq t_E)$$, where $$T = t_E - t_0$$, and $$R_{t_a}$$ and $$R_{t_b}$$ are the total returns of the periods from $$t_0$$ to $$t_a$$ and $$t_b$$ respectively. A resulting indicator is the Stirling Ratio, which is defined as

$SR = \frac{R_T}{D_T}$

High-frequency tick data possesses certain characteristics which are not as apparent in aggregated data. Some of these characteristics include:

• Non-normal characteristic probability distributions. High-frequency data may have large kurtosis (heavy tails), and be asymmetrically skewed;
• Diurnal seasonality – an intraday seasonal pattern influenced by the restrictions on trading times in markets. For instance, trading activity may be busiest at the start and end of the trading day. This may not apply so much to foreign exchange, as the FX market is a decentralized 24-hour operation; however, we may see trend patterns in tick interarrival times around business end-of-day times in particular locations;
• Real-time high frequency data may contain errors, missing or duplicated tick values, or other anomalies. Whilst historical data feeds will normally contain corrections to data anomalies, real-time data collection processes must be aware of the fact that adjustments may need to be made to the incoming data feeds.
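Returning to the performance measures defined above, they translate directly into code; a minimal sketch under the post's definitions (my implementation; the drawdown is the largest peak-to-trough drop of the cumulative return):

```
import numpy as np

def total_return(r):           # R_T = sum of per-trade returns
    return np.sum(r)

def annualized(r, days_in_period, trading_days=252):
    return total_return(r) * trading_days / days_in_period

def max_drawdown(r):           # max over t_a <= t_b of R_{t_a} - R_{t_b}
    cum = np.cumsum(r)
    peaks = np.maximum.accumulate(cum)
    return np.max(peaks - cum)

def stirling_ratio(r):
    dd = max_drawdown(r)
    return total_return(r) / dd if dd > 0 else np.inf

r = np.array([0.01, -0.02, 0.015, 0.03, -0.01])
print(total_return(r), max_drawdown(r), stirling_ratio(r))
```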
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 63, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.904374361038208, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/128282/group-homomorphism-always-exist-between-two-groups
# Group homomorphism always exist between two groups.

Does there always exist a homomorphism between two groups $G_1$ and $G_2$? Why?

## 1 Answer

If you require the homomorphism to be surjective, the answer is no; for example, there is no homomorphism of $\Bbb Z/3\Bbb Z$, the cyclic group of order $3$, onto $\Bbb Z/2\Bbb Z$, the cyclic group of order $2$.

Added: In particular, if $h:G_1\to G_2$ is a surjective homomorphism, then $$|G_2|=\frac{|G_1|}{|\ker h|}\;,$$ so $|G_2|$ must divide $|G_1|$.

Another general class of examples for which the answer is no is provided by pairs of non-isomorphic groups of the same cardinality, like $\Bbb Z/4\Bbb Z$, the cyclic group of order $4$, and $(\Bbb Z/2\Bbb Z)^2$, the Klein four-group: a surjective homomorphism between them would be an isomorphism.

If you do not require the homomorphism to be surjective, the answer is yes: just map every element of $G_1$ to the identity element of $G_2$.

Thanks a lot. This is very useful. – faisal Apr 5 '12 at 9:12

Or, more generally, if you require the homomorphism to be nontrivial, then the answer is "not always". – Arturo Magidin Apr 5 '12 at 14:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9072429537773132, "perplexity_flag": "head"}
http://stochastix.wordpress.com/2012/06/29/runge-kutta-in-haskell/
# Rod Carvalho

## Runge–Kutta in Haskell

Last time I implemented a Runge-Kutta method was in December 2001. Back then, I implemented the classical 4th order Runge-Kutta (RK4) method in C. However, in the past decade, whenever I needed a numerical solution of an ordinary differential equation (ODE), I used MATLAB. Now, I would like to use Haskell instead.

Consider the following initial value problem (IVP)

$\dot{x} (t) = f ( x (t) )$,     $x (t_0) = x_0$

where the vector field $f : \mathbb{R}^n \to \mathbb{R}^n$ and the initial condition $x_0$ are given. Solving an instance of the IVP consists of finding a function $x : [t_0, \infty) \to \mathbb{R}^n$ that satisfies both the differential equation and the initial condition.

Let $h > 0$ be the time-step, and let $t_k = t_0 + k h$ be the $k$-th time instant, where $k \in \mathbb{N}_0 := \{0, 1, 2, \dots\}$. Suppose that we solve a given instance of the IVP and obtain a closed-form solution $x : [t_0, \infty) \to \mathbb{R}^n$; we then discretize the closed-form solution to obtain an infinite sequence $x_0, x_1, x_2, \dots$ where $x_k = x (t_k)$. However, not all instances of the IVP have a closed-form solution (or even an analytic solution), and even in the cases where a closed-form solution does exist, it can be quite hard to find such a solution. Therefore, instead of discretizing the closed-form solution, one is tempted to discretize the ODE itself to obtain an approximation of the sequence $x_0, x_1, x_2, \dots$, which we will denote by $\tilde{x}_0, \tilde{x}_1, \tilde{x}_2, \dots$. We will call the approximate sequence a numerical solution.

Consider now the following discretized initial value problem (DIVP)

$\tilde{x}_{k+1} = g ( \tilde{x}_k )$,     $\tilde{x}_0 = x_0$

where $g : \mathbb{R}^n \to \mathbb{R}^n$ depends on what particular numerical method one wants to use. We will use the (classical) 4th order Runge-Kutta (RK4) method [1], which is given by

$g (x) = \displaystyle x + \frac{1}{6} \left( k_1 + 2 k_2 + 2 k_3 + k_4 \right)$

where the $k_i$ variables are given by

$\begin{array}{rl} k_1 &= h f (x)\\\\ k_2 &= h f (x + \frac{1}{2} k_1)\\\\ k_3 &= h f (x + \frac{1}{2} k_2)\\\\ k_4 &= h f (x + k_3)\end{array}$

In a nutshell, one hopes that by making the time-step $h$ "small enough", the numerical solution $\tilde{x}_0, \tilde{x}_1, \tilde{x}_2, \dots$ will be "close enough" to $x_0, x_1, x_2, \dots$, the discretization of the actual solution $x$. This hoping involves a fair amount of faith. Especially so if floating-point arithmetic is used, as underflow, overflow, and round-off error are not to be dismissed.

__________

The following Haskell script implements the RK4 method:

```
-- define 4th order Runge-Kutta map (RK4)
rk4 :: Floating a => (a -> a) -> a -> a -> a
rk4 f h x = x + (1/6) * (k1 + 2*k2 + 2*k3 + k4)
            where k1 = h * f (x)
                  k2 = h * f (x + 0.5*k1)
                  k3 = h * f (x + 0.5*k2)
                  k4 = h * f (x + k3)
```

Let us now test this rather succinct implementation.

__________

Example

Consider the following IVP where $t_0 = 0$, $x_0 = 1$, and $n = 1$

$\dot{x} (t) = - x (t)$,     $x (0) = 1$.

The closed-form solution of this IVP is $x (t) = e^{-t}$.
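Before running the numbers, it is worth estimating what accuracy to expect; this back-of-the-envelope is mine, not the post's. RK4 has local truncation error $O(h^5)$ and global error $O(h^4)$, so with the step $h = 10^{-5}$ used below the truncation error is roughly

$\| \tilde{x}_k - x(t_k) \| = O(h^4) \approx 10^{-20}$

which is far below the double-precision machine epsilon of about $2.2 \times 10^{-16}$. Any differences we observe against the closed-form solution should therefore be dominated by floating-point round-off rather than by the method itself.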
We load the script above into GHCi and compute the numerical solution of the IVP using the higher-order function iterate:

```
*Main> -- discretize closed-form solution
*Main> let xs = [ exp (-t) | t <- [0,0.1..]]
*Main> -- define function f
*Main> let f x = -x
*Main> -- define time-step h
*Main> let h = 1 / 100000
*Main> -- define initial condition
*Main> let x0 = 1.0
*Main> -- compute numerical solution
*Main> let xs_tilde = iterate (rk4 f h) x0
*Main> -- decimate numerical solution
*Main> let xs_dec = [ xs_tilde !! k | k <- [0,1..], rem k 10000 == 0]
```

Note that the time-step is $h = 1 / 100000$. Hence, the numerical solution will have $10^5$ samples in between $t = 0$ and $t = 1$, which probably is too fine a discretization. Thus, we decimate the numerical solution by a factor of $10^4$, so that we are left with only eleven samples in between $t = 0$ and $t = 1$, with a uniform spacing of $\Delta t = 0.1$ between consecutive samples. We can now compare the discretization of the closed-form solution with the heavily decimated numerical solution, as follows:

```
*Main> -- print discretized closed-form solution
*Main> take 11 xs
[1.0,0.9048374180359595,0.8187307530779818,
0.7408182206817179,0.6703200460356392,
0.6065306597126333,0.5488116360940264,
0.49658530379140947,0.44932896411722156,
0.4065696597405991,0.36787944117144233]
*Main> -- print decimated numerical solution
*Main> take 11 xs_dec
[1.0,0.9048374180359563,0.8187307530779842,
0.7408182206817214,0.6703200460356442,
0.6065306597126376,0.5488116360940266,
0.49658530379141175,0.44932896411722334,
0.40656965974059905,0.367879441171439]
*Main> -- compute difference of two solutions
*Main> let es = zipWith (-) xs_dec xs
*Main> -- print difference
*Main> take 11 es
[0.0,-3.219646771412954e-15,2.3314683517128287e-15,
3.552713678800501e-15,4.9960036108132044e-15,
4.3298697960381105e-15,2.220446049250313e-16,
2.275957200481571e-15,1.7763568394002505e-15,
-5.551115123125783e-17,-3.3306690738754696e-15]
*Main> -- compute energy of the difference
*Main> sum $ take 11 $ map (^2) es
9.161263559461204e-29
```

Note how small the values of the difference sequence are, of the order of $10^{-15}$. The implementation of the RK4 method seems to be working. Who needs MATLAB now? ;-)

__________

References

[1] Richard W. Hamming, Numerical Methods for Scientists and Engineers, Dover Publications, 1973.

Tags: Differential Equations, Haskell, Initial Value Problems, Numerical Methods, Numerical ODEs, ODEs, Ordinary Differential Equations, Runge-Kutta Methods

This entry was posted on June 29, 2012 at 13:44 and is filed under Haskell, Numerical Methods.

### 4 Responses to "Runge–Kutta in Haskell"

1. LucasCampos Says: October 9, 2012 at 15:44 | Reply
That seems like an awesome implementation. As you said, it's quite succinct. I'll try to start doing simulations in Haskell.

2. gs Says: November 9, 2012 at 16:19 | Reply
In the first ghci session you display, you don't give an upper limit for the interval you are solving the ODE on. Is that an example of lazy evaluation, done at the "take 11 xs_dec" line?

• Rod Carvalho Says: November 9, 2012 at 19:48 | Reply
Indeed, it is. I merely declare what the list is. Only when I use take do I force the interpreter to actually compute stuff. Lazy evaluation truly is a beautiful thing.

3. Sahisnu Says: March 2, 2013 at 16:48 | Reply
This is enlightenment!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 39, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8964812159538269, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/6769/upward-force-on-a-object-at-rest/6770
# Upward force on an object at rest

Is there an upward force on an object at rest? If yes, where does it come from? -

3 Can you be more specific about the situation? I have a feeling you're referring to an object in the Earth's gravitational field resting on some kind of surface? – dbrane Mar 12 '11 at 22:53

You mean on an object lying on the ground? If so, the answer is yes. It comes from the ground. Do you want a more precise microscopic description of this? – Marek Mar 12 '11 at 22:54

No, there is no force on an object at rest. – Holowitz Mar 13 '11 at 2:26

You should add why: people get tired keeping a lifted weight in their hands, but tables don't! :))) – Val May 12 at 9:42

## 4 Answers

Yes, it's called the normal force. It comes from the rigidity of the stuff separating the object from the center of gravitational attraction, i.e. the rigidity of the rocks, dirt, floor, table, etc. If you'd like, you could think of this stuff as behaving like a spring with a huge spring constant. Any first-year physics textbook will cover this; there's a very incomplete list of books in another question. -

Its normal force equals its weight per Newton's third law. – Michael Luciuk Mar 12 '11 at 23:10

Michael Luciuk: You mean Newton's second law! Normal force equals weight (for an object that's subject only to those two forces and is not accelerating) because the total force on the object is zero in those circumstances. The third law says that the normal force on A due to B equals the normal force on B due to A; it doesn't say that normal force equals weight. – Ted Bunn Mar 12 '11 at 23:17

Sorry Ted. As usual my pen exceeded my brainpower. – Michael Luciuk Mar 13 '11 at 0:13

Stupid question: where does the energy come from to provide this upward force? – Jonathan. Apr 15 '11 at 20:59

1 @jonathan It does not require any energy transfer to provide a force. Energy will be transferred if there is a force on an object in the direction of its motion. – Mark Eichenlaub Apr 15 '11 at 21:05

It is all electrostatics. The electrons on the outer shells of the atoms of the object don't want to be anywhere near the electrons of the atoms on the resting surface, providing a repelling force which increases with proximity. When this force balances with gravity you have reached "equilibrium". In fact, everything is somewhat fluid: as the atoms move and vibrate, nothing is really static. But on a macroscopic scale it is unnoticeable. -

I don't think this is relevant; you're referring to electrostatics in a very Newtonian situation. – Garan May 12 at 9:02

I do not think that crystals are sorta fluid, even at the microscopic level. And I do not think that it is appropriate to ground the repulsion on the liquidness. – Val May 12 at 9:37

The upward force on an object at rest is called the Normal force and is always perpendicular to the surface. If you recall from Newton's Third law, "Every action has an equal and opposite reaction." So an example is a block sitting on a table. The block is exerting a force DOWN on the table from the gravitational force, its weight. By Newton's Third law, there is an equal and opposite reaction due to this downward force. The block is "pushing" down on the table, so the table must also push UP on the block. This pushing from the table is the normal force. If this force were not present, the block would accelerate right through the table due to Newton's Second Law.
- This is my second most hated thing about physics; personally it seems as though the upward force was just invented to fit the law, and that the reason the book doesn't fall through the table is simply that there is a table in the way :) – Jonathan. Apr 15 '11 at 20:58

One of the first things to check with such old (and quiet) threads is to look at when the starter of it, in this case user2123, was last seen. It happens that this was Mar 12. – Georg Apr 15 '11 at 20:59

2 @Jonathan: for one thing, you can measure the upward force by pressing a scale against the table. So it's real, it wasn't just invented. Besides, how else would you mathematically express the fact "there is a table in the way"? (I guess you could do it with a Lagrange multiplier and a constraint function but that's another story) – David Zaslavsky♦ Apr 15 '11 at 21:17

The Lennard-Jones potential goes a long way toward clearing up the confusion. The force is the spatial derivative of the energy, $F = -dE/dr$. So where the plot goes up you get an attractive force, and where it goes down you have repulsion. You see that there is a huge repulsion between atoms when the distance between them, $r$, is very small. So they cannot be pulled very close together. As gravity pulls the book toward the molecules of the table, they repel very strongly. Atoms tend to stay at distance $r_m$, producing molecular structures.

Please note that the repulsive force appears at distances smaller than the typical distance between molecules. So you do not feel it normally, when you keep your book above the table; the reaction force is zero at that distance. You can make it visible at macroscopic distances, though, by chaining zillions of atoms in one chain. Then, when you compress the chain by 1 mm, the distance between atoms varies by a fraction of $r_m$. You get into the parabolic potential well around $r_m$. You call such a setup a spring. You expand it, and you see that atoms attract. You contract it, and you see that they repel. -
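To put some numbers on the Lennard-Jones picture from the last answer, here is a small sketch; the potential is the standard $V(r) = 4\epsilon\left[(\sigma/r)^{12} - (\sigma/r)^{6}\right]$, and the values of $\epsilon$ and $\sigma$ are illustrative units, not from the thread:

```haskell
-- Lennard-Jones potential and the force F = -dV/dr (computed numerically).
eps, sigma :: Double
eps = 1; sigma = 1

v :: Double -> Double
v r = 4 * eps * ((sigma / r)**12 - (sigma / r)**6)

force :: Double -> Double
force r = -(v (r + dr) - v (r - dr)) / (2 * dr)  -- central difference
  where dr = 1e-6

-- force 0.9 > 0: strong repulsion inside the minimum;
-- force 1.5 < 0: weak attraction outside it;
-- the force vanishes at r_m = 2**(1/6) * sigma, about 1.12 * sigma.
```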
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9497279524803162, "perplexity_flag": "middle"}
http://mathhelpforum.com/math-topics/74744-sign-charts-print.html
# Sign Charts

• February 20th 2009, 03:31 PM AlgebraicallyChallenged

Dear Forum,

I have tried to figure this one out several times, but I am still having no luck. If anyone can help, please let me know; it would be greatly appreciated.

Solve $x^2 + 11x + 18 > 0$ using the Critical Value (Sign Chart) method.

Thanks -AC-

• February 20th 2009, 03:42 PM skeeter

find the x-values where $x^2 + 11x + 18 = 0$

plot these two x-values on a number line ... the two plotted solutions break up the number line into three intervals.

pick a single value in each interval and "test" it to see if that value makes the original inequality true or false ... if true, then all values in that interval make the inequality true and are part of the solution set. if the chosen value from the interval makes the original inequality false, then exclude that interval of values from the solution set.
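Carrying out skeeter's recipe for this particular quadratic: the factorization $x^2 + 11x + 18 = (x+2)(x+9)$, the roots $x = -9$ and $x = -2$, and the test values below are my own working, not part of the thread.

```haskell
-- Sign-chart test for x^2 + 11x + 18 > 0.
-- The roots of x^2 + 11x + 18 = 0 are x = -9 and x = -2.
p :: Double -> Double
p x = x^2 + 11*x + 18

-- one test value from each of the three intervals the roots create
testPoints :: [Double]
testPoints = [-10, -5, 0]   -- from (-inf,-9), (-9,-2), (-2,inf)

main :: IO ()
main = mapM_ (\x -> print (x, p x > 0)) testPoints
-- (-10.0,True), (-5.0,False), (0.0,True):
-- the solution set is x < -9 or x > -2.
```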
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8505674600601196, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/33669/trajectory-of-a-projectile-meets-a-moving-object-2d/34112
# Trajectory of a projectile meets a moving object (2D)

First of all, I asked this question on Stackoverflow, but I realize this is a better place to ask the question. So I moved it here.

I've looked for quite some time now to find a nice math solution for my cannon firing a projectile at a moving target, taking gravity into account. I've found a solution for determining the angle at which the cannon should be fired, based on the cannon's position, the target's position and the start velocity. The formula is described here: http://en.wikipedia.org/wiki/Trajectory_of_a_projectile#Angle_.CE.B8_required_to_hit_coordinate_.28x.2Cy.29. Unfortunately I can't post images yet here, so you'll have to do with the link. This works perfectly.

However, my target is moving, so if I shoot at the target and the projectile takes a few seconds to get to its destination, the target is long gone. The target's x position can be determined from the time. Let's say that: x = 1000 - (10 * t) where t is the time in seconds. The y can be described as: y = t. The problem is that t depends on the angle at which the cannon is fired. Therefore my question is: How can I modify the formula as described in the wiki, so that it takes the moving target into account?

I want to fire it now and the target is in range given the speed. The cannon is at {0, 0} and isn't moving. The start speed is 100 m/s. The target is at {1000, 0} and is moving with 10 m/s towards the cannon (v = -10 m/s). What angle should I use to hit the moving target, when I want to fire at t=0 (immediately)?

If I shoot without taking the target's speed into account, I would aim at {1000, 0} and the angle could be calculated using the mentioned formula. But it will miserably miss the target because it's moving. I could aim at i.e. {500, 0}, calculate what time it takes for the projectile to arrive at those coords (let's say 5 seconds) and wait until the target is 5 seconds away from {500, 0}, being {550, 0}. But this means that I have to wait 450 m or 45 seconds before I can fire my cannon. And I don't want to wait, because the target is killing me in the mean time.

I really hope this gives you enough info to go with. I'd prefer a neat math solution, but anything that would get me really close to firing "right away" and "right on target" is also much appreciated. I would also be happy if you can tell me if what I want can't be captured in a formula; then I can figure out an algorithm to find it as fast as possible. Thank you in advance for your braintime! -

If the target is moving with speeds $u_x$ and $u_y$ along the x and y axes respectively, then you can modify the equations of motion to take that into account as follows: $x=vt\cos(\theta)+u_xt$ and $y=vt\sin(\theta)-\frac12gt^2 - u_yt$. Observe how this change doesn't affect the equations much since $u_xt$ and $u_yt$ factor in with the existing $vt$, so that the rest of the derivation should follow routinely as before. – quantumelixir Apr 18 '11 at 17:38

Dear QuantumElixir, asking you this feels a bit like not doing my homework, but I didn't manage to solve the equation with the "+ ux/uy t" part. To be quite honest, I just copied the first formula into code straight from the wiki. Could you help me out a bit? Thank you in advance! – sdk Apr 19 '11 at 17:42

A closed form solution for $\theta$ in this case could be hard.
Instead, you could simply plug in the values that you know into those equations and try to eliminate $\theta$ using $\sin^2 \theta + \cos^2 \theta = 1$ to find $t$ and then find $\theta$ using $t$. – quantumelixir Apr 20 '11 at 5:01 ## 1 Answer Note: I'm using the same convention used in the wikipedia article that you mention. Suppose the target is moving with a speed $u$, in a direction that makes an angle $\phi$ with the positive direction of the x-axis, then you can write down the following equations of motion: $$x = vt\cos\theta + ut\cos\phi$$ $$y = vt\sin\theta - \frac12gt^2 - ut\sin\phi$$ If you eliminate $t$ from those two equations you will get an equation in a very nice form: $$gx^2+2(v\cos\theta+u\cos\phi)(vy\cos\theta+ uy\cos\phi-vx\sin\theta+ux\sin\phi)=0$$ in which you are required to substitute the values of $g,x,y,v,u,\phi$ and solve the resulting equation for $\theta$. You can use a numerical method which can handle implicit equations like $f(\theta)=0$ as we have here. For instance, you could try Newton's Method. - Thx alot, this works like a charm and is also something I understand completely! Thx alot for your insight! – sdk Apr 20 '11 at 19:15 Glad to help! :) – quantumelixir Apr 21 '11 at 12:53
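For concreteness, here is a minimal numerical sketch of the last step, solving $f(\theta) = 0$ by bisection with the numbers from the question ($v = 100$, $u = 10$, $x = 1000$, $y = 0$, $g = 9.8$). One assumption on my part: with the sign conventions of the equation above, a target moving toward the cannon comes out as $\phi = 0$ here (with $\phi = \pi$ the equation has no root for these numbers, so this reading matches the question's setup):

```haskell
-- f(theta) from the answer above, with the question's numbers plugged in.
grav, v, u, phi, x, y :: Double
grav = 9.8; v = 100; u = 10; phi = 0; x = 1000; y = 0

f :: Double -> Double
f th = grav * x^2 + 2 * (v * cos th + u * cos phi)
                      * (v * y * cos th + u * y * cos phi
                         - v * x * sin th + u * x * sin phi)

-- plain bisection; f changes sign on (0.1, 0.7) for these numbers
bisect :: Double -> Double -> Double
bisect a b
  | b - a < 1e-9   = m
  | f a * f m <= 0 = bisect a m
  | otherwise      = bisect m b
  where m = (a + b) / 2

-- bisect 0.1 0.7 ~= 0.536 rad (about 30.7 degrees); a second, steeper
-- solution lies near 1.10 rad, as usual for ballistic problems.
```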
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9556350708007812, "perplexity_flag": "head"}
http://mathoverflow.net/questions/69278?sort=votes
Hartshorne's associated scheme for a variety

This question comes from Proposition 2.6 in Chapter 2 of Hartshorne's Algebraic Geometry. In my edition, that's on page 78.

For a variety $V$, Hartshorne defines the topological space $t(V)$ to consist of the nonempty closed irreducible subsets of $V$, where the closed sets of $t(V)$ are of the form $t(Y)$ for $Y$ closed in $V$. He then defines a map $\alpha: V \rightarrow t(V)$ where $P$ gets sent to $\{P\}$ in $t(V)$. The claim is that $(t(V), \alpha_*(\mathcal{O}_V))$ is a scheme.

I understand why this is true if $V$ is affine, but I have been unable to show $(t(V), \alpha_*(\mathcal{O}_V))$ is a scheme for an arbitrary variety $V$. I had hoped to show that if $U$ is an affine open subset of $V$, then $t(U)$ is isomorphic to an open subset of $t(V)$. I used the map from $t(U)$ into $t(V)$ where we send an irreducible subset $W$ in $U$ to the smallest irreducible subset of $V$ containing $W$. However, although the image of $t(U)$ is contained in $[t(U^c)]^c$, I don't believe these are equal. -

2 Answers

To show that $(t(V),\alpha_*\mathcal{O}(V))$ is a scheme, you must show that $t(V)$ has an open cover on which this ringed space is isomorphic to an affine scheme. Take an affine open cover $\{U_i\}$ of $V$. Since you believe the affine case, it suffices to show that $\{t(U_i)\}$ is an open cover of $t(V)$, and $(t(V),\alpha_*\mathcal{O}(V))|_{t(U_i)} \cong (t(U_i),\alpha_*\mathcal{O}(U_i))$ for each $i$.

Given your last paragraph, it sounds like the first of these points is your difficulty. Let $Y$ be a nonempty irreducible closed subset $Y\subseteq U_i$. For each $j$, $Y\cap U_j$ is (when nonempty) a nonempty irreducible closed subset of $U_i\cap U_j$ (since an open subset of an irreducible is irreducible). The intersection $U_i\cap U_j$ is an affine open subset of $U_j$, and it's not hard to see (look at the pre-image of the corresponding prime ideals!) that $Y\cap U_j$ extends in a natural way to an irreducible closed subset of $U_j$. These extensions glue for varying $j$ to give an irreducible closed subset of $V$, since a locally irreducible subset of a (connected) space is irreducible. This furnishes the map $t(U_i)\to t(V)$ (which, in particular, I think addresses the issue you raise in the last paragraph). It remains to see that this is an open subset and gives an open cover of $t(V)$, and to prove the above isomorphism. So now try from here... -

Thanks so much for your answer! I think I now understand your map, but I'm still trying to show the image of $t(U_i)$ is open. By saying this map addresses the issue I was having, do you mean that the image will equal $[t(U_{i}^c)]^c$? Or do I need a different approach? – abourdon Jul 2 2011 at 12:47

1 The image is as you describe. To see this, I suggest that you first try to prove that the inclusion of $t(U_i)$ into $t(V)$ I described (because I was working in the context of a more general argument to show that $t(V)$ is a scheme) is more simply described as "taking the closure". From this description it's not difficult to show that the image of $t(U)$ is $t(U^c)^c$. Basically, if $X$ is a nonempty irreducible closed subset of $U$, its closure cannot be contained in $U^c$, so it is an element of $t(U^c)^c$ by definition. Conversely, if $X$ is closed irreducible in $V$ not containing ...
– Ramsey Jul 2 2011 at 16:02

rather, not contained in, $U^c$, then $X\cap U$ is nonempty irreducible closed in $U$. Now show that this $X$ is the closure of $X\cap U$ and you're done! – Ramsey Jul 2 2011 at 16:04

Great! Thanks again for your help! – abourdon Jul 3 2011 at 16:49

I can't comment yet, so this is a brief comment about some intuition, not the exact answer. At least on affine schemes, the points (i.e. prime ideals) are in one-to-one correspondence with irreducible closed subsets of the affine scheme (I have just read very little EGA, so please excuse my ignorance if this holds generally for any scheme - which is actually great). This really explains the "somewhat unintuitive" construction of the map. -

With regard to generalizing this: look at Hartshorne's exercises I.1.6 and II.2.7. – Dylan Moreland Jul 4 2011 at 1:45

@Dylan: Thanks a lot for your help. I think you probably mean II.2.9, nice generalization! – Poldavian Jul 6 2011 at 3:37
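Putting the characterization from the comment thread above into one display (notation as in the question; this merely restates what Ramsey proves):

$$t(U) \hookrightarrow t(V), \qquad W \mapsto \overline{W}, \qquad \text{with image } \{\, X \in t(V) : X \not\subseteq U^c \,\} = [t(U^c)]^c .$$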
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 61, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9532129764556885, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/148998/how-to-plot-z-in-mathbbc-z-izi/149002
# How to plot $\{z \in \mathbb{C} : |z-i|>|z+i|\}$

How would I draw the set $\{z \in \mathbb{C} : |z-i|>|z+i|\}$ and $\{z \in \mathbb{C} : |z-i|\not=|z+i|\}$? I'm not sure how to solve the second one, and for the first one, I tried squaring both sides and trying to work something out, but I got nowhere.

$$|z-i|^2>|z+i|^2$$
$$(z-i)(\bar z+i)>(z+i)(\bar z-i)$$
$$z\bar z+1+i(z -\bar z)>z \bar z+1 +i(\bar z -z)$$
$$i(z-\bar z)>i(\bar z-z)$$

What would the 'general' method/approach be for drawing the sets?

Edit: How would I draw $\{z \in \mathbb{C} : |z-i|\not=|z+i|\}$? After a similar calculation using Zev Chonoles' post, I got that $-b\not=b$, hence $z=a+ib$ satisfies $|z-i|\not=|z+i|$ if and only if $-b\not=b$.

[Diagram for $\{z \in \mathbb{C} : |z-i|>|z+i|\}$: the region below the real axis, with a dotted line along the axis itself.] -

## 3 Answers

Write out a complex number $z$ with real and imaginary components, i.e. as $z=a+bi$. Then $$|z-i|=|a+(b-1)i|=\sqrt{a^2+(b-1)^2}$$ $$|z+i|=|a+(b+1)i|=\sqrt{a^2+(b+1)^2}$$ so $$\begin{align*}|z-i|>|z+i|&\iff\sqrt{a^2+(b-1)^2}>\sqrt{a^2+(b+1)^2}\\&\iff a^2+(b-1)^2>a^2+(b+1)^2\\ &\iff (b-1)^2>(b+1)^2\\ &\iff -2b>2b\\ &\iff b<0\end{align*}$$ Thus, the complex number $z=a+bi$ satisfies $|z-i|>|z+i|$ if and only if $b<0$, i.e. if and only if it lies below the real axis in the complex plane.

Thinking geometrically (i.e. with complex numbers as points in the plane), it might also help to note that $$|z-i|=\sqrt{a^2+(b-1)^2}=\text{distance from }(a,b)\text{ to }(0,1)$$ $$|z+i|=\sqrt{a^2+(b+1)^2}=\text{distance from }(a,b)\text{ to }(0,-1)$$ -

Thanks, @ZevChonoles. Just to make sure, is my diagram correct for the first graph? Also, how would I draw the 2nd graph? – Derrick May 23 '12 at 23:11

1 @Derrick: Yup, it's correct - you have the dotted line along the real axis indicating that it is not included in the set, and you've shaded in the right region. Now, note that $-b\neq b$ if and only if $b\neq 0$. Does that help you see the graph for the second question? – Zev Chonoles♦ May 23 '12 at 23:12

Thanks @ZevChonoles, so I take it, it would be shading the entire region with a dotted line along the real axis (see edited post with new picture)? – Derrick May 23 '12 at 23:19

1 @Derrick: Exactly! – Zev Chonoles♦ May 23 '12 at 23:32

Okay, thanks again for all your help! – Derrick May 23 '12 at 23:33

Try writing $z=x+iy$ with $x,y\in\mathbb{R}$. Then for the first inequality you get (just try it for yourself): $$|z-i|>|z+i| \Leftrightarrow \mbox{Im}(z) < 0$$ so the solution is the whole lower half-plane (without the real axis). For the second one, you get $\mathbb{C}\setminus\mathbb{R}$, because of a similar condition for the imaginary part ($\mbox{Im}(z)\not=0$). Also, your last line $i(z-\bar{z})>i(\bar{z}-z)$ translates for $z=x+iy$ into $0>y$, which gives the same result. -

Thanks, most appreciated :) – Derrick May 23 '12 at 23:34

One could simply look at the absolute value of a complex number as its distance from the origin, or in other words, $|a-b|$ is the distance of $a$ from $b$. Now, in your problem you want to find all the points such that their distance from $i$ is more than their distance from $-i$. Well, find all the points that have the same distance from $i$ and $-i$. That's a line (namely the perpendicular bisector of the segment $\overline{i,-i}$); its complement is the solution to the second part of the problem. Now, choose the half-plane which includes $-i$. That is the answer for the first part. -
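A quick numerical spot-check of the criterion derived above ($|z-i| > |z+i| \iff \operatorname{Im} z < 0$); the code is my own illustration, not part of the thread:

```haskell
import Data.Complex

-- |z - i| > |z + i|  should hold exactly when  Im z < 0.
holds :: Complex Double -> Bool
holds z = magnitude (z - (0 :+ 1)) > magnitude (z + (0 :+ 1))

-- holds (3 :+ (-2)) == True   (below the real axis)
-- holds (3 :+ 2)    == False  (above it)
-- holds (5 :+ 0)    == False  (on the axis: equidistant from i and -i)
```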
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 33, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9471372365951538, "perplexity_flag": "head"}
http://sbseminar.wordpress.com/2007/06/27/john-brundan-and-the-centers-of-blocks-of-category-o/
## John Brundan and the centers of blocks of category O

June 27, 2007 Posted by Ben Webster in Algebraic Topology, category O, Category Theory. trackback

So, those of you who like the categorical approach to representation theory probably know about the center of a category:

Definition. The center of an abelian category is the ring of endomorphisms of the identity functor. That is, an element of the center is a system of maps $\phi_C: C\to C$ for all objects $C$ in your category such that for ANY morphism $\xi:C\to D$, we have $\phi_D\xi=\xi\phi_C$.

For example, if A is a finite dimensional algebra, then the action of any element of the center of A gives an element of the center of the category $\mathrm{Rep}(A)$ (ANY element of A satisfies the commutation condition by definition, but it will only give a morphism in $\mathrm{Rep}(A)$ if it is central). In this case, the center of A will actually be the whole center of the category, by silly universal algebra considerations (you can recover a finite-dimensional algebra as the endomorphism algebra of the forgetful functor from $\mathrm{Rep}(A)$ to vector spaces). But in more general cases, it can be hard to be sure you've got the whole center of the category (just as it's hard to be sure you've got the whole center of an algebra you don't know very well).

Once you have your hands on the center of a category $\mathcal C$, you can use it for block decomposition by central characters. Let's make our lives easier, and suppose our category is linear over an algebraically closed field $k$. Then, for any simple object $C$, and any $\phi$ in the center of $\mathcal C$, the morphism $\phi_C$ must be a multiple of the identity, which we might call $\chi_C(\phi)$. This defines a character $\chi_C:Z(\mathcal{C})\to k$, which we call the central character of $C$.

Now, imagine $C$ and $D$ are simple objects with different central characters $\chi_C\neq \chi_D$. Then, if one looks at the space $\mathrm{Ext}^i(C,D)$ we can see that the center must act on it by $\chi_C$ by composition on the left, and by $\chi_D$ by composition on the right. Since these are different, the only way this is possible is for all these spaces to be trivial. Thus

Theorem. Let $\mathcal{C}_\chi$ be the Serre subcategory of $\mathcal{C}$ generated by simples with central character $\chi$, and $\mathcal{C}'_\chi$ be the subcategory generated by all other simples. Then $\mathcal{C}\cong \mathcal{C}_\chi\oplus \mathcal{C}'_\chi$. More generally, we have a decomposition of $\mathcal{C}$ as $\mathcal{C}\cong\bigoplus_{\chi}\mathcal{C}_\chi$.

(A Serre subcategory of an abelian category is one where, if $A\subset B$ and two of $A$, $B$ and $B/A$ are in the subcategory, the third one is as well.)

On the other hand, if one has a direct sum decomposition of your category $\mathcal{C}\cong \mathcal{C}'\oplus\mathcal{C}''$, then the projection to the unique summand of $C$ lying in $\mathcal{C}'$ is an element of the center which distinguishes these. Thus, the central characters separate blocks.

However, if you have an infinite dimensional algebra, and pick out some nice category of representations for it, then there's no guarantee that the center of the algebra surjects onto the center of the category. One interesting example of this is so-called category $\mathcal O$.
This is the Serre subcategory of the category of all representations of a semi-simple complex Lie algebra $\mathfrak{g}$ generated by Verma modules (alternatively by simple modules with a weight decomposition and highest weight vector). More generally, we'll want to consider the parabolic category $\mathcal O^{\mathfrak p}$ for some parabolic $\mathfrak p\subset \mathfrak g$, which is the subcategory of $\mathcal O$ consisting of modules which are a direct sum of finite dimensional representations for the action of $\mathfrak l$, the Levi subalgebra of $\mathfrak p$. Let's restrict, for ease, to the case where they have integral weights as well.

In this case, the blocks are in bijection with orbits of the shifted Weyl group action $w\bullet\lambda=w(\lambda+\rho)-\rho$ where $\rho$ is the sum of the fundamental weights, as usual. The action of the center of the universal enveloping algebra of $\mathfrak g$ gets exactly this far: it's canonically identified with the functions invariant under this action of the Weyl group by the Harish-Chandra homomorphism. One need only look at the Verma modules of elements with dominant (or almost dominant) weight to check that each of these subcategories is indecomposable. But does the universal enveloping algebra get the whole center of each block? How could you check?

To see this, you should realize your category as representations of a finite dimensional algebra. Luckily, this is possible for any category with finitely many simple objects and enough projectives. If you take a projective generator $P$ of your block (the direct sum of the projective cover of each simple module in the block), and look at its endomorphism algebra $E$, then you'll see that $\mathrm{Hom}(P,-)$ is an equivalence of categories to $\mathrm{Rep}(E)$. So, now we just need to find the center of $E$. But this is not a particularly easy task.

Soergel worked out that the universal enveloping algebra does surject in usual category $\mathcal O$ using the following trick: let $P_a$ be the unique indecomposable module of your block which is both projective and injective. Then, the functor $\mathrm{Hom}(P_a,-)$ to representations of $C=\mathrm{End}(P_a)$ is full and faithful on projectives (it kills lots of other modules, but not projectives). Thus, one can calculate the endomorphisms of a projective generator just as well after applying this functor. It turns out that $C$ is commutative, and the obvious map into $E$ is an isomorphism onto the center. One can explicitly check that the center of $U(\mathfrak{g})$ surjects onto $C$, and thus onto the center of the block.

This result is simply not true in the parabolic case, though. There are explicit examples in type $B$ where the map to the center is not surjective. However, in type $A$, it's actually true, and John Brundan gave a talk here in Denmark illuminating his proof of this fact. I don't really have the energy to say much about it, other than that it relies on a very explicit description of a projective generator of $\mathcal{O}^{\mathfrak p}$, which most interestingly shows that the center of a block of $\mathcal{O}^{\mathfrak p}$ is isomorphic to the cohomology of a Spaltenstein variety in $G/P$. There's no geometric proof of this fact, a problem begging to be rectified. But we'll save talking about that for another day.

## Comments»

1.
Urs Schreiber - June 27, 2007

I'd be grateful to learn if somebody ever ran into prominent examples of the following slight generalization of the concept of the center of a category:

The center $Z(\mathcal{C})$ of $\mathcal{C}$ is an abelian monoid since it is constructed as a one-object one-morphism 2-category, namely $Z(\mathcal{C}) := \mathrm{End}_{\mathrm{Cat}}(\mathrm{Id}_{\mathcal{C}})$.

But now suppose we have a finite group $G$ acting strictly on $\mathcal{C}$ – $R : \Sigma G \to \mathrm{Aut}_{\mathrm{Cat}}(\mathcal{C})$ – by functors $R_g : \mathcal{C} \to \mathcal{C}$ for all $g \in G$. Then we can consider the $G$-graded monoid $\mathrm{Hom}_{\mathrm{Cat}}(\mathrm{Id}_{\mathcal{C}},-) := \bigcup_{g \in G} \mathrm{Hom}_{\mathrm{Cat}}(\mathrm{Id}_{\mathcal{C}},R_g)$. This is no longer abelian. But almost so: we may pass an element $b$ of this monoid past an element $a$ in degree $g$ up to a twist $a \cdot b = \mathrm{Ad}_g(b) \cdot a$.

This game may be played not just with 1-categories, but also for instance with 2-categories. There it turns out to reproduce a concept introduced by Turaev and Kirillov, that of $G$-equivariant fusion categories: these generalize braided monoidal categories of reps of vertex operator algebras to something like $G$-orbifold versions. But here I am looking for places where this "$G$-twisted center" has been considered for 1-categories. Has anyone seen this?

2. Urs Schreiber - June 27, 2007

Sorry for the typesetting mess. Without a comment preview this is hard to avoid. The lesson I learned this time: don't use line breaks in LaTeX environments…

3. Ben Webster - July 1, 2007

Sounds vaguely reminiscent of some of this stuff Noah's been doing with braided monoidal categories fibered over a group, though he has a very specific example in mind, rather than a general construction.

4. Urs Schreiber - July 5, 2007

stuff Noah's been doing with braided monoidal categories fibered over a group

Is that the one referred to as Braiding for quantum groups at roots of unity (in preparation) on Noah's website? I'd be interested in seeing that.

I did think about looking into categories fibered over something in this context, but there are some reasons for me to look at it from another point of view. I now have a detailed description of what I have in mind here: On G-equivariant fusion categories. I happen to be interested in that mostly as a way to understand rational SCFT a little more conceptually. I am talking about that relation in Supercategories.

5. Ben Webster - July 5, 2007

That would be it. If you want to know more details, I'm sure Noah can supply. Hell, he might even have a readable draft by now.

6. Noah Snyder - July 5, 2007

That is what Ben was referring to. It's definitely a subject with a similar flavor to what you're discussing, but also a bit different. You can see the basic outline in Kashaev and Reshetikhin's papers (especially this one which I think still isn't on the arxiv). The paper Kolya and I are writing will hopefully be done sometime early in the fall. I wish it were going to be done sooner but it got delayed due to Kolya's being in Denmark, and now I have Mathcamp and so won't have a chance to write it. However, in terms of the general structure it doesn't add much to what's already in Kashaev and Reshetikhin's papers. It mostly concerns certain technical issues in generalizing from sl_2 to arbitrary type.

7. Urs Schreiber - July 5, 2007

There is R. Kashaev, N.
Reshetikhin, Braiding for the quantum $\mathfrak{gl}_2$ at roots of unity, math/0410182.

8. Not Even Wrong » Blog Archive » Too Much Good Stuff - September 15, 2007

[...] worth reading are posts from Ben Webster about centers of blocks of category O, various comment section discussions with David Ben-Zvi at both this blog and the n-category cafe, [...]
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 73, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9227617383003235, "perplexity_flag": "head"}
http://crypto.stackexchange.com/questions/2917/how-can-the-schmidtsamoa-cryptosystem-uniquely-decrypt-large-messages/2919
# How can the Schmidt–Samoa cryptosystem uniquely decrypt large messages?

Suppose I choose $p=7$ and $q=11$. This gives a public key of $p^2·q = 539$. However, decryption occurs using a modulus of $p·q=77$. If a person chooses to encrypt $500$ using my public key, how do I recover this value if the decryption modulus is so much smaller?

Is this a limitation of the Schmidt-Samoa scheme? Is there a maximum message size for this scheme? How can this size be securely provided to the user without leaking information about the private key? -

## 2 Answers

When you take a look at the paper you can see that it is actually defined as a key-encapsulation mechanism, meaning that you will never encrypt data, but only keys for a symmetric encryption scheme. (That is, short messages.) Further, in $\mathsf{TKEM.Key}(pk)$ the "key carrier" that will be encrypted is explicitly chosen from $\{0, 1,\ldots, 2^{rLen} - 1\}$ where $rLen = 2k − 2$. As $p$ and $q$ are both $k$-bit primes, $pq$ will always be greater than $2^{rLen} - 1$ and therefore the problem will not arise.

If you view the scheme as a public key encryption scheme, then the message space is simply $\{0, 1,\ldots, 2^{rLen} - 1\}$. You are not supposed to encrypt larger messages. (Which is fine, because $k$ is a lot larger than in your example and you only want to encrypt short keys.) -

In RSA, the plaintext and ciphertext spaces are the same $\mathbb{Z}_N$ where $N=pq$. This is not, however, true for all cryptosystems. Schmidt-Samoa is one example (Paillier is another). In Schmidt-Samoa $m\in\mathbb{Z}_{pq}$, while $c\in \mathbb{Z}_{N=p^2q}$. So, if you pick $m=500$ but $500\not\in\mathbb{Z}_{pq}$, you are really encrypting $500\pmod{pq}$. A similar analogy in RSA would be if $N=143$ and you chose $m=500$. $m$ is not in the plaintext space, so you won't get the exact same $m$ from decryption.

Is this a limitation of the Schmidt-Samoa scheme?

Not really, you are asking the cryptosystem to do something it was never intended to do (i.e., encrypt a plaintext that is not in the plaintext space).

Is there a maximum message size for this scheme? How can this size be securely provided to the user without leaking information about the private key?

The maximum message size is determined by the plaintext space. One way to do it would be to tell the user a maximum number of bits for plaintext messages. For example if $p$ and $q$ are each $512$ bits, $p\cdot q$ would be approximately $1024$ bits, so tell the user not to encrypt anything bigger than $1000$ bits. From @Maeher's comment: The message space in the scheme is however limited to $\{0,1,\cdots,2^{2k-2}-1\}$ (where $p,q$ are $k$-bit primes). -

You cannot publish $pq$. $N=p^2q$, therefore $N/(pq)=p$. That would break the scheme. The message space in the scheme is however limited to $\{0, 1,\ldots, 2^{2k-2} - 1\}$ (where $p,q$ are $k$-bit primes), so that solves the problem. – Maeher Jun 15 '12 at 17:14

@Maeher, good call. I'll update my post. – mikeazo♦ Jun 15 '12 at 17:17
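The wraparound mikeazo describes can be seen directly with the toy parameters from the question. The sketch below uses the scheme's standard key equations ($c = m^N \bmod N$ with $N = p^2q$, and $m = c^d \bmod pq$ with $d = N^{-1} \bmod \operatorname{lcm}(p-1,q-1)$), which I am taking from the usual description of Schmidt–Samoa; it is an illustration only, not a secure implementation:

```haskell
-- modular exponentiation by repeated squaring
powMod :: Integer -> Integer -> Integer -> Integer
powMod _ 0 _ = 1
powMod b e m
  | even e    = hh
  | otherwise = ((b `mod` m) * hh) `mod` m
  where h  = powMod b (e `div` 2) m
        hh = (h * h) `mod` m

p, q, n, d :: Integer
p = 7; q = 11
n = p * p * q  -- public key N = p^2 * q = 539
d = 29         -- N^(-1) mod lcm(6,10) = 539^(-1) mod 30; 539 = -1 mod 30

encrypt, decrypt :: Integer -> Integer
encrypt m = powMod m n n        -- c = m^N mod N
decrypt c = powMod c d (p * q)  -- recovers m mod pq

-- decrypt (encrypt 42)  == 42, but
-- decrypt (encrypt 500) == 38  (= 500 mod 77): the wraparound in question.
```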
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 41, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9073314666748047, "perplexity_flag": "middle"}
http://crypto.stackexchange.com/questions/3969/why-are-elliptic-curve-variants-of-rsa-chiefly-of-academic-interest
# Why are elliptic curve variants of RSA "chiefly of academic interest"?

Yesterday I was thinking about elliptic curve variants of popular protocols/algorithms (ECDH, ECES[1], etc) and the thought occurred that I had never seen an elliptic curve variant of RSA. My understanding of RSA and elliptic curves told me that it should be possible. After some searching I found the following explanation:

As we have mentioned, there are elliptic curve analogs to RSA but it turns out that these are chiefly of academic interest since they offer essentially no practical advantages over RSA. This is primarily the case because elliptic curve variants of RSA actually rely for their security on the same underlying problem as RSA, namely that of integer factorization.

This got me thinking. I had thought that the discrete log based systems I mentioned earlier were also based on the same underlying problem, the discrete log. But the article had this to say:

The situation is different with variants of discrete logarithm cryptosystems. The security of the elliptic curve variants of discrete logarithm cryptosystems depends on a restatement of the conventional discrete logarithm problem for elliptic curves. This restatement is such that current algorithms that solve the conventional discrete logarithm problem in what is termed sub-exponential time are of little value in attacking the analogous elliptic curve problem. Instead the only available algorithms for solving these elliptic curve problems are more general techniques that run in what is termed exponential time.

So now, to my question: Why are we able to restate the discrete log problem to make it more practical on elliptic curves but not the RSA problem?

1. ECES is the elliptic curve variant of ElGamal -

## 1 Answer

The generic discrete logarithm problem is this: Given a group $(G, ·)$ with generator $g$ and $y \in G$, find $x \in \mathbb N$ such that $y = g^x$.

The "classic" discrete logarithm problem (the one used in "classic" DSA and ECDSA) is this with some subgroup of the (multiplicative) group of a prime field, i.e. $(\mathbb Z/p \mathbb Z)^*$: Given a prime number $p$, a generator $g$ (of some large subgroup) and $y \in \left<g\right>$, find $x \in \mathbb N$ such that $y = g^x \bmod p$.

The elliptic curve discrete logarithm problem is this with a subgroup of an elliptic curve group: Given an elliptic curve $(E, +)$, a generator $G$ of some large subgroup, and $y \in \left<G\right>$, find $x \in \mathbb N$ such that $y = x · G$.

There are generic algorithms (such as the baby-step giant-step algorithm; a small sketch appears below) for solving the discrete logarithm in any group (assuming a computable group law, or even just oracle access to the group law), and these work just fine for elliptic curve groups, as well as for the classic prime field. The average running time of these algorithms usually depends on the size of the group, like $O(\sqrt{|G|})$ for baby-step giant-step. This is still exponential in the input size (since the input size is in $O(\log|G|)$, if one doesn't select really stupid encodings). But for some special groups the discrete logarithm is easier, since they have some more (and known) structure.

As an extreme case, consider the additive group $(\mathbb Z/n \mathbb Z, +)$ (for any positive integer $n$), with any generator. It is very easy to solve this problem even for quite large $n$ – we need only a single modular division, i.e. one execution of the extended euclidean algorithm.
(And if the generator is fixed, we can even calculate its multiplicative inverse once and reuse it later; then it gets down to one multiplication per problem.)

The multiplicative group of a prime field is harder. There are no known polynomial time algorithms, but there are still algorithms faster than the generic ones, using subexponential time. (Yes, there is some gap between exponential and polynomial.) The number field sieve for factoring integers can actually also be used to calculate discrete logarithms in prime groups (and needs similar running time for similar input sizes of both problems). Thus we need larger prime field groups than we would need for a generic group for the same "security level".

For some elliptic curves, it looks like the discrete logarithm is almost as hard as for a generic group of the same size, which means that there are no specialized algorithms which are much faster than the generic ones. (When there are, one calls the corresponding group "broken".) The effect is that for a similar security level, we can use considerably smaller elliptic curve groups than prime groups (and it turns out that for these smaller groups, the calculation is also faster).

The RSA problem, for comparison, is this: Given $n$ (a product of two large unknown primes), $e$ (with $\operatorname{gcd}(e, \varphi(n)) = 1$) and $x < n$, we look for $m$ such that $x = m^e \bmod n$. The most efficient known method to solve this (other than maybe a brute force attack on $m$ when it comes from a small subspace) is by factoring the modulus, then calculating the private key $d$ and decrypting. So the hard problem here is factoring the modulus. It is not really clear how to generalize this to any group (or what other kind of structure).

I found A semantically secure elliptic curve RSA scheme with small expansion factor (2002), and it refers to (and includes a description of) an earlier work (N. Demytko. A new elliptic curve based analogue of RSA. EUROCRYPT '93, LNCS 765 40–49 (1993).)

Demytko's scheme works on an elliptic curve (and its twists) over the ring $\mathbb Z/n\mathbb Z$ (where $n$ is still a product of two large primes), and encrypts using $c = e \star m$, which is the $x$-coordinate of $e·(m,y)$ for some $y$ such that this is on the curve, and decrypts a ciphertext $c$ as $m = d \star c$, where $d$ is one of four inverses of $e$, and it depends on $c$ which one to use.

The second one even works in a curve over $\mathbb Z/n^2 \mathbb Z$ (but claims to be semantically secure, as it incorporates a random component).

In both cases it looks like the easiest way to break this is to factor $n$ (in the natural numbers), not to solve some analogous problem in an elliptic curve based structure. This means $n$ must be just as large as for RSA, and thus we have correspondingly large curves – not really an advantage of using elliptic curves here. -

1 Or shorter: if elliptic curves are used within the RSA problem, then the key size needs to be as large as the traditional RSA key size, thus negating the size and time advantages of elliptic curve cryptography. Or was that stated too simply? – owlstead Oct 6 '12 at 10:40

1 Great answer! One question that comes from your answer: are there other groups that we know of (besides groups based on elliptic curves) which can be used and still gain this advantage? – mikeazo♦ Oct 9 '12 at 12:46
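A minimal baby-step giant-step sketch for the prime-field case, to make the generic $O(\sqrt{|G|})$ algorithm mentioned above concrete (toy parameters; the worked example at the end is my own):

```haskell
import qualified Data.Map as Map

-- modular exponentiation by repeated squaring
powMod :: Integer -> Integer -> Integer -> Integer
powMod _ 0 _ = 1
powMod b e m
  | even e    = hh
  | otherwise = ((b `mod` m) * hh) `mod` m
  where h  = powMod b (e `div` 2) m
        hh = (h * h) `mod` m

-- Find x with g^x == y (mod p), p prime, in about sqrt(p) group operations.
bsgs :: Integer -> Integer -> Integer -> Maybe Integer
bsgs g y p = go y 0
  where
    m     = ceiling (sqrt (fromIntegral p :: Double))
    table = Map.fromList [ (powMod g j p, j) | j <- [0 .. m - 1] ]  -- baby steps
    gInvM = powMod (powMod g m p) (p - 2) p  -- g^(-m), via Fermat's little theorem
    go gamma i                               -- giant steps
      | i >= m    = Nothing
      | otherwise = case Map.lookup gamma table of
          Just j  -> Just (i * m + j)
          Nothing -> go ((gamma * gInvM) `mod` p) (i + 1)

-- bsgs 5 56 101 == Just 13, since 5^13 = 56 (mod 101).
```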
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 43, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9502803087234497, "perplexity_flag": "head"}
http://mathoverflow.net/questions/36065?sort=newest
## irreducible subgroup of SL(n,R)

Suppose a subgroup of SL(n,R) is irreducible; i.e. R^n contains no proper invariant real subspaces except {0}. Then is it irreducible as a subgroup of SL(n,C)? i.e. Does C^n contain no proper invariant complex subspace except {0}? -

4 This is false: the cyclic group with three elements acts on $R^3$, permuting the coordinates and fixing the subspace $V$ whose coordinates sum to zero. The representation $V$ is irreducible over the reals, but not over the complex numbers. – damiano Aug 19 2010 at 8:57

Thanks for your answer. If we assume the subgroup of SL(n,R) is a finitely generated infinite group, is there a counterexample? – unknown (google) Aug 19 2010 at 9:08

4 The integers act on $R^2$ via $n \mapsto \begin{pmatrix}\cos(n) & \sin(n) \\ -\sin(n) & \cos(n) \end{pmatrix}$; this action has no non-trivial invariant subspaces over the real numbers, but decomposes into a sum of two one-dimensional representations over the complex numbers. – damiano Aug 19 2010 at 9:20

If we assume the subgroup of SL(n,R) is finitely generated with more than 2 generators and infinite, is there a counterexample? – unknown (google) Aug 19 2010 at 9:47

3 Let $G$ be any subgroup of $SO(2,R)$. This group has a natural real representation of dimension two that is irreducible with only a couple of exceptions. The same representation is not irreducible over the complex numbers. Note that among the various choices for $G$ there are finitely generated infinite groups with any finite number of generators. – damiano Aug 19 2010 at 10:09

## 1 Answer

A representation $\rho$ over a field $K$ is called absolutely irreducible if for any algebraic field extension $L/K$, the representation $\rho\otimes_K L$ obtained by extension of scalars is irreducible (over $L$). It is enough to check this for the algebraic closure. As damiano's examples in the comments show, this is a much stronger property than irreducibility. Serre's "Linear representations of finite groups" contains a criterion for a real representation of a finite group to be absolutely irreducible.

Here is a way in which non absolutely irreducible representations typically arise. Let $L/K$ be a finite separable field extension of degree $d>1$ and $\sigma$ be an irreducible $n$-dimensional representation of $G$ over $L.$ By restriction of scalars, we obtain an $nd$-dimensional representation $\rho$ of $G$ over $K.$ (In the language of linear group actions, the representation space, which is a vector space over $L,$ is viewed as a vector space over $K$). The representation $\rho$ is not absolutely irreducible because $\rho\otimes_K L$ is isomorphic to the direct sum of $d>1$ Galois conjugates of $\sigma.$ Yet $\rho$ is frequently irreducible.

For example, under the restriction of scalars from $\mathbb{C}$ to $\mathbb{R}$, the group $U(1,\mathbb{C})$ becomes $SO(2,\mathbb{R}).$ Therefore, any one-dimensional complex unitary representation (i.e. a character) $\sigma$ of a group $G$ gives rise to a two-dimensional real orthogonal representation $\rho$ whose complexification splits into a direct sum of two representations. This is the construction behind damiano's second and third examples. -

Evidently, my effort was in vain. In the absence of any reaction, this answer will be deleted soon. Really, I shouldn't respond to random questions of various unknown (google) users, since past experience shows that it's just a waste of time.
– Victor Protsak Aug 19 2010 at 22:15 @VP: you've gotten three upvotes (including mine). That indicates that three people got something out of your answer. For the sake of the other two, I hope you don't delete it (I'll still be able to see it). – Pete L. Clark Aug 20 2010 at 2:04 Thanks, Pete! I won't delete it now, but I sometimes get frustrated with the vote system. A while ago I posted a technical and detailed answer which was completely ignored for more than a week. I am considering adopting a policy of deleting ignored answers after, say 12 hours. – Victor Protsak Aug 20 2010 at 2:19
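For the $SO(2,\mathbb{R})$ example in the answer, the splitting after complexification is a one-line verification (a routine check, added here for concreteness):

$$\begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} 1 \\ \pm i \end{pmatrix} = e^{\pm i\theta} \begin{pmatrix} 1 \\ \pm i \end{pmatrix},$$

so over $\mathbb{C}$ the plane splits into the two invariant lines spanned by $(1, i)$ and $(1, -i)$, while over $\mathbb{R}$ a rotation by any angle that is not a multiple of $\pi$ preserves no line at all.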
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 37, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.913975179195404, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/12634/why-dont-astronauts-in-orbit-get-stuck-to-the-ceiling/12635
# Why don't astronauts in orbit get stuck to the "ceiling"?

When a shuttle is in orbit, it is essentially rotating around the "centre" of the Earth at a great speed. So why does there seem to be no centrifugal force sticking the astronauts to the 'ceiling' of the shuttle? Is it because the shuttle is not actually being rotated around a point but rather continuously falling, or something else? Thanks -

## 3 Answers

By definition an orbit occurs when gravity balances with the "centrifugal" force. It is essentially a free fall situation. So the answer is the same reason why you don't get stuck to the ceiling of a free falling elevator. Both the spacecraft and the occupants are moving in-sync. -

Thanks for your answer. I wondered about the "in-sync" thing myself, but if you have a cup of water suspended from some string, and you spin it around, the water will stay in the cup; aren't the water and the cup in sync as well? – zzz Jul 23 '11 at 0:17

7 @zzz no, because the string acts on the cup only, and not on the water. In order to remain in sync the force from the string has to pass to the water via the cup. In a spacecraft, everything is attached to the string (gravity) and so no interaction is needed between the container and the occupant. – ja72 Jul 23 '11 at 0:56

That makes perfect sense, thank you. – zzz Jul 23 '11 at 9:55

Actually, the astronaut would only float completely free in the middle of the space station. Elsewhere, he will stick slightly to whatever side is closer to the Earth than is the middle, or farther from the Earth than is the middle. The reason is the tidal force from the Earth, which will be very small but probably detectable.

If the acceleration from gravity is $g = GM/r^2$, then the tidal acceleration will be the derivative of this, namely $dg/dr = -2GM/r^3$. So the maximum tidal acceleration that the astronaut will feel, at distance $D/2$ from the center of the space station, where $D$ is the linear size of the space station, will be $GMD/r^3$. This is basically equal to $gD/r$. Putting in some typical numbers, a 100 kg by weight astronaut in a space station of size $D = 10 m$, and $r = 6500 km$, we get a maximum tidal force of about $150$ milligrams weight!

The situation would of course be completely different in orbit around a neutron star, where an astronaut would float free at the center of a (very strong) space station, but would get crushed beyond recognition were they to venture away from the middle, as has been pointed out in various science fiction stories. The reason is that $M/r^3$ would be about $10^{14}$ times greater than for an orbit around the Earth. -

The Shuttle (or the ISS for a better example, since the Shuttle isn't flying any more) isn't rotating at a great speed... it's rotating about once every 90 minutes, once every orbit. This does have an effect, but a very small one. Its motion around the Earth doesn't produce any such effect because, as you guessed, it and everything in it are in free fall around Earth and are affected nearly equally by Earth's gravity (the very small differences due to different distances from Earth do produce tidal forces). -
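A quick check of the numbers in the tidal-force answer; the code below just evaluates the $gD/r$ estimate given there:

```haskell
-- Tidal estimate from the answer above: a_tidal ~ g * D / r.
gEarth, d, r, astronautMass :: Double
gEarth        = 9.8     -- m/s^2
d             = 10      -- station size, m
r             = 6.5e6   -- orbital radius, m
astronautMass = 100     -- kg

tidalAccel :: Double
tidalAccel = gEarth * d / r            -- ~1.5e-5 m/s^2

tidalForceGrams :: Double
tidalForceGrams = astronautMass * d / r * 1000
-- ~0.15 grams weight, i.e. roughly the quoted 150 milligrams
```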
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9572606682777405, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/217151/roots-of-unity-and-function-mu
# Roots of unity and the function $\mu$ I need to prove that for each positive integer $n$ the sum of the primitive $n$th roots of unity in $\mathbb{C}$ is $\mu(n)$, where $\mu$ is the Möbius function. - So what's the question? – Patrick Da Silva Oct 20 '12 at 0:02 Yes...or not: what's your question? – DonAntonio Oct 20 '12 at 0:03 I need a proof of the given proposition. – Elmo goya Oct 20 '12 at 0:11 Well explained... not Wikipedia... please!!! – Elmo goya Oct 20 '12 at 0:33 You can try to prove it for primes, then $p^n$, then use multiplicativity. – Berci Oct 20 '12 at 0:36 ## 2 Answers Do you know $$\sum_{d\mid m}\mu(d)=\begin{cases}1 & \text{if } m=1,\\ 0 & \text{otherwise?}\end{cases}$$ The sum of the primitive $n$th roots of unity is $$\sum_{\gcd(k,n)=1}e^{2\pi ik/n}=\sum_{k=1}^n\sum_{d\mid\gcd(k,n)}\mu(d)e^{2\pi ik/n}=\sum_{d\mid n}\mu(d)\sum_{k=0}^{(n/d)-1}e^{2\pi idk/n}$$ The inner sum is the sum of all the $m$th roots of unity where $m=n/d$, so it's zero except for $d=n$, when it's $1$. So the original sum evaluates to $\mu(n)$. - Let $\theta$ denote the first primitive $n$th root: $\theta:=e^{2\pi i/n}$. 1. If $n=p$ is prime, then $\mu(p)=-1$, and each $0\ne a<p$ is relatively prime to $p$, so each such $\theta^{a}$ is a primitive $p$th root. The sum of all $n$th roots is always $0$ (because if we multiply it by $\theta$, it doesn't change). So we miss only $\theta^0=1$, hence the sum is $-1$. 2. If $n=p^k$ ($k\ge 2$), then $\mu(n)=0$, and exactly the exponents $u$ of the form $u=p\cdot a$ share a common divisor with $n$, so $$\sum_{\theta^u\text{ prim.root}}\theta^u=\sum_{u\ne a\cdot p}\theta^u = \sum_{u=0}^{n-1}\theta^u-\sum_{v=0}^{\frac np-1} \theta^{pv}$$ Can you continue? 3. You also need to show that both functions in question are multiplicative, i.e., whenever $\gcd(a,b)=1$, we have $$\mu(ab)=\mu(a)\cdot\mu(b)$$ and the same for the other function. From these the proposition follows. -
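For readers who want a sanity check before writing the proof, here is a small Python sketch (not part of the proof) that compares the numerically computed sum of primitive $n$th roots with a naive Möbius function for small $n$.

```python
from math import gcd
import cmath

def mobius(n):
    # Naive Mobius function via trial division.
    result, m = 1, n
    p = 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0          # squared prime factor
            result = -result
        p += 1
    if m > 1:
        result = -result          # one remaining prime factor
    return result

for n in range(1, 13):
    s = sum(cmath.exp(2j * cmath.pi * k / n)
            for k in range(1, n + 1) if gcd(k, n) == 1)
    print(n, round(s.real), mobius(n))   # the last two columns agree
```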
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 33, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8640997409820557, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/112242/showing-something-is-an-asymptotic-sequence?answertab=active
# Showing something is an asymptotic sequence I need to show that $\phi_n(z)=\ln(1+z^n)$ as $z \rightarrow 0$ is an asymptotic sequence, i.e. to show that $$\lim_{z\rightarrow 0}\frac{\phi_{n+1}(z)}{\phi_n(z)}=0.$$ Is it sufficient for me to say that as $z\rightarrow 0$, $$\frac{\phi_{n+1}}{\phi_n}=\frac{\ln(1+z^{n+1})}{\ln(1+z^n)} \rightarrow 0?$$ Because $z^{n+1}$ and $z^n \rightarrow 0$ as $z\rightarrow 0$, and $\ln(1)=0$? - ## 2 Answers No, the $0/0$ form is indeterminate, so you can't simply plug the limits into the logarithm. Consider the limits of things like $z^2/z$, $z/z$, and $z/z^2$: in each case numerator and denominator $\to0$, but their limits are $0$, $1$ and $\infty$ respectively. This looks like an excellent application of l'Hospital's rule. - Thanks for your comments. Sorry, but how would I relate what you've said to use it in the question? I'm not exactly sure how to show it converges to zero. – Heijden Feb 23 '12 at 0:34 @Heijden: My comments simply say that pointing out that the numerator and denominator go to zero does not suffice to say their ratio goes to zero. As for showing it does go to zero in this case, with valid reasoning, did you look at the link I posted? – anon Feb 23 '12 at 0:36 Ahh okay, sorry, I was thinking that you were referring that I use those examples to somehow prove the limit. Yes, I did look at the link (l'Hospital), but exactly what am I meant to use there? How would I apply it in my case? Do I have to differentiate it? – Heijden Feb 23 '12 at 0:47 @Heijden: First off, you have a ratio of two functions both of which go to $0$ as $z\to0$. These are the numerator and denominator. The rule basically says that $\lim\; f(x)/g(x) = \lim f'(x)/g'(x)$. You're trying to figure out the limit on the left side; the rule says it equals what's on the right side. Can you figure out the right side if $f(z)=\log(1+z^{n+1})$ and $g(z)=\log(1+z^n)$? – anon Feb 23 '12 at 1:02 Thanks anon, I shall work on it, appreciated! – Heijden Feb 23 '12 at 18:34 Just use the formula $\log (1+w) = w + \mathcal{O}(w^2)$, as $\mathbb C \ni w \to 0$. -
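A quick numerical illustration (assuming a sample value $n=3$): the ratio behaves like $z$ near the origin, consistent with the expansion $\log(1+w)=w+\mathcal{O}(w^2)$.

```python
import math

n = 3  # sample index; any positive integer behaves the same way
for z in [0.1, 0.01, 0.001, 0.0001]:
    ratio = math.log(1 + z**(n + 1)) / math.log(1 + z**n)
    print(z, ratio)   # the ratio tracks z, hence tends to 0
```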
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9627635478973389, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/234557/sum-of-squares-of-dimensions-of-irreducible-characters
Sum of squares of dimensions of irreducible characters. For anyone familiar with Artin's Algebra book, I just worked through the proof of the following theorem, which can be seen here: (5.9) Theorem Let $G$ be a group of order $N$, let $\rho_1,\rho_2,\dots$ represent the distinct isomorphism classes of irreducible representations of $G$ and let $\chi_i$ be the character of $\rho_i$. • (a) Orthogonality relations: The characters $\chi_i$ are orthonormal. In other words $\langle\chi_i,\chi_j\rangle=0$ if $i\ne j$, and $\langle\chi_i,\chi_i\rangle=1$ for each $i$. • (b) There are finitely many isomorphism classes of irreducible representations, the same number as the number of conjugacy classes in the group. • (c) Let $d_i$ be the dimension of the irreducible representation $\rho_i$, let $r$ be the number of irreducible representations. Then $d_i$ divides $N$ and $$N=d_1^2+\dots+d_r^2.$$ This theorem will be proved in Section 9, with the exception of the assertion that $d_i$ divides $N$, which we will not prove. The theorem was contained in the last section, but the proof for part (c) was missing completely. It is mentioned that the divisibility property would not be proved, but the sum-of-squares formula for $N$ is not verified at all. In applications on homework this property is used extensively to fill in missing characters for character tables of finite groups, so I would like to understand why it is true. Can anyone suggest a reference or sketch an argument? Thanks in abundance! - I don't understand what contrast the two "but"s are implying, and consequently I don't understand which property "this property" refers to. – joriki Nov 11 '12 at 0:53 1 Answer Does the chapter have the following theorem? If so, let $g=h=1$. $$\sum_{\chi_i}\chi_i(g)\overline{\chi_i(h)}= \begin{cases} |C_G(g)| & \text{if } g\sim h \\ 0 & \text{if } g \not\sim h \end{cases}$$ If not, this is called the second orthogonality relation. The proof is actually fairly simple, and follows from noting that since characters are class functions, we can pick conjugacy class representatives $g_k$ (for each conjugacy class $k$ in the set of conjugacy classes $\mathcal{K}$ of $G$) and rewrite the definition of the inner product as $$\langle \chi_i,\chi_j \rangle = \frac{1}{|G|}\sum_{k\in\mathcal{K}}[G:C_G(g_k)]\chi_i(g_k)\overline{\chi_j(g_k)}.$$ You can find the rest of the argument in detail here. -
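As a concrete illustration of parts (a) and (c), here is a small Python check using the well-known character table of $S_3$ (three classes: the identity, the three transpositions, and the two 3-cycles); the characters of $S_3$ are real, so no complex conjugation is needed.

```python
from fractions import Fraction

class_sizes = [1, 3, 2]          # identity, transpositions, 3-cycles
chars = [
    [1,  1,  1],                 # trivial character
    [1, -1,  1],                 # sign character
    [2,  0, -1],                 # character of the 2-dimensional representation
]
G = sum(class_sizes)             # |S_3| = 6

def inner(chi, psi):
    # <chi, psi> = (1/|G|) * sum over classes of |class| * chi * psi
    return Fraction(sum(s * a * b for s, a, b in zip(class_sizes, chi, psi)), G)

for i in range(3):
    for j in range(3):
        assert inner(chars[i], chars[j]) == (1 if i == j else 0)

print(sum(chi[0] ** 2 for chi in chars))   # 1 + 1 + 4 = 6 = |S_3|
```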
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9404937028884888, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/254595/lipschitz-questions
# Lipschitz Questions I want to ask one general question, and after that I would like to know if my method is correct (for determining whether a function is Lipschitz with respect to $y$). Is the following statement true? Suppose $D$ is a convex domain, and the functions $f(x,y)$ and $f_y$ are continuous in $D$. Then $\sup|f_y|\le L \iff |f(x,y_1)-f(x,y_2)| \le L|y_1-y_2|$. (In other words: if $f$ and $f_y$ are continuous in a convex domain $D$ and $\sup|f_y|\le L$, then $f$ is Lipschitz in $y$ with Lipschitz constant $L$.) A few examples (supposing $|x| \le 1$): $$f_1(x,y)=x^4e^{-y^2}$$ $$f_2(x,y)=\frac{1}{1+y^2}$$ $$f_3(x,y)=x^2+y^2$$ $$f_4(x,y)= |y| \text{ if } |y|\le1, \qquad f_4=1 \text{ if } |y|\ge 1$$ My work would be: $f_1$ is continuous, and $|f_y|=2|y|x^4e^{-y^2} \le 2|y|e^{-y^2}$ is continuous everywhere. $\sup|f_y|=\sqrt{2/e}$, so $f_1$ is Lipschitz with constant $L=\sqrt{2/e}$. $f_2$ is continuous, $\sup|f_y|=\sup\left|\frac{-2y}{(1+y^2)^2}\right|=\frac{3\sqrt{3}}{8}$. $f_3$ is continuous, $f_y=2y$, but $\sup|2y|$ does not exist, because $y$ is not bounded. Therefore $f_3$ does not satisfy the Lipschitz condition. Another way to see this: for $y_2=0$, if $|f(x,y_1)-f(x,y_2)| = y_1^2 \le L|y_1 - y_2|=L|y_1|$, then $|y_1|\le L$, which cannot hold for all $y_1$ because $y$ is not bounded. I have some trouble understanding the Lipschitz condition for $f_4$. Maybe somebody can help me analyse this function? Thank you in advance... (Please give me hints, tips and tricks, because I want to do it all myself.) - Hint: Use the Mean Value Theorem on the functions $y\mapsto \sup_{x} f(x,y)$. – Elias Dec 9 '12 at 16:09 This is a hint for which problem? What is $\sup_x$? – Joyeuse Saint Valentin Dec 9 '12 at 16:23 The Lipschitz condition is on $f$, not on $f_y$. What it is all about is the following: a $\sup$ condition on $f_y$ is equivalent to a Lipschitz condition on $f$. – Christian Blatter Dec 9 '12 at 16:26
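Not a substitute for the analysis, but here is a crude numerical sketch that estimates the best Lipschitz constant in $y$ by random sampling; the sampling range and trial count are arbitrary choices.

```python
import math, random

def lip_estimate(f, y_range=5.0, trials=200000):
    # Estimate sup |f(x,y1) - f(x,y2)| / |y1 - y2| over random samples.
    best = 0.0
    for _ in range(trials):
        x = random.uniform(-1, 1)
        y1 = random.uniform(-y_range, y_range)
        y2 = random.uniform(-y_range, y_range)
        if y1 != y2:
            best = max(best, abs(f(x, y1) - f(x, y2)) / abs(y1 - y2))
    return best

f1 = lambda x, y: x**4 * math.exp(-y**2)
f4 = lambda x, y: min(abs(y), 1.0)

print(lip_estimate(f1), math.sqrt(2 / math.e))  # estimate vs. sqrt(2/e)
print(lip_estimate(f4))                          # close to 1: f_4 is 1-Lipschitz
```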
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 28, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9297582507133484, "perplexity_flag": "head"}
http://mathoverflow.net/questions/77986/which-diophantine-equations-can-be-solved-using-continued-fractions/89438
## Which Diophantine equations can be solved using continued fractions? Pell equations can be solved using continued fractions. I have heard that some elliptic curves can be "solved" using continued fractions. Is this true? Which Diophantine equations other than Pell equations can be solved for rational or integer points using continued fractions? If there are others, what are some good references? Edit: Professor Elkies has given an excellent response as to the role of continued fractions in solving general Diophantine equations including elliptic curves. What are some other methods to solve the Diophantine equations $$X^2 - \Delta Y^2 = 4 Z^3$$ and $$18 x y + x^2 y^2 - 4 x^3 - 4 y^3 - 27 = D z^2 ?$$ - Samuel, could you be a little more specific about your comment about solving $x^2 - D y^2 = 4 z^3,$ especially why you are not satisfied with your parametrized solution? I printed out your paper with Franz, "Arithmetic of Pell Surfaces," where I think equation (1.1) with $n=3$ is what you are discussing below. Also Buell's book, especially pages 147-157, discussing Nagell 1922 and Yamamoto 1970. – Will Jagy Oct 17 2011 at 0:03 Thank you Dr. Jagy for the comment, interest in the question, and for your help with question 71727! I've also seen that you're interested in ternary quadratic forms, which I would like to learn about too. My response will require quite the maximum characters, so I'll stop chatting and get to it. – Samuel Hambleton Oct 19 2011 at 2:13 Let $\Delta = 229$ and $K = \mathbb{Q}(\sqrt{\Delta })$. Then $\text{Cl}^+(K)[3] \simeq \mathbb{Z}/3\mathbb{Z}$. Using the "parametrization" in Mathematica 5, `d = 229; S = Union[DeleteCases[Partition[Flatten[Table[P = {(t^3 + 3 d t u^2)/4, (3 t^2 u + d u^3)/4, (t^2 - d u^2)/4}; If[IntegerQ[P[[1]]] && IntegerQ[P[[2]]] && IntegerQ[P[[3]]] && GCD[P[[1]], P[[3]]] == 1, P, {w, w, w}], {t, -100, 100}, {u, -100, 100}]], 3], {w, w, w}]];` gives the points of the "Pell surface" $x^2 - d y^2 = 4 z^3$ from the "parametrization". On the other hand, a brute force search for points satisfying this equation can be ... – Samuel Hambleton Oct 19 2011 at 2:27 ... done as: `T = Union[DeleteCases[Partition[Flatten[Table[If[IntegerQ[Sqrt[d y^2 + 4 z^3]] && GCD[Sqrt[d y^2 + 4 z^3], z] == 1, {Sqrt[d y^2 + 4 z^3], y, z}, {w, w, w}], {y, -100, 100}, {z, -100, 100}]], 3], {w, w, w}]];` There are points of the set $T$ not in $S$, for example: $(11, 1, 3)$. The other form of the Pell surface is $B^2 + B C - 57 C^2 = A^3$. With point $(11, 1, 3)$ corresponding to $(A, B, C) = (3, 5, 1)$, which should map to the ideal $(3, 2 + (1 + \sqrt{\Delta })/2)$. I suspect that the "parametrization" leads to principal ideals. – Samuel Hambleton Oct 19 2011 at 2:39 The points should read $(11, 1, -3)$, $(-3, 5, 1)$, and ideal $(-3, 2 + (1 + \sqrt{229})/2)$. I would like to know of more methods for solving Diophantine equations, especially surfaces. Professor Elkies' methods look promising. Joro's question 70913, and particularly Schoof's article linked there, shows that there may be some good reasons to want an easy method for finding non-principal ideals. I am particularly keen to learn methods for solving $18 x y + x^2 y^2 - 4 x^3 - 4 y^3 - 27 = D z^2$.
– Samuel Hambleton Oct 19 2011 at 2:50 ## 6 Answers [edited to insert paragraph on Cornacchia and point-counting] Continued fractions, or (more-or-less) equivalently the Euclidean algorithm, can be used to find small integer solutions of linear Diophantine equations $ax+by=c$, and integer solutions of quadratic equations such as $x^2-Dy^2=\pm1$ ("Pell"). Continued fractions in themselves won't find rational points on elliptic curves, but there's a technique using Heegner points that calculates a close real approximation to a rational point, which is then recovered from a continued fraction; this is possible because the recovery problem amounts to finding a small integer solution of a linear Diophantine equation. My paper Noam D. Elkies: Heegner point computations, Lecture Notes in Computer Science 877 (proceedings of ANTS-1, 5/94; L.M. Adleman and M.-D. Huang, eds.), 122-133. might have been the first to describe this approach. Another application of continued fractions is Cornacchia's algorithm to solve $x^2+Dy^2=m$ for large $m>0$ coprime to $D$, given $x/y \bmod m$ which is a square root of $-D \bmod m$. This has an application to counting points on elliptic curves $E\bmod p$ for $E$ such as $y^2 = x^3 + b$ or $y^2 = x^3 + ax$ for which the CM field ${\bf Q}(\sqrt{-D})$ is known: the count (including the point at infinity) is $p+1-t$ where $t^2+Du^2=4p$ for some integers $t$ and $u$, and this determines $t$ up to an ambiguity of at most $6$ possibilities that in practice is readily resolved. The necessary square root mod $p$ is readily found in random polynomial time, though it is a persistent embarrassment that we cannot extract square roots modulo a large prime in deterministic polynomial time without assuming something like the extended Riemann hypothesis. Indeed the application that Schoof gave to motivate his polynomial-time algorithm to compute $t$ for any elliptic curve mod $p$ was to recover a square root of $-D \bmod p$ for small $D$! (Though this would never be done in practice because the exponent in Schoof's algorithm is much larger than for the randomized algorithm.) The reference for Schoof's paper is René Schoof: Elliptic Curves over Finite Fields and the Computation of Square Roots $\bmod p$, Math. Comp. 44 (#170, April 1985), 483-494. A natural generalization of the Euclidean algorithm to higher dimensions is the LLL algorithm and other techniques for lattice basis reduction (LBR), which have found various other Diophantine uses, including some other techniques for finding rational points on elliptic curves; another of my papers describes some of these Diophantine applications of LBR. - Awesome. Thank you Professor Elkies. That should get me started. I can't remember who told me about continued fractions with respect to elliptic curves but I thought they mentioned Artin. I'm not sure. – Samuel Hambleton Oct 13 2011 at 2:51 This is a slightly different topic now, but I've seen a cool paper of Professor Elkies': "Pythagorean triples and Hilbert's Theorem 90", which I tried to apply to $x^2 - D y^2 = 4 z^3$. It partially worked, but I couldn't seem to get the points I was interested in, and so I was wondering about continued fractions. – Samuel Hambleton Oct 13 2011 at 3:40 @R.Thornburn: 1a) You're welcome! 1b) Perhaps Artin was thinking about point-*counting* on elliptic curves modulo a prime; see the paragraph I inserted.
2) Looks like this equation $x^2-Dy^2=4z^3$ will involve the $3$-torsion in the class group of ${\bf Z}[(D+\sqrt{D})/2]$, which may be accessible via the continued fraction for $(D+\sqrt{D})/2$, but I suspect that this is not the most efficient method for large $D$. – Noam D. Elkies Oct 14 2011 at 20:02 With a change of variables, one can handle all of the equations $A x^2 + B x y + C y^2 + D x + E y + F =0$ in much the same way as Pell's Equation. – Kevin O'Bryant Oct 14 2011 at 23:37 Thank you both very much, and sorry about my identity crisis. I initially wanted to vote the answer to Question 70913 to a non-negative number. I don't want to be deceitful. The incomplete "parametrization" I found for $x^2 - D y^2 = 4 z^3$ is $((t^3+3 D t u^2)/4, (3 t^2 u + D u^3)/4 , (t^2 - D u^2)/4)$, but I can't seem to get elements of the narrow class group of exact order $3$ with this "parametrization". I am also interested in solving $18 x y + x^2 y^2 - 4 x^3 - 4 y^3 - 27 = D z^2$. – Samuel Hambleton Oct 16 2011 at 4:32 For your sample problem, I get two flavors of identity, principal and non-principal. For discriminant 229, I take the identity form as $$f(x,y) = x^2 + 15 x y - y^2.$$ One of your families is $$f(x^3 + 3 x y^2 + 5 y^3, \; 3 x^2 y + 45 x y^2 + 226 y^3 ) \; = \; f^3(x,y).$$ As an automorph of $f$ is $$\left( \begin{array}{rr} 1 & 15 \\ 15 & 226 \end{array} \right),$$ from the column vector $(1,0)^T$ we get another representation of 1 as $(1,15)^T.$ So that is one type of thing. For the other two classes, take $$g(x,y) = 3 x^2 + 13 x y - 5 y^2.$$ A second family is $$f( x^3 + 12 x^2 y + 57 x y^2 + 89 y^3, \; 2 x^3 - 3 x^2 y - 3 x y^2 - 6 y^3 ) \; = \; g^3(x,y).$$ The following cycles of reduced forms are as in Buell's book, pages 21-30.

````
=========================================
jagy@phobeusjunior:~/old drive/home/jagy/Cplusplus$ ./indefCycle
Input three coefficients a b c for indef f(x,y)= a x^2 + b x y + c y^2
1 15 -1

0 form   1 15 -1   delta -15
1 form  -1 15  1   delta  15
2 form   1 15 -1

minimum was 1  rep 1 0  disc 229  dSqrt 15.13274595  M_Ratio 229
Automorph, written on right of Gram matrix:
 -1  -15
-15 -226
Trace: -227   gcd(a21, a22 - a11, a12) : 15
=========================================
=========================================
jagy@phobeusjunior:~/old drive/home/jagy/Cplusplus$ ./indefCycle
Input three coefficients a b c for indef f(x,y)= a x^2 + b x y + c y^2
3 13 -5

0 form   3 13 -5   delta -2
1 form  -5  7  9   delta  1
2 form   9 11 -3   delta -4
3 form  -3 13  5   delta  2
4 form   5  7 -9   delta -1
5 form  -9 11  3   delta  4
6 form   3 13 -5

minimum was 3  rep 1 0  disc 229  dSqrt 15.13274595  M_Ratio 25.44444
Automorph, written on right of Gram matrix:
-16  -75
-45 -211
Trace: -227   gcd(a21, a22 - a11, a12) : 15
=========================================
````

- Thanks. Are you saying that the binary quadratic form $3 x^2 + 13 x y - 5 y^2$ comes from the parametrization? From Yamamoto's work, which appears in Buell's book, the binary quadratic form $(z, x, z^2)$ is discussed, where $x^2 - \Delta y^2 = 4 z^3$ and $\gcd(x, z) = 1$. Franz Lemmermeyer and I looked at $z T^2 + x T U + z^2 U^2 = \Delta y^2$. If relatively prime integers $T, U$ exist satisfying this, then $(z, x, z^2)$ is principal.
I think there should be polynomials $T(t,u)$ and $U(t, u)$ such that $$((t^2 - \Delta u^2)/4)T^2 + ((t^3 + 3 \Delta t u^2)/4) T U + ((t^2 - \Delta u^2)/4)^2U^2 = \Delta ((3t^2 u + \Delta u^3)/4)^2 ,$$ – Samuel Hambleton Oct 19 2011 at 10:57 If there are such polynomials $T(t, u)$ and $U(t, u)$, then points from the "parametrization" yield principal forms and principal ideals. I haven't found any such polynomials. – Samuel Hambleton Oct 19 2011 at 11:02 In 1993, Tzanakis ( http://matwbn.icm.edu.pl/ksiazki/aa/aa64/aa6435.pdf ) showed that solving a quartic Thue equation, whose corresponding quartic field is the compositum of two real quadratic fields, reduces to solving a system of Pellian equations. Even if the system of Pellian equations cannot be solved completely, the information on solutions obtained from the theory of continued fractions and Diophantine approximations might be sufficient to show that the Thue equation (or Thue inequality) has no solutions or has only trivial solutions. For that purpose, a very useful tool is Worley's result characterizing all rational approximations satisfying $|\alpha - \frac{p}{q}| < \frac{c}{q^2}$ in terms of convergents of the continued fraction of $\alpha$. You may consult the paper "Solving a family of quartic Thue inequalities using continued fractions" ( http://web.math.pmf.unizg.hr/~duje/pdf/dij.pdf ) and the references given there. - I should have stuck with your preferred notation, as in your $B^2 + B C - 57 C^2 = A^3$ in a comment. So the form of interest will be $x^2 + x y - 57 y^2.$ The other classes with this discriminant of indefinite integral binary quadratic forms would then be given by $3 x^2 \pm xy - 19 y^2.$ Therefore, take $$\phi(x,y) = x^2 + x y - 57 y^2.$$ The identity you need to deal with your $A= \pm 3$ is $$\phi( 15 x^3 - 99 x^2 y + 252 x y^2 - 181 y^3 , \; 2 x^3 - 15 x^2 y + 33 x y^2 - 28 y^3 ) \; = \; ( 3 x^2 + xy - 19 y^2 )^3$$ This leads most directly to $\phi(15,2) = 27.$ Using $3 x^2 + x y - 19 y^2 = -3$ when $x=7, y=3,$ this leads directly to $\phi(1581, -196) = -27.$ However, we have an automorph of $\phi,$ $$W \; = \; \left( \begin{array}{rr} 106 & 855 \\ 15 & 121 \end{array} \right),$$ and $W \cdot (1581,-196)^T = (6, -1)^T,$ so $\phi(6,-1) = -27.$ Finally, any principal form of odd discriminant, call it $x^2 + x y + k y^2,$ (you have $k=-57$) has the improper automorph $$Z \; = \; \left( \begin{array}{rr} 1 & 1 \\ 0 & -1 \end{array} \right),$$ while $Z \cdot (6,-1)^T = (5, 1)^T,$ so $\phi(5,1) = -27.$ EDIT: a single formula cannot be visually obvious for all desired outcomes. There are an infinite number of integral solutions to $3 x^2 + x y - 19y^2 = -3.$ It is an excellent bet that one of these leads, through the identity I give, to at least one of the desired $\phi(5,1) = -27$ or $\phi(6,-1) = -27,$ but not necessarily both, largely because $3 x^2 + x y - 19y^2$ and $3 x^2 - x y - 19y^2$ are not properly equivalent. Worth investigating, I should think. - I think Dr. Jagy is saying that one can obtain all solutions of $X^2 - D Y^2 = 4 Z^3$ by using representative binary quadratic forms of each of the classes of forms. For $D = 229$, there are three. This is correct; we can solve $X^2 - D Y^2 = 4 Z^3$ by looking at classes of forms. Part of the proof is that the map from points of $X^2 - D Y^2 = 4 Z^3$ to $\text{Cl}^+(D)[3]$ is surjective. I am curious to know whether it can get computationally easier than this. Possibly not?
– Samuel Hambleton Oct 23 2011 at 6:47 I was about to mention H J S Smith's algorithm for finding integer solutions to $x^2 + y^2 = p$ for $p \equiv 1 \bmod 4$; but this is referred to in a related thread at http://mathoverflow.net/questions/49866/applications-of-finite-continued-fractions (Apologies if that thread is easily found from this one; but I wouldn't have noticed it without doing a Google search, and perhaps some other readers are equally inexperienced in StackOverflow ways or unobservant!) Also, what about higher-dimensional continued fractions, expressed as matrix recurrence relations? I seem to recall that these can be used to find rational solutions of equations involving some kinds of cubic forms. - Hello, I want to necromance this thread back to life because I was working on a very similar question (and therefore it would only make sense to post it here). If I have the Diophantine equation $X^2 - Y^2 = C$, and assuming we have already proven that this equation has solutions and a finite number of them, how do I efficiently find the smallest values of $X$ and $Y$, both greater than zero, that satisfy this equation? For example: $x^2 - y^2 = 2684$. A solution to this equation is $x = 672$, $y = 670$. But the smallest solution to this equation is $x = 72$, $y = 50$. If I'm most interested in finding the latter, how would I go about doing so without just trial and error? Can continued fractions play a role here, and if so, how? Can secant and tangent lines to the graph of $X^2 - Y^2 = C$ play a role in finding rational points? If so, how? Thank you very much for your time! - If you want to ask a new question that may be related to an existing one, better to just ask the question and give the reference. In the present case, though, your question seems to be almost equivalent to a question that was already asked here earlier this month (#98637: "find the minimum difference between the factors of a number"), and I think was also seen here before that. So you might just consult that thread. – Noam D. Elkies Jun 20 at 0:58 Thank you for your response! I did look at the thread and our threads are quite different. My question is basically attempting to maximize the value of abs(x - y), whereas that thread was concerned with minimizing the value. In other words, the asker on the other post would be most interested in what I am least interested in. Additionally, I am interested in the absolute minimal solution to $x^2 - y^2 = 2684$ (where $x, y > 0$) regardless of whether $x$ and $y$ do multiply out to create 2684. That being said, would you suggest that I post this question in an altogether new thread or keep it here for now? – Sid Jun 20 at 1:18 Since $X^2 - Y^2 = (X+Y) (X-Y)$, you can take $(x,y) = (X+Y,X-Y)$. Then $xy=C$, and $|x-y|=2Y$, so the questions are almost the same except that we must impose the condition $x \equiv y \bmod 2$ to ensure the integrality of $X$ and $Y$ [which are $(x\pm y)/2$]. – Noam D. Elkies Jun 20 at 2:16
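Since the thread keeps returning to Pell's equation, here is a minimal Python sketch of the classical continued-fraction method for the fundamental solution of $x^2 - Dy^2 = 1$; it is only an illustration of the standard algorithm, not code from any of the papers cited above.

```python
from math import isqrt

def pell(D):
    # Fundamental solution of x^2 - D y^2 = 1 via the continued
    # fraction expansion of sqrt(D).
    a0 = isqrt(D)
    if a0 * a0 == D:
        raise ValueError("D must not be a perfect square")
    m, d, a = 0, 1, a0
    p_prev, p = 1, a0          # convergent numerators
    q_prev, q = 0, 1           # convergent denominators
    while p * p - D * q * q != 1:
        m = d * a - m
        d = (D - m * m) // d
        a = (a0 + m) // d
        p, p_prev = a * p + p_prev, p
        q, q_prev = a * q + q_prev, q
    return p, q

print(pell(229))   # smallest (x, y) with x^2 - 229 y^2 = 1
```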
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 104, "mathjax_display_tex": 11, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9295031428337097, "perplexity_flag": "head"}
http://mathoverflow.net/questions/120239?sort=oldest
## When is PSU(2,q^2) = PSL(2,q)? The context for this question is from pages 284-287 of Berger's paper: http://pdn.sciencedirect.com/science?_ob=MiamiImageURL&_cid=272332&_user=209810&_pii=S0021869398976785&_check=y&_origin=article&_zone=toolbar&_coverDate=1999--01&view=c&originContentFamily=serial&wchp=dGLbVlS-zSkzS&md5=2dd8e0d714d264cf7c4acdd9ec58ac84&pid=1-s2.0-S0021869398976785-main.pdf Particularly, in his assumption at the top of page 287, he says that "From now on, assume that our map $\pi_\mathfrak{p}$ surjects onto $\text{PU}_2(\zeta,\mathcal{O}_K/\mathfrak{p})\cong \text{PSL}_2(\mathbb{F}_q)$, that $q$ is odd, and that $(6,k) = 1$, where $k = \sharp\langle -\zeta\rangle$." I'm guessing that he's assuming the conditions in Proposition 2 (from the previous page) to be true, so that $\pi_\mathfrak{p}$ surjects onto $\text{PU}_2(\zeta,\mathcal{O}_K/\mathfrak{p}) = \text{PSU}_2(\mathcal{O}_K/\mathfrak{p})$, and that he's claiming that the latter group is isomorphic to $\text{PSL}_2(\mathbb{F}_q)$. Is this generally true? Also, on page 284, where he gives the matrix $H$ for the hermitian form, he claims that $H\in GL_{n-1}(\mathbb{Z}[t,t^{-1}])$, but the matrix he gives obviously does not lie in that group. Where might I find a good book on unitary matrices over finite fields? thanks, • will - There are various questions mixed into your text, but it would help to clarify what you actually mean by $\mathrm{PSU}_2$ in the header. From the finite group perspective, there is only one family of type $A_1$ simple groups, usually denoted $\mathrm{PSL}_2$. (Though in higher ranks there are different split and non-split simple groups of type $A_\ell$.) – Jim Humphreys Jan 29 at 20:23 ## 2 Answers It is a fairly standard result that $SU(2,q^2)$ and $SL(2,q)$ are isomorphic; see e.g. II.8.8 in Huppert's Endliche Gruppen. I would expect that it is also in the third volume of The Classification of the Finite Simple Groups by Gorenstein-Lyons-Solomon, but I don't have the volume at hand right now. - @Peter: In the G-L-S book, pages 68-69 summarize special linear and special unitary groups, noting the degenerate 2-dimensional case $m=1$ (first full paragraph on page 69). Labels get complicated for the classical groups, but I guess the moral is that in dimension 2 there is no real need to worry about unitary matrices over finite fields. – Jim Humphreys Jan 30 at 1:31 Here is a more bare-hands explanation. Let $\phi$ be the field automorphism of ${\rm SL}_n(q^2)$ that acts by applying $x \mapsto x^q$ to the matrix entries. Let $\gamma$ be the graph automorphism that maps matrices $A$ to their inverse-transpose $A^{- \mathrm{T}}$. Then ${\rm SL}_n(q)$ is the subgroup of ${\rm SL}_n(q^2)$ that is centralized by $\phi$, whereas the group ${\rm SU}_n(q^2)$ (which is confusingly often denoted by ${\rm SU}_n(q)$) that fixes the identity matrix as unitary form is the subgroup of ${\rm SL}_n(q^2)$ that is centralized by $\phi\gamma$. The automorphism $\gamma$ is outer for $n>2$, but when $n=2$ it is inner and acts in the same way as conjugation by the matrix $\left( \begin{array}{rr}0&1\\ -1&0\end{array} \right)$.
It turns out in this case that $\phi$ and $\phi\gamma$ are conjugate in the automorphism group of ${\rm SL}_2(q^2)$ by (the projective image of) an element $g \in {\rm GL}_2(q^2)$, and hence that ${\rm SL}_2(q)$ is conjugate to ${\rm SU}_2(q^2)$ in ${\rm GL}_2(q^2)$. With a bit of calculation on the back of an envelope, we find that $g = \left( \begin{array}{rr}a&b\\ c&d\end{array} \right)$, where $b = -t^qa^q$ and $d= -t^qc^q$ for some field element $t$ with $t^{q+1} = -1$. - What do you mean "centralized by $\phi$"? Do you mean "fixed by $\phi$"? – Will Chen Jan 30 at 21:24 I mean the subgroup consisting of those elements that are fixed by $\phi$. I prefer to call it centralized because that makes it clear that I mean fixed element-wise rather than fixed as a set. – Derek Holt Jan 30 at 22:31
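To make the isomorphism concrete for the smallest odd case, here is a minimal Python sanity check (my own illustration, assuming $q=3$): it models $\mathrm{GF}(9)$ as $\mathrm{GF}(3)[i]/(i^2+1)$, where the Frobenius $x\mapsto x^3$ sends $a+bi$ to $a-bi$, and counts the matrices in ${\rm SU}_2$ over $\mathrm{GF}(9)$ (identity Hermitian form) and in ${\rm SL}_2(3)$; both counts come out to $q(q^2-1)=24$.

```python
from itertools import product

# GF(9) elements are pairs (a, b) meaning a + b*i, with i^2 = -1 over GF(3).
def add(x, y): return ((x[0] + y[0]) % 3, (x[1] + y[1]) % 3)
def mul(x, y): return ((x[0]*y[0] - x[1]*y[1]) % 3, (x[0]*y[1] + x[1]*y[0]) % 3)
def conj(x):   return (x[0], (-x[1]) % 3)      # the Frobenius x -> x^3

ZERO, ONE = (0, 0), (1, 0)
F9 = [(a, b) for a in range(3) for b in range(3)]

def det(a, b, c, d):
    # ad - bc, computed as ad + (-1)*bc
    return add(mul(a, d), mul((2, 0), mul(b, c)))

def unitary(a, b, c, d):
    # conj(M)^T * M == identity, for M = [[a, b], [c, d]]
    return (add(mul(conj(a), a), mul(conj(c), c)) == ONE and
            add(mul(conj(b), b), mul(conj(d), d)) == ONE and
            add(mul(conj(a), b), mul(conj(c), d)) == ZERO)

su = sum(1 for a, b, c, d in product(F9, repeat=4)
         if det(a, b, c, d) == ONE and unitary(a, b, c, d))

sl = sum(1 for a, b, c, d in product(range(3), repeat=4)
         if (a*d - b*c) % 3 == 1)

print(su, sl)   # both print 24 = q(q^2 - 1) for q = 3
```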
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 42, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9251531958580017, "perplexity_flag": "head"}
http://mathhelpforum.com/geometry/204263-analytic-geometry.html
# Thread: Analytic Geometry 1. ## Analytic Geometry Hello, everybody. I need help... Show that, for all values of $p$, the point $P$ given by $x=ap^2$, $y=2ap$ lies on the curve $y^2=4ax$. a) Find the equation of the normal to this curve at the point $P$. If this normal meets the curve again at the point $Q(aq^2, 2aq)$, show that $p^2+pq+2=0$. b) Determine the coordinates of $R$, the point of intersection of the tangents to the curve at the points $P$ and $Q$. Hence, show that the locus of the point $R$ is $y^2(x+2a)+4a^3=0$. I already solved part (a) and the first question of part (b). However, I can't solve the last question, "Hence, show that the locus of the point $R$ is $y^2(x+2a)+4a^3=0$", in part (b). Can somebody help me? Thanks!! 2. ## Re: Analytic Geometry Presumably, you have found that $R$ has the coordinates: $(apq,a(q+p))$ Now, use the relationship between $p$ and $q$ you found earlier: $p^2+pq+2=0$ which when solved for $q$ is: $q=-\frac{p^2+2}{p}$ Now you may write the coordinates of $R$ as parametric equations in one parameter, which you may then eliminate to obtain the required Cartesian equation. 3. ## Re: Analytic Geometry Hey AuXian. I haven't seen this stuff since high school (more than 10 years) but the wiki page gives a derivation of the locus for the parabola: Parabola - Wikipedia, the free encyclopedia. I'm not sure if you just need formulas or have to derive things, but if you just need formulas, the wiki gives a derivation from start to finish in terms of the $a$ term.
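For completeness, here is the elimination MarkFL outlines, assuming (as found in part (b)) that $R=(apq,\,a(p+q))$; this is a sketch of the computation only, not part of the original thread. Substituting $q=-\frac{p^2+2}{p}$,

$$x = apq = -a(p^2+2), \qquad y = a(p+q) = a\left(p - \frac{p^2+2}{p}\right) = -\frac{2a}{p}.$$

Hence $x+2a=-ap^2$ and $y^2=\dfrac{4a^2}{p^2}$, so

$$y^2(x+2a) = \frac{4a^2}{p^2}\cdot\left(-ap^2\right) = -4a^3, \qquad\text{i.e.}\qquad y^2(x+2a)+4a^3=0.$$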
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.884898841381073, "perplexity_flag": "middle"}
http://stats.stackexchange.com/questions/tagged/hypothesis-testing
# Tagged Questions

Hypothesis testing assesses whether data support a given hypothesis rather than being an effect of random fluctuations or some other process described by an alternative hypothesis.

### Relation between observed power and p-value? I am trying to understand the relation between observed power and p-value in Stephane's reply, which I think is based on J. M. Hoenig and D. M. Heisey (2001) "The Abuse of Power: ...

### Implications of lower-bounded total variation distance on hypothesis testing Let $\{X_i\}_n$ be a sequence of $n$ random variables independently and identically drawn from either $P$ or $Q$. Thus the sequence $\{X_i\}_n$ has a product distribution, which is either $P^n$ or ...

### Significance level, effect size, power and sample size: why do any three determine the other? In Statistical Power Analysis for the Behavioral Sciences by Jack Cohen (from the limited preview on Google books), he said that in a hypothesis test, any three of significance level, effect size, ...

### How to test whether mean and variance is the same in two small samples? I would like to test two relatively small samples against the null hypothesis that both their means and variances are the same. The alternative would be that they in fact differ. I saw a post on this ...

### How to extend multiple regression hypothesis tests to multivariate? I have a basic multivariate regression model $\mathbf{Y} = \mathbf{XB} +$ error, $\begin{bmatrix}y_{1i} \\ y_{2i} \\ \vdots \\ y_{ki} \end{bmatrix} = \begin{bmatrix} X_{11} & \dots & X_{1p} & \dots \end{bmatrix}$ ...

### Statistical significance test for difference in F-score For a classification task, I have developed two methods, and the F-score (harmonic mean of precision and recall) of both classes serves as the performance criterion. How can I check whether the difference ...

### Why will a statistic be significant with sufficiently large samples unless the population effect is exactly zero? From Wikipedia: Given a sufficiently large sample size, a statistical comparison will always show a significant difference unless the population effect size is exactly zero. For example, a ...

### Does false discovery rate estimate some population quantity? False discovery rate (FDR) is defined as FDR = FP / (TP + FP). Does it estimate some population quantity, independent of the sample, when the sample satisfies some condition? To make it clearer, ...

### What is the proper way of converting ordinal values to numbers? I have two painkillers and I have given them to two groups and recorded how the pain level changed, in three categories: "Helped a lot", "Slightly better", "Did not help". Now I want to do a t-test ...

### Rejection regions nested or not? When varying the significance level, the rejection regions can be chosen to be nested or not nested. I was wondering what some theoretical and practical considerations are in using either nested or ...

### Econometrics: White test with R [migrated] Good morning, I am trying to run the White test on my linear model with R. I don't know how to write the R code for the White test. Price: house price, in millions of dollars. Bdrms: ...

### Cointegration of a VAR(1) process I am using a Johansen procedure to test a 4-dimensional vector time series for cointegration. First I tested each individual component for difference stationarity; all of them have a unit root ...

### About 2 unit root tests and null hypothesis I have been looking at unit root testing, specifically 2 tests. The ADF test: the ADF (augmented Dickey-Fuller) test has the null hypothesis that "the time series has a unit root" (meaning that the ...

### Significance testing for a group of samples For instance, I have a bar chart in which I have 4 samples/bars. For these 4 bars, I want to check statistical significance for all combinations. How can I do it? Can an unpaired t-test be used to ...

### How to test for simultaneous equality of chosen coefficients in logit or probit model? How to test for simultaneous equality of chosen coefficients in a logit or probit model? What is the standard approach and what is the state-of-the-art approach?

### Measuring the performance of Logistic Regression Being quite new to the field, it occurs to me that there are multiple and fundamentally different ways of assessing the quality of a logistic regression: one can evaluate it by looking at the ...

### Is the p-value still uniformly distributed when the null hypothesis is composite? When the null hypothesis is simple (i.e., has only one distribution of the sample) and that distribution of the sample is continuous, the p-value can be shown to be uniformly distributed over (0,1). ...

### How to test a yes/no outcome with different inputs? I have what I think is a very simple beginner question, but I do not have any formal training or knowledge of statistics or design of experiments. Let's say I have a yes/no (0,1) outcome of an ...

### Working out probability with alpha and beta Working out probability with alpha and beta? Let's say in an experiment, the null hypothesis is patients having Condition A. Alpha (type I error) is 0.05 and beta (type II error) is 0.15. If I have ...

### How can statistical tests be categorized? There are many tests in statistics. I noted that some of them may be grouped together, since they are used for similar purposes: for example, tests of independence, tests that compare distributions ...

### How to compare two matrices? I am working on Markov transition matrices. I would like to find a statistical test to compare them. The first matrix is considered the population transition matrix and the second one is obtained by ...

### Testing whether two groups formed from a continuous variable differ on a binary outcome 18 people have been divided into 2 groups based on the semiquantitative expression of a protein (low vs high). Group 1 (low protein levels) has 6 people, Group 2 (high protein levels) has 12 ...

### Exponential family in testing and estimation In the Annals of Statistics paper "Defining the curvature of a statistical problem (with applications to second order efficiency)" by Bradley Efron, he claims the following two statements in the first ...

### How to test if two samples are distributed from the same Gaussian process Given a sequence $\mathbf{x} = (x_1,x_2,\dots,x_n)$ which is sampled from some Gaussian process $GP(\mu_1,\Sigma_1)$ and a "target" sequence $\mathbf{y} = (y_1,y_2,\dots,y_n)$ sampled from another ...

### Underlying physical basis of an exponential distribution [migrated] My data set of upper atmospheric cloud occurrences $N$ versus their thickness (or optical brightness, say $B$) shows an exponential variation over more than two orders of magnitude; that is, $N$ varies as ...

### How to test significance of pre-post difference scores on 8 measures (1 group) I have a set of pre- and post-treatment scores on 8 measures. I'd like to test which of these show significant improvement after treatment. If I use within-subjects t-tests it will mean carrying out 8 ...

### Hypothesis testing for normal distribution This question seems easy but I am not getting it. Let $X_1, X_2, \dots, X_{15}$ be a random sample from a normal population $X \sim N(0, \sigma^2)$. Find a best critical region of size $\alpha = 0.05$ for ...

### How to compare two datasets using metrics drawn from unknown distributions and with small sample sizes? I have two datasets consisting of metrics from several experiments. Dataset 1 is the collection of results of experiments E performed by user A on product A, repeated N times. Dataset 2 is the ...

### Testing hypothesis This question may be very easy, and maybe not good to ask here, but I am just learning hypothesis testing. I have no idea how to do it. Let $X_1,X_2,\dots,X_n$ be a random sample from an ...

### Statistical Tests That Incorporate Measurement Uncertainty Suppose I am given two groups of mass measurements (in mg), which are referred to as y1 and y2. I want to do a test to determine if the two samples are drawn from populations with different means. ...

### What method can be used to test if three or more categorical sample data sets are from the same distribution? I have three data sets like this: data1: {A, A, B, C, D, ..} data2: {A, B, B, C, E, ...} data3: {A, C, D, D, E, ...} How do I test if these three data sets are from the same distribution?

### Ratios of means - statistical comparison test using Fieller's theorem? I would really appreciate any suggestions with the following data analysis issue. Please read till the end, as the problem at first may appear trivial, but after much researching, I assure you it is ...

### Conditions for the existence of UMP in one-sided two sample hypothesis test Assume that there are two samples from, possibly, two distinct distributions. I would like to find a UMP test of $H_0: p_1 = p_2 = x$ versus $H_1: x < p_1 < p_2$. If that is not possible, I would ...

### Sufficient statistic and hypothesis testing Suppose I have a family of (continuous) distributions $\mathcal{P}=\{P_\theta(x),\theta\in\mathbb{R}^+\}$. I also have a statistic $T(x)$ that is sufficient for $\theta$. The value of the parameter ...

### Neyman-Pearson Theorem Question I have the following question for my homework: Suppose $X\sim\exp(\theta)$. We want to test $H_0: \theta=1$ vs. $H_a:\theta=2$, based on a sample of size 2, $\{X_1,X_2\}$. a. Obtain the most powerful test ...

### Using continuity correction for normal approximation or not? Below is a question on a recent actuarial exam, Exam 3L of the CAS. I didn't know whether or not to use the continuity correction when using the normal approximation to do hypothesis testing ...

### Are the posteriors "different"? How does one discuss the result? First, I realize this may be a basic question. However, when I search the web for references on this issue, I run into the problem of wondering if the description I'm reading is applicable to the ...

### Test regression parameter against a constant in SPSS This is a pretty basic question, but I can't find an answer by searching for different statements of the same problem. Is there a straightforward way to test if a regression parameter is different ...

### What is a similar test? From Wald, Likelihood Ratio, and Lagrange Multiplier Tests in Econometrics by Robert F. Engle: (Tests whose) size does not depend upon the particular value of $\theta$ in the null set $\Theta_0$ are ...

### STATA - Procedure for properly estimating an AR(p) I'm trying to estimate an autoregressive process AR(p). Following the literature: 1) I checked if the series is stationary or not by running the augmented Dickey-Fuller test (as I expected, the ...

### Forecasting optimization techniques in fantasy baseball I am currently trying to build a better forecasting model for my fantasy baseball roster. I am currently using commonly accepted projected season statistics (ZiPS from Fangraphs) to determine the ...

### Statistics - hypothesis testing (1) (3 points each) John has come up with a unique new remedy for insomnia. He knows that in the absence of his remedy insomniacs sleep an average of 250 minutes a night with a standard deviation of ...

### Is the p-postulate true? The p-postulate is the notion that equal p-values provide equal evidence against the null hypothesis. Royall, R. N. (1986). The Effect of Sample Size on the Meaning of Significance Tests, The ...

### Appropriate statistical test to test if probabilities are accurate I have some data that looks like this: Prob Outcome 0.09 0 0.10 0 0.10 0 0.11 1 0.84 1 0.99 1 0.86 1 0.78 1 0.86 1 0.00 0 etc. ...

### Test for differences between parameters of models estimated from partially overlapping samples Suppose I have $n$ observations $(\pmb{x}_i,y_i)$ and two subsets of indexes of $\{1:n\}$, $S_1$ and $S_2$, with $S_1\neq S_2$, $\#\{S_1\}\neq\#\{S_2\}$, and $\{S_1\cap S_2\}\neq\emptyset$. Suppose I ...

### How to prove the number of distinct distributions in a group of distributions? Let's say we had 5 distributions: A, B, C, D, E. An ANOVA test would tell us whether or not all of the means are equal, and thus a low p-value would mean at least one of the means is unlikely to be ...

### Is McNemar appropriate for this pre-post test design? I recently came across a paper that uses the McNemar test to evaluate the effectiveness of an intervention to improve adherence to treatment. The study used a pre/post test design whereby the ...

### Two forms of test statistics in likelihood ratio tests From Bickel and Doksum's Mathematical Statistics: I was wondering what "$\Theta_0$ is of smaller dimension than $\Theta = \Theta_0 \cup \Theta_1$" means? Note: In the original text, "$\theta_0$" ...

### Does Neyman-Pearson Lemma consider the case when the likelihood ratio equals the critical value? Here are three different versions of the Neyman-Pearson lemma. They differ in that the first two (books) ignore the case when the likelihood ratio equals the critical value, while the last one (Wikipedia) ...

### KPSS test - output interpretation in Stata I did the KPSS test for some variables in Stata to check for stationarity; I want to interpret the Stata outputs, but I don't know how to do that. For instance, in the following case: ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 37, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9199220538139343, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/283167/given-an-algebraic-curve-fx-y-0-why-do-the-partial-derivatives-of-fx-y/283202
# Given an algebraic curve $F(x,y)=0$, why do the partial derivatives of $F(x,y)$ being zero at a point imply the plane curve has a singularity? I'm looking at algebraic plane curves of the form $F(x,y)=0$ and trying to figure out why, for points on the curve such that $\frac{\partial F}{\partial x} = \frac{\partial F}{\partial y}=0$, the plane curve has a singularity. I've been looking at the surface $z=y^2-x^3$ and comparing it to $z=y-x^2$, and trying my darndest to understand what about having a flat tangent plane causes the corresponding curve to have a singularity right at the point where the surface 'dips' into the $z=0$ plane. So, not being able to figure out geometrically/intuitively why one thing causes the other, I started looking for a proof of this fact, hoping that it would show me the 'why' of it, but I can't find one online or in any of my books. Can someone explain intuitively why this is, and maybe point me in the direction of a proof as well? - There's a lot to unpack here. First, what is your definition of singularity? Second, are you looking at curves or surfaces? – Matt Jan 21 at 1:36 Maybe I wasn't clear enough in my wording. I'm talking about singularities of algebraic curves here; I'm just looking at surfaces because those are where you take your partial derivatives. As for a definition of singularity, well, I'm not sure I have a good one, beyond calling it any point which does not have a unique well-defined tangent line. – lithium barbie doll Jan 21 at 1:44 I mean, crossing singularities don't satisfy the implicit function theorem, but each branch generally has a well-defined tangent line, while for cusp singularities it's the opposite. Both however satisfy the partial derivative criterion for being a singularity, so maybe I should take that as a definition instead of a theorem to be proven? Or is the best definition that no open set around the point is homeomorphic to $\mathbb{R}$ (for curves)? – lithium barbie doll Jan 21 at 1:46 @cat, I don’t understand what you mean by “looking at surfaces because those are where you take your partial derivatives”. – Lubin Jan 21 at 2:36 @Rahul, it certainly looks to me as if $x^3-y^3=0$ has a singularity at the origin, since the locus is three lines that intersect there. – Lubin Jan 21 at 2:38 ## 2 Answers I’ve applauded someone else here at MSE for wanting to have an intuition different from others’, and I’ll say the same to you: differing intuitions lead to more interesting mathematics. But yet, I don’t think it’s at all productive to think of a curve $F(x,y)=0$ in the plane as the intersection of the surface $z=F(x,y)$ with the $(x,y)$-plane. If you aren’t willing to accept the condition $\partial F/\partial x=\partial F/\partial y=0$ at a point $P$ as the definition of a singular point, then you need an independent definition of what it means for a point to be singular. First, let’s look at cases where we agree that the origin is a singular or nonsingular point of some curve. If the origin is a nonsingular point, then any line passing very near to the origin in a direction not parallel to the curve’s tangent at the origin cuts the curve in only one point. If we move toward the origin with a line parallel to the tangent, though, the points of intersection are not comparably close (think of a horizontal line $y=\varepsilon$ and the parabola $y=x^2$), but when we hit, ¡plink!, the contact is multiple rather than single.
Compare this behavior with what happens at the origin with the curve $y^2=x^2(x+1)$, in other words $x^2+x^3-y^2=0$, which you and I agree has a clearly visible singularity at $(0,0)$, a node. Any line coming close to the origin, of no matter what slope, has two intersections with the curve comparably close to the distance $\varepsilon$ from the origin to the line. And in case the slope is $\pm1$, at the origin, ¡plink!, the intersection multiplicity there is three. So here’s a proposed definition of singularity for a point of a plane curve, independent of the standard one. A point $P$ on the locus of $F(x,y)=0$ is nonsingular if all but one of the straight lines passing through $P$ have only a single intersection with the locus in the immediate neighborhood of $P$. And $P$ is singular if all lines through $P$ have at least a double intersection with the locus, in the immediate neighborhood of $P$. Now how does this work out in practice? Let’s again restrict our view to curves passing through the origin, so that the polynomial $F(x,y)$ has no constant term. And then we can write $$F(x,y)=a_{10}x + a_{01}y+a_{20}x^2+a_{11}xy+a_{02}y^2+\cdots\>,$$ where the ellipsis represents monomials of degree $3$ and greater. Now all is clear. The partial-derivative condition for singularity is exactly that the two linear coefficients $a_{10}$ and $a_{01}$ should vanish. And I hope that you see that this vanishing is exactly the condition that any line $\alpha x+\beta y=0$ through the origin should have at least a double intersection with the curve at the origin. I hope also that you’ll go through all this for the singular curve $y^2=x^3$, and see how the concepts fit together there. - For your first problem, understanding why $\frac{\partial F}{\partial x}= \frac{\partial F}{\partial y}=0$ implies that there is a singularity, it helps to look at the $1 \times 2$ Jacobian matrix $\left(\frac{\partial F}{\partial x}, \frac{\partial F}{\partial y}\right)$ of $F$ at a point $P$ of the curve. When this matrix has rank $1$ (at least one partial derivative is nonzero at $P$), the implicit function theorem lets you solve $F(x,y)=0$ locally for one variable as a function of the other, so near $P$ the locus is a smooth arc whose tangent line is $F_x(P)(x-x_P)+F_y(P)(y-y_P)=0$. When both partials vanish, the matrix has rank $0$: the implicit function theorem gives no local graph, and the linear term that would single out a tangent line is absent, which is exactly the singular behavior described in the other answer. -
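To see the criterion in action on the examples discussed above, here is a small SymPy sketch (an illustration only) that evaluates $F$, $F_x$, and $F_y$ at the origin for the cusp, the node, and a smooth parabola.

```python
import sympy as sp

x, y = sp.symbols('x y')

curves = [y**2 - x**3,           # cusp at the origin
          y**2 - x**2*(x + 1),   # node at the origin
          y - x**2]              # smooth at the origin, for contrast

for F in curves:
    Fx, Fy = sp.diff(F, x), sp.diff(F, y)
    at0 = {x: 0, y: 0}
    print(F, '| F(0,0) =', F.subs(at0),
          '| Fx(0,0) =', Fx.subs(at0), '| Fy(0,0) =', Fy.subs(at0))
# The cusp and the node have Fx = Fy = 0 at the origin (singular points);
# the parabola has Fy = 1 there, so the origin is a nonsingular point.
```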
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 40, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9565268754959106, "perplexity_flag": "head"}